diff --git a/main/features/0793-unqualfied-dids-transition/index.html b/main/features/0793-unqualfied-dids-transition/index.html index 8eb178d5..4fb951aa 100644 --- a/main/features/0793-unqualfied-dids-transition/index.html +++ b/main/features/0793-unqualfied-dids-transition/index.html @@ -4586,7 +4586,7 @@

Aries RFC 0793: Unqualified DIDs Transition
  • Authors: Sam Curren
  • Status: ACCEPTED
  • Since: 2023-07-11
  • -
  • Status Note: In Step 1 - Target Deployment Date: 2024-02-28
  • +
  • Status Note: In Step 1 - Target Deployment Date: 2024-07-02
  • Supersedes:
  • Start Date: 2023-07-11
  • Tags: feature, community-update
  • @@ -4608,11 +4608,11 @@

  implementations accordingly. -
  • Target Date for code update: 2023-12-17
  • -
  • Target Date fpr deployment update: 2024-02-28
  • +
  • Target Date for code update: 2024-05-01
  • +
  • Target Date for deployment update: 2024-07-02
  • Step 2: Agent builders using unqualified DIDs MUST no longer use new unqualified DIDs, and MUST use DID Rotation to rotate to a fully qualified DID.
  • Each agent builder SHOULD notify the community they have completed Step 2 by submitting a PR to update their entry in the implementations accordingly.
  • -
  • Target Date for finishing step 2: 2024-03-20
  • +
  • Target Date for finishing step 2: 2024-11-01
  • Step 3: Agent builders SHOULD update their deployments to remove all support for receiving unqualified DIDs from other agents.
  • The community coordination triggers between the steps above will be as follows:

    diff --git a/main/search/search_index.json b/main/search/search_index.json index 999c437a..53e7f73e 100644 --- a/main/search/search_index.json +++ b/main/search/search_index.json @@ -1 +1 @@ -{"config":{"lang":["en"],"separator":"[\\s\\-]+","pipeline":["stopWordFilter"]},"docs":[{"location":"","title":"Welcome","text":"

    This repo holds Request for Comment (RFCs) for the Aries project. They describe important topics (not minor details) that we want to standardize across the Aries ecosystem.

    If you are here to learn about Aries, we recommend you use the RFC Index for a current listing of all RFCs and their statuses.

    There are 2 types of Aries RFCs: concept RFCs and feature RFCs.

    RFCs are for developers building on Aries. They don't provide guidance on how Aries components implement features internally; individual Aries repos have design docs for that. Each Aries RFC includes an \"implementations\" section and all RFCs with a status greater than Proposed should have at least one listed implementation.

    "},{"location":"#rfc-lifecycle","title":"RFC Lifecycle","text":"

    RFCs go through a standard lifecycle.

    "},{"location":"#proposed","title":"PROPOSED","text":"

    To propose an RFC, use these instructions to raise a PR against the repo. Proposed RFCs are considered a \"work in progress\", even after they are merged. In other words, they haven't been endorsed by the community yet, but they seem like reasonable ideas worth exploring.

    "},{"location":"#demonstrated","title":"DEMONSTRATED","text":"

    Demonstrated RFCs have one or more implementations available, listed in the \"Implementations\" section of the RFC document. As with the PROPOSED status, demonstrated RFCs haven't been endorsed by the community, but the ideas put forth have been more thoroughly explored through the implementation(s). The demonstrated status is an optional step in the lifecycle. For protocol-related RFCs, work on protocol tests SHOULD begin in the test suite repo by the time this status is assigned.

    "},{"location":"#accepted","title":"ACCEPTED","text":"

    To get an RFC accepted, build consensus for your RFC on chat and in community meetings. If your RFC is a feature that's protocol- or decorator-related, it MUST have reasonable tests in the test suite repo, it MUST list the test suite in the protocol RFC's Implementations section, at least one other implementation must have passed the relevant portions of the test suite, and all implementations listed in this section of the RFC MUST hyperlink to their test results. An accepted RFC is incubating on a standards track; the community has decided to polish it and is exploring or pursuing implementation.

    "},{"location":"#adopted","title":"ADOPTED","text":"

    To get an RFC adopted, socialize and implement. An RFC gets this status once it has significant momentum--when implementations accumulate, or when the mental model it advocates has begun to permeate our discourse. In other words, adoption is acknowledgment of a de facto standard.

    To refine an RFC, propose changes to it through additional PRs. Typically these changes are driven by experience that accumulates during or after adoption. Minor refinements that just improve clarity can happen inline with lightweight review. Status is still ADOPTED.

    "},{"location":"#stalled","title":"STALLED","text":"

    An RFC is stalled when a proposed RFC makes no progress towards implementation such that it is extremely unlikely it will ever move forward. The stalled state differs from retired in that it is an RFC that has never been implemented or superseded. Like the retired state, it is (likely) an end state and the RFC will not proceed further. Such an RFC remains in the repository on the off chance it will strike a chord with others, be returned to the proposed state, and continue to evolve.

    "},{"location":"#retired","title":"RETIRED","text":"

    An RFC is retired when it is withdrawn from community consideration by its authors, when implementation seems permanently stalled, or when significant refinements require a superseding document. If a retired RFC has been superseded, its Superseded By field should contain a link to the newer spec, and the newer spec's Supersedes field should contain a link to the older spec. Permalinks are not broken.

    "},{"location":"#changing-an-rfc-status","title":"Changing an RFC Status","text":"

    See notes about this in Contributing.

    "},{"location":"#about","title":"About","text":""},{"location":"#license","title":"License","text":"

    This repository is licensed under an Apache 2 License. It is protected by a Developer Certificate of Origin on every commit. This means that any contributions you make must be licensed in an Apache-2-compatible way, and must be free from patent encumbrances or additional terms and conditions. By raising a PR, you certify that this is the case for your contribution.

    For more instructions about contributing, see Contributing.

    "},{"location":"#acknowledgement","title":"Acknowledgement","text":"

    The structure and a lot of the initial language of this repository was borrowed from Indy HIPEs, which borrowed it from Rust RFC. Their good work has made the setup of this repository much quicker and better than it otherwise would have been. If you are not familiar with the Rust community, you should check them out.

    "},{"location":"0000-template-protocol/","title":"Aries RFC 0000: Your Protocol 0.9","text":""},{"location":"0000-template-protocol/#summary","title":"Summary","text":"

    One paragraph explanation of the feature.

    If the RFC you are proposing is NOT a protocol, please use this template as a starting point.

    When completing this template and before submitting as a PR, please remove the template text in sections (other than Implementations). The implementations section should remain as is.

    "},{"location":"0000-template-protocol/#motivation","title":"Motivation","text":"

    Why are we doing this? What use cases does it support? What is the expected outcome?

    "},{"location":"0000-template-protocol/#tutorial","title":"Tutorial","text":""},{"location":"0000-template-protocol/#name-and-version","title":"Name and Version","text":"


    Specify the official name of the protocol and its version, e.g., \"My Protocol 0.9\".

    Protocol names are often either lower_snake_case or kebab-case. The non-version components of the protocol name are matched exactly.

    URI: https://didcomm.org/lets_do_lunch/<version>/<messageType>

    Message types and protocols are identified with special URIs that match certain conventions. See Message Type and Protocol Identifier URIs for more details.

    The version of a protocol is declared carefully. See Semver Rules for Protocols for details.
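    Putting the two conventions together, a message type URI of the form shown above can be split into its protocol name, version, and message type. The following is a minimal sketch; the `lets_do_lunch` protocol comes from the example above, while the function name and the assumption of exactly one `https://didcomm.org/` doc-URI prefix are hypothetical simplifications:

```python
# Split a DIDComm message type URI into (protocol, version, message type).
# Assumes the simple three-segment layout shown above under a single
# https://didcomm.org/ prefix; real resolvers may need to be more general.
def parse_message_type_uri(uri: str):
    prefix = "https://didcomm.org/"
    if not uri.startswith(prefix):
        raise ValueError("unrecognized doc-URI prefix")
    protocol, version, message_type = uri[len(prefix):].split("/")
    return protocol, version, message_type

print(parse_message_type_uri("https://didcomm.org/lets_do_lunch/1.0/proposal"))
# -> ('lets_do_lunch', '1.0', 'proposal')
```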

    "},{"location":"0000-template-protocol/#key-concepts","title":"Key Concepts","text":"

    This is short--a paragraph or two. It defines terms and describes the flow of the interaction at a very high level. Key preconditions should be noted (e.g., \"You can't issue a credential until you have completed the connection protocol first\"), as well as ways the protocol can start and end, and what can go wrong. The section might also talk about timing constraints and other assumptions. After reading this section, a developer should know what problem your protocol solves, and should have a rough idea of how the protocol works in its simpler variants.

    "},{"location":"0000-template-protocol/#roles","title":"Roles","text":"

    See this note for definitions of the terms \"role\", \"participant\", and \"party\".

    Provides a formal name to each role in the protocol, says who and how many can play each role, and describes constraints associated with those roles (e.g., \"You can only issue a credential if you have a DID on the public ledger\"). The issue of qualification for roles can also be explored (e.g., \"The holder of the credential must be known to the issuer\").

    The formal names for each role are important because they are used when agents discover one another's capabilities; an agent doesn't just claim that it supports a protocol; it makes a claim about which roles in the protocol it supports. An agent that supports credential issuance and an agent that supports credential holding may have very different features, but they both use the credential-issuance protocol. By convention, role names use lower-kebab-case and are compared case-sensitively.

    "},{"location":"0000-template-protocol/#states","title":"States","text":"

    This section lists the possible states that exist for each role. It also enumerates the events (often but not always messages) that can occur, including errors, and what should happen to state as a result. A formal representation of this information is provided in a state machine matrix. It lists events as columns, and states as rows; a cell answers the question, \"If I am in state X (=row), and event Y (=column) occurs, what happens to my state?\" The Tic Tac Toe example is typical.

    Choreography Diagrams from BPMN are good artifacts here, as are PUML sequence diagrams and UML-style state machine diagrams. The matrix form is nice because it forces an exhaustive analysis of every possible event. The diagram styles are often simpler to create and consume, and the PUML and BPMN forms have the virtue that they can support line-by-line diffs when checked in with source code. However, they don't offer an easy way to see if all possible flows have been considered; what they may NOT describe isn't obvious. This--and the freedom from fancy tools--is why the matrix form is used in many early RFCs. We leave it up to the community to settle on whether it wants to strongly recommend specific diagram types.

    The formal names for each state are important, as they are used in acks and problem-reports. For example, a problem-report message declares which state the sender arrived at because of the problem. This helps other participants to react to errors with confidence. Formal state names are also used in the agent test suite, in log messages, and so forth.

    By convention, state names use lower-kebab-case. They are compared case-sensitively.

    State management in protocols is a deep topic. For more information, please see State Details and State Machines.
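    A state-machine matrix of the kind described above maps (state, event) pairs to next states. The sketch below is illustrative only; the state and event names are hypothetical, not drawn from any real Aries protocol:

```python
# A minimal state-machine matrix for one role. Rows are states, columns
# are events; a cell answers "if I am in state X and event Y occurs,
# what is my new state?" State/event names here are hypothetical.
TRANSITIONS = {
    ("start", "send-request"): "request-sent",
    ("request-sent", "receive-response"): "complete",
    ("request-sent", "receive-problem-report"): "abandoned",
}

def next_state(state: str, event: str) -> str:
    # An absent cell means the event is illegal in that state.
    try:
        return TRANSITIONS[(state, event)]
    except KeyError:
        raise ValueError(f"event {event!r} is illegal in state {state!r}")

print(next_state("start", "send-request"))  # -> request-sent
```

    One virtue of this representation, as noted above, is that every (state, event) pair must be considered explicitly, so unhandled combinations surface as errors rather than silent gaps.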

    "},{"location":"0000-template-protocol/#messages","title":"Messages","text":"

    This section describes each message in the protocol. It should also note the names and versions of messages from other message families that are adopted by the protocol (e.g., an ack or a problem-report). Typically this section is written as a narrative, showing each message type in the context of an end-to-end sample interaction. All possible fields may not appear; an exhaustive catalog is saved for the \"Reference\" section.

    Sample messages that are presented in the narrative should also be checked in next to the markdown of the RFC, in DIDComm Plaintext format.

    The message element of a message type URI is typically lower_camel_case or lower-kebab-case, matching the style of the protocol. JSON items in messages are lower_camel_case, and inconsistency in the application of a style within a message is frowned upon by the community.

    "},{"location":"0000-template-protocol/#adopted-messages","title":"Adopted Messages","text":"

    Many protocols should use general-purpose messages such as ack and problem-report at certain points in an interaction. This reuse is strongly encouraged because it helps us avoid defining redundant message types--and the code to handle them--over and over again (see DRY principle).

    However, using messages with generic values of @type (e.g., \"@type\": \"https://didcomm.org/notification/1.0/ack\") introduces a challenge for agents as they route messages to their internal routines for handling. We expect internal handlers to be organized around protocols, since a protocol is a discrete unit of business value as well as a unit of testing in our agent test suite. Early work on agents has gravitated towards pluggable, routable protocols as a unit of code encapsulation and dependency as well. Thus the natural routing question inside an agent, when it sees a message, is \"Which protocol handler should I route this message to, based on its @type?\" A generic ack can't be routed this way.

    Therefore, we allow a protocol to adopt messages into its namespace. This works very much like Python's from module import symbol syntax. It changes the @type attribute of the adopted message. Suppose a rendezvous protocol is identified by the URI https://didcomm.org/rendezvous/2.0, and its definition announces that it has adopted generic 1.x ack messages. When such ack messages are sent, the @type should now use the alias defined inside the namespace of the rendezvous protocol:

    Adoption should be declared in an \"Adopted\" subsection of \"Messages\". When adoption is specified, it should include a minimum adopted version of the adopted message type: \"This protocol adopts ack with version >= 1.4\". All versions of the adopted message that share the same major number should be compatible, given the semver rules that apply to protocols.
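    Concretely, adoption rewrites the final segment of the adopting protocol's URI onto the generic message. The rendezvous protocol, its version, and the generic ack @type come from the example above; the helper function and the ack body fields are hypothetical illustrations:

```python
# A generic ack, typed under the notification family (example from above).
generic_ack = {
    "@type": "https://didcomm.org/notification/1.0/ack",
    "@id": "ack-1",      # hypothetical message id
    "status": "OK",      # hypothetical body field
}

def adopt(message: dict, protocol: str, version: str) -> dict:
    # Keep the message name (last URI segment) but re-namespace it
    # under the adopting protocol, so routing by @type reaches the
    # adopting protocol's handler instead of a generic one.
    message_name = message["@type"].rsplit("/", 1)[-1]
    adopted = dict(message)
    adopted["@type"] = f"https://didcomm.org/{protocol}/{version}/{message_name}"
    return adopted

print(adopt(generic_ack, "rendezvous", "2.0")["@type"])
# -> https://didcomm.org/rendezvous/2.0/ack
```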

    "},{"location":"0000-template-protocol/#constraints","title":"Constraints","text":"

    Many protocols have constraints that help parties build trust. For example, in buying a house, the protocol includes such things as commission paid to realtors to guarantee their incentives, title insurance, earnest money, and a phase of the process where a home inspection takes place. If you are documenting a protocol that has attributes like these, explain them here. If not, the section can be omitted.

    "},{"location":"0000-template-protocol/#reference","title":"Reference","text":"

    All of the sections under \"Reference\" are optional. If none are needed, the \"Reference\" section can be deleted.

    "},{"location":"0000-template-protocol/#messages-details","title":"Messages Details","text":"

    Unless the \"Messages\" section under \"Tutorial\" covered everything that needs to be known about all message fields, this is where the data type, validation rules, and semantics of each field in each message type are details. Enumerating possible values, or providing ABNF or regexes is encouraged. Following conventions such as those for date- and time-related fields can save a lot of time here.

    Each message type should be associated with one or more roles in the protocol. That is, it should be clear which roles can send and receive which message types.

    If the \"Tutorial\" section covers everything about the messages, this section should be deleted.

    "},{"location":"0000-template-protocol/#examples","title":"Examples","text":"

    This section is optional. It can be used to show alternate flows through the protocol.

    "},{"location":"0000-template-protocol/#collateral","title":"Collateral","text":"

    This section is optional. It could be used to reference files, code, relevant standards, oracles, test suites, or other artifacts that would be useful to an implementer. In general, collateral should be checked in with the RFC.

    "},{"location":"0000-template-protocol/#localization","title":"Localization","text":"

    If communication in the protocol involves humans, then localization of message content may be relevant. Default settings for localization of all messages in the protocol can be specified in an l10n.json file described here and checked in with the RFC. See \"Decorators at Message Type Scope\" in the Localization RFC.

    "},{"location":"0000-template-protocol/#codes-catalog","title":"Codes Catalog","text":"

    If the protocol has a formally defined catalog of codes (e.g., for errors or for statuses), define them in this section. See \"Message Codes and Catalogs\" in the Localization RFC.

    "},{"location":"0000-template-protocol/#drawbacks","title":"Drawbacks","text":"

    Why should we not do this?

    "},{"location":"0000-template-protocol/#rationale-and-alternatives","title":"Rationale and alternatives","text":""},{"location":"0000-template-protocol/#prior-art","title":"Prior art","text":"

    Discuss prior art, both the good and the bad, in relation to this proposal. A few examples of what this can include are:

    This section is intended to encourage you as an author to think about the lessons from other implementers and to provide readers of your proposal with a fuller picture. If there is no prior art, that is fine - your ideas are interesting to us whether they are brand new or an adaptation from other communities.

    Note that while precedent set by other communities is some motivation, it does not on its own motivate an enhancement proposal here. Please also take into consideration that Aries sometimes intentionally diverges from common identity features.

    "},{"location":"0000-template-protocol/#unresolved-questions","title":"Unresolved questions","text":""},{"location":"0000-template-protocol/#implementations","title":"Implementations","text":"

    NOTE: This section should remain in the RFC as is on first release. Remove this note and leave the rest of the text as is. Template text in all other sections should be removed before submitting your Pull Request.

    The following lists the implementations (if any) of this RFC. Please submit a pull request to add your implementation. If the implementation is open source, include a link to the repo or to the implementation within the repo. Please be consistent in the \"Name\" field so that a mechanical processing of the RFCs can generate a list of all RFCs supported by an Aries implementation.

    Implementation Notes may need to include a link to test results.

    Name / Link Implementation Notes"},{"location":"0000-template/","title":"Title (Ex. 0000: RFC Topic)","text":""},{"location":"0000-template/#summary","title":"Summary","text":"

    One paragraph explanation of the feature.

    NOTE: If you are creating a protocol RFC, please use this template instead.

    "},{"location":"0000-template/#motivation","title":"Motivation","text":"

    Why are we doing this? What use cases does it support? What is the expected outcome?

    "},{"location":"0000-template/#tutorial","title":"Tutorial","text":"

    Explain the proposal as if it were already implemented and you were teaching it to another Aries contributor or Aries consumer. That generally means:

    Some enhancement proposals may be more aimed at contributors (e.g. for consensus internals); others may be more aimed at consumers.

    "},{"location":"0000-template/#reference","title":"Reference","text":"

    Provide guidance for implementers, procedures to inform testing, interface definitions, formal function prototypes, error codes, diagrams, and other technical details that might be looked up. Strive to guarantee that:

    "},{"location":"0000-template/#drawbacks","title":"Drawbacks","text":"

    Why should we not do this?

    "},{"location":"0000-template/#rationale-and-alternatives","title":"Rationale and alternatives","text":""},{"location":"0000-template/#prior-art","title":"Prior art","text":"

    Discuss prior art, both the good and the bad, in relation to this proposal. A few examples of what this can include are:

    This section is intended to encourage you as an author to think about the lessons from other implementers and to provide readers of your proposal with a fuller picture. If there is no prior art, that is fine - your ideas are interesting to us whether they are brand new or an adaptation from other communities.

    Note that while precedent set by other communities is some motivation, it does not on its own motivate an enhancement proposal here. Please also take into consideration that Aries sometimes intentionally diverges from common identity features.

    "},{"location":"0000-template/#unresolved-questions","title":"Unresolved questions","text":""},{"location":"0000-template/#implementations","title":"Implementations","text":"

    The following lists the implementations (if any) of this RFC. Please submit a pull request to add your implementation. If the implementation is open source, include a link to the repo or to the implementation within the repo. Please be consistent in the \"Name\" field so that a mechanical processing of the RFCs can generate a list of all RFCs supported by an Aries implementation.

    Implementation Notes may need to include a link to test results.

    Name / Link Implementation Notes"},{"location":"LICENSE/","title":"License","text":"
                                 Apache License\n                       Version 2.0, January 2004\n                    http://www.apache.org/licenses/\n

    TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION

    1. Definitions.

      \"License\" shall mean the terms and conditions for use, reproduction, and distribution as defined by Sections 1 through 9 of this document.

      \"Licensor\" shall mean the copyright owner or entity authorized by the copyright owner that is granting the License.

      \"Legal Entity\" shall mean the union of the acting entity and all other entities that control, are controlled by, or are under common control with that entity. For the purposes of this definition, \"control\" means (i) the power, direct or indirect, to cause the direction or management of such entity, whether by contract or otherwise, or (ii) ownership of fifty percent (50%) or more of the outstanding shares, or (iii) beneficial ownership of such entity.

      \"You\" (or \"Your\") shall mean an individual or Legal Entity exercising permissions granted by this License.

      \"Source\" form shall mean the preferred form for making modifications, including but not limited to software source code, documentation source, and configuration files.

      \"Object\" form shall mean any form resulting from mechanical transformation or translation of a Source form, including but not limited to compiled object code, generated documentation, and conversions to other media types.

      \"Work\" shall mean the work of authorship, whether in Source or Object form, made available under the License, as indicated by a copyright notice that is included in or attached to the work (an example is provided in the Appendix below).

      \"Derivative Works\" shall mean any work, whether in Source or Object form, that is based on (or derived from) the Work and for which the editorial revisions, annotations, elaborations, or other modifications represent, as a whole, an original work of authorship. For the purposes of this License, Derivative Works shall not include works that remain separable from, or merely link (or bind by name) to the interfaces of, the Work and Derivative Works thereof.

      \"Contribution\" shall mean any work of authorship, including the original version of the Work and any modifications or additions to that Work or Derivative Works thereof, that is intentionally submitted to Licensor for inclusion in the Work by the copyright owner or by an individual or Legal Entity authorized to submit on behalf of the copyright owner. For the purposes of this definition, \"submitted\" means any form of electronic, verbal, or written communication sent to the Licensor or its representatives, including but not limited to communication on electronic mailing lists, source code control systems, and issue tracking systems that are managed by, or on behalf of, the Licensor for the purpose of discussing and improving the Work, but excluding communication that is conspicuously marked or otherwise designated in writing by the copyright owner as \"Not a Contribution.\"

      \"Contributor\" shall mean Licensor and any individual or Legal Entity on behalf of whom a Contribution has been received by Licensor and subsequently incorporated within the Work.

    2. Grant of Copyright License. Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable copyright license to reproduce, prepare Derivative Works of, publicly display, publicly perform, sublicense, and distribute the Work and such Derivative Works in Source or Object form.

    3. Grant of Patent License. Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable (except as stated in this section) patent license to make, have made, use, offer to sell, sell, import, and otherwise transfer the Work, where such license applies only to those patent claims licensable by such Contributor that are necessarily infringed by their Contribution(s) alone or by combination of their Contribution(s) with the Work to which such Contribution(s) was submitted. If You institute patent litigation against any entity (including a cross-claim or counterclaim in a lawsuit) alleging that the Work or a Contribution incorporated within the Work constitutes direct or contributory patent infringement, then any patent licenses granted to You under this License for that Work shall terminate as of the date such litigation is filed.

    4. Redistribution. You may reproduce and distribute copies of the Work or Derivative Works thereof in any medium, with or without modifications, and in Source or Object form, provided that You meet the following conditions:

      (a) You must give any other recipients of the Work or Derivative Works a copy of this License; and

      (b) You must cause any modified files to carry prominent notices stating that You changed the files; and

      (c) You must retain, in the Source form of any Derivative Works that You distribute, all copyright, patent, trademark, and attribution notices from the Source form of the Work, excluding those notices that do not pertain to any part of the Derivative Works; and

      (d) If the Work includes a \"NOTICE\" text file as part of its distribution, then any Derivative Works that You distribute must include a readable copy of the attribution notices contained within such NOTICE file, excluding those notices that do not pertain to any part of the Derivative Works, in at least one of the following places: within a NOTICE text file distributed as part of the Derivative Works; within the Source form or documentation, if provided along with the Derivative Works; or, within a display generated by the Derivative Works, if and wherever such third-party notices normally appear. The contents of the NOTICE file are for informational purposes only and do not modify the License. You may add Your own attribution notices within Derivative Works that You distribute, alongside or as an addendum to the NOTICE text from the Work, provided that such additional attribution notices cannot be construed as modifying the License.

      You may add Your own copyright statement to Your modifications and may provide additional or different license terms and conditions for use, reproduction, or distribution of Your modifications, or for any such Derivative Works as a whole, provided Your use, reproduction, and distribution of the Work otherwise complies with the conditions stated in this License.

    5. Submission of Contributions. Unless You explicitly state otherwise, any Contribution intentionally submitted for inclusion in the Work by You to the Licensor shall be under the terms and conditions of this License, without any additional terms or conditions. Notwithstanding the above, nothing herein shall supersede or modify the terms of any separate license agreement you may have executed with Licensor regarding such Contributions.

    6. Trademarks. This License does not grant permission to use the trade names, trademarks, service marks, or product names of the Licensor, except as required for reasonable and customary use in describing the origin of the Work and reproducing the content of the NOTICE file.

    7. Disclaimer of Warranty. Unless required by applicable law or agreed to in writing, Licensor provides the Work (and each Contributor provides its Contributions) on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied, including, without limitation, any warranties or conditions of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A PARTICULAR PURPOSE. You are solely responsible for determining the appropriateness of using or redistributing the Work and assume any risks associated with Your exercise of permissions under this License.

    8. Limitation of Liability. In no event and under no legal theory, whether in tort (including negligence), contract, or otherwise, unless required by applicable law (such as deliberate and grossly negligent acts) or agreed to in writing, shall any Contributor be liable to You for damages, including any direct, indirect, special, incidental, or consequential damages of any character arising as a result of this License or out of the use or inability to use the Work (including but not limited to damages for loss of goodwill, work stoppage, computer failure or malfunction, or any and all other commercial damages or losses), even if such Contributor has been advised of the possibility of such damages.

    9. Accepting Warranty or Additional Liability. While redistributing the Work or Derivative Works thereof, You may choose to offer, and charge a fee for, acceptance of support, warranty, indemnity, or other liability obligations and/or rights consistent with this License. However, in accepting such obligations, You may act only on Your own behalf and on Your sole responsibility, not on behalf of any other Contributor, and only if You agree to indemnify, defend, and hold each Contributor harmless for any liability incurred by, or claims asserted against, such Contributor by reason of your accepting any such warranty or additional liability.

    END OF TERMS AND CONDITIONS

    APPENDIX: How to apply the Apache License to your work.

      To apply the Apache License to your work, attach the following\n  boilerplate notice, with the fields enclosed by brackets \"[]\"\n  replaced with your own identifying information. (Don't include\n  the brackets!)  The text should be enclosed in the appropriate\n  comment syntax for the file format. We also recommend that a\n  file or class name and description of purpose be included on the\n  same \"printed page\" as the copyright notice for easier\n  identification within third-party archives.\n

    Copyright [yyyy] [name of copyright owner]

    Licensed under the Apache License, Version 2.0 (the \"License\"); you may not use this file except in compliance with the License. You may obtain a copy of the License at

       http://www.apache.org/licenses/LICENSE-2.0\n

    Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.

    "},{"location":"MAINTAINERS/","title":"Maintainers","text":""},{"location":"MAINTAINERS/#active-maintainers","title":"Active Maintainers","text":"Name Github LFID Daniel Hardman dhh1128 George Aristy llorllale Nathan George nage Stephen Curran swcurran Drummond Reed talltree Sam Curren TelegramSam"},{"location":"MAINTAINERS/#emeritus-maintainers","title":"Emeritus Maintainers","text":"Name Github LFID"},{"location":"MAINTAINERS/#becoming-a-maintainer","title":"Becoming a Maintainer","text":"

    The Aries community welcomes contributions. Contributors may progress to become a maintainer. To become a maintainer the following steps occur, roughly in order.

    "},{"location":"MAINTAINERS/#removing-maintainers","title":"Removing Maintainers","text":"

    Being a maintainer is not a status symbol or a title to be maintained indefinitely. It will occasionally be necessary and appropriate to move a maintainer to emeritus status. This can occur in the following situations:

    As with adding a maintainer, the record and governance process for moving a maintainer to emeritus status is recorded in the github PR making that change.

    Returning to active status from emeritus status uses the same steps as adding a new maintainer. Note that the emeritus maintainer already has the 5 required significant changes as there is no contribution time horizon for those.

    "},{"location":"RFCindex/","title":"Aries RFCs by Status","text":""},{"location":"RFCindex/#adopted","title":"ADOPTED","text":""},{"location":"RFCindex/#accepted","title":"ACCEPTED","text":""},{"location":"RFCindex/#demonstrated","title":"DEMONSTRATED","text":""},{"location":"RFCindex/#proposed","title":"PROPOSED","text":""},{"location":"RFCindex/#stalled","title":"STALLED","text":""},{"location":"RFCindex/#retired","title":"RETIRED","text":"

    (This file is machine-generated; see code/generate_index.py.)

    "},{"location":"SECURITY/","title":"Hyperledger Security Policy","text":""},{"location":"SECURITY/#reporting-a-security-bug","title":"Reporting a Security Bug","text":"

    If you think you have discovered a security issue in any of the Hyperledger projects, we'd love to hear from you. We take all security bugs seriously; if a report is confirmed upon investigation, we will patch the issue within a reasonable amount of time, release a public security bulletin discussing the impact, and credit the discoverer.

    There are two ways to report a security bug. The easiest is to email a description of the flaw and any related information (e.g. reproduction steps, version) to security at hyperledger dot org.

    The other way is to file a confidential security bug in our JIRA bug tracking system. Be sure to set the \u201cSecurity Level\u201d to \u201cSecurity issue\u201d.

    The process by which the Hyperledger Security Team handles security bugs is documented further in our Defect Response page on our wiki.

    "},{"location":"contributing/","title":"Contributing","text":""},{"location":"contributing/#contributing","title":"Contributing","text":""},{"location":"contributing/#do-you-need-an-rfc","title":"Do you need an RFC?","text":"

    Use an RFC to advocate substantial changes to the Aries ecosystem, where those changes need to be understood by developers who use Aries. Minor changes are not RFC-worthy, and changes that are internal in nature, invisible to those consuming Aries, should be documented elsewhere.

    "},{"location":"contributing/#preparation","title":"Preparation","text":"

    Before writing an RFC, consider exploring the idea on the aries chat channel, on community calls (see the Hyperledger Community Calendar), or on aries@lists.hyperledger.org. Encouraging feedback from maintainers is a good sign that you're on the right track.

    "},{"location":"contributing/#how-to-propose-an-rfc","title":"How to propose an RFC","text":"

    Make sure that all of your commits satisfy the DCO requirements of the repo and conform to the license restrictions noted below.

    The RFC Maintainers will check to see if the process has been followed, and request any process changes before merging the PR.

    When the PR is merged, your RFC is now formally in the PROPOSED state.

    "},{"location":"contributing/#changing-an-rfc-status","title":"Changing an RFC Status","text":"

    The lifecycle of an RFC is driven by the author or current champion of the RFC. To move an RFC along in the lifecycle, submit a PR with the following characteristics:

    "},{"location":"contributing/#how-to-get-an-rfc-demonstrated","title":"How to get an RFC demonstrated","text":"

    If your RFC is a feature, it's common (though not strictly required) for it to go to a DEMONSTRATED state next. Write some code that embodies the concepts in the RFC. Publish the code. Then submit a PR that adds your early implementation to the Implementations section, and that changes the status to DEMONSTRATED. These PRs should be accepted immediately, as long as all unit tests pass.

    "},{"location":"contributing/#how-to-get-an-rfc-accepted","title":"How to get an RFC accepted","text":"

    After your RFC is merged and officially acquires the PROPOSED status, the RFC will receive feedback from the larger community, and the author should be prepared to revise it. Updates may be made via pull request, and those changes will be merged as long as the process is followed.

    When you believe that the RFC is mature enough (feedback is somewhat resolved, consensus is emerging, and implementation against it makes sense), submit a PR that changes the status to ACCEPTED. The status change PR will remain open until the maintainers agree on the status change.

    NOTE: contributors who used the Indy HIPE process prior to May 2019 should see the acceptance process substantially simplified under this approach. The bar for acceptance is not perfect consensus and all issues resolved; it's just general agreement that a doc is \"close enough\" that it makes sense to put it on a standards track where it can be improved as implementation teaches us what to tweak.

    "},{"location":"contributing/#how-to-get-an-rfc-adopted","title":"How to get an RFC adopted","text":"

    An accepted RFC is a standards-track document. It becomes an acknowledged standard when there is evidence that the community is deriving meaningful value from it. So:

    When you believe an RFC is a de facto standard, raise a PR that changes the status to ADOPTED. If the community is friendly to the idea, the doc will enter a two-week \"Final Comment Period\" (FCP), after which there will be a vote on disposition.

    "},{"location":"contributing/#intellectual-property","title":"Intellectual Property","text":"

    This repository is licensed under an Apache 2 License. It is protected by a Developer Certificate of Origin on every commit. This means that any contributions you make must be licensed in an Apache-2-compatible way, and must be free from patent encumbrances or additional terms and conditions. By raising a PR, you certify that this is the case for your contribution.

    "},{"location":"contributing/#signing-off-commits-dco","title":"Signing off commits (DCO)","text":"

    If you are here because you forgot to sign off your commits, fear not. Check out how to sign off previous commits

    We use developer certificate of origin (DCO) in all Hyperledger repositories, so to get your pull requests accepted, you must certify your commits by signing off on each commit.

    "},{"location":"contributing/#signing-off-your-current-commit","title":"Signing off your current commit","text":"

    The -s flag signs off the commit message with your name and email.

    "},{"location":"contributing/#how-to-sign-off-previous-commits","title":"How to Sign Off Previous Commits","text":"
    1. Use $ git log to see which commits need to be signed off. Any commits missing a line with Signed-off-by: Example Author <author.email@example.com> need to be re-signed.
    2. Go into interactive rebase mode using $ git rebase -i HEAD~X where X is the number of commits up to the most current commit you would like to see.
    3. You will see a list of the commits in a text file. On the line after each commit you need to sign off, add exec git commit --amend --no-edit -s with the lowercase -s adding a text signature in the commit body. Example that signs both commits:
    pick 12345 commit message\nexec git commit --amend --no-edit -s\npick 67890 commit message\nexec git commit --amend --no-edit -s\n
    4. If you need to re-sign several previous commits at once, find the earliest commit missing the sign-off line using $ git log and use the HASH of the commit before it in this command:

       $ git rebase --exec 'git commit --amend --no-edit -n -s' -i HASH.\n
      This will sign off every commit from most recent to right before the HASH.

    5. You will probably need to do a force push ($ git push -f) if you had previously pushed unsigned commits to remote.

    "},{"location":"github-issues/","title":"Submitting Issues","text":""},{"location":"github-issues/#github-issues","title":"Github Issues","text":"

    RFCs that are not on the brink of changing status are discussed through Github Issues. We generally use Issues to discuss changes that are controversial, and PRs to propose changes that are vetted. This keeps the PR backlog small.

    Any community member can open an issue; specify the RFC number in the issue title so the relationship is clear. For example, to open an issue on RFC 0025, an appropriate title for the issue might be:

    RFC 0025: Need better diagram in Reference section\n

    When the community feels that it's reasonable to suggest a formal status change for an RFC, best efforts are made to resolve all open issues against it. Then a PR is raised against the RFC's main README.md, where the status field in the header is updated. Discussion about the status change typically takes place in the comment stream for the PR, with issues being reserved for non-status-change topics.

    "},{"location":"tags/","title":"Tags on RFCs","text":"

    We categorize RFCs with tags to enrich searches. The meaning of tags is given below.

    "},{"location":"tags/#protocol","title":"protocol","text":"

    Defines one or more protocols that explain how messages are passed to accomplish a stateful interaction.

    "},{"location":"tags/#decorator","title":"decorator","text":"

    Defines one or more decorators that act as mixins to DIDComm messages. Decorators can be added to many different message types without explicitly declaring them in message schemas.

    "},{"location":"tags/#feature","title":"feature","text":"

    Defines a specific, concrete feature that agents might support.

    "},{"location":"tags/#concept","title":"concept","text":"

    Defines a general aspect of the Aries mental model, or a pattern that manifests in many different features.

    "},{"location":"tags/#community-update","title":"community-update","text":"

    An RFC that tracks a community-coordinated update, as described in RFC 0345. Such updates enable independently deployed, interoperable agents to remain interoperable throughout the transition.

    "},{"location":"tags/#credentials","title":"credentials","text":"

    Relates to verifiable credentials.

    "},{"location":"tags/#rich-schemas","title":"rich-schemas","text":"

    Relates to next-generation schemas, such as those used by https://schema.org, as used in verifiable credentials.

    "},{"location":"tags/#test-anomaly","title":"test-anomaly","text":"

    Violates some aspect of our policy on writing tests for protocols before allowing their status to progress beyond DEMONSTRATED. RFCs should only carry this tag temporarily, to grandfather something where test improvements are happening in the background. When this tag is applied to an RFC, unit tests run by our CI/CD pipeline will emit a warning rather than an error about missing tests, IFF each implementation that lacks tests formats its notes about test results like this:

    name of impl | [MISSING test results](/tags.md#test-anomaly)\n
    "},{"location":"aip2/0003-protocols/","title":"Aries RFC 0003: Protocols","text":""},{"location":"aip2/0003-protocols/#summary","title":"Summary","text":"

    Defines peer-to-peer application-level protocols in the context of interactions among agent-like things, and shows how they should be designed and documented.

    "},{"location":"aip2/0003-protocols/#table-of-contents","title":"Table of Contents","text":""},{"location":"aip2/0003-protocols/#motivation","title":"Motivation","text":"

    APIs in the style of Swagger are familiar to nearly all developers, and it's a common assumption that we should use them to solve the problems at hand in the decentralized identity space. However, to truly decentralize, we must think about interactions at a higher level of generalization. Protocols can model all APIs, but not the other way around. This matters. We need to explain why.

    We also need to show how a protocol is defined, so the analog to defining a Swagger API is demystified.

    "},{"location":"aip2/0003-protocols/#tutorial","title":"Tutorial","text":""},{"location":"aip2/0003-protocols/#what-is-a-protocol","title":"What is a Protocol?","text":"

    A protocol is a recipe for a stateful interaction. Protocols are all around us, and are so ordinary that we take them for granted. Each of the following interactions is stateful, and has conventions that constitute a sort of \"recipe\":

    In the context of decentralized identity, protocols manifest at many different levels of the stack: at the lowest levels of networking, in cryptographic algorithms like Diffie Hellman, in the management of DIDs, in the conventions of DIDComm, and in higher-level interactions that solve problems for people with only minimal interest in the technology they're using. However, this RFC focuses on the last of these layers, where use cases and personas are transformed into features with obvious social value like:

    When \"protocol\" is used in an Aries context without any qualifying adjective, it is referencing a recipe for a high-level interaction like these. Lower-level protocols are usually described more specifically and possibly with other verbiage: \"cryptographic algorithms\", \"DID management procedures\", \"DIDComm conventions\", \"transports\", and so forth. This helps us focus \"protocol\" on the place where application developers that consume Aries do most of the work that creates value.

    "},{"location":"aip2/0003-protocols/#relationship-to-apis","title":"Relationship to APIs","text":"

    The familiar world of web APIs is a world of protocols, but it comes with constraints antithetical to decentralized identity:

    Protocols impose none of these constraints. Web APIs can easily be modeled as protocols where the transport is HTTP and the payload is a message, and the Aries community actively does this. We are not opposed to APIs. We just want to describe and standardize the higher level abstraction so we don't have a web solution and a Bluetooth solution that diverge for no good reason.

    "},{"location":"aip2/0003-protocols/#decentralized","title":"Decentralized","text":"

    As used in the agent/DIDComm world, protocols are decentralized. This means there is not an overseer for the protocol, guaranteeing information flow, enforcing behaviors, and ensuring a coherent view. It is a subtle but important divergence from API-centric approaches, where a server holds state against which all other parties (clients) operate. Instead, all parties are peers, and they interact by mutual consent and with a (hopefully) shared understanding of the rules and goals. Protocols are like a dance\u2014not one that's choreographed or directed, but one where the parties make dynamic decisions and react to them.

    "},{"location":"aip2/0003-protocols/#types-of-protocols","title":"Types of Protocols","text":"

    The simplest protocol style is notification. This style involves two parties, but it is one-way: the notifier emits a message, and the protocol ends when the notified receives it. The basic message protocol uses this style.

    Slightly more complex is the request-response protocol style. This style involves two parties, with the requester making the first move, and the responder completing the interaction. The Discover Features Protocol uses this style. Note that with protocols as Aries models them (and unlike an HTTP request), the request-response messages are asynchronous.

    However, more complex protocols exist. The Introduce Protocol involves three parties, not two. The issue credential protocol includes up to six message types (including ack and problem_report), two of which (proposal and offer) can be used to interactively negotiate details of the elements of the subsequent messages in the protocol.

    See this subsection for definitions of the terms \"role\", \"participant\", and \"party\".

    "},{"location":"aip2/0003-protocols/#agent-design","title":"Agent Design","text":"

    Protocols are the key unit of interoperable extensibility in agents and agent-like things. To add a new interoperable feature to an agent, give it the ability to handle a new protocol.

    When agents receive messages, they map the messages to a protocol handler and possibly to an interaction state that was previously persisted. This is the analog to routes, route handlers, and sessions in web APIs, and could actually be implemented as such if the transport for the protocol is HTTP. The protocol handler is code that knows the rules of a particular protocol; the interaction state tracks progress through an interaction. For more information, see the agents explainer\u2014RFC 0004 and the DIDComm explainer\u2014RFC 0005.

    "},{"location":"aip2/0003-protocols/#composable","title":"Composable","text":"

    Protocols are composable--meaning that you can build complex ones from simple ones. The protocol for asking someone to repeat their last sentence can be part of the protocol for ordering food at a restaurant. It's common to ask a potential driver's license holder to prove their street address before issuing the license. In protocol terms, this is nicely modeled as the present proof being invoked in the middle of an issue credential protocol.

    When we run one protocol inside another, we call the inner protocol a subprotocol, and the outer protocol a superprotocol. A given protocol may be a subprotocol in some contexts, and a standalone protocol in others. In some contexts, a protocol may be a subprotocol from one perspective, and a superprotocol from another (as when protocols are nested at least 3 deep).

    Commonly, protocols wait for subprotocols to complete, and then they continue. A good example of this is mentioned above\u2014starting an issue credential flow, but requiring the potential issuer and/or the potential holder to prove something to one another before completing the process.

    In other cases, a protocol B is not \"contained\" inside protocol A. Rather, A triggers B, then continues in parallel, without waiting for B to complete. This coprotocol relationship is analogous to the relationship between coroutines in computer science. In the Introduce Protocol, the final step is to begin a connection protocol between the two introducees-- but the introduction coprotocol completes when the connect coprotocol starts, not when it completes.

    "},{"location":"aip2/0003-protocols/#message-types","title":"Message Types","text":"

    A protocol includes a number of message types that enable the execution of an instance of a protocol. Collectively, the message types of a protocol become the skeleton of its interface. Most of the message types are defined with the protocol, but several key message types, notably acks and problem reports are defined in separate RFCs and adopted into a protocol. This ensures that the structure of such messages is standardized, but used in the context of the protocol adopting the message types.

    "},{"location":"aip2/0003-protocols/#handling-unrecognized-items-in-messages","title":"Handling Unrecognized Items in Messages","text":"

    In the semver section of this document there is discussion of the handling of mismatches in minor versions supported and received. Notably, a recipient that supports a given minor version of a protocol less than that of a received protocol message should ignore any unrecognized fields in the message. Such handling of unrecognized data items applies more generally than just minor version mismatches. A recipient of a message from a supported major version of a protocol should ignore any unrecognized items in a received message, even if the supported and received minor versions are the same. When items from the message are ignored, the recipient may want to send a warning problem-report message with code fields-ignored.

    "},{"location":"aip2/0003-protocols/#ingredients","title":"Ingredients","text":"

    A protocol has the following ingredients:

    "},{"location":"aip2/0003-protocols/#how-to-define-a-protocol","title":"How to Define a Protocol","text":"

    To define a protocol, write an RFC. Specific instructions for protocol RFCs, and a discussion about the theory behind detailed protocol concepts, are given in the instructions for protocol RFCs and in the protocol RFC template.

    The tictactoe protocol is attached to this RFC as an example.

    "},{"location":"aip2/0003-protocols/#security-considerations","title":"Security Considerations","text":""},{"location":"aip2/0003-protocols/#replay-attacks","title":"Replay Attacks","text":"

    It should be noted that when defining a protocol that has domain specific requirements around preventing replay attacks, an @id property SHOULD be required. Given an @id field is most commonly set to be a UUID, it should provide randomness comparable to that of a nonce in preventing replay attacks. However, this means that care will be needed in processing of the @id field to make sure its value has not been used before. In some cases, nonces require being unpredictable as well. In this case, greater review should be taken as to how the @id field should be used in the domain specific protocol. In the event where the @id field is not adequate for preventing replay attacks, it's recommended that an additional nonce field be required by the domain specific protocol specification.
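    As an illustration of that idea, here is a minimal sketch (assuming an in-memory, single-process store, which a real deployment would replace with persistent storage) of rejecting replays by tracking previously seen @id values:

```python
class ReplayGuard:
    """Track @id values already processed, so a repeated @id
    (a possible replay) can be rejected."""

    def __init__(self):
        self._seen = set()

    def check(self, message):
        """Return True if the message's @id is fresh; False if the @id
        is missing or was seen before."""
        msg_id = message.get("@id")
        if msg_id is None or msg_id in self._seen:
            return False
        self._seen.add(msg_id)
        return True
```

    Note that this only guards uniqueness; if unpredictability is also required, a separate nonce field is still needed, as the text above says.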

    "},{"location":"aip2/0003-protocols/#reference","title":"Reference","text":""},{"location":"aip2/0003-protocols/#message-type-and-protocol-identifier-uris","title":"Message Type and Protocol Identifier URIs","text":"

    Message types and protocols are identified with URIs that match certain conventions.

    "},{"location":"aip2/0003-protocols/#mturi","title":"MTURI","text":"

    A message type URI (MTURI) identifies message types unambiguously. Standardizing its format is important because it is parsed by agents that will map messages to handlers--basically, code will look at this string and say, \"Do I have something that can handle this message type inside protocol X version Y?\"

    When this analysis happens, strings should be compared for byte-wise equality in all segments except version. This means that case, unicode normalization, and punctuation differences all matter. It is thus best practice to avoid protocol and message names that differ only in subtle, easy-to-mistake ways.

    Comparison of the version segment of an MTURI or PIURI should follow semver rules and is discussed in the semver section of this document.

    The URI MUST be composed as follows:

    message-type-uri  = doc-uri delim protocol-name\n    \"/\" protocol-version \"/\" message-type-name\ndelim             = \"?\" / \"/\" / \"&\" / \":\" / \";\" / \"=\"\nprotocol-name     = identifier\nprotocol-version  = semver\nmessage-type-name = identifier\nidentifier        = alpha *(*(alphanum / \"_\" / \"-\" / \".\") alphanum)\n

    It can be loosely matched and parsed with the following regex:

        (.*?)([a-z0-9._-]+)/(\\d[^/]*)/([a-z0-9._-]+)$\n

    A match will have capture groups of (1) = doc-uri, (2) = protocol-name, (3) = protocol-version, and (4) = message-type-name.
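    A sketch of applying this loose matcher in an agent's dispatch code (the didcomm.org doc-uri and tictactoe protocol name below are purely illustrative):

```python
import re

# Loose matcher from the text; capture groups are (1) doc-uri,
# (2) protocol-name, (3) protocol-version, (4) message-type-name.
MTURI_RE = re.compile(r"(.*?)([a-z0-9._-]+)/(\d[^/]*)/([a-z0-9._-]+)$")

def parse_mturi(uri):
    """Split an MTURI into its four segments; return None on no match."""
    m = MTURI_RE.match(uri)
    if m is None:
        return None
    doc_uri, name, version, msg_type = m.groups()
    return {"doc-uri": doc_uri, "protocol-name": name,
            "protocol-version": version, "message-type-name": msg_type}

# Hypothetical MTURI, for illustration only:
parsed = parse_mturi("https://didcomm.org/tictactoe/1.0/move")
```

    Remember that this regex is only a loose parse; as noted above, segment comparison (other than version) must be byte-wise.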

    The goals of this URI are, in descending priority:

    The doc-uri portion is any URI that exposes documentation about protocols. A developer should be able to browse to that URI and use human intelligence to look up the named and versioned protocol. Optionally and preferably, the full URI may produce a page of documentation about the specific message type, with no human mediation involved.

    "},{"location":"aip2/0003-protocols/#piuri","title":"PIURI","text":"

    A shorter URI that follows the same conventions but lacks the message-type-name portion is called a protocol identifier URI (PIURI).

    protocol-identifier-uri  = doc-uri delim protocol-name\n    \"/\" semver\n

    Its loose matcher regex is:

        (.*?)([a-z0-9._-]+)/(\\d[^/]*)/?$\n
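    A minimal sketch of applying this matcher (the URI value is illustrative, not normative):

```python
import re

# Loose PIURI matcher from the text; the trailing slash is optional.
PIURI_RE = re.compile(r"(.*?)([a-z0-9._-]+)/(\d[^/]*)/?$")

# Hypothetical PIURI, for illustration only:
m = PIURI_RE.match("https://didcomm.org/tictactoe/1.0")
doc_uri, protocol_name, version = m.groups()
```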

    The following are examples of valid MTURIs and PIURIs:

    "},{"location":"aip2/0003-protocols/#semver-rules-for-protocols","title":"Semver Rules for Protocols","text":"

    Semver rules apply to protocols, with the version of a protocol expressed in the semver portion of its identifying URI. The \"ingredients\" of a protocol combine to form a public API in the semver sense. Core Aries protocols specify only major and minor elements in a version; the patch component is not used. Non-core protocols may choose to use the patch element.

    The major and minor versions of protocols match semver semantics:

    Within a given major version of a protocol, an agent should:

    This leads to the following received message handling rules:

    Note: The deprecation of the \"warning\" problem-reports in cases of minor version mismatches is because the recipient of the response can detect the mismatch by looking at the PIURI, making the \"warning\" unnecessary, and because the problem-report message may be received after (and definitely at a different time than) the response message, and so the warning is of very little value to the recipient. Recipients should still be aware that minor version mismatch warning problem-report messages may be received and handle them appropriately, likely by quietly ignoring them.

    As documented in the semver documentation, these requirements are not applied when major version 0 is used. In that case, minor version increments are considered breaking.

    Agents may support multiple major versions and select which major version to use when initiating an instance of the protocol.

    An agent should reject messages from unsupported protocols, or from unsupported major versions of supported protocols, with a problem-report message with code version-not-supported. Agents that receive such a problem-report message may use the discover features protocol to resolve the mismatch.

    "},{"location":"aip2/0003-protocols/#semver-examples","title":"Semver Examples","text":""},{"location":"aip2/0003-protocols/#initiator","title":"Initiator","text":"

    Unless Alice's agent (the initiator of a protocol) knows from prior history that it should do something different, it should begin a protocol using the highest version number that it supports. For example, if A.1 supports versions 2.0 through 2.2 of protocol X, it should use 2.2 as the version in the message type of its first message.
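    A sketch of that selection rule, assuming supported versions are held as simple \"major.minor\" strings (the helper name is hypothetical):

```python
def pick_initiator_version(supported):
    """Given 'major.minor' strings, return the highest version, comparing
    numerically so that, e.g., 1.10 sorts above 1.9."""
    return max(supported, key=lambda v: tuple(int(p) for p in v.split(".")))

# A.1 supports 2.0 through 2.2 of protocol X, so it initiates with 2.2:
chosen = pick_initiator_version(["2.0", "2.1", "2.2"])
```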

    "},{"location":"aip2/0003-protocols/#recipient-rules","title":"Recipient Rules","text":"

    Agents for Bob (the recipient) should reject messages from protocols with major versions different from those they support. For major version 0, they should also reject protocols with minor versions they don't support, since semver stipulates that features are not stable before 1.0. For example, if B.1 supports only versions 2.0 and 2.1 of protocol X, it should reject any messages from version 3 or version 1 or 0. In most cases, rejecting a message means sending a problem-report that the message is unsupported. The code field in such messages should be version-not-supported. Agents that receive such a problem-report can then use the Discover Features Protocol to resolve version problems.

    Recipient agents should accept messages that differ from their own supported version of a protocol only in the patch, prerelease, and/or build fields, whether these differences make the message earlier or later than the version the recipient prefers. These messages will be robustly compatible.

    For major version >= 1, recipients should also accept messages that differ only in that the message's minor version is earlier than their own preference. In such a case, the recipient should degrade gracefully to use the earlier version of the protocol. If the earlier version lacks important features, the recipient may optionally choose to send, in addition to a response, a problem-report with code version-with-degraded-features.

    If a recipient supports protocol X version 1.0, it should tentatively accept messages with later minor versions (e.g., 1.2). Message types that differ only in minor version are guaranteed to be compatible for the feature set of the earlier version. That is, a 1.0-capable agent can support 1.0 features using a 1.2 message, though of course it will lose any features that 1.2 added. Thus, accepting such a message could have two possible outcomes:

    1. The message at version 1.2 might look and behave exactly like it did at version 1.0, in which case the message will process without any trouble.

    2. The message might contain some fields that are unrecognized and need to be ignored.

    In case 2, it is best practice for the recipient to send a problem-report that is a warning, not an error, announcing that some fields could not be processed (code = fields-ignored-due-to-version-mismatch). Such a message is in addition to any response that the protocol demands of the recipient.

    If the recipient of a protocol's initial message generates a response, the response should use the latest major.minor protocol version that both parties support and know about. Generally, all messages after the first use only that major.minor version.
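    The recipient rules above can be sketched as a small decision function. This is a simplification that omits major version 0 semantics and patch/prerelease handling, and the return labels are hypothetical:

```python
def recipient_disposition(supported, received):
    """Decide how a recipient handles a received protocol version.

    `supported` is the (major, minor) pair the recipient prefers;
    `received` is the (major, minor) pair parsed from the message's PIURI.
    Assumes major version >= 1 on both sides.
    """
    if received[0] != supported[0]:
        return "reject"            # send problem-report: version-not-supported
    if received[1] < supported[1]:
        return "accept-degraded"   # degrade gracefully to the earlier minor
    if received[1] > supported[1]:
        return "accept-ignore-unknown"  # ignore unrecognized fields
    return "accept"
```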

    "},{"location":"aip2/0003-protocols/#state-details-and-state-machines","title":"State Details and State Machines","text":"

    While some protocols have only one sequence of states to manage, in most, different roles perceive the interaction differently. The sequence of states for each role needs to be described with care in the RFC.

    "},{"location":"aip2/0003-protocols/#state-machines","title":"State Machines","text":"

    By convention, protocol state and sequence rules are described using the concept of state machines, and we encourage developers who implement protocols to build them that way.

    Among other benefits, this helps with error handling: when one agent sends a problem-report message to another, the message can make it crystal clear which state it has fallen back to as a result of the error.

    Many developers will have encountered a formal definition of state machines as they wrote parsers or worked on other highly demanding tasks, and may worry that state machines are heavy and intimidating. But as they are used in Aries protocols, state machines are straightforward and elegant. They cleanly encapsulate logic that would otherwise be a bunch of conditionals scattered throughout agent code. The tictactoe example protocol includes a complete state machine in less than 50 lines of python code, with tests.

    For an extended discussion of how state machines can be used, including in nested protocols, and with hooks that let custom processing happen at each point in a flow, see https://github.com/dhh1128/distributed-state-machine.

    "},{"location":"aip2/0003-protocols/#processing-points","title":"Processing Points","text":"

    A protocol definition describes key points in the flow where business logic can attach. Some of these processing points are obvious, because the protocol makes calls for decisions to be made. Others are implicit. Some examples include:

    "},{"location":"aip2/0003-protocols/#roles-participants-parties-and-controllers","title":"Roles, Participants, Parties, and Controllers","text":""},{"location":"aip2/0003-protocols/#roles","title":"Roles","text":"

    The roles in a protocol are the perspectives (responsibilities, privileges) that parties take in an interaction.

    This perspective is manifested in three general ways:

    Like parties, roles are normally known at the start of the protocol but this is not a requirement.

    In an auction protocol, there are only two roles\u2014auctioneer and bidder\u2014even though there may be many parties involved.

    "},{"location":"aip2/0003-protocols/#participants","title":"Participants","text":"

    The participants in a protocol are the agents that send and/or receive plaintext application-level messages that embody the protocol's interaction. Alice, Bob, and Carol may each have a cloud agent, a laptop, and a phone; if they engage in an introduction protocol using phones, then the agents on their phones are the participants. If the phones talk directly over Bluetooth, this is particularly clear--but even if the phones leverage push notifications and HTTP such that cloud agents help with routing, only the phone agents are participants, because only they maintain state for the interaction underway. (The cloud agents would be facilitators, and the laptops would be bystanders). When a protocol is complete, the participant agents know about the outcome; they may need to synchronize or replicate their state before other agents of the parties are aware.

    "},{"location":"aip2/0003-protocols/#parties","title":"Parties","text":"

    The parties to a protocol are the entities directly responsible for achieving the protocol's goals. When a protocol is high-level, parties are typically people or organizations; as protocols become lower-level, parties may be specific agents tasked with detail work through delegation.

    Imagine a situation where Alice wants a vacation. She engages with a travel agent named Bob. Together, they begin an \"arrange a vacation\" protocol. Alice is responsible for expressing her parameters and proving her willingness to pay; Bob is responsible for running a bunch of subprotocols to work out the details. Alice and Bob--not software agents they use--are parties to this high-level protocol, since they share responsibility for its goals.

    As soon as Alice has provided enough direction and hangs up the phone, Bob begins a sub-protocol with a hotel to book a room for Alice. This sub-protocol has related but different goals--it is about booking a particular hotel room, not about the vacation as a whole. We can see the difference when we consider that Bob could abandon the booking and choose a different hotel entirely, without affecting the overarching \"arrange a vacation\" protocol.

    With the change in goal, the parties have now changed, too. Bob and a hotel concierge are the ones responsible for making the \"book a hotel room\" protocol progress. Alice is an approver and indirect stakeholder, but she is not doing the work. (In RACI terms, Alice is an \"accountable\" or \"approving\" entity, but only Bob and the concierge are \"responsible\" parties.)

    Now, as part of the hotel reservation, Bob tells the concierge that the guest would like access to a waverunner to play in the ocean on day 2. The concierge engages in a sub-sub-protocol to reserve the waverunner. The goal of this sub-sub-protocol is to reserve the equipment, not to book a hotel or arrange a vacation. The parties to this sub-sub-protocol are the concierge and the person or automated system that manages waverunners.

    Often, parties are known at the start of a protocol; however, that is not a requirement. Some protocols might commence with some parties not yet known or assigned.

    For many protocols, there are only two parties, and they are in a pairwise relationship. Other protocols are more complex. An introduction involves three parties; an auction may involve many.

    Normally, the parties that are involved in a protocol also participate in the interaction, but this is not always the case. Consider a gossip protocol: two parties may be talking about a third party. In this case, the third party would not even know that the protocol was happening and would definitely not participate.

    "},{"location":"aip2/0003-protocols/#controllers","title":"Controllers","text":"

    The controllers in a protocol are entities that make decisions. They may or may not be direct parties.

    Imagine a remote chess game between Bob and Carol, conducted with software agents. The chess protocol isn't technically about how to select a wise chess move; it's about communicating the moves so parties achieve the shared goal of running a game to completion. Yet choices about moves are clearly made as the protocol unfolds. These choices are made by controllers--Bob and Carol--while the agents responsible for the work of moving the game forward wait with the protocol suspended.

    In this case, Bob and Carol could be analyzed as parties to the protocol, as well as controllers. But in other cases, the concepts are distinct. For example, in a protocol to issue credentials, the issuing institution might use an AI and/or business automation as a controller.

    "},{"location":"aip2/0003-protocols/#instructions-for-protocol-rfcs","title":"Instructions for Protocol RFCs","text":"

    A protocol RFC conforms to general RFC patterns, but includes some specific substructure.

    Please see the special protocol RFC template for details.

    "},{"location":"aip2/0003-protocols/#drawbacks","title":"Drawbacks","text":"

    This RFC creates some formalism around defining protocols. It doesn't go nearly as far as SOAP or CORBA/COM did, but it is slightly more demanding of a protocol author than the familiar world of RESTful Swagger/OpenAPI.

    The extra complexity is justified by the greater demands that agent-to-agent communications place on the protocol definition. See notes in Prior Art section for details.

    "},{"location":"aip2/0003-protocols/#rationale-and-alternatives","title":"Rationale and alternatives","text":"

    Some of the simplest DIDComm protocols could be specified in a Swagger/OpenAPI style. This would give some nice tooling. However, not all fit into that mold. It may be desirable to create conversion tools that allow Swagger interop.

    "},{"location":"aip2/0003-protocols/#prior-art","title":"Prior art","text":""},{"location":"aip2/0003-protocols/#bpmn","title":"BPMN","text":"

    BPMN (Business Process Model and Notation) is a graphical language for modeling flows of all types (plus things less like our protocols as well). BPMN is a mature standard sponsored by OMG (Object Management Group). It has a nice tool ecosystem (such as this). It also has an XML file format, so the visual diagrams have a two-way transformation to and from formal written language. And it has a code generation mode, where BPMN can be used to drive executable behavior if diagrams are sufficiently detailed and sufficiently standard. (Since BPMN supports various extensions and is often used at various levels of formality, execution is not its most common application.)

    BPMN began with a focus on centralized processes (those driven by a business entity), with diagrams organized around the goal of the point-of-view entity and what they experience in the interaction. This is somewhat different from a DIDComm protocol where any given entity may experience the goal and the scope of interaction differently; the state machine for a home inspector in the \"buy a home\" protocol is quite different, and somewhat separable, from the state machine of the buyer, and that of the title insurance company.

    BPMN 2.0 introduced the notion of a choreography, which is much closer to the concept of an A2A protocol, and which has quite an elegant and intuitive visual representation. However, even a BPMN choreography doesn't have a way to discuss interactions with decorators, adoption of generic messages, and other A2A-specific concerns. Thus, we may lean on BPMN for some diagramming tasks, but it is not a substitute for the RFC definition procedure described here.

    "},{"location":"aip2/0003-protocols/#wsdl","title":"WSDL","text":"

    WSDL (Web Services Description Language) is a web-centric evolution of earlier, RPC-style interface definition languages like IDL in all its varieties and CORBA. These technologies describe a called interface, but they don't describe the caller, and they lack a formalism for capturing state changes, especially by the caller. They are also out of favor in the programmer community at present, as being too heavy, too fragile, or poorly supported by current tools.

    "},{"location":"aip2/0003-protocols/#swagger-openapi","title":"Swagger / OpenAPI","text":"

    Swagger / OpenAPI overlaps with some of the concerns of protocol definition in agent-to-agent interactions. We like the tools and the convenience of the paradigm offered by OpenAPI, but where these two do not overlap, we have impedance.

    Agent-to-agent protocols must support more than 2 roles, or two roles that are peers, whereas RESTful web services assume just client and server--and only the server has a documented API.

    Agent-to-agent protocols are fundamentally asynchronous, whereas RESTful web services mostly assume synchronous request~response.

    Agent-to-agent protocols have complex considerations for diffuse trust, whereas RESTful web services centralize trust in the web server.

    Agent-to-agent protocols need to support transports beyond HTTP, whereas RESTful web services do not.

    Agent-to-agent protocols are nestable, while RESTful web services don't provide any special support for that construct.

    "},{"location":"aip2/0003-protocols/#other","title":"Other","text":""},{"location":"aip2/0003-protocols/#unresolved-questions","title":"Unresolved questions","text":""},{"location":"aip2/0003-protocols/#implementations","title":"Implementations","text":"

    The following lists the implementations (if any) of this RFC. Please do a pull request to add your implementation. If the implementation is open source, include a link to the repo or to the implementation within the repo. Please be consistent in the \"Name\" field so that a mechanical processing of the RFCs can generate a list of all RFCs supported by an Aries implementation.

    Name / Link Implementation Notes Indy Cloud Agent - Python several protocols, circa Feb 2019 Aries Framework - .NET several protocols, circa Feb 2019 Streetcred.id several protocols, circa Feb 2019 Aries Cloud Agent - Python numerous protocols plus extension mechanism for pluggable protocols Aries Static Agent - Python 2 or 3 protocols Aries Framework - Go DID Exchange Connect.Me mature but proprietary protocols; community protocols in process Verity mature but proprietary protocols; community protocols in process Aries Protocol Test Suite 2 or 3 core protocols; active work to implement all that are ACCEPTED, since this tests conformance of other agents Pico Labs implemented protocols: connections, trust_ping, basicmessage, routing"},{"location":"aip2/0003-protocols/roles-participants-etc/","title":"Roles participants etc","text":""},{"location":"aip2/0003-protocols/roles-participants-etc/#roles-participants-parties-and-controllers","title":"Roles, Participants, Parties, and Controllers","text":""},{"location":"aip2/0003-protocols/roles-participants-etc/#roles","title":"Roles","text":"

    The roles in a protocol are the perspectives (responsibilities, privileges) that parties take in an interaction.

    This perspective is manifested in three general ways:

    Like parties, roles are normally known at the start of the protocol but this is not a requirement.

    In an auction protocol, there are only two roles\u2014auctioneer and bidder\u2014even though there may be many parties involved.

    "},{"location":"aip2/0003-protocols/roles-participants-etc/#participants","title":"Participants","text":"

    The participants in a protocol are the agents that send and/or receive plaintext application-level messages that embody the protocol's interaction. Alice, Bob, and Carol may each have a cloud agent, a laptop, and a phone; if they engage in an introduction protocol using phones, then the agents on their phones are the participants. If the phones talk directly over Bluetooth, this is particularly clear--but even if the phones leverage push notifications and HTTP such that cloud agents help with routing, only the phone agents are participants, because only they maintain state for the interaction underway. (The cloud agents would be facilitators, and the laptops would be bystanders). When a protocol is complete, the participant agents know about the outcome; they may need to synchronize or replicate their state before other agents of the parties are aware.

    "},{"location":"aip2/0003-protocols/roles-participants-etc/#parties","title":"Parties","text":"

    The parties to a protocol are the entities directly responsible for achieving the protocol's goals. When a protocol is high-level, parties are typically people or organizations; as protocols become lower-level, parties may be specific agents tasked with detail work through delegation.

    Imagine a situation where Alice wants a vacation. She engages with a travel agent named Bob. Together, they begin an \"arrange a vacation\" protocol. Alice is responsible for expressing her parameters and proving her willingness to pay; Bob is responsible for running a bunch of subprotocols to work out the details. Alice and Bob--not software agents they use--are parties to this high-level protocol, since they share responsibility for its goals.

    As soon as Alice has provided enough direction and hangs up the phone, Bob begins a sub-protocol with a hotel to book a room for Alice. This sub-protocol has related but different goals--it is about booking a particular hotel room, not about the vacation as a whole. We can see the difference when we consider that Bob could abandon the booking and choose a different hotel entirely, without affecting the overarching \"arrange a vacation\" protocol.

    With the change in goal, the parties have now changed, too. Bob and a hotel concierge are the ones responsible for making the \"book a hotel room\" protocol progress. Alice is an approver and indirect stakeholder, but she is not doing the work. (In RACI terms, Alice is an \"accountable\" or \"approving\" entity, but only Bob and the concierge are \"responsible\" parties.)

    Now, as part of the hotel reservation, Bob tells the concierge that the guest would like access to a waverunner to play in the ocean on day 2. The concierge engages in a sub-sub-protocol to reserve the waverunner. The goal of this sub-sub-protocol is to reserve the equipment, not to book a hotel or arrange a vacation. The parties to this sub-sub-protocol are the concierge and the person or automated system that manages waverunners.

    Often, parties are known at the start of a protocol; however, that is not a requirement. Some protocols might commence with some parties not yet known or assigned.

    For many protocols, there are only two parties, and they are in a pairwise relationship. Other protocols are more complex. An introduction involves three parties; an auction may involve many.

    Normally, the parties that are involved in a protocol also participate in the interaction, but this is not always the case. Consider a gossip protocol: two parties may be talking about a third party. In this case, the third party would not even know that the protocol was happening and would definitely not participate.

    "},{"location":"aip2/0003-protocols/roles-participants-etc/#controllers","title":"Controllers","text":"

    The controllers in a protocol are entities that make decisions. They may or may not be direct parties.

    Imagine a remote chess game between Bob and Carol, conducted with software agents. The chess protocol isn't technically about how to select a wise chess move; it's about communicating the moves so parties achieve the shared goal of running a game to completion. Yet choices about moves are clearly made as the protocol unfolds. These choices are made by controllers--Bob and Carol--while the agents responsible for the work of moving the game forward wait with the protocol suspended.

    In this case, Bob and Carol could be analyzed as parties to the protocol, as well as controllers. But in other cases, the concepts are distinct. For example, in a protocol to issue credentials, the issuing institution might use an AI and/or business automation as a controller.

    "},{"location":"aip2/0003-protocols/tictactoe/","title":"Tic Tac Toe Protocol 1.0","text":""},{"location":"aip2/0003-protocols/tictactoe/#summary","title":"Summary","text":"

    Describes a simple protocol, already familiar to most developers, as a way to demonstrate how all protocols should be documented.

    "},{"location":"aip2/0003-protocols/tictactoe/#motivation","title":"Motivation","text":"

    Playing tic-tac-toe is a good way to test whether agents are working properly, since it requires two parties to take turns and to communicate reliably about state. However, it is also pretty simple, and it has a low bar for trust (it's not dangerous to play tic-tac-toe with a malicious stranger). Thus, we expect agent tic-tac-toe to be a good way to test basic plumbing and to identify functional gaps. The game also provides a way of testing interactions with the human owners of agents, or of hooking up an agent AI.

    "},{"location":"aip2/0003-protocols/tictactoe/#tutorial","title":"Tutorial","text":"

    Tic-tac-toe is a simple game where players take turns placing Xs and Os in a 3x3 grid, attempting to capture 3 cells of the grid in a straight line.

    "},{"location":"aip2/0003-protocols/tictactoe/#name-and-version","title":"Name and Version","text":"

    This defines the tictactoe protocol, version 1.x, as identified by the following PIURI:

    did:sov:SLfEi9esrjzybysFxQZbfq;spec/tictactoe/1.0\n
    "},{"location":"aip2/0003-protocols/tictactoe/#key-concepts","title":"Key Concepts","text":"

    A tic-tac-toe game is an interaction where 2 parties take turns to make up to 9 moves. It starts when either party proposes the game, and ends when one of the parties wins, or when all cells in the grid are occupied but nobody has won (a draw).
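    The win/draw conditions above are easy to express in code. The following is an illustrative sketch only (not the RFC's game.py), using the grid labels defined later in this document (columns A-C, rows 1-3) and the "X:B2"-style move strings:

    ```python
    # All 8 lines that constitute a win on a 3x3 grid.
    LINES = [
        ["A1", "B1", "C1"], ["A2", "B2", "C2"], ["A3", "B3", "C3"],  # rows
        ["A1", "A2", "A3"], ["B1", "B2", "B3"], ["C1", "C2", "C3"],  # columns
        ["A1", "B2", "C3"], ["A3", "B2", "C1"],                      # diagonals
    ]

    def outcome(moves):
        """moves: list of strings like 'X:B2'. Returns 'X' or 'O' for a win,
        'none' for a draw, or None if the game is still in progress."""
        board = {}
        for m in moves:
            mark, cell = m.upper().split(":")
            board[cell] = mark
        for line in LINES:
            marks = {board.get(c) for c in line}
            if len(marks) == 1 and None not in marks:
                return marks.pop()  # the single mark filling the line
        return "none" if len(board) == 9 else None
    ```

    Note that, as the Messages section later points out, the order of moves is not significant: the board state is fully determined by the set of moves.
    
    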

    Note: Optionally, a Tic-Tac-Toe game can be preceded by a Coin Flip Protocol to decide who goes first. This is not a high-value enhancement, but we add it for illustration purposes. If used, the choice-id field in the initial propose message of the Coin Flip should have the value did:sov:SLfEi9esrjzybysFxQZbfq;spec/tictactoe/1.0/who-goes-first, and the caller-wins and flipper-wins fields should contain the DIDs of the two players.

    Illegal moves and moving out of turn are errors that trigger a complaint from the other player. However, they do not scuttle the interaction. A game can also be abandoned in an unfinished state by either player, for any reason. Games can last any amount of time.

    About the Key Concepts section: Here we describe the flow at a very\nhigh level. We identify preconditions, ways the protocol can start\nand end, and what can go wrong. We also talk about timing\nconstraints and other assumptions.\n
    "},{"location":"aip2/0003-protocols/tictactoe/#roles","title":"Roles","text":"

    There are two parties in a tic-tac-toe game, but only one role, player. One player places 'X' for the duration of a game; the other places 'O'. There are no special requirements about who can be a player. The parties do not need to be trusted or even known to one another, either at the outset or as the game proceeds. No prior setup is required, other than an ability to communicate.

    About the Roles section: Here we name the roles in the protocol,\nsay who and how many can play each role, and describe constraints.\nWe also explore qualifications for roles.\n
    "},{"location":"aip2/0003-protocols/tictactoe/#states","title":"States","text":"

    The states of each player in the protocol evolve according to the following state machine:

    When a player is in the my-move state, possible valid events include send move (the normal case), send outcome (if the player decides to abandon the game), and receive outcome (if the other player decides to abandon). A receive move event could conceivably occur, too-- but it would be an error on the part of the other player, and would trigger a problem-report message as described above, leaving the state unchanged.

    In the their-move state, send move is an impossible event for a properly behaving player. All 3 of the other events could occur, causing a state transition.

    In the wrap-up state, the game is over, but communication with the outcome message has not yet occurred. The logical flow is send outcome, whereupon the player transitions to the done state.

    About the States section: Here we explain which states exist for each\nrole. We also enumerate the events that can occur, including messages,\nerrors, or events triggered by surrounding context, and what should\nhappen to state as a result. In this protocol, we only have one role,\nand thus only one state machine matrix. But in many protocols, each\nrole may have a different state machine.\n
    "},{"location":"aip2/0003-protocols/tictactoe/#messages","title":"Messages","text":"

    All messages in this protocol are part of the \"tictactoe 1.0\" message family uniquely identified by this DID reference: did:sov:SLfEi9esrjzybysFxQZbfq;spec/tictactoe/1.0

    NOTE 1: All the messages defined in a protocol should follow DIDComm best practices as far as how they name fields and define their data types and semantics. NOTE 2 about the \"DID Reference\" URI that appears here: DIDs can be resolved to a DID doc that contains an endpoint, to which everything after a semicolon can be appended. Thus, if this DID is publicly registered and its DID doc gives an endpoint of http://example.com, this URI would mean that anyone can find a formal definition of the protocol at http://example.com/spec/tictactoe/1.0. It is also possible to use a traditional URI here, such as http://example.com/spec/tictactoe/1.0. If that sort of URI is used, it is best practice for it to reference immutable content, as with a link to specific commit on github: https://github.com/hyperledger/aries-rfcs/blob/ab7a04f/concepts/0003-protocols/tictactoe/README.md#messages"},{"location":"aip2/0003-protocols/tictactoe/#move-message","title":"move message","text":"

    The protocol begins when one party sends a move message to the other. It looks like this:

    @id is required here, as it establishes a message thread that will govern the rest of the game.

    me tells which mark (X or O) the sender is placing. It is required.

    moves is optional in the first message of the interaction. If missing or empty, the sender of the first message is inviting the recipient to make the first move. If it contains a move, the sender is moving first.

    Moves are strings like \"X:B2\" that match the regular expression (?i)[XO]:[A-C][1-3]. They identify a mark to be placed (\"X\" or \"O\") and a position in the 3x3 grid. The grid's columns and rows are numbered like familiar spreadsheets, with columns A, B, and C, and rows 1, 2, and 3.
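    Validating a move string against the regular expression quoted above is straightforward; for example, in python:

    ```python
    import re

    # The move-string pattern from this RFC: optional-case mark, colon,
    # column A-C, row 1-3.
    MOVE_RE = re.compile(r"(?i)[XO]:[A-C][1-3]")

    def is_valid_move(s):
        # fullmatch ensures the whole string is a move, not just a prefix.
        return MOVE_RE.fullmatch(s) is not None
    ```
    
    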

    comment is optional and probably not used much, but could be a way for players to razz one another or chat as they play. It follows the conventions of localized messages.

    Other decorators could be placed on tic-tac-toe messages, such as those to enable message timing to force players to make a move within a certain period of time.

    "},{"location":"aip2/0003-protocols/tictactoe/#subsequent-moves","title":"Subsequent Moves","text":"

    Once the initial move message has been sent, game play continues by each player taking turns sending responses, which are also move messages. With each new message the move array inside the message grows by one, ensuring that the players agree on the current accumulated state of the game. The me field is still required and must accurately reflect the role of the message sender; it thus alternates values between X and O.

    Subsequent messages in the game use the message threading mechanism where the @id of the first move becomes the ~thread.thid for the duration of the game.
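    The threading rule above can be sketched as follows. This is illustrative only: the field names follow the narrative in this document (the myindex value mentioned for moves 2 and 3 is shown inside ~thread, which may differ from the exact decorator layout in the current threading RFC), and the uuid4-based @id values are an assumption.

    ```python
    import uuid

    TYPE = "did:sov:SLfEi9esrjzybysFxQZbfq;spec/tictactoe/1.0/move"

    def first_move(me, moves):
        # The @id of this message becomes the thread id for the whole game.
        return {"@type": TYPE, "@id": str(uuid.uuid4()), "me": me, "moves": moves}

    def next_move(thid, me, moves, myindex):
        # Every later move carries ~thread.thid pointing at the first @id,
        # and the moves array grows by one each turn.
        return {
            "@type": TYPE,
            "@id": str(uuid.uuid4()),
            "~thread": {"thid": thid, "myindex": myindex},
            "me": me,
            "moves": moves,
        }
    ```
    
    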

    An evolving sequence of move messages might thus look like this, suppressing all fields except what's required:

    "},{"location":"aip2/0003-protocols/tictactoe/#messagemove-2","title":"Message/Move 2","text":"

    This is the first message in the thread that's sent by the player placing \"O\"; hence it has myindex = 0.

    "},{"location":"aip2/0003-protocols/tictactoe/#messagemove-3","title":"Message/Move 3","text":"

    This is the second message in the thread by the player placing \"X\"; hence it has myindex = 1.

    "},{"location":"aip2/0003-protocols/tictactoe/#messagemove-4","title":"Message/Move 4","text":"

    ...and so forth.

    Note that the order of the items in the moves array is NOT significant. The state of the game at any given point of time is fully captured by the moves, regardless of the order in which they were made.

    If a player makes an illegal move or another error occurs, the other player can complain using a problem-report message, with explain.@l10n.code set to one of the values defined in the Message Catalog section (see below).
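    Such a complaint might be shaped as follows. This is an illustrative sketch only: the @type is abbreviated rather than a full problem-report PIURI, and the code value "bad-move" is a hypothetical example, not necessarily one of the codes defined in the actual catalog.

    ```python
    def complain(thid, code, comment):
        # A warning-style problem-report tied to the game's thread, with
        # explain.@l10n.code pointing into the protocol's message catalog.
        return {
            "@type": "problem-report",  # abbreviated; see the problem-report RFC
            "~thread": {"thid": thid},
            "explain": {"@l10n": {"code": code}},
            "comment": comment,
        }
    ```
    
    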

    "},{"location":"aip2/0003-protocols/tictactoe/#outcome-message","title":"outcome message","text":"

    Game play ends when one player sends a move message that manages to mark 3 cells in a row. Thereupon, it is best practice, but not strictly required, for the other player to send an acknowledgement in the form of an outcome message.

    The moves and me fields from a move message can also, optionally, be included to further document state. The winner field is required. Its value may be \"X\", \"O\", or--in the case of a draw--\"none\".

    This outcome message can also be used to document an abandoned game, in which case winner is null, and comment can be used to explain why (e.g., timeout, loss of interest).

    About the Messages section: Here we explain the message types, but\nalso which roles send which messages, what sequencing rules apply,\nand how errors may occur during the flow. The message begins with\nan announcement of the identifier and version of the message\nfamily, and also enumerates error codes to be used with problem\nreports. This protocol is simple enough that we document the\ndatatypes and validation rules for fields inline in the narrative;\nin more complex protocols, we'd move that text into the Reference\n> Messages section instead.\n
    "},{"location":"aip2/0003-protocols/tictactoe/#constraints","title":"Constraints","text":"

    Players do not have to trust one another. Messages do not have to be authcrypted, although anoncrypted messages still have to have a path back to the sender to be useful.

    About the Constraints section: Many protocols have rules\nor mechanisms that help parties build trust. For example, in buying\na house, the protocol includes such things as commission paid to\nrealtors to guarantee their incentives, title insurance, earnest\nmoney, and a phase of the process where a home inspection takes\nplace. If you are documenting a protocol that has attributes like\nthese, explain them here.\n
    "},{"location":"aip2/0003-protocols/tictactoe/#reference","title":"Reference","text":"
    About the Reference section: If the Tutorial > Messages section\nsuppresses details, we would add a Messages section here to\nexhaustively describe each field. We could also include an\nExamples section to show variations on the main flow.\n
    "},{"location":"aip2/0003-protocols/tictactoe/#collateral","title":"Collateral","text":"

    A reference implementation of the logic of a game is provided with this RFC as python 3.x code. See game.py. There is also a simple hand-coded AI that can play the game when plugged into an agent (see ai.py), and a set of unit tests that prove correctness (see test_tictactoe.py).

    A full implementation of the state machine is provided as well; see state_machine.py and test_state_machine.py.

    The game can be played interactively by running python game.py.

    "},{"location":"aip2/0003-protocols/tictactoe/#localization","title":"Localization","text":"

    The only localizable field in this message family is comment on both move and outcome messages. It contains ad hoc text supplied by the sender, instead of a value selected from an enumeration and identified by code for use with message catalogs. This means the only approach to localize move or outcome messages is to submit comment fields to an automated translation service. Because the locale of tictactoe messages is not predefined, each message must be decorated with ~l10n.locale to make automated translation possible.

    There is one other way that localization is relevant to this protocol: in error messages. Errors are communicated through the general problem-report message type rather than through a special message type that's part of the tictactoe family. However, we define a catalog of tictactoe-specific error codes below to make this protocol's specific error strings localizable.

    Thus, all instances of this message family carry localization metadata in the form of an implicit ~l10n decorator that looks like this:

    This JSON fragment is checked in next to the narrative content of this RFC as ~l10n.json, for easy machine parsing.

    Individual messages can use the ~l10n decorator to supplement or override these settings.

    For more information about localization concepts, see the RFC about localized messages.

    "},{"location":"aip2/0003-protocols/tictactoe/#message-catalog","title":"Message Catalog","text":"

    To facilitate localization of error messages, all instances of this message family assume the following catalog in their ~l10n data:

    When referencing this catalog, please be sure you have the correct version. The official, immutable URL to this version of the catalog file is:

    https://github.com/hyperledger/indy-hipe/blob/fc7a6028/text/tictactoe-protocol/catalog.json\n

    This JSON fragment is checked in next to the narrative content of this RFC as catalog.json, for easy machine parsing. The catalog currently contains localized alternatives only for English. Other language contributions would be welcome.

    For more information, see the Message Catalog section of the localization HIPE.

    "},{"location":"aip2/0003-protocols/tictactoe/#implementations","title":"Implementations","text":"

    The following lists the implementations (if any) of this RFC. Please do a pull request to add your implementation. If the implementation is open source, include a link to the repo or to the implementation within the repo. Please be consistent in the \"Name\" field so that a mechanical processing of the RFCs can generate a list of all RFCs supported by an Aries implementation.

    Name / Link Implementation Notes Verity Commercially licensed enterprise agent, SaaS or on-prem. Pico Labs Open source TicTacToe for Pico Agents"},{"location":"aip2/0004-agents/","title":"Aries RFC 0004: Agents","text":""},{"location":"aip2/0004-agents/#summary","title":"Summary","text":"

Provide a high-level introduction to the concepts of agents in the self-sovereign identity ecosystem.

    "},{"location":"aip2/0004-agents/#tutorial","title":"Tutorial","text":"

    Managing an identity is complex. We need tools to help us.

    In the physical world, we often delegate complexity to trusted proxies that can help. We hire an accountant to do our taxes, a real estate agent to help us buy a house, and a talent agent to help us pitch an album to a recording studio.

In the digital landscape, humans and organizations (and sometimes, things) cannot directly consume and emit bytes, store and manage data, or perform the crypto that self-sovereign identity demands. They need delegates--agents--to help. Agents are a vital dimension across which we exercise sovereignty over identity.

    "},{"location":"aip2/0004-agents/#essential-characteristics","title":"Essential Characteristics","text":"

    When we use the term \"agent\" in the SSI community, we more properly mean \"an agent of self-sovereign identity.\" This means something more specific than just a \"user agent\" or a \"software agent.\" Such an agent has three defining characteristics:

    1. It acts as a fiduciary on behalf of a single identity owner (or, for agents of things like IoT devices, pets, and similar things, a single controller).
    2. It holds cryptographic keys that uniquely embody its delegated authorization.
    3. It interacts using interoperable DIDComm protocols.

    These characteristics don't tie an agent to any particular blockchain. It is possible to implement agents without any use of blockchain at all (e.g., with peer DIDs), and some efforts to do so are quite active.

    "},{"location":"aip2/0004-agents/#canonical-examples","title":"Canonical Examples","text":"

    Three types of agents are especially common:

    1. A mobile app that Alice uses to manage credentials and to connect to others is an agent for Alice.
    2. A cloud-based service that Alice uses to expose a stable endpoint where other agents can talk to her is an agent for Alice.
    3. A server run by Faber College, allowing it to issue credentials to its students, is an agent for Faber.

    Depending on your perspective, you might describe these agents in various ways. #1 can correctly be called a \"mobile\" or \"edge\" or \"rich\" agent. #2 can be called a \"cloud\" or \"routing\" agent. #3 can be called an \"on-prem\" or \"edge\" or \"advanced\" agent. See Categorizing Agents for a discussion about why multiple labels are correct.

Agents can be other things as well. They can be big or small, complex or simple. They can interact and be packaged in various ways. They can be written in a host of programming languages. Some are more canonical than others. But all the ones we intend to interact with in the self-sovereign identity problem domain share the three essential characteristics described above.

    "},{"location":"aip2/0004-agents/#how-agents-talk","title":"How Agents Talk","text":"

DID communication (DIDComm) and the protocols built atop it are each rich subjects unto themselves. Here, we will stay very high-level.

Agents can use many different communication transports: HTTP(S) 1.x and 2.0, WebSockets, IRC, Bluetooth, AMQP, NFC, Signal, email, push notifications to mobile devices, ZMQ, and more. However, all A2A communication is message-based, and is secured by modern, best-practice public key cryptography. How messages flow over a transport may vary--but their security and privacy toolset, their links to the DIDs and DID Docs of identity owners, and the ways their messages are packaged and handled are standard.

Agents connect to one another through a standard connection protocol, discover one another's endpoints and keys through standard DID Docs, discover one another's features in a standard way, and maintain relationships in a standard way. All of these points of standardization are what makes them interoperable.

    Because agents speak so many different ways, and because many of them won't have a permanent, accessible point of presence on the network, they can't all be thought of as web servers with a Swagger-compatible API for request-response. The analog to an API construct in agent-land is protocols. These are patterns for stateful interactions. They specify things like, \"If you want to negotiate a sale with an agent, send it a message of type X. It will respond with a message of type Y or type Z, or with an error message of type W. Repeat until the negotiation finishes.\" Some interesting A2A protocols include the one where two parties connect to one another to build a relationship, the one where agents discover which protocols they each support, the one where credentials are issued, and the one where proof is requested and sent. Hundreds of other protocols are being defined.

    "},{"location":"aip2/0004-agents/#how-to-get-an-agent","title":"How to Get an Agent","text":"

    As the ecosystem for self-sovereign identity matures, the average person or organization will get an agent by downloading it from the app store, installing it with their OS package manager, or subscribing to it as a service. However, the availability of quality pre-packaged agents is still limited today.

    Agent providers are emerging in the marketplace, though. Some are governments, NGOs, or educational institutions that offer agents for free; others are for-profit ventures. If you'd like suggestions about ready-to-use agent offerings, please describe your use case in #aries on chat.hyperledger.org.

    There is also intense activity in the SSI community around building custom agents and the tools and processes that enable them. A significant amount of early work occurred in the Indy Agent Community with some of those efforts materializing in the indy-agent repo on github.com and other code bases. The indy-agent repo is now deprecated but is still valuable in demonstrating the basics of agents. With the introduction of Hyperledger Aries, agent efforts are migrating from the Indy Agent community.

    Hyperledger Aries provides a number of code bases ranging from agent frameworks to tools to aid in development to ready-to-use agents.

    "},{"location":"aip2/0004-agents/#how-to-write-an-agent","title":"How to Write an Agent","text":"

    This is one of the most common questions that Aries newcomers ask. It's a challenging one to answer, because it's so open-ended. It's sort of like someone asking, \"Can you give me a recipe for dinner?\" The obvious follow-up question would be, \"What type of dinner did you have in mind?\"

    Here are some thought questions to clarify intent:

    "},{"location":"aip2/0004-agents/#general-patterns","title":"General Patterns","text":"

We said it's hard to provide a recipe for an agent without specifics. However, the majority of agents do have two things in common: they listen to and process A2A messages, and they use a wallet to manage keys, credentials, and other sensitive material. Unless you have use cases that involve IoT, cron jobs, or web hooks, your agent is likely to fit this mold.

The heart of such an agent is probably a message-handling loop, with pluggable protocols to give it new capabilities, and pluggable transports to let it talk in different ways. The pseudocode for its main function might look like this:

    "},{"location":"aip2/0004-agents/#pseudocode-for-main","title":"Pseudocode for main()","text":"
1  While not done:\n2      Get next message.\n3      Verify it (decrypt, identify sender, check signature...).\n4      Look at the type of the plaintext message.\n5      Find a plugged-in protocol handler that matches that type.\n6      Give plaintext message and security metadata to handler.\n

    Line 2 can be done via standard HTTP dispatch, or by checking an email inbox, or in many other ways. Line 3 can be quite sophisticated--the sender will not be Alice, but rather one of the agents that she has authorized. Verification may involve consulting cached information and/or a blockchain where a DID and DID Doc are stored, among other things.
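The dispatch portion of that loop (lines 4-6) can be sketched in Python. This is a minimal illustration, not an Aries API: the `register`/`dispatch` names and the handler registry are hypothetical, and real verification of sender and signature is elided.

```python
# Sketch of the main()-loop dispatch step: map a plaintext message's
# @type to a plugged-in protocol handler. All names here are illustrative.
from typing import Callable, Dict

handlers: Dict[str, Callable[[dict, dict], None]] = {}

def register(msg_type: str, handler: Callable[[dict, dict], None]) -> None:
    """Plug in a protocol handler for one message type."""
    handlers[msg_type] = handler

def dispatch(plaintext: dict, metadata: dict) -> None:
    """Give the plaintext message and its security metadata to the handler."""
    handler = handlers.get(plaintext.get("@type", ""))
    if handler is None:
        raise KeyError("no handler plugged in for this message type")
    handler(plaintext, metadata)

# Usage: register a trivial handler, then dispatch a message to it.
received = []
register("did:example:12345;spec/trust_ping/1.0/ping",
         lambda msg, meta: received.append(msg["@id"]))
dispatch({"@type": "did:example:12345;spec/trust_ping/1.0/ping",
          "@id": "abc-123"},
         {"sender_verkey": "..."})  # metadata from the verify step
```

A real agent would populate `metadata` from the decryption/verification step and reject unauthorized senders before calling the handler.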

    The pseudocode for each protocol handler it loads might look like:

    "},{"location":"aip2/0004-agents/#pseudocode-for-protocol-handler","title":"Pseudocode for protocol handler","text":"
    1  Check authorization against metadata. Reject if needed.\n2  Read message header. Is it part of an ongoing interaction?\n3  If yes, load persisted state.\n4  Process the message and update interaction state.\n5  If a response is appropriate:\n6      Prepare response content.\n7      Ask my outbound comm module to package and send it.\n

    Line 4 is the workhorse. For example, if the interaction is about issuing credentials and this agent is doing the issuance, this would be where it looks up the material for the credential in internal databases, formats it appropriately, and records the fact that the credential has now been built. Line 6 might be where that credential is attached to an outgoing message for transmission to the recipient.

    The pseudocode for the outbound communication module might be:

    "},{"location":"aip2/0004-agents/#pseudocode-for-outbound","title":"Pseudocode for outbound","text":"
    1  Iterate through all pluggable transports to find best one to use\n     with the intended recipient.\n2  Figure out how to route the message over the selected transport.\n3  Serialize the message content and encrypt it appropriately.\n4  Send the message.\n

    Line 2 can be complex. It involves looking up one or more endpoints in the DID Doc of the recipient, and finding an intersection between transports they use, and transports the sender can speak. Line 3 requires the keys of the sender, which would normally be held in a wallet.
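Line 1 of the outbound pseudocode, finding that intersection of transports, can be sketched as follows. The function name and data shapes are hypothetical; a real implementation would parse service entries from the recipient's DID Doc.

```python
# Sketch: pick a transport both parties support, preferring the
# sender's own ordering. Names and data shapes are illustrative only.
def select_transport(sender_prefs, recipient_endpoints):
    """sender_prefs: transports the sender can speak, best first.
    recipient_endpoints: {transport: endpoint} taken from the
    recipient's DID Doc service entries."""
    for transport in sender_prefs:
        if transport in recipient_endpoints:
            return transport, recipient_endpoints[transport]
    raise ValueError("no transport in common with recipient")

# Usage: sender speaks WebSockets and HTTPS; recipient advertises HTTPS
# and email, so HTTPS is the intersection chosen.
transport, endpoint = select_transport(
    ["ws", "https"],
    {"https": "https://agents.example.com/msg",
     "smtp": "mailto:bob@example.com"},
)
```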

    If you are building this sort of code using Aries technology, you will certainly want to use Aries Agent SDK. This gives you a ready-made, highly secure wallet that can be adapted to many requirements. It also provides easy functions to serialize and encrypt. Many of the operations you need to do are demonstrated in the SDK's /doc/how-tos folder, or in its Getting Started Guide.

    "},{"location":"aip2/0004-agents/#how-to-learn-more","title":"How to Learn More","text":""},{"location":"aip2/0004-agents/#reference","title":"Reference","text":""},{"location":"aip2/0004-agents/#categorizing-agents","title":"Categorizing Agents","text":"

    Agents can be categorized in various ways, and these categories lead to terms you're likely to encounter in RFCs and other documentation. Understanding the categories will help the definitions make sense.

    "},{"location":"aip2/0004-agents/#by-trust","title":"By Trust","text":"

    A trustable agent runs in an environment that's under the direct control of its owner; the owner can trust it without incurring much risk. A semi-trustable agent runs in an environment where others besides the owner may have access, so giving it crucial secrets is less advisable. (An untrustable delegate should never be an agent, by definition, so we don't use that term.)

    Note that these distinctions highlight what is advisable, not how much trust the owner actually extends.

    "},{"location":"aip2/0004-agents/#by-location","title":"By Location","text":"

    Two related but deprecated terms are edge agent and cloud agent. You will probably hear these terms in the community or read them in docs. The problem with them is that they suggest location, but were formally defined to imply levels of trust. When they were chosen, location and levels of trust were seen as going together--you trust your edge more, and your cloud less. We've since realized that a trustable agent could exist in the cloud, if it is directly controlled by the owner, and a semi-trustable agent could be on-prem, if the owner's control is indirect. Thus we are trying to correct usage and make \"edge\" and \"cloud\" about location instead.

    "},{"location":"aip2/0004-agents/#by-platform","title":"By Platform","text":""},{"location":"aip2/0004-agents/#by-complexity","title":"By Complexity","text":"

    We can arrange agents on a continuum, from simple to complex. The simplest agents are static--they are preconfigured for a single relationship. Thin agents are somewhat fancier. Thick agents are fancier still, and rich agents exhibit the most sophistication and flexibility:

    A nice visualization of several dimensions of agent category has been built by Michael Herman:

    "},{"location":"aip2/0004-agents/#the-agent-ness-continuum","title":"The Agent-ness Continuum","text":"

    The tutorial above gives three essential characteristics of agents, and lists some canonical examples. This may make it feel like agent-ness is pretty binary. However, we've learned that reality is more fuzzy.

    Having a tight definition of an agent may not matter in all cases. However, it is important when we are trying to understand interoperability goals. We want agents to be able to interact with one another. Does that mean they must interact with every piece of software that is even marginally agent-like? Probably not.

    Some attributes that are not technically necessary in agents include:

    Agents that lack these characteristics can still be fully interoperable.

    Some interesting examples of less prototypical agents or agent-like things include:

    "},{"location":"aip2/0004-agents/#dif-hubs","title":"DIF Hubs","text":"

A DIF Identity Hub is a construct that resembles agents in some ways, but that focuses on the data-sharing aspects of identity. Currently, DIF Hubs do not use the protocols known to the Aries community, and vice versa. However, there are efforts to bridge that gap.

    "},{"location":"aip2/0004-agents/#identity-wallets","title":"Identity Wallets","text":"

    \"Identity wallet\" is a term that's carefully defined in our ecosystem, and in strict, technical usage it maps to a concept much closer to \"database\" than \"agent\". This is because it is an inert storage container, not an active interacter. However, in casual usage, it may mean the software that uses a wallet to do identity work--in which case it is definitely an agent.

    "},{"location":"aip2/0004-agents/#crypto-wallets","title":"Crypto Wallets","text":"

    Cryptocurrency wallets are quite agent-like in that they hold keys and represent a user. However, they diverge from the agent definition in that they talk proprietary protocols to blockchains, rather than A2A to other agents.

    "},{"location":"aip2/0004-agents/#uport","title":"uPort","text":"

    The uPort app is an edge agent. Here, too, there are efforts to bridge a protocol gap.

    "},{"location":"aip2/0004-agents/#learning-machine","title":"Learning Machine","text":"

    The credential issuance technology offered by Learning Machine, and the app used to share those credentials, are agents of institutions and individuals, respectively. Again, there is a protocol gap to bridge.

    "},{"location":"aip2/0004-agents/#cron-jobs","title":"Cron Jobs","text":"

A cron job that runs once a night at Faber, scanning a database and revoking credentials that have changed status during the day, is an agent for Faber. This is true even though it doesn't listen for incoming messages (it only talks revocation protocol to the ledger). In order to talk that protocol, it must hold keys delegated by Faber, and it is surely Faber's fiduciary.

    "},{"location":"aip2/0004-agents/#operating-systems","title":"Operating Systems","text":"

    The operating system on a laptop could be described as agent-like, in that it works for a single owner and may have a keystore. However, it doesn't talk A2A to other agents--at least not yet. (OSes that service multiple users fit the definition less.)

    "},{"location":"aip2/0004-agents/#devices","title":"Devices","text":"

    A device can be thought of as an agent (e.g., Alice's phone as an edge agent). However, strictly speaking, one device might run multiple agents, so this is only casually correct.

    "},{"location":"aip2/0004-agents/#sovrin-mainnet","title":"Sovrin MainNet","text":"

    The Sovrin MainNet can be thought of as an agent for the Sovrin community (but NOT the Sovrin Foundation, which codifies the rules but leaves operation of the network to its stewards). Certainly, the blockchain holds keys, uses A2A protocols, and acts in a fiduciary capacity toward the community to further its interests. The only challenge with this perspective is that the Sovrin community has a very fuzzy identity.

    "},{"location":"aip2/0004-agents/#validators","title":"Validators","text":"

    Validator nodes on a particular blockchain are agents of the stewards that operate them.

    "},{"location":"aip2/0004-agents/#digital-assistants","title":"Digital Assistants","text":"

Digital assistants like Alexa and Google Home are somewhat agent-like. However, the Alexa in the home of the Jones family is probably not an agent for either the Jones family or Amazon. It accepts delegated work from anybody who talks to it (instead of a single controlling identity), and all current implementations are totally antithetical to the ethos of privacy and security required by self-sovereign identity. Although it interfaces with Amazon to download data and features, it isn't Amazon's fiduciary, either. It doesn't hold keys that allow it to represent its owner. The protocols it uses are not interactions with other agents, but with non-agent entities. Perhaps agents and digital assistants will converge in the future.

    "},{"location":"aip2/0004-agents/#doorbell","title":"Doorbell","text":"

A doorbell that emits a simple signal each time it is pressed is not an agent. It doesn't represent a fiduciary or hold keys. (However, a fancy IoT doorbell that reports to Alice's mobile agent using an A2A protocol would be an agent.)

    "},{"location":"aip2/0004-agents/#microservices","title":"Microservices","text":"

    A microservice run by AcmeCorp to integrate with its vendors is not an agent for Acme's vendors. Depending on whether it holds keys and uses A2A protocols, it may or may not be an agent for Acme.

    "},{"location":"aip2/0004-agents/#human-delegates","title":"Human Delegates","text":"

    A human delegate who proves empowerment through keys might be thought of as an agent.

    "},{"location":"aip2/0004-agents/#paper","title":"Paper","text":"

    The keys for an agent can be stored on paper. This storage basically constitutes a wallet. It isn't an agent. However, it can be thought of as playing the role of an agent in some cases when designing backup and recovery solutions.

    "},{"location":"aip2/0004-agents/#prior-art","title":"Prior art","text":""},{"location":"aip2/0004-agents/#implementations","title":"Implementations","text":"

    The following lists the implementations (if any) of this RFC. Please do a pull request to add your implementation. If the implementation is open source, include a link to the repo or to the implementation within the repo. Please be consistent in the \"Name\" field so that a mechanical processing of the RFCs can generate a list of all RFCs supported by an Aries implementation.

Name / Link Implementation Notes Indy Cloud Agent - Python Reference agent implementation contributed by Sovrin Foundation and Community Aries Framework - .NET .NET framework for building agents of all types Streetcred.id Commercial mobile and web app built using Aries Framework for .NET Aries Cloud Agent - Python Contributed by the government of British Columbia. Aries Static Agent - Python Useful for cron jobs and other simple, automated use cases. Aries Framework - Go For building agents, hubs and other DIDComm features in GoLang. Connect.Me Free mobile app from Evernym. Installed via app store on iOS and Android. Verity Commercially licensed enterprise agent, SaaS or on-prem. Aries Protocol Test Suite Pico Labs Pico Agents protocols: connections, trust_ping, basicmessage, routing Rust Agent Rust implementation of a framework for building agents of all types"},{"location":"aip2/0005-didcomm/","title":"Aries RFC 0005: DID Communication","text":""},{"location":"aip2/0005-didcomm/#summary","title":"Summary","text":"

    Explain the basics of DID communication (DIDComm) at a high level, and link to other RFCs to promote deeper exploration.

NOTE: The version of DIDComm collectively defined in Aries RFCs is known by the label \"DIDComm V1.\" A newer version of DIDComm (\"DIDComm V2\") is now being incubated at DIF. Many concepts are the same between the two versions, but there are some differences in the details. For information about detecting V1 versus V2, see Detecting DIDComm Versions.

    "},{"location":"aip2/0005-didcomm/#motivation","title":"Motivation","text":"

DID communication between agents and agent-like things is a rich subject with a lot of tribal knowledge. Newcomers to the decentralized identity ecosystem tend to bring mental models that are subtly divergent from its paradigm. When they encounter dissonance, DIDComm becomes mysterious. We need a standard high-level reference.

    "},{"location":"aip2/0005-didcomm/#tutorial","title":"Tutorial","text":"

    This discussion assumes that you have a reasonable grasp on topics like self-sovereign identity, DIDs and DID docs, and agents. If you find yourself lost, please review that material for background and starting assumptions.

    Agent-like things have to interact with one another to get work done. How they talk in general is DIDComm, the subject of this RFC. The specific interactions enabled by DIDComm--connecting and maintaining relationships, issuing credentials, providing proof, etc.--are called protocols; they are described elsewhere.

    "},{"location":"aip2/0005-didcomm/#rough-overview","title":"Rough Overview","text":"

    A typical DIDComm interaction works like this:

    Imagine Alice wants to negotiate with Bob to sell something online, and that DIDComm, not direct human communication, is involved. This means Alice's agent and Bob's agent are going to exchange a series of messages. Alice may just press a button and be unaware of details, but underneath, her agent begins by preparing a plaintext JSON message about the proposed sale. (The particulars are irrelevant here, but would be described in the spec for a \"sell something\" protocol.) It then looks up Bob's DID Doc to access two key pieces of information: * An endpoint (web, email, etc) where messages can be delivered to Bob. * The public key that Bob's agent is using in the Alice:Bob relationship. Now Alice's agent uses Bob's public key to encrypt the plaintext so that only Bob's agent can read it, adding authentication with its own private key. The agent arranges delivery to Bob. This \"arranging\" can involve various hops and intermediaries. It can be complex. Bob's agent eventually receives and decrypts the message, authenticating its origin as Alice using her public key. It prepares its response and routes it back using a reciprocal process (plaintext -> lookup endpoint and public key for Alice -> encrypt with authentication -> arrange delivery).

    That's it.

    Well, mostly. The description is pretty good, if you squint, but it does not fit all DIDComm interactions:

    Before we provide more details, let's explore what drives the design of DIDComm.

    "},{"location":"aip2/0005-didcomm/#goals-and-ramifications","title":"Goals and Ramifications","text":"

    The DIDComm design attempts to be:

    1. Secure
    2. Private
    3. Interoperable
    4. Transport-agnostic
    5. Extensible

    As a list of buzz words, this may elicit nods rather than surprise. However, several items have deep ramifications.

    Taken together, Secure and Private require that the protocol be decentralized and maximally opaque to the surveillance economy.

    Interoperable means that DIDComm should work across programming languages, blockchains, vendors, OS/platforms, networks, legal jurisdictions, geos, cryptographies, and hardware--as well as across time. That's quite a list. It means that DIDComm intends something more than just compatibility within Aries; it aims to be a future-proof lingua franca of all self-sovereign interactions.

    Transport-agnostic means that it should be possible to use DIDComm over HTTP(S) 1.x and 2.0, WebSockets, IRC, Bluetooth, AMQP, NFC, Signal, email, push notifications to mobile devices, Ham radio, multicast, snail mail, carrier pigeon, and more.

    All software design involves tradeoffs. These goals, prioritized as shown, lead down an interesting path.

    "},{"location":"aip2/0005-didcomm/#message-based-asynchronous-and-simplex","title":"Message-Based, Asynchronous, and Simplex","text":"

    The dominant paradigm in mobile and web development today is duplex request-response. You call an API with certain inputs, and you get back a response with certain outputs over the same channel, shortly thereafter. This is the world of OpenAPI (Swagger), and it has many virtues.

    Unfortunately, many agents are not good analogs to web servers. They may be mobile devices that turn off at unpredictable intervals and that lack a stable connection to the network. They may need to work peer-to-peer, when the internet is not available. They may need to interact in time frames of hours or days, not with 30-second timeouts. They may not listen over the same channel that they use to talk.

    Because of this, the fundamental paradigm for DIDComm is message-based, asynchronous, and simplex. Agent X sends a message over channel A. Sometime later, it may receive a response from Agent Y over channel B. This is much closer to an email paradigm than a web paradigm.

    On top of this foundation, it is possible to build elegant, synchronous request-response interactions. All of us have interacted with a friend who's emailing or texting us in near-realtime. However, interoperability begins with a least-common-denominator assumption that's simpler.

    "},{"location":"aip2/0005-didcomm/#message-level-security-reciprocal-authentication","title":"Message-Level Security, Reciprocal Authentication","text":"

    The security and privacy goals, and the asynchronous+simplex design decision, break familiar web assumptions in another way. Servers are commonly run by institutions, and we authenticate them with certificates. People and things are usually authenticated to servers by some sort of login process quite different from certificates, and this authentication is cached in a session object that expires. Furthermore, web security is provided at the transport level (TLS); it is not an independent attribute of the messages themselves.

    In a partially disconnected world where a comm channel is not assumed to support duplex request-response, and where the security can't be ignored as a transport problem, traditional TLS, login, and expiring sessions are impractical. Furthermore, centralized servers and certificate authorities perpetuate a power and UX imbalance between servers and clients that doesn't fit with the peer-oriented DIDComm.

    DIDComm uses public key cryptography, not certificates from some parties and passwords from others. Its security guarantees are independent of the transport over which it flows. It is sessionless (though sessions can easily be built atop it). When authentication is required, all parties do it the same way.

    "},{"location":"aip2/0005-didcomm/#reference","title":"Reference","text":"

The following RFCs provide additional information: * 0021: DIDComm Message Anatomy * 0020: Message Types * 0011: Decorators * 0008: Message ID and Threading * 0019: Encryption Envelope * 0025: Agent Transports

    "},{"location":"aip2/0005-didcomm/#implementations","title":"Implementations","text":"

    The following lists the implementations (if any) of this RFC. Please do a pull request to add your implementation. If the implementation is open source, include a link to the repo or to the implementation within the repo. Please be consistent in the \"Name\" field so that a mechanical processing of the RFCs can generate a list of all RFCs supported by an Aries implementation.

Name / Link Implementation Notes Indy Cloud Agent - Python Reference agent implementation contributed by Sovrin Foundation and Community Aries Framework - .NET .NET framework for building agents of all types Streetcred.id Commercial mobile and web app built using Aries Framework - .NET Aries Cloud Agent - Python Contributed by the government of British Columbia. Aries Static Agent - Python Useful for cron jobs and other simple, automated use cases. Aries Framework - Go For building agents, hubs and other DIDComm features in GoLang. Connect.Me Free mobile app from Evernym. Installed via app store on iOS and Android. Verity Commercially licensed enterprise agent, SaaS or on-prem. Aries Protocol Test Suite Pico Labs Pico Agents protocols: connections, trust_ping, basicmessage, routing"},{"location":"aip2/0008-message-id-and-threading/","title":"Aries RFC 0008: Message ID and Threading","text":""},{"location":"aip2/0008-message-id-and-threading/#summary","title":"Summary","text":"

    Definition of the message @id field and the ~thread decorator.

    "},{"location":"aip2/0008-message-id-and-threading/#motivation","title":"Motivation","text":"

    Referring to messages is useful in many interactions. A standard method of adding a message ID promotes good patterns in message families. When multiple messages are coordinated in a message flow, the threading pattern helps avoid having to re-roll the same spec for each message family that needs it.

    "},{"location":"aip2/0008-message-id-and-threading/#tutorial","title":"Tutorial","text":""},{"location":"aip2/0008-message-id-and-threading/#message-ids","title":"Message IDs","text":"

    Message IDs are specified with the @id attribute, which comes from JSON-LD. The sender of the message is responsible for creating the message ID, and any message can be identified by the combination of the sender and the message ID. Message IDs should be considered to be opaque identifiers by any recipients.

    "},{"location":"aip2/0008-message-id-and-threading/#message-id-requirements","title":"Message ID Requirements","text":""},{"location":"aip2/0008-message-id-and-threading/#example","title":"Example","text":"
    {\n    \"@type\": \"did:example:12345...;spec/example_family/1.0/example_type\",\n    \"@id\": \"98fd8d72-80f6-4419-abc2-c65ea39d0f38\",\n    \"example_attribute\": \"stuff\"\n}\n

    The following was pulled from this document written by Daniel Hardman and stored in the Sovrin Foundation's protocol repository.

    "},{"location":"aip2/0008-message-id-and-threading/#threaded-messages","title":"Threaded Messages","text":"

    Message threading will be implemented as a decorator to messages, for example:

    {\n    \"@type\": \"did:example:12345...;spec/example_family/1.0/example_type\",\n    \"@id\": \"98fd8d72-80f6-4419-abc2-c65ea39d0f38\",\n    \"~thread\": {\n        \"thid\": \"98fd8d72-80f6-4419-abc2-c65ea39d0f38\",\n        \"pthid\": \"1e513ad4-48c9-444e-9e7e-5b8b45c5e325\",\n        \"sender_order\": 3,\n        \"received_orders\": {\"did:sov:abcxyz\":1},\n        \"goal_code\": \"aries.vc.issue\"\n    },\n    \"example_attribute\": \"example_value\"\n}\n

    The ~thread decorator is generally required on any type of response, since this is what connects it with the original request.

    While not recommended, the initial message of a new protocol instance MAY have an empty ({}) ~thread item. Aries agents receiving a message with an empty ~thread item MUST gracefully handle such a message.

    "},{"location":"aip2/0008-message-id-and-threading/#thread-object","title":"Thread object","text":"

    A thread object has the following fields discussed below:

    "},{"location":"aip2/0008-message-id-and-threading/#thread-id-thid","title":"Thread ID (thid)","text":"

    Because multiple interactions can happen simultaneously, it's important to differentiate between them. This is done with a Thread ID or thid.

    If the Thread object is defined and a thid is given, the Thread ID is the value given there. But if the Thread object is not defined in a message, the Thread ID is implicitly defined as the Message ID (@id) of the given message and that message is the first message of a new thread.
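    This fallback rule can be captured in a couple of lines. The sketch below is illustrative only; the helper name get_thid is not defined by the RFC:

```python
def get_thid(msg: dict) -> str:
    """Return the effective Thread ID of a message.

    If a ~thread decorator carries a thid, that value is the Thread ID;
    otherwise the message starts a new thread identified by its own @id.
    """
    thread = msg.get('~thread') or {}
    return thread.get('thid') or msg['@id']
```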

    "},{"location":"aip2/0008-message-id-and-threading/#sender-order-sender_order","title":"Sender Order (sender_order)","text":"

    It is desirable to know how messages within a thread should be ordered. However, it is very difficult to know with confidence the absolute ordering of events scattered across a distributed system. Alice and Bob may each send a message before receiving the other's response, but be unsure whether their message was composed before the other's. Timestamping alone cannot resolve this impasse. Therefore, there is no unified absolute ordering of all messages within a thread--but there is an ordering of all messages emitted by each participant.

    In a given thread, the first message from each party has a sender_order value of 0, the second message sent from each party has a sender_order value of 1, and so forth. Note that both Alice and Bob use 0 and 1, without regard to whether the other party may be known to have used them. This gives a strong ordering with respect to each party's messages, and it means that any message can be uniquely identified in an interaction by its thid, the sender DID and/or key, and the sender_order.
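    One way an agent might maintain these per-party counters is sketched below (a minimal illustration; the class name is hypothetical):

```python
from collections import defaultdict

class SenderOrderCounter:
    """Assign sender_order values to this agent's outgoing messages.

    Each party numbers its own messages per thread, starting at 0,
    independent of what the other party has sent.
    """
    def __init__(self):
        self._next = defaultdict(int)

    def next_order(self, thid: str) -> int:
        # Return the current index for this thread, then advance it.
        order = self._next[thid]
        self._next[thid] += 1
        return order
```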

    "},{"location":"aip2/0008-message-id-and-threading/#received-orders-received_orders","title":"Received Orders (received_orders)","text":"

    In an interaction, it may be useful for the recipient of a message to know if their last message was received. A received_orders value addresses this need, and could be included as a best practice to help detect missing messages.

    In the example above, if Alice is the sender, and Bob is identified by did:sov:abcxyz, then Alice is saying, \"Here's my message with index 3 (sender_order=3), and I'm sending it in response to your message 1 (received_orders: {<bob's DID>: 1}).\" Apparently Alice has been more chatty than Bob in this exchange.

    The received_orders field is plural to acknowledge the possibility of multiple parties. In pairwise interactions, this may seem odd. However, n-wise interactions are possible (e.g., in a doctor ~ hospital ~ patient n-wise relationship). Even in pairwise, multiple agents on either side may introduce other actors. This may happen even if an interaction is designed to be 2-party (e.g., an intermediate party emits an error unexpectedly).

    In an interaction with more parties, the received_orders object has a key/value pair for each actor/sender_order, where actor is a DID or a key for an agent:

    \"received_orders\": {\"did:sov:abcxyz\":1, \"did:sov:defghi\":14}\n

    Here, the received_orders fragment makes a claim about the last sender_order that the sender observed from did:sov:abcxyz and did:sov:defghi. The sender of this fragment is presumably some other DID, implying that 3 parties are participating. Any parties unnamed in received_orders have an undefined value for received_orders. This is NOT the same as saying that they have made no observable contribution to the thread. To make that claim, use the special value -1, as in:

    \"received_orders\": {\"did:sov:abcxyz\":1, \"did:sov:defghi\":14, \"did:sov:jklmno\":-1}\n
    "},{"location":"aip2/0008-message-id-and-threading/#example_1","title":"Example","text":"

    As an example, Alice is an issuer and she offers a credential to Bob.

    "},{"location":"aip2/0008-message-id-and-threading/#nested-interactions-parent-thread-id-or-pthid","title":"Nested interactions (Parent Thread ID or pthid)","text":"

    Sometimes there are interactions that need to occur with the same party, while an existing interaction is in-flight.

    When an interaction is nested within another, the initiator of a new interaction can include a Parent Thread ID (pthid). This signals to the other party that this is a thread that is branching off of an existing interaction.

    "},{"location":"aip2/0008-message-id-and-threading/#nested-example","title":"Nested Example","text":"

    As before, Alice is an issuer and she offers a credential to Bob. This time, she wants a bit more information before she is comfortable providing a credential.

    All of the steps are the same, except the two bolded steps that are part of a nested interaction.

    "},{"location":"aip2/0008-message-id-and-threading/#implicit-threads","title":"Implicit Threads","text":"

    Threads reference a Message ID as the origin of the thread. This allows any message to be the start of a thread, even if not originally intended. Any message without an explicit ~thread attribute can be considered to have the following ~thread attribute implicitly present.

    \"~thread\": {\n    \"thid\": <same as @id of the outer message>,\n    \"sender_order\": 0\n}\n
    "},{"location":"aip2/0008-message-id-and-threading/#implicit-replies","title":"Implicit Replies","text":"

    A message that contains a ~thread block with a thid different from the outer message @id, but no sender_order is considered an implicit reply. Implicit replies have a sender_order of 0 and a received_orders of {other: 0}. Implicit replies should only be used when a further message thread is not anticipated. When further messages in the thread are expected, a full regular ~thread block should be used.

    Example Message with an Implicit Reply:

    {\n    \"@id\": \"<@id of outer message>\",\n    \"~thread\": {\n        \"thid\": \"<different than @id of outer message>\"\n    }\n}\n
    Effective Message with defaults in place:
    {\n    \"@id\": \"<@id of outer message>\",\n    \"~thread\": {\n        \"thid\": \"<different than @id of outer message>\",\n        \"sender_order\": 0,\n        \"received_orders\": { \"DID of sender\":0 }\n    }\n}\n
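    The defaulting rule can be sketched as follows. This helper is illustrative, not normative; other_did stands for the DID of the party whose message is being replied to:

```python
def effective_thread(msg: dict, other_did: str) -> dict:
    """Expand an implicit reply's ~thread block with its default values."""
    thread = dict(msg.get('~thread') or {})
    # An implicit reply has a thid different from the outer @id and
    # no explicit sender_order.
    is_implicit_reply = (
        thread.get('thid') is not None
        and thread['thid'] != msg['@id']
        and 'sender_order' not in thread
    )
    if is_implicit_reply:
        thread['sender_order'] = 0
        thread.setdefault('received_orders', {other_did: 0})
    return thread
```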

    "},{"location":"aip2/0008-message-id-and-threading/#reference","title":"Reference","text":""},{"location":"aip2/0008-message-id-and-threading/#drawbacks","title":"Drawbacks","text":"

    Why should we not do this?

    "},{"location":"aip2/0008-message-id-and-threading/#rationale-and-alternatives","title":"Rationale and alternatives","text":""},{"location":"aip2/0008-message-id-and-threading/#prior-art","title":"Prior art","text":"

    If you're aware of relevant prior-art, please add it here.

    "},{"location":"aip2/0008-message-id-and-threading/#unresolved-questions","title":"Unresolved questions","text":""},{"location":"aip2/0008-message-id-and-threading/#implementations","title":"Implementations","text":"

    The following lists the implementations (if any) of this RFC. Please do a pull request to add your implementation. If the implementation is open source, include a link to the repo or to the implementation within the repo. Please be consistent in the \"Name\" field so that a mechanical processing of the RFCs can generate a list of all RFCs supported by an Aries implementation.

    Name / Link Implementation Notes Aries Cloud Agent - Python Contributed by the government of British Columbia. Aries Static Agent - Python Useful for cron jobs and other simple, automated use cases. Connect.Me Free mobile app from Evernym. Installed via app store on iOS and Android. Verity Commercially licensed enterprise agent, SaaS or on-prem. Aries Protocol Test Suite"},{"location":"aip2/0011-decorators/","title":"Aries RFC 0011: Decorators","text":""},{"location":"aip2/0011-decorators/#summary","title":"Summary","text":"

    Explain how decorators work in DID communication.

    "},{"location":"aip2/0011-decorators/#motivation","title":"Motivation","text":"

    Certain semantic patterns manifest over and over again in communication. For example, all communication needs the pattern of testing the type of message received. The pattern of identifying a message and referencing it later is likely to be useful in a high percentage of all protocols that are ever written. A pattern that associates messages with debugging/tracing/timing metadata is equally relevant. And so forth.

    We need a way to convey metadata that embodies these patterns, without complicating schemas, bloating core definitions, managing complicated inheritance hierarchies, or confusing one another. It needs to be elegant, powerful, and adaptable.

    "},{"location":"aip2/0011-decorators/#tutorial","title":"Tutorial","text":"

    A decorator is an optional chunk of JSON that conveys metadata. Decorators are not declared in a core schema but rather supplementary to it. Decorators add semantic content broadly relevant to messaging in general, and not so much tied to the problem domain of a specific type of interaction.

    You can think of decorators as a sort of mixin for agent-to-agent messaging. This is not a perfect analogy, but it is a good one. Decorators in DIDComm also have some overlap (but not a direct congruence) with annotations in Java, attributes in C#, and both decorators and annotations in python.

    "},{"location":"aip2/0011-decorators/#simple-example","title":"Simple Example","text":"

    Imagine we are designing a protocol and associated messages to arrange meetings between two people. We might come up with a meeting_proposal message that looks like this:

    {\n  \"@id\": \"e2987006-a18a-4544-9596-5ad0d9390c8b\",\n  \"@type\": \"did:sov:8700e296a1458aad0d93;spec/meetings/1.0/proposal\",\n  \"proposed_time\": \"2019-12-23 17:00\",\n  \"proposed_place\": \"at the cathedral, Barf\u00fcsserplatz, Basel\",\n  \"comment\": \"Let's walk through the Christmas market.\"\n}\n

    Now we tackle the meeting_proposal_response messages. Maybe we start with something exceedingly simple, like:

    {\n  \"@id\": \"d9390ce2-8ba1-4544-9596-9870065ad08a\",\n  \"@type\": \"did:sov:8700e296a1458aad0d93;spec/meetings/1.0/response\",\n  \"agree\": true,\n  \"comment\": \"See you there!\"\n}\n

    But we quickly realize that the asynchronous nature of messaging will expose a gap in our message design: if Alice receives two meeting proposals from Bob at the same time, there is nothing to bind a response back to the specific proposal it addresses.

    We could extend the schema of our response so it contains a thread that references the @id of the original proposal. This would work. However, it does not satisfy the DRY principle of software design, because when we tackle the protocol for negotiating a purchase between buyer and seller next week, we will need the same solution all over again. The result would be a proliferation of schemas that all address the same basic need for associating request and response. Worse, they might do it in different ways, cluttering the mental model for everyone and making the underlying patterns less obvious.

    What we want instead is a way to inject into any message the idea of a thread, such that we can easily associate responses with requests, errors with the messages that triggered them, and child interactions that branch off of the main one. This is the subject of the message threading RFC, and the solution is the ~thread decorator, which can be added to any response:

    {\n  \"@id\": \"d9390ce2-8ba1-4544-9596-9870065ad08a\",\n  \"@type\": \"did:sov:8700e296a1458aad0d93;spec/meetings/1.0/response\",\n  \"~thread\": {\"thid\": \"e2987006-a18a-4544-9596-5ad0d9390c8b\"},\n  \"agree\": true,\n  \"comment\": \"See you there!\"\n}\n
    This chunk of JSON is defined independent of any particular message schema, but is understood to be available in all DIDComm schemas.

    "},{"location":"aip2/0011-decorators/#basic-conventions","title":"Basic Conventions","text":"

    Decorators are defined in RFCs that document a general pattern such as message threading RFC or message localization. The documentation for a decorator explains its semantics and offers examples.

    Decorators are recognized by name. The name must begin with the ~ character (which is reserved in DIDComm messages for decorator use), and be a short, single-line string suitable for use as a JSON attribute name.

    Decorators may be simple key:value pairs \"~foo\": \"bar\". Or they may associate a key with a more complex structure:

    \"~thread\": {\n  \"thid\": \"e2987006-a18a-4544-9596-5ad0d9390c8b\",\n  \"pthid\": \"0c8be298-45a1-48a4-5996-d0d95a397006\",\n  \"sender_order\": 0\n}\n
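    Because every decorator name begins with the reserved ~ character, generic handling code can peel decorators off a message without knowing its schema. A minimal sketch (the function name is illustrative):

```python
def split_decorators(msg: dict):
    """Separate decorator fields (keys starting with ~) from content fields."""
    decorators = {k: v for k, v in msg.items() if k.startswith('~')}
    content = {k: v for k, v in msg.items() if not k.startswith('~')}
    return decorators, content
```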

    Decorators should be thought of as supplementary to the problem-domain-specific fields of a message, in that they describe general communication issues relevant to a broad array of message types. Entities that handle messages should treat all unrecognized fields as valid but meaningless, and decorators are no exception. Thus, software that doesn't recognize a decorator should ignore it.

    However, this does not mean that decorators are necessarily optional. Some messages may intend something tied so tightly to a decorator's semantics that the decorator effectively becomes required. An example of this is the relationship between a general error reporting mechanism and the ~thread decorator: it's not very helpful to report errors without the context that a thread provides.

    Because decorators are general by design and intent, we don't expect namespacing to be a major concern. The community agrees on decorators that everybody will recognize, and they acquire global scope upon acceptance. Their globalness is part of their utility. Effectively, decorator names are like reserved words in a shared public language of messages.

    Namespacing is also supported, as we may discover legitimate uses. When namespaces are desired, dotted name notation is used, as in ~mynamespace.mydecoratorname. We may elaborate this topic more in the future.

    Decorators are orthogonal to JSON-LD constructs in DIDComm messages.

    "},{"location":"aip2/0011-decorators/#versioning","title":"Versioning","text":"

    We hope that community-defined decorators are very stable. However, new fields (a non-breaking change) might need to be added to complex decorators; occasionally, more significant changes might be necessary as well. Therefore, decorators do support semver-style versioning, but in a form that allows details to be ignored unless or until they become important. The rules are:

    1. As with all other aspects of DIDComm messages, unrecognized fields in decorators must be ignored.
    2. Version information can be appended to the name of a decorator, as in ~mydecorator/1. Only a major version (never minor or patch) is used, since:
      • Minor version variations should not break decorator handling code.
      • The dot character . is reserved for namespacing within field names.
      • The extra complexity is not worth the small amount of value it might add.
    3. A decorator without a version is considered to be synonymous with version 1.0, and the version-less form is preferred. This allows version numbers to be added only in the uncommon cases where they are necessary.
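    These rules make version handling trivial to implement. A sketch under the rules above (the helper name is illustrative):

```python
def parse_decorator_name(name: str):
    """Split a decorator name such as '~thread' or '~mydecorator/2' into
    (base_name, major_version). A version-less name means version 1."""
    base, sep, version = name.partition('/')
    return base, (int(version) if sep else 1)
```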
    "},{"location":"aip2/0011-decorators/#decorator-scope","title":"Decorator Scope","text":"

    A decorator may be understood to decorate (add semantics) at several different scopes. The discussion thus far has focused on message decorators, and this is by far the most important scope to understand. But there are more possibilities.

    Suppose we wanted to decorate an individual field. This can be done with a field decorator, which is a sibling field to the field it decorates. The name of the decorated field is combined with a decorator suffix, as follows:

    {\n  \"note\": \"Let's have a picnic.\",\n  \"note~l10n\": { ... }\n}\n
    In this example, taken from the localization pattern, note~l10n decorates note.
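    Generic code can pair field decorators with the fields they decorate by splitting key names on the ~ character; a sketch (illustrative helper, not part of the RFC):

```python
def field_decorators(msg: dict) -> dict:
    """Map each decorated field name to its field-level decorators.

    A key like 'note~l10n' decorates the sibling field 'note'; keys that
    begin with ~ are message-level decorators and are ignored here.
    """
    result = {}
    for key, value in msg.items():
        if '~' in key and not key.startswith('~'):
            field, _, suffix = key.partition('~')
            result.setdefault(field, {})['~' + suffix] = value
    return result
```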

    Besides a single message or a single field, consider the following scopes as decorator targets:

    "},{"location":"aip2/0011-decorators/#reference","title":"Reference","text":"

    This section of this RFC will be kept up-to-date with a list of globally accepted decorators, and links to the RFCs that define them.

    "},{"location":"aip2/0011-decorators/#drawbacks","title":"Drawbacks","text":"

    By having fields that are meaningful yet not declared in core schemas, we run the risk that parsing and validation routines will fail to enforce details that are significant but invisible. We also accept the possibility that interop may look good on paper, but fail due to different understandings of important metadata.

    We believe this risk will take care of itself, for the most part, as real-life usage accumulates and decorators become a familiar and central part of the thinking for developers who work with agent-to-agent communication.

    "},{"location":"aip2/0011-decorators/#rationale-and-alternatives","title":"Rationale and alternatives","text":"

    There is ongoing work in the #indy-semantics channel on Rocket.Chat to explore the concept of overlays. These are layers of additional meaning that accumulate above a schema base. Decorators as described here are quite similar in intent. There are some subtle differences, though. The most interesting is that decorators as described here may be applied to things that are not schema-like (e.g., to a message family as a whole, or to a connection, not just to an individual message).

    We may be able to resolve these two worldviews, such that decorators are viewed as overlays and inherit some overlay goodness as a result. However, it is unlikely that decorators will change significantly in form or substance as a result. We thus believe the current mental model is already RFC-worthy, and represents a reasonable foundation for immediate use.

    "},{"location":"aip2/0011-decorators/#prior-art","title":"Prior art","text":"

    See references to similar features in programming languages like Java, C#, and Python, mentioned above.

    See also this series of blog posts about semantic gaps and the need to manage intent in a declarative style: [ Lacunas Everywhere, Bridging the Lacuna Humana, Introducing Marks, Mountains, Molehills, and Markedness ]

    "},{"location":"aip2/0011-decorators/#unresolved-questions","title":"Unresolved questions","text":""},{"location":"aip2/0011-decorators/#implementations","title":"Implementations","text":"

    The following lists the implementations (if any) of this RFC. Please do a pull request to add your implementation. If the implementation is open source, include a link to the repo or to the implementation within the repo. Please be consistent in the \"Name\" field so that a mechanical processing of the RFCs can generate a list of all RFCs supported by an Aries implementation.

    Name / Link Implementation Notes Aries RFCs: RFC 0008, RFC 0017, RFC 0015, RFC 0023, RFC 0043, RFC 0056, RFC 0075 many implemented RFCs depend on decorators... Indy Cloud Agent - Python message threading Aries Framework - .NET message threading Streetcred.id message threading Aries Cloud Agent - Python message threading, attachments Aries Static Agent - Python message threading Aries Framework - Go message threading Connect.Me message threading Verity message threading Aries Protocol Test Suite message threading"},{"location":"aip2/0015-acks/","title":"Aries RFC 0015: ACKs","text":""},{"location":"aip2/0015-acks/#summary","title":"Summary","text":"

    Explains how one party can send acknowledgment messages (ACKs) to confirm receipt and clarify the status of complex processes.

    "},{"location":"aip2/0015-acks/#change-log","title":"Change log","text":""},{"location":"aip2/0015-acks/#motivation","title":"Motivation","text":"

    An acknowledgment or ACK is one of the most common procedures in protocols of all types. We need a flexible, powerful, and easy way to send such messages in agent-to-agent interactions.

    "},{"location":"aip2/0015-acks/#tutorial","title":"Tutorial","text":"

    Confirming a shared understanding matters whenever independent parties interact. We buy something on Amazon; moments later, our email client chimes to tell us of a new message with subject \"Thank you for your recent order.\" We verbally accept a new job, but don't rest easy until we've also emailed the signed offer letter back to our new boss. We change a password on an online account, and get a text at our recovery phone number so both parties know the change truly originated with the account's owner.

    When formal acknowledgments are missing, we get nervous. And rightfully so; most of us have a story of a package that was lost in the mail, or a web form that didn't submit the way we expected.

    Agents interact in very complex ways. They may use multiple transport mechanisms, across varied protocols, through long stretches of time. While we usually expect messages to arrive as sent, and to be processed as expected, a vital tool in the agent communication repertoire is the receipt of acknowledgments to confirm a shared understanding.

    "},{"location":"aip2/0015-acks/#implicit-acks","title":"Implicit ACKs","text":"

    Message threading includes a lightweight, automatic sort of ACK in the form of the ~thread.received_orders field. This allows Alice to report that she has received Bob's recent message that had ~thread.sender_order = N. We expect threading to be best practice in many use cases, and we expect interactions to often happen reliably enough and quickly enough that implicit ACKs provide high value. If you are considering ACKs but are not familiar with that mechanism, make sure you understand it, first. This RFC offers a supplement, not a replacement.

    "},{"location":"aip2/0015-acks/#explicit-acks","title":"Explicit ACKs","text":"

    Despite the goodness of implicit ACKs, there are many circumstances where a reply will not happen immediately. Explicit ACKs can be vital here.

    Explicit ACKS may also be vital at the end of an interaction, when work is finished: a credential has been issued, a proof has been received, a payment has been made. In such a flow, an implicit ACK meets the needs of the party who received the final message, but the other party may want explicit closure. Otherwise they can't know with confidence about the final outcome of the flow.

    Rather than inventing a new \"interaction has been completed successfully\" message for each protocol, an all-purpose ack message type is recommended. It looks like this:

    {\n  \"@type\": \"https://didcomm.org/notification/1.0/ack\",\n  \"@id\": \"06d474e0-20d3-4cbf-bea6-6ba7e1891240\",\n  \"status\": \"OK\",\n  \"~thread\": {\n    \"thid\": \"b271c889-a306-4737-81e6-6b2f2f8062ae\",\n    \"sender_order\": 4,\n    \"received_orders\": {\"did:sov:abcxyz\": 3}\n  }\n}\n

    It may also be appropriate to send an ack at other key points in an interaction (e.g., when a key rotation notice is received).

    "},{"location":"aip2/0015-acks/#adopting-acks","title":"Adopting acks","text":"

    As discussed in 0003: Protocols, a protocol can adopt the ack message into its own namespace. This allows the type of an ack to change from: https://didcomm.org/notification/1.0/ack to something like: https://didcomm.org/otherProtocol/2.0/ack. Thus, message routing logic can see the ack as part of the other protocol, and send it to the relevant handler--but still have all the standardization of generic acks.

    "},{"location":"aip2/0015-acks/#ack-status","title":"ack status","text":"

    The status field in an ack tells whether the ack is final or not with respect to the message being acknowledged. It has 2 predefined values: OK (which means an outcome has occurred, and it was positive); and PENDING, which acknowledges that no outcome is yet known.

    There is not an ack status of FAIL. In the case of a protocol failure a Report Problem message must be used to inform the other party(ies). For more details, see the next section.
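    A receiving agent's handling of these rules might look like the following sketch (the function name and return values are illustrative):

```python
def handle_notification(msg: dict) -> str:
    """Dispatch a notification-family message.

    Acks carry only OK (positive outcome) or PENDING (no outcome yet);
    failures arrive as problem_report messages, never as a FAIL ack.
    """
    mtype = msg['@type']
    if mtype.endswith('/ack'):
        status = msg.get('status')
        if status == 'OK':
            return 'outcome reached'
        if status == 'PENDING':
            return 'no outcome yet'
        raise ValueError('unexpected ack status: %r' % status)
    if mtype.endswith('/problem_report'):
        return 'failure reported'
    raise ValueError('not a notification message')
```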

    In addition, more advanced ack usage is possible. See the details in the Reference section.

    "},{"location":"aip2/0015-acks/#relationship-to-problem-report","title":"Relationship to problem-report","text":"

    Negative outcomes do not necessarily mean that something bad happened; perhaps Alice comes to hope that Bob rejects her offer to buy his house because she's found something better--and Bob does that, without any error occurring. This is not a FAIL in a problem sense; it's a FAIL in the sense that the offer to buy did not lead to the outcome Alice intended when she sent it.

    This raises the question of errors. Any time an unexpected problem arises, best practice is to report it to the sender of the message that triggered the problem. This is the subject of the problem reporting mechanism.

    A problem_report is inherently a sort of ACK. In fact, the ack message type and the problem_report message type are both members of the same notification message family. Both help a sender learn about status. Therefore, the need for an ack with a status of FAIL is met by a problem_report message.

    However, there is some subtlety in the use of the two types of messages. Some acks may be sent before a final outcome, so a final problem_report may not be enough. As well, an ack request may be sent after a previous ack or problem_report was lost in transit. Because of these caveats, developers whose code creates or consumes acks should be thoughtful about where the two message types overlap, and where they do not. Carelessness here is likely to cause subtle, hard-to-duplicate surprises from time to time.

    "},{"location":"aip2/0015-acks/#custom-acks","title":"Custom ACKs","text":"

    This mechanism cannot address all possible ACK use cases. Some ACKs may require custom data to be sent, and some acknowledgment schemes may be more sophisticated or fine-grained than the simple settings offered here. In such cases, developers should write their own ACK message type(s) and maybe their own decorators. However, reusing the field names and conventions in this RFC may still be desirable, if there is significant overlap in the concepts.

    "},{"location":"aip2/0015-acks/#requesting-acks","title":"Requesting ACKs","text":"

    A decorator, ~please_ack, allows one agent to request an ad hoc ACK from another agent. This is described in the 0317-please-ack RFC.

    "},{"location":"aip2/0015-acks/#reference","title":"Reference","text":""},{"location":"aip2/0015-acks/#ack-message","title":"ack message","text":""},{"location":"aip2/0015-acks/#status","title":"status","text":"

    Required, values OK or PENDING. As discussed above, this tells whether the ack is final or not with respect to the message being acknowledged.

    "},{"location":"aip2/0015-acks/#threadthid","title":"~thread.thid","text":"

    Required. This links the ack back to the message that requested it.

    All other fields in an ack are present or absent per requirements of ordinary messages.
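    Together, the two required elements can be checked with a validator like this sketch (illustrative, not normative):

```python
def validate_ack(msg: dict) -> None:
    """Check the required fields of an ack message."""
    if msg.get('status') not in ('OK', 'PENDING'):
        raise ValueError('ack requires a status of OK or PENDING')
    thread = msg.get('~thread') or {}
    if not thread.get('thid'):
        raise ValueError('ack requires ~thread.thid')
```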

    "},{"location":"aip2/0015-acks/#drawbacks-and-alternatives","title":"Drawbacks and Alternatives","text":"

    None identified.

    "},{"location":"aip2/0015-acks/#prior-art","title":"Prior art","text":"

    See notes above about the implicit ACK mechanism in ~thread.received_orders.

    "},{"location":"aip2/0015-acks/#unresolved-questions","title":"Unresolved questions","text":""},{"location":"aip2/0015-acks/#implementations","title":"Implementations","text":"

    The following lists the implementations (if any) of this RFC. Please do a pull request to add your implementation. If the implementation is open source, include a link to the repo or to the implementation within the repo. Please be consistent in the \"Name\" field so that a mechanical processing of the RFCs can generate a list of all RFCs supported by an Aries implementation.

    Name / Link Implementation Notes RFC 0036: Issue Credential Protocol ACKs are adopted by this protocol. RFC 0037: Present Proof Protocol ACKs are adopted by this protocol. RFC 0193: Coin Flip Protocol ACKs are adopted as a subprotocol. Aries Cloud Agent - Python Contributed by the Government of British Columbia."},{"location":"aip2/0017-attachments/","title":"Aries RFC 0017: Attachments","text":""},{"location":"aip2/0017-attachments/#summary","title":"Summary","text":"

    Explains the three canonical ways to attach data to an agent message.

    "},{"location":"aip2/0017-attachments/#motivation","title":"Motivation","text":"

    DIDComm messages use a structured format with a defined schema and a small inventory of scalar data types (string, number, date, etc). However, it will be quite common for messages to supplement formalized exchange with arbitrary data--images, documents, or types of media not yet invented.

    We need a way to \"attach\" such content to DIDComm messages. This method must be flexible, powerful, and usable without requiring new schema updates for every dynamic variation.

    "},{"location":"aip2/0017-attachments/#tutorial","title":"Tutorial","text":""},{"location":"aip2/0017-attachments/#messages-versus-data","title":"Messages versus Data","text":"

    Before explaining how to associate data with a message, it is worth pondering exactly how these two categories of information differ. It is common for newcomers to DIDComm to argue that messages are just data, and vice versa. After all, any data can be transmitted over DIDComm; doesn't that turn it into a message? And any message can be saved; doesn't that make it data?

    While it is true that messages and data are highly related, some semantic differences matter:

    Some examples:

    The line between these two concepts may not be perfectly crisp in all cases, and that is okay. It is clear enough, most of the time, to provide context for the central question of this RFC, which is:

    How do we send data along with messages?

    "},{"location":"aip2/0017-attachments/#3-ways","title":"3 Ways","text":"

    Data can be \"attached\" to DIDComm messages in 3 ways:

    1. Inlining
    2. Embedding
    3. Appending
    "},{"location":"aip2/0017-attachments/#inlining","title":"Inlining","text":"

    In inlining, data is directly assigned as the value paired with a JSON key in a DIDComm message. For example, a message about arranging a rendezvous may inline data about a location:

    This inlined data is in Google Maps pinning format. It has a meaning at rest, outside the message that conveys it, and the versioning of its structure may evolve independently of the versioning of the rendezvous protocol.

    Only JSON data can be inlined, since any other data format would break JSON format rules.

    "},{"location":"aip2/0017-attachments/#embedding","title":"Embedding","text":"

    In embedding, a JSON data structure called an attachment descriptor is assigned as the value paired with a JSON key in a DIDComm message. (Or, an array of attachment descriptors could be assigned.) By convention, the key name for such attachment fields ends with ~attach, making it a field-level decorator that can share common handling logic in agent code. The attachment descriptor structure describes the MIME type and other properties of the data, in much the same way that MIME headers and body describe and contain an attachment in an email message. Given an imaginary protocol that photographers could use to share their favorite photo with friends, the embedded data might manifest like this:
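    A sketch of such a message, with an attachment descriptor embedded under a ~attach-suffixed key (the field values are illustrative):

```json
{
  "@id": "a7c1b6e2-9f30-4d58-8b21-6c0f3e57a914",
  "@type": "did:example:12345...;spec/photo-share/1.0/share",
  "comment": "Check out my favorite photo!",
  "photo~attach": {
    "@id": "attachment-1",
    "mime-type": "image/jpeg",
    "filename": "sunset.jpg",
    "data": {
      "base64": "/9j/4AAQSkZJRg..."
    }
  }
}
```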

    Embedding is a less direct mechanism than inlining, because the data is no longer readable by a human inspecting the message; it is base64url-encoded instead. A benefit of this approach is that the data can be any MIME type instead of just JSON, and that the data comes with useful metadata that can facilitate saving it as a separate file.
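    Encoding and decoding such embedded data is straightforward with standard-library base64url routines. A sketch follows; since padding handling may differ between implementations, the decoder tolerates stripped padding:

```python
import base64

def encode_attachment(raw: bytes) -> str:
    """base64url-encode raw bytes for a descriptor's data.base64 field."""
    return base64.urlsafe_b64encode(raw).decode('ascii')

def decode_attachment(encoded: str) -> bytes:
    """Decode a data.base64 value, restoring padding if it was stripped."""
    padded = encoded + '=' * (-len(encoded) % 4)
    return base64.urlsafe_b64decode(padded)
```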

    "},{"location":"aip2/0017-attachments/#appending","title":"Appending","text":"

    Appending is accomplished using the ~attach decorator, which can be added to any message to include arbitrary data. The decorator is an array of attachment descriptor structures (the same structure used for embedding). For example, a message that conveys evidence found at a crime scene might include the following decorator:

    "},{"location":"aip2/0017-attachments/#choosing-the-right-approach","title":"Choosing the right approach","text":"

    These methods for attaching sit along a continuum that is somewhat like the continuum between strong, statically typed languages versus dynamic, duck-typed languages in programming. The more strongly typed the attachments are, the more strongly bound the attachments are to the protocol that conveys them. Each choice has advantages and disadvantages.

    Inlined data is strongly typed; the schema for its associated message must specify the name of the data field, plus what type of data it contains. Its format is always some kind of JSON--often JSON-LD with a @type and/or @context field to provide greater clarity and some independence of versioning. Simple and small data is the best fit for inlining. As mentioned earlier, the Connection Protocol inlines a DID Doc in its connection_request and connection_response messages.

    Embedded data is still associated with a known field in the message schema, but it can have a broader set of possible formats. A credential exchange protocol might embed a credential in the final message that does credential issuance.

    Appended attachments are the most flexible but also the hardest to run through semantically sophisticated processing. They do not require any specific declaration in the schema of a message, although they can be referenced in fields defined by the schema via their nickname (see below). A protocol that needs to pass an arbitrary collection of artifacts without strong knowledge of their semantics might find this helpful, as in the example mentioned above, where scheduling a venue causes various human-usable payloads to be delivered.

    "},{"location":"aip2/0017-attachments/#ids-for-attachments","title":"IDs for attachments","text":"

    The @id field within an attachment descriptor is used to refer unambiguously to an appended (or, less ideally, embedded) attachment, and works like an HTML anchor. It is resolved relative to the root @id of the message and only has to be unique within a message. For example, imagine a fictional message type used to apply for an art scholarship that requires photos of art demonstrating techniques A, B, and C. We could have 3 different attachment descriptors--but what if the same work of art demonstrates both technique A and technique B? We don't want to attach the same photo twice...

    What we can do is stipulate that the datatype of A_pic, B_pic, and C_pic is an attachment reference, and that the references will point to appended attachments. A fragment of the result might look like this:
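    As a hedged sketch of this pattern, the fragment below builds such a message in Python. The `#id` reference syntax, the message type, and the `resolve` helper are illustrative assumptions (the RFC's own example is not reproduced in this chunk); only the field names A_pic, B_pic, C_pic and the ~attach decorator come from the text above.

    ```python
    import json

    # Hypothetical fragment of the art-scholarship application message: the same
    # appended attachment (@id "attachment-1") is referenced by two fields, so the
    # photo bytes travel only once even though two techniques point at it.
    message = {
        "@type": "https://didcomm.org/scholarship_apply/1.0/apply",  # illustrative type
        "A_pic": "#attachment-1",   # attachment reference, resolved within this message
        "B_pic": "#attachment-1",   # the same work of art demonstrates technique B
        "C_pic": "#attachment-2",
        "~attach": [
            {"@id": "attachment-1", "mime-type": "image/jpeg", "data": {"base64": "..."}},
            {"@id": "attachment-2", "mime-type": "image/jpeg", "data": {"base64": "..."}},
        ],
    }

    def resolve(msg: dict, ref: str) -> dict:
        """Resolve a '#<id>' nickname reference against the appended attachments."""
        wanted = ref.lstrip("#")
        return next(a for a in msg["~attach"] if a["@id"] == wanted)
    ```

    Because A_pic and B_pic hold references rather than data, resolving both yields the very same descriptor object.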

    Another example of nickname use appeared in the first example of appended attachments above, where the notes field referred to the @ids of the various attachments.

    This indirection offers several benefits:

    We could use this same technique with embedded attachments (that is, assign a nickname to an embedded attachment, and refer to that nickname in another field where attached data could be embedded), but this is not considered best practice. The reason is that it requires a field in the schema to have two possible data types--one a string that's a nickname reference, and one an attachment descriptor. Generally, we like fields to have a single datatype in a schema.

    "},{"location":"aip2/0017-attachments/#content-formats","title":"Content Formats","text":"

    There are multiple ways to include content in an attachment. Only one method should be used per attachment.

    "},{"location":"aip2/0017-attachments/#base64url","title":"base64url","text":"

    This content encoding is an obvious choice for any content other than JSON. You can embed content of any type using this method. Examples are plentiful throughout the document. Note that this encoding is always base64url encoding, not plain base64, and that padding is not required. Code that reads this encoding SHOULD tolerate both padded and unpadded input, and both base64 and base64url alphabets, equally well, but code that writes this encoding SHOULD omit the padding to guarantee alignment with encoding rules in the JOSE (JW*) family of specs.
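    A minimal reader/writer pair satisfying these SHOULDs might look like the following Python sketch (standard library only; function names are illustrative, not part of any Aries API):

    ```python
    import base64

    def b64url_decode(data: str) -> bytes:
        """Tolerate padded or unpadded input, and plain base64 as well as base64url."""
        data = data.replace("+", "-").replace("/", "_")   # normalize base64 -> base64url
        data += "=" * (-len(data) % 4)                    # restore any stripped padding
        return base64.urlsafe_b64decode(data)

    def b64url_encode(raw: bytes) -> str:
        """Emit unpadded base64url, matching the JOSE (JW*) encoding rules."""
        return base64.urlsafe_b64encode(raw).decode("ascii").rstrip("=")
    ```

    With these helpers, `b64url_decode("aGk")` and `b64url_decode("aGk=")` both yield `b"hi"`, while `b64url_encode` never emits padding.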

    "},{"location":"aip2/0017-attachments/#json","title":"json","text":"

    If you are embedding an attachment that is JSON, you can embed it directly in JSON format to make access easier, by replacing data.base64 with data.json, where the value assigned to data.json is the attached content:

    This is an overly trivial example of GeoJSON, but hopefully it illustrates the technique. In cases where there is no mime type to declare, it may be helpful to use JSON-LD's @type construct to clarify the specific flavor of JSON in the embedded attachment.

    "},{"location":"aip2/0017-attachments/#links","title":"links","text":"

    All examples discussed so far include an attachment by value--that is, the attachment's bytes are directly inlined in the message in some way. This is a useful mode of data delivery, but it is not the only mode.

    Another way that attachment data can be incorporated is by reference. For example, you can link to the content on a web server by replacing data.base64 or data.json with data.links in an attachment descriptor:

    When you provide such a link, you are creating a logical association between the message and an attachment that can be fetched separately. This makes it possible to send brief descriptors of attachments and to make the downloading of the heavy content optional (or parallelizable) for the recipient.

    The links field is plural (an array) to allow multiple locations to be offered for the same content. This allows an agent to fetch attachments using whichever mechanism(s) are best suited to its individual needs and capabilities.

    "},{"location":"aip2/0017-attachments/#supported-uri-types","title":"Supported URI Types","text":"

    The set of supported URI types in an attachment link is limited to:

    Additional URI types may be added via updates to this RFC.

    If an attachment link with an unsupported URI is received, the agent SHOULD respond with a Problem Report indicating the problem.

    An ecosystem (coordinating set of agents working in a specific business area) may agree to support other URI types within that ecosystem. As such, implementing a mechanism to easily add support for other attachment link URI types might be useful, but is not required.

    "},{"location":"aip2/0017-attachments/#signing-attachments","title":"Signing Attachments","text":"

    In some cases it may be desirable to sign an attachment in addition to or instead of signing the message as a whole. Consider a home-buying protocol; the home inspection needs to be signed even when it is removed from a messaging flow. Attachments may also be signed by a party separate from the sender of the message, or using a different signing key when the sender is performing key rotation.

    Embedded and appended attachments support signatures by the addition of a data.jws field containing a signature in JWS (RFC 7515) format with Detached Content. The payload of the JWS is the raw bytes of the attachment, appropriately base64url-encoded per JWS rules. If these raw bytes are incorporated by value in the DIDComm message, they are already base64url-encoded in data.base64 and are thus directly substitutable for the suppressed data.jws.payload field; if they are externally referenced, then the bytes must be fetched via the URI in data.links and base64url-encoded before the JWS can be fully reconstituted. Signatures over inlined JSON attachments are not currently defined as this depends upon a canonical serialization for the data.
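    The signing-input reconstruction described above can be sketched in standard-library Python. Per RFC 7515, the signed bytes are `BASE64URL(protected header) || '.' || BASE64URL(payload)`; the helper names below are illustrative, and the actual signature verification (which requires an EdDSA library) is omitted:

    ```python
    import base64

    def b64url_no_pad(raw: bytes) -> str:
        return base64.urlsafe_b64encode(raw).decode("ascii").rstrip("=")

    def jws_signing_input(attachment: dict) -> bytes:
        """Reconstitute the JWS signing input for an attachment's detached signature.

        For attachments carried by value, data.base64 already *is* the payload
        encoding (modulo padding); for data.links, the bytes must be fetched and
        base64url-encoded first (not shown here).
        """
        data = attachment["data"]
        payload_b64 = data["base64"].rstrip("=")   # JWS payloads are unpadded
        protected = data["jws"]["protected"]
        return (protected + "." + payload_b64).encode("ascii")

    # Illustrative attachment -- the protected header decodes to {"alg":"EdDSA"},
    # and the signature value is a placeholder, not a real signature:
    att = {"data": {"base64": b64url_no_pad(b"report bytes"),
                    "jws": {"protected": "eyJhbGciOiJFZERTQSJ9",
                            "signature": "..."}}}
    signing_input = jws_signing_input(att)
    ```

    The resulting bytes are what a verifier would pass, together with the signature and the key from kid, to an EdDSA verification routine.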

    Sample JWS-signed attachment:

    {\n  \"@type\": \"https://didcomm.org/xhomebuy/1.0/home_insp\",\n  \"inspection_date\": \"2020-03-25\",\n  \"inspection_address\": \"123 Villa de Las Fuentes, Toledo, Spain\",\n  \"comment\": \"Here's that report you asked for.\",\n  \"report~attach\": {\n    \"mime-type\": \"application/pdf\",\n    \"filename\": \"Garcia-inspection-March-25.pdf\",\n    \"data\": {\n      \"base64\": \"eyJ0eXAiOiJKV1QiLA0KICJhbGciOiJIUzI1NiJ... (bytes omitted to shorten)\",\n      \"jws\": {\n        // payload: ...,  <-- omitted: refer to base64 content when validating\n        \"header\": {\n          \"kid\": \"did:key:z6MkmjY8GnV5i9YTDtPETC2uUAW6ejw3nk5mXF5yci5ab7th\"\n        },\n        \"protected\": \"eyJhbGciOiJFZERTQSIsImlhdCI6MTU4Mzg4... (bytes omitted)\",\n        \"signature\": \"3dZWsuru7QAVFUCtTd0s7uc1peYEijx4eyt5... (bytes omitted)\"\n      }\n    }\n  }\n}\n

    Here, the JWS structure inlines a public key value in did:key format within the unprotected header's kid field. It may also use a DID URL to reference a key within a resolvable DIDDoc. Supported DID URLs should specify a timestamp and/or version for the containing document.

    The JWS protected header consists of at least the following parameter indicating an Edwards curve digital signature:

    {\n  \"alg\": \"EdDSA\"\n}\n

    Additional protected and unprotected header parameters may be included in the JWS and must be ignored by implementations if not specifically supported. Any registered header parameters defined by the JWS RFC must be used according to the specification if present.

    Multiple signatures may be included using the JWS General Serialization syntax. When a single signature is present, the Flattened Serialization syntax should be preferred. Because each JWS contains an unprotected header with the signing key information, the JWS Compact Serialization cannot be supported.

    "},{"location":"aip2/0017-attachments/#size-considerations","title":"Size Considerations","text":"

    DIDComm messages should be small, as a general rule. Just as it's a bad idea to send email messages with multi-GB attachments, it would be bad to send DIDComm messages with huge amounts of data inside them. Remember, a message is about advancing a protocol; usually that can be done without gigabytes or even megabytes of JSON fields. Remember as well that DIDComm messages may be sent over channels having size constraints tied to the transport--an HTTP POST or Bluetooth or NFC or AMQP payload of more than a few MB may be problematic.

    Size pressures in messaging are likely to come from attached data. A good rule of thumb might be to not make DIDComm messages bigger than email or MMS messages--whenever more data needs to be attached, use the inclusion-by-reference technique to allow the data to be fetched separately.

    "},{"location":"aip2/0017-attachments/#security-implications","title":"Security Implications","text":"

    Attachments are a notorious vector for malware and mischief with email. For this reason, agents that support attachments MUST perform input validation on attachments, and MUST NOT invoke risky actions on attachments until such validation has been performed. The status of input validation with respect to attachment data MUST be reflected in the Message Trust Context associated with the data's message.

    "},{"location":"aip2/0017-attachments/#privacy-implications","title":"Privacy Implications","text":"

    When attachments are inlined, they enjoy the same security and transmission guarantees as all agent communication. However, given the right context, a large inlined attachment may be recognizable by its size, even if it is carefully encrypted.

    If attachment content is fetched from an external source, then new complications arise. The security guarantees may change. Data streamed from a CDN may be observable in flight. URIs may be correlating. Content may not be immutable or tamper-resistant.

    However, these issues are not necessarily a problem. If a DIDComm message wants to attach a 4 GB ISO file of a Linux distribution--say, via a torrent link--it may be perfectly fine to do so in the clear: downloading it is unlikely to introduce strong correlation, encryption is unnecessary, and the torrent's content hashes prevent malicious modification.

    Code that handles attachments will need to use wise policy to decide whether attachments are presented in a form that meets its needs.

    "},{"location":"aip2/0017-attachments/#reference","title":"Reference","text":""},{"location":"aip2/0017-attachments/#attachment-descriptor-structure","title":"Attachment Descriptor structure","text":""},{"location":"aip2/0017-attachments/#drawbacks","title":"Drawbacks","text":"

    By providing 3 different choices, we impose additional complexity on agents that will receive messages. They have to handle attachments in 3 different modes.

    "},{"location":"aip2/0017-attachments/#rationale-and-alternatives","title":"Rationale and alternatives","text":"

    Originally, we only proposed the most flexible method of attaching--appending. However, feedback from the community suggested that stronger binding to schema was desirable. Inlining was independently invented, and is suggested by JSON-LD anyway. Embedding without appending eliminates some valuable features such as unnamed and undeclared ad-hoc attachments. So we ended up wanting to support all 3 modes.

    "},{"location":"aip2/0017-attachments/#prior-art","title":"Prior art","text":"

    Multipart MIME (see RFCs 822, 1341, and 2045) defines a mechanism somewhat like this. Since we are using JSON instead of email messages as the core model, we can't use these mechanisms directly. However, they are an inspiration for what we are showing here.

    "},{"location":"aip2/0017-attachments/#unresolved-questions","title":"Unresolved questions","text":""},{"location":"aip2/0017-attachments/#implementations","title":"Implementations","text":"

    The following lists the implementations (if any) of this RFC. Please do a pull request to add your implementation. If the implementation is open source, include a link to the repo or to the implementation within the repo. Please be consistent in the \"Name\" field so that a mechanical processing of the RFCs can generate a list of all RFCs supported by an Aries implementation.

    Name / Link Implementation Notes Aries Cloud Agent - Python in credential exchange Streetcred.id Commercial mobile and web app built using Aries Framework - .NET"},{"location":"aip2/0019-encryption-envelope/","title":"Aries RFC 0019: Encryption Envelope","text":""},{"location":"aip2/0019-encryption-envelope/#summary","title":"Summary","text":"

    There are two layers of messages that combine to enable interoperable self-sovereign agent-to-agent communication. At the highest level are DIDComm Plaintext Messages - messages sent between identities to accomplish some shared goal (e.g., establishing a connection, issuing a verifiable credential, sharing a chat). DIDComm Plaintext Messages are delivered via the second, lower layer of messaging - DIDComm Encrypted Envelopes. A DIDComm Encrypted Envelope is a wrapper (envelope) around a plaintext message to permit secure sending and routing. A plaintext message going from its sender to its receiver passes through many agents, and an encryption envelope is used for each hop of the journey.

    This RFC describes the DIDComm Encrypted Envelope format and the pack() and unpack() functions that implement this format.

    "},{"location":"aip2/0019-encryption-envelope/#motivation","title":"Motivation","text":"

    Encryption envelopes use a standard format built on JSON Web Encryption - RFC 7516. This format is not captive to Aries; it requires no special Aries worldview or Aries dependencies to implement. Rather, it is a general-purpose solution to the question of how to encrypt, decrypt, and route messages as they pass over any transport(s). By documenting the format here, we hope to provide a point of interoperability for developers of agents inside and outside the Aries ecosystem.

    We also document how Aries implements its support for the DIDComm Encrypted Envelope format through the pack() and unpack() functions. For developers of Aries, this is a sort of design doc; for those who want to implement the format in other tech stacks, it may be a useful reference.

    "},{"location":"aip2/0019-encryption-envelope/#tutorial","title":"Tutorial","text":""},{"location":"aip2/0019-encryption-envelope/#assumptions","title":"Assumptions","text":"

    We assume that each sending agent knows:

    The assumptions can be made because either the message is being sent to an agent within the sending agent's domain and so the sender knows the internal configuration of agents, or the message is being sent outside the sending agent's domain and interoperability requirements are in force to define the sending agent's behaviour.

    "},{"location":"aip2/0019-encryption-envelope/#example-scenario","title":"Example Scenario","text":"

    The example of Alice and Bob's sovereign domains is used for illustrative purposes in defining this RFC.

    In the diagram above:

    For the purposes of this discussion we are defining the Encryption Envelope agent message flow to be:

    1 \u2192 2 \u2192 8 \u2192 9 \u2192 3 \u2192 4

    However, that flow is just one of several that could match this configuration. What we know for sure is that:

    "},{"location":"aip2/0019-encryption-envelope/#encrypted-envelopes","title":"Encrypted Envelopes","text":"

    An encrypted envelope is used to transport any plaintext message from one agent directly to another. In our example message flow above, there are five encrypted envelopes sent, one for each hop in the flow. The process to send an encrypted envelope consists of the following steps:

    This is repeated with each hop, but the encrypted envelopes are nested, such that the plaintext is never visible until it reaches its final recipient.

    "},{"location":"aip2/0019-encryption-envelope/#implementation","title":"Implementation","text":"

    We will describe the pack and unpack algorithms, and their output, in terms of Aries' initial implementation, which may evolve over time. Other implementations could be built, but they would need to emit and consume similar inputs and outputs.

    The data structures emitted and consumed by these algorithms are described in a formal schema.

    "},{"location":"aip2/0019-encryption-envelope/#authcrypt-mode-vs-anoncrypt-mode","title":"Authcrypt mode vs. Anoncrypt mode","text":"

    When packing and unpacking are done in a way that the sender is anonymous, we say that we are in anoncrypt mode. When the sender is revealed, we are in authcrypt mode. Authcrypt mode reveals the sender to the recipient only; it is not the same as a non-repudiable signature. See the RFC about non-repudiable signatures, and this discussion about the theory of non-repudiation.

    "},{"location":"aip2/0019-encryption-envelope/#pack-message","title":"Pack Message","text":""},{"location":"aip2/0019-encryption-envelope/#pack_message-interface","title":"pack_message() interface","text":"

    packed_message = pack_message(wallet_handle, message, receiver_verkeys, sender_verkey)

    "},{"location":"aip2/0019-encryption-envelope/#pack_message-params","title":"pack_message() Params:","text":""},{"location":"aip2/0019-encryption-envelope/#pack_message-return-value-authcrypt-mode","title":"pack_message() return value (Authcrypt mode)","text":"

    This is an example of an output message encrypted for two verkeys using Authcrypt.

    {\n    \"protected\": \"eyJlbmMiOiJ4Y2hhY2hhMjBwb2x5MTMwNV9pZXRmIiwidHlwIjoiSldNLzEuMCIsImFsZyI6IkF1dGhjcnlwdCIsInJlY2lwaWVudHMiOlt7ImVuY3J5cHRlZF9rZXkiOiJMNVhEaEgxNVBtX3ZIeFNlcmFZOGVPVEc2UmZjRTJOUTNFVGVWQy03RWlEWnl6cFJKZDhGVzBhNnFlNEpmdUF6IiwiaGVhZGVyIjp7ImtpZCI6IkdKMVN6b1d6YXZRWWZOTDlYa2FKZHJRZWpmenRONFhxZHNpVjRjdDNMWEtMIiwiaXYiOiJhOEltaW5zdFhIaTU0X0otSmU1SVdsT2NOZ1N3RDlUQiIsInNlbmRlciI6ImZ0aW13aWlZUkc3clJRYlhnSjEzQzVhVEVRSXJzV0RJX2JzeERxaVdiVGxWU0tQbXc2NDE4dnozSG1NbGVsTThBdVNpS2xhTENtUkRJNHNERlNnWkljQVZYbzEzNFY4bzhsRm9WMUJkREk3ZmRLT1p6ckticUNpeEtKaz0ifX0seyJlbmNyeXB0ZWRfa2V5IjoiZUFNaUQ2R0RtT3R6UkVoSS1UVjA1X1JoaXBweThqd09BdTVELTJJZFZPSmdJOC1ON1FOU3VsWXlDb1dpRTE2WSIsImhlYWRlciI6eyJraWQiOiJIS1RBaVlNOGNFMmtLQzlLYU5NWkxZajRHUzh1V0NZTUJ4UDJpMVk5Mnp1bSIsIml2IjoiRDR0TnRIZDJyczY1RUdfQTRHQi1vMC05QmdMeERNZkgiLCJzZW5kZXIiOiJzSjdwaXU0VUR1TF9vMnBYYi1KX0pBcHhzYUZyeGlUbWdwWmpsdFdqWUZUVWlyNGI4TVdtRGR0enAwT25UZUhMSzltRnJoSDRHVkExd1Z0bm9rVUtvZ0NkTldIc2NhclFzY1FDUlBaREtyVzZib2Z0d0g4X0VZR1RMMFE9In19XX0=\",\n    \"iv\": \"ZqOrBZiA-RdFMhy2\",\n    \"ciphertext\": \"K7KxkeYGtQpbi-gNuLObS8w724mIDP7IyGV_aN5AscnGumFd-SvBhW2WRIcOyHQmYa-wJX0MSGOJgc8FYw5UOQgtPAIMbSwVgq-8rF2hIniZMgdQBKxT_jGZS06kSHDy9UEYcDOswtoLgLp8YPU7HmScKHSpwYY3vPZQzgSS_n7Oa3o_jYiRKZF0Gemamue0e2iJ9xQIOPodsxLXxkPrvvdEIM0fJFrpbeuiKpMk\",\n    \"tag\": \"kAuPl8mwb0FFVyip1omEhQ==\"\n}\n

    The base64URL encoded protected decodes to this:

    {\n    \"enc\": \"xchacha20poly1305_ietf\",\n    \"typ\": \"JWM/1.0\",\n    \"alg\": \"Authcrypt\",\n    \"recipients\": [\n        {\n            \"encrypted_key\": \"L5XDhH15Pm_vHxSeraY8eOTG6RfcE2NQ3ETeVC-7EiDZyzpRJd8FW0a6qe4JfuAz\",\n            \"header\": {\n                \"kid\": \"GJ1SzoWzavQYfNL9XkaJdrQejfztN4XqdsiV4ct3LXKL\",\n                \"iv\": \"a8IminstXHi54_J-Je5IWlOcNgSwD9TB\",\n                \"sender\": \"ftimwiiYRG7rRQbXgJ13C5aTEQIrsWDI_bsxDqiWbTlVSKPmw6418vz3HmMlelM8AuSiKlaLCmRDI4sDFSgZIcAVXo134V8o8lFoV1BdDI7fdKOZzrKbqCixKJk=\"\n            }\n        },\n        {\n            \"encrypted_key\": \"eAMiD6GDmOtzREhI-TV05_Rhippy8jwOAu5D-2IdVOJgI8-N7QNSulYyCoWiE16Y\",\n            \"header\": {\n                \"kid\": \"HKTAiYM8cE2kKC9KaNMZLYj4GS8uWCYMBxP2i1Y92zum\",\n                \"iv\": \"D4tNtHd2rs65EG_A4GB-o0-9BgLxDMfH\",\n                \"sender\": \"sJ7piu4UDuL_o2pXb-J_JApxsaFrxiTmgpZjltWjYFTUir4b8MWmDdtzp0OnTeHLK9mFrhH4GVA1wVtnokUKogCdNWHscarQscQCRPZDKrW6boftwH8_EYGTL0Q=\"\n            }\n        }\n    ]\n}\n

    "},{"location":"aip2/0019-encryption-envelope/#pack-output-format-authcrypt-mode","title":"pack output format (Authcrypt mode)","text":"
        {\n        \"protected\": \"b64URLencoded({\n            \"enc\": \"xchachapoly1305_ietf\",\n            \"typ\": \"JWM/1.0\",\n            \"alg\": \"Authcrypt\",\n            \"recipients\": [\n                {\n                    \"encrypted_key\": base64URLencode(libsodium.crypto_box(my_key, their_vk, cek, cek_iv))\n                    \"header\": {\n                          \"kid\": \"base58encode(recipient_verkey)\",\n                           \"sender\" : base64URLencode(libsodium.crypto_box_seal(their_vk, base58encode(sender_vk)),\n                            \"iv\" : base64URLencode(cek_iv)\n                }\n            },\n            ],\n        })\",\n        \"iv\": <b64URLencode(iv)>,\n        \"ciphertext\": b64URLencode(encrypt_detached({'@type'...}, protected_value_encoded, iv, cek),\n        \"tag\": <b64URLencode(tag)>\n    }\n
    "},{"location":"aip2/0019-encryption-envelope/#authcrypt-pack-algorithm","title":"Authcrypt pack algorithm","text":"
    1. generate a content encryption key (symmetrical encryption key)
    2. encrypt the CEK for each recipient's public key using Authcrypt (steps below)
      1. set encrypted_key value to base64URLencode(libsodium.crypto_box(my_key, their_vk, cek, cek_iv))
        • Note in this step we're encrypting the cek, so it can be decrypted by the recipient
      2. set sender value to base64URLencode(libsodium.crypto_box_seal(their_vk, sender_vk_string))
        • Note in this step we're encrypting the sender_verkey to protect sender anonymity
      3. base64URLencode(cek_iv) and set to iv value in the header
        • Note the cek_iv in the header is used for the encrypted_key, whereas iv is for the ciphertext
    3. base64URLencode the protected value
    4. encrypt the message using libsodium.crypto_aead_chacha20poly1305_ietf_encrypt_detached(message, protected_value_encoded, iv, cek); this is the ciphertext.
    5. base64URLencode the iv, ciphertext, and tag then serialize the format into the output format listed above.
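    The five steps above can be sketched in Python as follows. This is a data-flow illustration only: the `stub_*` functions are placeholders standing in for libsodium's crypto_box, crypto_box_seal, and crypto_aead_chacha20poly1305_ietf_encrypt_detached, and perform NO real encryption.

    ```python
    import base64
    import json
    import os

    def b64url(raw: bytes) -> str:
        return base64.urlsafe_b64encode(raw).decode("ascii")

    # Placeholder primitives -- NOT real cryptography, data flow only.
    def stub_crypto_box(my_sk, their_vk, msg, nonce):
        return msg

    def stub_crypto_box_seal(their_vk, msg):
        return msg

    def stub_aead_encrypt_detached(msg, aad, nonce, cek):
        return msg, b"0123456789abcdef"

    def pack_authcrypt(message: bytes, recipient_verkeys, my_sk, sender_vk_b58: str) -> dict:
        cek = os.urandom(32)                               # step 1: content encryption key
        recipients = []
        for their_vk in recipient_verkeys:                 # step 2: wrap the CEK per recipient
            cek_iv = os.urandom(24)
            recipients.append({
                "encrypted_key": b64url(stub_crypto_box(my_sk, their_vk, cek, cek_iv)),
                "header": {
                    "kid": their_vk,                       # base58-encoded recipient verkey
                    "sender": b64url(stub_crypto_box_seal(their_vk, sender_vk_b58.encode())),
                    "iv": b64url(cek_iv),
                },
            })
        protected = b64url(json.dumps({                    # step 3: base64url the protected value
            "enc": "xchacha20poly1305_ietf",
            "typ": "JWM/1.0",
            "alg": "Authcrypt",
            "recipients": recipients,
        }).encode("ascii"))
        iv = os.urandom(12)
        ciphertext, tag = stub_aead_encrypt_detached(message, protected.encode(), iv, cek)  # step 4
        return {"protected": protected, "iv": b64url(iv),                                   # step 5
                "ciphertext": b64url(ciphertext), "tag": b64url(tag)}

    env = pack_authcrypt(b'{"@type": "..."}', ["recipient_vk_b58"], b"my_sk", "sender_vk_b58")
    protected_decoded = json.loads(base64.urlsafe_b64decode(env["protected"]))
    ```

    Decoding `env["protected"]` recovers the same four-field structure shown in the Authcrypt example above, with one recipients entry per verkey.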

    For a reference implementation, see https://github.com/hyperledger/indy-sdk/blob/master/libindy/src/commands/crypto.rs

    "},{"location":"aip2/0019-encryption-envelope/#pack_message-return-value-anoncrypt-mode","title":"pack_message() return value (Anoncrypt mode)","text":"

    This is an example of an output message encrypted for two verkeys using Anoncrypt.

    {\n    \"protected\": \"eyJlbmMiOiJ4Y2hhY2hhMjBwb2x5MTMwNV9pZXRmIiwidHlwIjoiSldNLzEuMCIsImFsZyI6IkFub25jcnlwdCIsInJlY2lwaWVudHMiOlt7ImVuY3J5cHRlZF9rZXkiOiJYQ044VjU3UTF0Z2F1TFcxemdqMVdRWlEwV0RWMFF3eUVaRk5Od0Y2RG1pSTQ5Q0s1czU4ZHNWMGRfTlpLLVNNTnFlMGlGWGdYRnZIcG9jOGt1VmlTTV9LNWxycGJNU3RqN0NSUHNrdmJTOD0iLCJoZWFkZXIiOnsia2lkIjoiR0oxU3pvV3phdlFZZk5MOVhrYUpkclFlamZ6dE40WHFkc2lWNGN0M0xYS0wifX0seyJlbmNyeXB0ZWRfa2V5IjoiaG5PZUwwWTl4T3ZjeTVvRmd0ZDFSVm05ZDczLTB1R1dOSkN0RzRsS3N3dlljV3pTbkRsaGJidmppSFVDWDVtTU5ZdWxpbGdDTUZRdmt2clJEbkpJM0U2WmpPMXFSWnVDUXY0eVQtdzZvaUE9IiwiaGVhZGVyIjp7ImtpZCI6IjJHWG11Q04ySkN4U3FNUlZmdEJITHhWSktTTDViWHl6TThEc1B6R3FRb05qIn19XX0=\",\n    \"iv\": \"M1GneQLepxfDbios\",\n    \"ciphertext\": \"iOLSKIxqn_kCZ7Xo7iKQ9rjM4DYqWIM16_vUeb1XDsmFTKjmvjR0u2mWFA48ovX5yVtUd9YKx86rDVDLs1xgz91Q4VLt9dHMOfzqv5DwmAFbbc9Q5wHhFwBvutUx5-lDZJFzoMQHlSAGFSBrvuApDXXt8fs96IJv3PsL145Qt27WLu05nxhkzUZz8lXfERHwAC8FYAjfvN8Fy2UwXTVdHqAOyI5fdKqfvykGs6fV\",\n    \"tag\": \"gL-lfmD-MnNj9Pr6TfzgLA==\"\n}\n

    The protected data decodes to this:

    {\n    \"enc\": \"xchacha20poly1305_ietf\",\n    \"typ\": \"JWM/1.0\",\n    \"alg\": \"Anoncrypt\",\n    \"recipients\": [\n        {\n            \"encrypted_key\": \"XCN8V57Q1tgauLW1zgj1WQZQ0WDV0QwyEZFNNwF6DmiI49CK5s58dsV0d_NZK-SMNqe0iFXgXFvHpoc8kuViSM_K5lrpbMStj7CRPskvbS8=\",\n            \"header\": {\n                \"kid\": \"GJ1SzoWzavQYfNL9XkaJdrQejfztN4XqdsiV4ct3LXKL\"\n            }\n        },\n        {\n            \"encrypted_key\": \"hnOeL0Y9xOvcy5oFgtd1RVm9d73-0uGWNJCtG4lKswvYcWzSnDlhbbvjiHUCX5mMNYulilgCMFQvkvrRDnJI3E6ZjO1qRZuCQv4yT-w6oiA=\",\n            \"header\": {\n                \"kid\": \"2GXmuCN2JCxSqMRVftBHLxVJKSL5bXyzM8DsPzGqQoNj\"\n            }\n        }\n    ]\n}\n
    "},{"location":"aip2/0019-encryption-envelope/#pack-output-format-anoncrypt-mode","title":"pack output format (Anoncrypt mode)","text":"
        {\n         \"protected\": \"b64URLencoded({\n            \"enc\": \"xchachapoly1305_ietf\",\n            \"typ\": \"JWM/1.0\",\n            \"alg\": \"Anoncrypt\",\n            \"recipients\": [\n                {\n                    \"encrypted_key\": base64URLencode(libsodium.crypto_box_seal(their_vk, cek)),\n                    \"header\": {\n                        \"kid\": base58encode(recipient_verkey),\n                    }\n                },\n            ],\n         })\",\n         \"iv\": b64URLencode(iv),\n         \"ciphertext\": b64URLencode(encrypt_detached({'@type'...}, protected_value_encoded, iv, cek),\n         \"tag\": b64URLencode(tag)\n    }\n
    "},{"location":"aip2/0019-encryption-envelope/#anoncrypt-pack-algorithm","title":"Anoncrypt pack algorithm","text":"
    1. generate a content encryption key (symmetrical encryption key)
    2. encrypt the CEK for each recipient's public key using Anoncrypt (steps below)
      1. set encrypted_key value to base64URLencode(libsodium.crypto_box_seal(their_vk, cek))
        • Note in this step we're encrypting the cek, so it can be decrypted by the recipient
    3. base64URLencode the protected value
    4. encrypt the message using libsodium.crypto_aead_chacha20poly1305_ietf_encrypt_detached(message, protected_value_encoded, iv, cek); this is the ciphertext.
    5. base64URLencode the iv, ciphertext, and tag then serialize the format into the output format listed above.

    For a reference implementation, see https://github.com/hyperledger/indy-sdk/blob/master/libindy/src/commands/crypto.rs

    "},{"location":"aip2/0019-encryption-envelope/#unpack-message","title":"Unpack Message","text":""},{"location":"aip2/0019-encryption-envelope/#unpack_message-interface","title":"unpack_message() interface","text":"

    unpacked_message = unpack_message(wallet_handle, jwe)

    "},{"location":"aip2/0019-encryption-envelope/#unpack_message-params","title":"unpack_message() Params","text":""},{"location":"aip2/0019-encryption-envelope/#unpack-algorithm","title":"Unpack Algorithm","text":"
    1. deserialize the data so it can be used
      • For example, in Rust this must be deserialized into a struct.
    2. Lookup the kid for each recipient in the wallet to see if the wallet possesses a private key associated with the public key listed
    3. Check if a sender field is used.
      • If a sender is included use auth_decrypt to decrypt the encrypted_key by doing the following:
        1. decrypt sender verkey using libsodium.crypto_box_seal_open(my_private_key, base64URLdecode(sender))
        2. decrypt cek using libsodium.crypto_box_open(my_private_key, sender_verkey, encrypted_key, cek_iv)
        3. decrypt ciphertext using libsodium.crypto_aead_chacha20poly1305_ietf_open_detached(base64URLdecode(ciphertext_bytes), base64URLdecode(protected_data_as_bytes), base64URLdecode(nonce), cek)
        4. return message, recipient_verkey and sender_verkey following the authcrypt format listed below
      • If a sender is NOT included use anon_decrypt to decrypt the encrypted_key by doing the following:
        1. decrypt encrypted_key using libsodium.crypto_box_seal_open(my_private_key, encrypted_key)
        2. decrypt ciphertext using libsodium.crypto_aead_chacha20poly1305_ietf_open_detached(base64URLdecode(ciphertext_bytes), base64URLdecode(protected_data_as_bytes), base64URLdecode(nonce), cek)
        3. return message and recipient_verkey following the anoncrypt format listed below

    NOTE: In the unpack algorithm, the base64url decode implementation used MUST correctly decode padded and unpadded base64URL encoded data.
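    Steps 1-3 of the unpack algorithm (deserialize, match a kid against wallet keys, and choose the mode from the presence of a sender field) can be sketched in standard-library Python. The function name and envelope values are illustrative; the actual auth_decrypt/anon_decrypt calls are elided since they require libsodium:

    ```python
    import base64
    import json

    def b64url_decode(data: str) -> bytes:
        data += "=" * (-len(data) % 4)   # tolerate padded and unpadded input, per the NOTE
        return base64.urlsafe_b64decode(data)

    def classify_recipient(jwe: dict, my_kids) -> tuple:
        """Deserialize the protected header, find a recipient whose kid matches a
        key we hold, and pick authcrypt vs. anoncrypt from the 'sender' field.
        Actual decryption is intentionally omitted from this sketch."""
        protected = json.loads(b64url_decode(jwe["protected"]))
        for recip in protected["recipients"]:
            header = recip["header"]
            if header["kid"] in my_kids:
                return ("authcrypt" if "sender" in header else "anoncrypt"), recip
        raise ValueError("no recipient key found in wallet")

    # Illustrative Anoncrypt envelope with a made-up kid:
    protected_b64 = base64.urlsafe_b64encode(json.dumps(
        {"alg": "Anoncrypt",
         "recipients": [{"encrypted_key": "...", "header": {"kid": "ABC"}}]}
    ).encode()).decode().rstrip("=")
    mode, recip = classify_recipient({"protected": protected_b64}, {"ABC"})
    ```

    A real agent would then hand the matched recipient entry to the appropriate decryption routine and surface the mode in its return value, as in the authcrypt/anoncrypt return formats below.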

    For a reference unpack implementation, see https://github.com/hyperledger/indy-sdk/blob/master/libindy/src/commands/crypto.rs

    "},{"location":"aip2/0019-encryption-envelope/#unpack_message-return-values-authcrypt-mode","title":"unpack_message() return values (authcrypt mode)","text":"
    {\n    \"message\": \"{ \\\"@id\\\": \\\"123456780\\\",\\\"@type\\\":\\\"https://didcomm.org/basicmessage/1.0/message\\\",\\\"sent_time\\\": \\\"2019-01-15 18:42:01Z\\\",\\\"content\\\": \\\"Your hovercraft is full of eels.\\\"}\",\n    \"recipient_verkey\": \"HKTAiYM8cE2kKC9KaNMZLYj4GS8uWCYMBxP2i1Y92zum\",\n    \"sender_verkey\": \"DWwLsbKCRAbYtfYnQNmzfKV7ofVhMBi6T4o3d2SCxVuX\"\n}\n
    "},{"location":"aip2/0019-encryption-envelope/#unpack_message-return-values-anoncrypt-mode","title":"unpack_message() return values (anoncrypt mode)","text":"
    {\n    \"message\": \"{ \\\"@id\\\": \\\"123456780\\\",\\\"@type\\\":\\\"https://didcomm.org/basicmessage/1.0/message\\\",\\\"sent_time\\\": \\\"2019-01-15 18:42:01Z\\\",\\\"content\\\": \\\"Your hovercraft is full of eels.\\\"}\",\n    \"recipient_verkey\": \"2GXmuCN2JCxSqMRVftBHLxVJKSL5bXyzM8DsPzGqQoNj\"\n}\n
    "},{"location":"aip2/0019-encryption-envelope/#additional-notes","title":"Additional Notes","text":""},{"location":"aip2/0019-encryption-envelope/#drawbacks","title":"Drawbacks","text":"

    The current implementation of pack() is Hyperledger Aries specific. It is based on common crypto libraries (NaCl), but the wrappers are not commonly used outside of Aries. There's currently work being done to find alignment on a cross-ecosystem interoperable protocol, but this hasn't been achieved yet. That work will hopefully bridge this gap.

    "},{"location":"aip2/0019-encryption-envelope/#rationale-and-alternatives","title":"Rationale and alternatives","text":"

    As the JWE standard currently stands, it does not follow this format. We're actively working with the lead writer of the JWE spec to find alignment and are hopeful the changes needed can be added.

    We've also looked at the Messaging Layer Security (MLS) specification. This specification shows promise for adoption later on as it matures. Additionally, because MLS does not hide metadata related to the sender (sender anonymity), we would need to see changes made to the specification before we could adopt it.

    "},{"location":"aip2/0019-encryption-envelope/#prior-art","title":"Prior art","text":"

    The JWE family of encryption methods.

    "},{"location":"aip2/0019-encryption-envelope/#unresolved-questions","title":"Unresolved questions","text":""},{"location":"aip2/0019-encryption-envelope/#implementations","title":"Implementations","text":"

    The following lists the implementations (if any) of this RFC. Please do a pull request to add your implementation. If the implementation is open source, include a link to the repo or to the implementation within the repo. Please be consistent in the \"Name\" field so that a mechanical processing of the RFCs can generate a list of all RFCs supported by an Aries implementation.

    Name / Link Implementation Notes Indy Cloud Agent - Python Reference agent implementation contributed by Sovrin Foundation and Community Aries Framework - .NET .NET framework for building agents of all types Streetcred.id Commercial mobile and web app built using Aries Framework - .NET Aries Cloud Agent - Python Contributed by the government of British Columbia. Aries Static Agent - Python Useful for cron jobs and other simple, automated use cases. Aries Framework - Go For building agents, hubs and other DIDComm features in GoLang. Aries Protocol Test Suite"},{"location":"aip2/0019-encryption-envelope/schema/","title":"Schema","text":"

    This spec conforms to JSON Schema draft-07.

    {\n    \"id\": \"https://github.com/hyperledger/indy-agent/wiremessage.json\",\n    \"$schema\": \"http://json-schema.org/draft-07/schema#\",\n    \"title\": \"Json Web Message format\",\n    \"type\": \"object\",\n    \"required\": [\"ciphertext\", \"iv\", \"protected\", \"tag\"],\n    \"properties\": {\n        \"protected\": {\n            \"type\": \"object\",\n            \"description\": \"Additional authenticated message data base64URL encoded, so it can be verified by the recipient using the tag\",\n            \"required\": [\"enc\", \"typ\", \"alg\", \"recipients\"],\n            \"properties\": {\n                \"enc\": {\n                    \"type\": \"string\",\n                    \"enum\": [\"xchacha20poly1305_ietf\"],\n                    \"description\": \"The authenticated encryption algorithm used to encrypt the ciphertext\"\n                },\n                \"typ\": { \n                    \"type\": \"string\",\n                    \"description\": \"The message type. Ex: JWM/1.0\"\n                },\n                \"alg\": {\n                    \"type\": \"string\",\n                    \"enum\": [ \"authcrypt\", \"anoncrypt\"]\n                },\n                \"recipients\": {\n                    \"type\": \"array\",\n                    \"description\": \"A list of the recipients who the message is encrypted for\",\n                    \"items\": {\n                        \"type\": \"object\",\n                        \"required\": [\"encrypted_key\", \"header\"],\n                        \"properties\": {\n                            \"encrypted_key\": {\n                                \"type\": \"string\",\n                                \"description\": \"The key used for encrypting the ciphertext. 
This is also referred to as a cek\"\n                            },\n                            \"header\": {\n                                \"type\": \"object\",\n                                \"required\": [\"kid\"],\n                                \"description\": \"The recipient to whom this message will be sent\",\n                                \"properties\": {\n                                    \"kid\": {\n                                        \"type\": \"string\",\n                                        \"description\": \"base58 encoded verkey of the recipient.\"\n                                    }\n                                }\n                            }\n                        }\n                    }\n                 }\n            }\n        },\n        \"iv\": {\n            \"type\": \"string\",\n            \"description\": \"base64 URL encoded nonce used to encrypt ciphertext\"\n        },\n        \"ciphertext\": {\n            \"type\": \"string\",\n            \"description\": \"base64 URL encoded authenticated encrypted message\"\n        },\n        \"tag\": {\n            \"type\": \"string\",\n            \"description\": \"Integrity checksum/tag base64URL encoded to check ciphertext, protected, and iv\"\n        }\n    }\n}\n
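A minimal sketch of checking an envelope against the schema's required top-level fields (the function name is ours; a full validation would use a draft-07 JSON Schema validator against the complete schema above):

```python
def check_envelope(env: dict) -> list:
    """Return the required top-level fields (per the schema above)
    that are missing from an envelope; an empty list means the
    top-level shape is valid."""
    required = ["ciphertext", "iv", "protected", "tag"]
    return [field for field in required if field not in env]
```

For example, an envelope missing its `tag` would yield `["tag"]`, flagging it for rejection before any decryption is attempted.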

    "},{"location":"aip2/0020-message-types/","title":"Aries RFC 0020: Message Types","text":""},{"location":"aip2/0020-message-types/#summary","title":"Summary","text":"

    Define structure of message type strings used in agent to agent communication, describe their resolution to documentation URIs, and offer guidelines for protocol specifications.

    "},{"location":"aip2/0020-message-types/#motivation","title":"Motivation","text":"

    A clear convention to follow for agent developers is necessary for interoperability and continued progress as a community.

    "},{"location":"aip2/0020-message-types/#tutorial","title":"Tutorial","text":"

    A \"Message Type\" is a required attribute of all communications sent between parties. The message type instructs the receiving agent how to interpret the content and what content to expect as part of a given message.

    Types are specified within a message using the @type attribute:

    {\n    \"@type\": \"<message type string>\",\n    // other attributes\n}\n

    Message types are URIs that may resolve to developer documentation for the message type, as described in Protocol URIs. We recommend that message type URIs be HTTP URLs.
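A sketch of splitting a message type URI into its parts, following the segment layout seen in the examples in this document (doc-uri, protocol name, version, type name); the helper is illustrative, not from the RFC:

```python
def parse_message_type(type_uri: str) -> dict:
    """Split a message type URI of the form
    <doc-uri>/<protocol-name>/<version>/<type-name> into its parts."""
    doc_uri, protocol, version, name = type_uri.rsplit("/", 3)
    return {"doc_uri": doc_uri, "protocol": protocol,
            "version": version, "name": name}

parse_message_type("https://didcomm.org/basicmessage/1.0/message")
# {"doc_uri": "https://didcomm.org", "protocol": "basicmessage",
#  "version": "1.0", "name": "message"}
```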

    "},{"location":"aip2/0020-message-types/#aries-core-message-namespace","title":"Aries Core Message Namespace","text":"

    https://didcomm.org/ is used to namespace protocols defined by the community as \"core protocols\" or protocols that agents should minimally support.

    The didcomm.org DNS entry is currently controlled by the Decentralized Identity Foundation (DIF) based on their role in standardizing the DIDComm Messaging specification.

    "},{"location":"aip2/0020-message-types/#protocols","title":"Protocols","text":"

    Protocols provide a logical grouping for message types. These protocols, along with each type belonging to that protocol, are to be defined in future RFCs or through means appropriate to subprojects.

    "},{"location":"aip2/0020-message-types/#protocol-versioning","title":"Protocol Versioning","text":"

    Version numbering should essentially follow Semantic Versioning 2.0.0, excluding patch version number. To summarize, a change in the major protocol version number indicates a breaking change while the minor protocol version number indicates non-breaking additions.
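A minimal sketch of the compatibility rule this implies (helper name is ours): two protocol versions are compatible when their major versions match, since differing minor versions indicate only non-breaking additions.

```python
def compatible(version_a: str, version_b: str) -> bool:
    """Protocol versions are compatible when major versions match;
    a differing minor version carries only non-breaking additions,
    which an older handler may safely ignore."""
    return version_a.split(".")[0] == version_b.split(".")[0]
```

So a handler for `1.0` can process a `1.1` message (ignoring additions), while a `2.0` message signals a breaking change.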

    "},{"location":"aip2/0020-message-types/#message-type-design-guidelines","title":"Message Type Design Guidelines","text":"

    These guidelines are guidelines on purpose. There will be situations where a good design will have to choose between conflicting points, or ignore all of them. The goal should always be clear and good design.

    "},{"location":"aip2/0020-message-types/#respect-reserved-attribute-names","title":"Respect Reserved Attribute Names","text":"

    Reserved attributes are prefixed with an @ sign, such as @type. Don't use this prefix for an attribute, even if use of that specific attribute is undefined.

    "},{"location":"aip2/0020-message-types/#avoid-ambiguous-attribute-names","title":"Avoid ambiguous attribute names","text":"

    Names like data, id, and package are often ambiguous. Adjust the name to enhance meaning. For example, use message_id instead of id.

    "},{"location":"aip2/0020-message-types/#avoid-names-with-special-characters","title":"Avoid names with special characters","text":"

    Technically, attribute names can be any valid json key (except prefixed with @, as mentioned above). Practically, you should avoid using special characters, including those that need to be escaped. Underscores and dashes [_,-] are totally acceptable, but you should avoid quotation marks, punctuation, and other symbols.

    "},{"location":"aip2/0020-message-types/#use-attributes-consistently-within-a-protocol","title":"Use attributes consistently within a protocol","text":"

    Be consistent with attribute names between the different types within a protocol. Only use the same attribute name for the same data. If the attribute values are similar, but not exactly the same, adjust the name to indicate the difference.

    "},{"location":"aip2/0020-message-types/#nest-attributes-only-when-useful","title":"Nest Attributes only when useful","text":"

    Attributes do not need to be nested under a top level attribute, but can be to organize related attributes. Nesting all message attributes under one top level attribute is usually not a good idea.

    "},{"location":"aip2/0020-message-types/#design-examples","title":"Design Examples","text":""},{"location":"aip2/0020-message-types/#example-1","title":"Example 1","text":"
    {\n    \"@type\": \"did:example:00000;spec/pizzaplace/1.0/pizzaorder\",\n    \"content\": {\n        \"id\": 15,\n        \"name\": \"combo\",\n        \"prepaid?\": true,\n        \"ingredients\": [\"pepperoni\", \"bell peppers\", \"anchovies\"]\n    }\n}\n

    Suggestions: Ambiguous names, unnecessary nesting, symbols in names.

    "},{"location":"aip2/0020-message-types/#example-1-fixed","title":"Example 1 Fixed","text":"
    {\n    \"@type\": \"did:example:00000;spec/pizzaplace/1.0/pizzaorder\",\n    \"table_id\": 15,\n    \"pizza_name\": \"combo\",\n    \"prepaid\": true,\n    \"ingredients\": [\"pepperoni\", \"bell peppers\", \"anchovies\"]\n}\n
    "},{"location":"aip2/0020-message-types/#reference","title":"Reference","text":""},{"location":"aip2/0020-message-types/#implementations","title":"Implementations","text":"

    The following lists the implementations (if any) of this RFC. Please do a pull request to add your implementation. If the implementation is open source, include a link to the repo or to the implementation within the repo. Please be consistent in the \"Name\" field so that a mechanical processing of the RFCs can generate a list of all RFCs supported by an Aries implementation.

    Name / Link Implementation Notes Indy Cloud Agent - Python Reference agent implementation contributed by Sovrin Foundation and Community Aries Framework - .NET .NET framework for building agents of all types Streetcred.id Commercial mobile and web app built using Aries Framework - .NET Aries Cloud Agent - Python Contributed by the government of British Columbia. Aries Static Agent - Python Useful for cron jobs and other simple, automated use cases. Aries Framework - Go For building agents, hubs and other DIDComm features in GoLang. Connect.Me Free mobile app from Evernym. Installed via app store on iOS and Android. Verity Commercially licensed enterprise agent, SaaS or on-prem."},{"location":"aip2/0023-did-exchange/","title":"Aries RFC 0023: DID Exchange Protocol 1.0","text":""},{"location":"aip2/0023-did-exchange/#summary","title":"Summary","text":"

    This RFC describes the protocol to exchange DIDs between agents when establishing a DID based relationship.

    "},{"location":"aip2/0023-did-exchange/#motivation","title":"Motivation","text":"

    Aries agent developers want to create agents that are able to establish relationships with each other and exchange secure information using keys and endpoints in DID Documents. For this to happen there must be a clear protocol to exchange DIDs.

    "},{"location":"aip2/0023-did-exchange/#tutorial","title":"Tutorial","text":"

    We will explain how DIDs are exchanged, with the roles, states, and messages required.

    "},{"location":"aip2/0023-did-exchange/#roles","title":"Roles","text":"

    The DID Exchange Protocol uses two roles: requester and responder.

    The requester is the party that initiates this protocol after receiving an invitation message (using RFC 0434 Out of Band) or by using an implied invitation from a public DID. For example, a verifier might get the DID of the issuer of a credential they are verifying, and use information in the DIDDoc for that DID as the basis for initiating an instance of this protocol.

    Since the requester receiving an explicit invitation may not have an Aries agent, it is desirable, but not strictly required, that the sender of the invitation (who has the responder role in this protocol) have the ability to help the requester with the process and/or costs associated with acquiring an agent capable of participating in the ecosystem. For example, the sender of an invitation may often be a sponsoring institution.

    The responder, who is the sender of an explicit invitation or the publisher of a DID with an implicit invitation, must have an agent capable of interacting with other agents via DIDComm.

    In cases where both parties already possess SSI capabilities, deciding who plays the role of requester and responder might be a casual matter of whose phone is handier.

    "},{"location":"aip2/0023-did-exchange/#states","title":"States","text":""},{"location":"aip2/0023-did-exchange/#requester","title":"Requester","text":"

    The requester goes through the following states per the State Machine Tables below.

    "},{"location":"aip2/0023-did-exchange/#responder","title":"Responder","text":"

    The responder goes through the following states per the State Machine Tables below.

    "},{"location":"aip2/0023-did-exchange/#state-machine-tables","title":"State Machine Tables","text":"

    The following are the requester and responder state machines.

    The invitation-sent and invitation-received are technically outside this protocol, but are useful to show in the state machine as the invitation is the trigger to start the protocol and is referenced from the protocol as the parent thread (pthid). This is discussed in more detail below.

    The abandoned and completed states are terminal states and there is no expectation that the protocol can be continued (or even referenced) after reaching those states.

    "},{"location":"aip2/0023-did-exchange/#errors","title":"Errors","text":"

    After receiving an explicit invitation, the requester may send a problem-report to the responder using the information in the invitation to either restart the invitation process (returning to the start state) or to abandon the protocol. The problem-report may be an adopted Out of Band protocol message or an adopted DID Exchange protocol message, depending on where in the processing of the invitation the error was detected.

    During the request / response part of the protocol, there are two protocol-specific error messages possible: one for an active rejection and one for an unknown error. These errors are sent using a problem_report message type specific to the DID Exchange Protocol. These errors do not transition the protocol to the abandoned state. The following list details problem-codes that may be sent in these cases:

    request_not_accepted - The error indicates that the request message has been rejected for a reason listed in the error_report. Typical reasons include not accepting the method of the provided DID, unknown endpoint protocols, etc. The request can be resent after the appropriate corrections have been made.

    request_processing_error - This error is sent when the responder was processing the request with the intent to accept the request, but some processing error occurred. This error indicates that the request should be resent as-is.

    response_not_accepted - The error indicates that the response has been rejected for a reason listed in the error_report. Typical reasons include not accepting the method of the provided DID, unknown endpoint protocols, invalid signature, etc. The response can be resent after the appropriate corrections have been made.

    response_processing_error - This error is sent when the requester was processing the response with the intent to accept the response, but some processing error occurred. This error indicates that the response should be resent as-is.

    If other errors occur, the corresponding party may send a problem-report to inform the other party they are abandoning the protocol.

    No errors are sent in timeout situations. If the requester or responder wishes to retract the messages they sent, they record so locally and return a request_not_accepted or response_not_accepted error when the other party sends a request or response.
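The retry behaviour implied by the problem codes above can be sketched as follows (the function and constant names are ours, not from the RFC):

```python
# "not_accepted" codes mean the message was rejected for a correctable
# reason; "processing_error" codes mean the message should be resent as-is.
RESEND_AFTER_CORRECTION = {"request_not_accepted", "response_not_accepted"}
RESEND_AS_IS = {"request_processing_error", "response_processing_error"}

def retry_action(problem_code: str) -> str:
    if problem_code in RESEND_AFTER_CORRECTION:
        return "fix-then-resend"  # e.g. correct the DID method or endpoint
    if problem_code in RESEND_AS_IS:
        return "resend-as-is"     # transient error on the other side
    return "abandon"              # other errors abandon the protocol
```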

    "},{"location":"aip2/0023-did-exchange/#error-message-example","title":"Error Message Example","text":"
    {\n  \"@type\": \"https://didcomm.org/didexchange/1.0/problem_report\",\n  \"@id\": \"5678876542345\",\n  \"~thread\": { \"thid\": \"<@id of message related to problem>\" },\n  \"~l10n\": { \"locale\": \"en\"},\n  \"problem-code\": \"request_not_accepted\", // matches codes listed above\n  \"explain\": \"Unsupported DID method for provided DID.\"\n}\n
    "},{"location":"aip2/0023-did-exchange/#error-message-attributes","title":"Error Message Attributes","text":""},{"location":"aip2/0023-did-exchange/#flow-overview","title":"Flow Overview","text":""},{"location":"aip2/0023-did-exchange/#implicit-and-explicit-invitations","title":"Implicit and Explicit Invitations","text":"

    The DID Exchange Protocol is preceded either by knowledge of a resolvable DID (an implicit invitation) or by an out-of-band/%VER/invitation message from the Out Of Band Protocols RFC.

    The information needed to construct the request message that starts the protocol comes either from the resolved DID Document or from the service element of the handshake_protocols attribute of the invitation.

    "},{"location":"aip2/0023-did-exchange/#1-exchange-request","title":"1. Exchange Request","text":"

    The request message is used to communicate the DID document of the requester to the responder using the provisional service information present in the (implicit or explicit) invitation.

    The requester may provision a new DID according to the DID method spec. For a Peer DID, this involves creating a matching peer DID and key. The newly provisioned DID and DID Doc is presented in the request message as follows:

    "},{"location":"aip2/0023-did-exchange/#request-message-example","title":"Request Message Example","text":"
    {\n  \"@id\": \"5678876542345\",\n  \"@type\": \"https://didcomm.org/didexchange/1.0/request\",\n  \"~thread\": { \n      \"thid\": \"5678876542345\",\n      \"pthid\": \"<id of invitation>\"\n  },\n  \"label\": \"Bob\",\n  \"goal_code\": \"aries.rel.build\",\n  \"goal\": \"To create a relationship\",\n  \"did\": \"B.did@B:A\",\n  \"did_doc~attach\": {\n      \"@id\": \"d2ab6f2b-5646-4de3-8c02-762f553ab804\",\n      \"mime-type\": \"application/json\",\n      \"data\": {\n         \"base64\": \"eyJ0eXAiOiJKV1Qi... (bytes omitted)\",\n         \"jws\": {\n            \"header\": {\n               \"kid\": \"did:key:z6MkmjY8GnV5i9YTDtPETC2uUAW6ejw3nk5mXF5yci5ab7th\"\n            },\n            \"protected\": \"eyJhbGciOiJFZERTQSIsImlhdCI6MTU4Mzg4... (bytes omitted)\",\n            \"signature\": \"3dZWsuru7QAVFUCtTd0s7uc1peYEijx4eyt5... (bytes omitted)\"\n            }\n      }\n   }\n}\n
    "},{"location":"aip2/0023-did-exchange/#request-message-attributes","title":"Request Message Attributes","text":"

    The label property was intended to be declared as an optional property, but was added to the RFC as a required property. If an agent does not wish to use a label in the request, an empty string (\"\") or the set value Unspecified may be used to indicate a non-value. This approach ensures existing AIP 2.0 implementations do not break.

    "},{"location":"aip2/0023-did-exchange/#correlating-requests-to-invitations","title":"Correlating requests to invitations","text":"

    An invitation is presented in one of two forms:

    When a request responds to an explicit invitation, its ~thread.pthid MUST be equal to the @id property of the invitation as described in the out-of-band RFC.

    When a request responds to an implicit invitation, its ~thread.pthid MUST contain a DID URL that resolves to the specific service on a DID document that contains the invitation.

    "},{"location":"aip2/0023-did-exchange/#example-referencing-an-explicit-invitation","title":"Example Referencing an Explicit Invitation","text":"
    {\n  \"@id\": \"a46cdd0f-a2ca-4d12-afbf-2e78a6f1f3ef\",\n  \"@type\": \"https://didcomm.org/didexchange/1.0/request\",\n  \"~thread\": { \n      \"thid\": \"a46cdd0f-a2ca-4d12-afbf-2e78a6f1f3ef\",\n      \"pthid\": \"032fbd19-f6fd-48c5-9197-ba9a47040470\" \n  },\n  \"label\": \"Bob\",\n  \"goal_code\": \"aries.rel.build\",\n  \"goal\": \"To create a relationship\",\n  \"did\": \"B.did@B:A\",\n  \"did_doc~attach\": {\n      \"@id\": \"d2ab6f2b-5646-4de3-8c02-762f553ab804\",\n      \"mime-type\": \"application/json\",\n      \"data\": {\n         \"base64\": \"eyJ0eXAiOiJKV1Qi... (bytes omitted)\",\n         \"jws\": {\n            \"header\": {\n               \"kid\": \"did:key:z6MkmjY8GnV5i9YTDtPETC2uUAW6ejw3nk5mXF5yci5ab7th\"\n            },\n            \"protected\": \"eyJhbGciOiJFZERTQSIsImlhdCI6MTU4Mzg4... (bytes omitted)\",\n            \"signature\": \"3dZWsuru7QAVFUCtTd0s7uc1peYEijx4eyt5... (bytes omitted)\"\n            }\n      }\n   }\n}\n
    "},{"location":"aip2/0023-did-exchange/#example-referencing-an-implicit-invitation","title":"Example Referencing an Implicit Invitation","text":"
    {\n  \"@id\": \"a46cdd0f-a2ca-4d12-afbf-2e78a6f1f3ef\",\n  \"@type\": \"https://didcomm.org/didexchange/1.0/request\",\n  \"~thread\": { \n      \"thid\": \"a46cdd0f-a2ca-4d12-afbf-2e78a6f1f3ef\",\n      \"pthid\": \"did:example:21tDAKCERh95uGgKbJNHYp#didcomm\" \n  },\n  \"label\": \"Bob\",\n  \"goal_code\": \"aries.rel.build\",\n  \"goal\": \"To create a relationship\",\n  \"did\": \"B.did@B:A\",\n  \"did_doc~attach\": {\n      \"@id\": \"d2ab6f2b-5646-4de3-8c02-762f553ab804\",\n      \"mime-type\": \"application/json\",\n      \"data\": {\n         \"base64\": \"eyJ0eXAiOiJKV1Qi... (bytes omitted)\",\n         \"jws\": {\n            \"header\": {\n               \"kid\": \"did:key:z6MkmjY8GnV5i9YTDtPETC2uUAW6ejw3nk5mXF5yci5ab7th\"\n            },\n            \"protected\": \"eyJhbGciOiJFZERTQSIsImlhdCI6MTU4Mzg4... (bytes omitted)\",\n            \"signature\": \"3dZWsuru7QAVFUCtTd0s7uc1peYEijx4eyt5... (bytes omitted)\"\n            }\n      }\n   }\n}\n
    "},{"location":"aip2/0023-did-exchange/#request-transmission","title":"Request Transmission","text":"

    The request message is encoded according to the standards of the Encryption Envelope, using the recipientKeys present in the invitation.

    If the routingKeys attribute was present and non-empty in the invitation, each key must be used to wrap the message in a forward request, then encoded in an Encryption Envelope. This processing is in order of the keys in the list, with the last key in the list being the one for which the serviceEndpoint possesses the private key.

    The message is then transmitted to the serviceEndpoint.

    The requester is in the request-sent state. When received, the responder is in the request-received state.
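The routingKeys wrapping described above can be sketched as nested forward messages built in list order, so the outermost layer is encrypted for the last key in the list, which the serviceEndpoint can open. The pack() stub here merely stands in for anoncrypt encryption to one key (real code would use the Encryption Envelope from RFC 0019); all names are illustrative.

```python
def pack(message: dict, recipient_key: str) -> dict:
    # Stand-in for anoncrypt encryption of a payload for one key.
    return {"enc_for": recipient_key, "payload": message}

def wrap_for_routing(envelope: dict, routing_keys: list, recipient_key: str) -> dict:
    """Wrap an encrypted envelope in nested forward messages, processing
    routing_keys in list order so the final (outermost) wrap is for the
    key whose private half the serviceEndpoint possesses."""
    message, to = envelope, recipient_key
    for key in routing_keys:
        forward = {
            "@type": "https://didcomm.org/routing/1.0/forward",
            "to": to,       # who the inner payload is destined for
            "msg": message,
        }
        message = pack(forward, key)
        to = key
    return message
```

With `routing_keys=["K1", "K2"]`, the outermost envelope is encrypted for K2 and unwraps to a forward addressed via K1, matching the ordering rule above.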

    "},{"location":"aip2/0023-did-exchange/#request-processing","title":"Request processing","text":"

    After receiving the exchange request, the responder evaluates the provided DID and DID Doc according to the DID Method Spec.

    The responder should check the information presented with the keys used in the wire-level message transmission to ensure they match.

    The responder MAY look up the corresponding invitation identified in the request's ~thread.pthid to determine whether it should accept this exchange request.

    If the responder wishes to continue the exchange, they will persist the received information in their wallet. They will then either update the provisional service information to rotate the key, or provision a new DID entirely. The choice here will depend on the nature of the DID used in the invitation.

    The responder will then craft an exchange response using the newly updated or provisioned information.

    "},{"location":"aip2/0023-did-exchange/#request-errors","title":"Request Errors","text":"

    See Error Section above for message format details.

    "},{"location":"aip2/0023-did-exchange/#request-rejected","title":"Request Rejected","text":"

    Possible reasons:

    "},{"location":"aip2/0023-did-exchange/#request-processing-error","title":"Request Processing Error","text":""},{"location":"aip2/0023-did-exchange/#2-exchange-response","title":"2. Exchange Response","text":"

    The exchange response message is used to complete the exchange. This message is required in the flow, as it updates the provisional information presented in the invitation.

    "},{"location":"aip2/0023-did-exchange/#response-message-example","title":"Response Message Example","text":"
    {\n  \"@type\": \"https://didcomm.org/didexchange/1.0/response\",\n  \"@id\": \"12345678900987654321\",\n  \"~thread\": {\n    \"thid\": \"<The Thread ID is the Message ID (@id) of the first message in the thread>\"\n  },\n  \"did\": \"B.did@B:A\",\n  \"did_doc~attach\": {\n      \"@id\": \"d2ab6f2b-5646-4de3-8c02-762f553ab804\",\n      \"mime-type\": \"application/json\",\n      \"data\": {\n         \"base64\": \"eyJ0eXAiOiJKV1Qi... (bytes omitted)\",\n         \"jws\": {\n            \"header\": {\n               \"kid\": \"did:key:z6MkmjY8GnV5i9YTDtPETC2uUAW6ejw3nk5mXF5yci5ab7th\"\n            },\n            \"protected\": \"eyJhbGciOiJFZERTQSIsImlhdCI6MTU4Mzg4... (bytes omitted)\",\n            \"signature\": \"3dZWsuru7QAVFUCtTd0s7uc1peYEijx4eyt5... (bytes omitted)\"\n            }\n      }\n   }\n}\n

    The invitation's recipientKeys should be dedicated to authenticated encryption of envelopes throughout the exchange. These keys are usually defined in the keyAgreement DID verification relationship.

    "},{"location":"aip2/0023-did-exchange/#response-message-attributes","title":"Response Message Attributes","text":"

    In addition to a new DID, the associated DID Doc might contain a new endpoint. This new DID and endpoint are to be used going forward in the relationship.

    "},{"location":"aip2/0023-did-exchange/#response-transmission","title":"Response Transmission","text":"

    The message should be packaged in the encrypted envelope format, using the keys from the request, and the new keys presented in the internal did doc.

    When the message is sent, the responder is now in the response-sent state. On receipt, the requester is in the response-received state.

    "},{"location":"aip2/0023-did-exchange/#response-processing","title":"Response Processing","text":"

    When the requester receives the response message, they will decrypt the authenticated envelope, which confirms the source's authenticity. After decrypting and validating the response, they will update their wallet with the new information and use that information in sending the complete message.

    "},{"location":"aip2/0023-did-exchange/#response-errors","title":"Response Errors","text":"

    See Error Section above for message format details.

    "},{"location":"aip2/0023-did-exchange/#response-rejected","title":"Response Rejected","text":"

    Possible reasons:

    "},{"location":"aip2/0023-did-exchange/#response-processing-error","title":"Response Processing Error","text":""},{"location":"aip2/0023-did-exchange/#3-exchange-complete","title":"3. Exchange Complete","text":"

    The exchange complete message is used to confirm the exchange to the responder. This message is required in the flow, as it marks the exchange complete. The responder may then invoke any protocols desired based on the context expressed via the pthid in the DID Exchange protocol.

    "},{"location":"aip2/0023-did-exchange/#complete-message-example","title":"Complete Message Example","text":"
    {\n  \"@type\": \"https://didcomm.org/didexchange/1.0/complete\",\n  \"@id\": \"12345678900987654321\",\n  \"~thread\": {\n    \"thid\": \"<The Thread ID is the Message ID (@id) of the first message in the thread>\",\n    \"pthid\": \"<pthid used in request message>\"\n  }\n}\n

    The pthid is required in this message, and must be identical to the pthid used in the request message.

    After a complete message is sent, the requester is in the completed terminal state. Receipt of the message puts the responder into the completed state.

    "},{"location":"aip2/0023-did-exchange/#complete-errors","title":"Complete Errors","text":"

    See Error Section above for message format details.

    "},{"location":"aip2/0023-did-exchange/#complete-rejected","title":"Complete Rejected","text":"

    This is unlikely to occur for any reason other than an unknown processing error (covered below), so no specific reasons are listed. As experience is gained with the protocol, possible reasons may be added.

    "},{"location":"aip2/0023-did-exchange/#complete-processing-error","title":"Complete Processing Error","text":""},{"location":"aip2/0023-did-exchange/#next-steps","title":"Next Steps","text":"

    The exchange between the requester and the responder has been completed. This relationship has no trust associated with it. The next step should be to increase the trust to a sufficient level for the purpose of the relationship, such as through an exchange of proofs.

    "},{"location":"aip2/0023-did-exchange/#peer-did-maintenance","title":"Peer DID Maintenance","text":"

    When Peer DIDs are used in an exchange, it is likely that both the requester and responder will want to perform some relationship maintenance such as key rotations. Future RFC updates will add these maintenance features.

    "},{"location":"aip2/0023-did-exchange/#reference","title":"Reference","text":""},{"location":"aip2/0023-did-exchange/#drawbacks","title":"Drawbacks","text":"

    N/A at this time

    "},{"location":"aip2/0023-did-exchange/#prior-art","title":"Prior art","text":""},{"location":"aip2/0023-did-exchange/#unresolved-questions","title":"Unresolved questions","text":""},{"location":"aip2/0023-did-exchange/#implementations","title":"Implementations","text":"

    The following lists the implementations (if any) of this RFC. Please do a pull request to add your implementation. If the implementation is open source, include a link to the repo or to the implementation within the repo. Please be consistent in the \"Name\" field so that a mechanical processing of the RFCs can generate a list of all RFCs supported by an Aries implementation.

    Name / Link Implementation Notes Trinsic.id Commercial mobile and web app built using Aries Framework - .NET MISSING test results"},{"location":"aip2/0025-didcomm-transports/","title":"Aries RFC 0025: DIDComm Transports","text":""},{"location":"aip2/0025-didcomm-transports/#summary","title":"Summary","text":"

    This RFC details how different transports are to be used for Agent Messaging.

    "},{"location":"aip2/0025-didcomm-transports/#motivation","title":"Motivation","text":"

    Agent Messaging is designed to be transport independent, including message encryption and agent message format. Each transport does have unique features, and we need to standardize how the transport features are (or are not) applied.

    "},{"location":"aip2/0025-didcomm-transports/#reference","title":"Reference","text":"

    Standardized transport methods are detailed here.

    "},{"location":"aip2/0025-didcomm-transports/#https","title":"HTTP(S)","text":"

    HTTP(S) is the first and most widely used transport for DID Communication, and it has received heavy attention.

    While it is recognized that all DIDComm messages are secured through strong encryption, making HTTPS somewhat redundant, using plain HTTP will likely cause issues with mobile clients because vendors (Apple and Google) are limiting application access to the HTTP protocol. For example, on iOS 9 or above where [ATS](https://developer.apple.com/documentation/bundleresources/information_property_list/nsapptransportsecurity) is in effect, any URLs using HTTP must have an exception hard coded in the application prior to uploading to the iTunes Store. This makes DIDComm unreliable, as the agent initiating the request provides an endpoint for communication that the mobile client must use. If the agent provides a URL using the HTTP protocol, it will likely be unusable due to low-level operating system limitations.

    As a best practice, when HTTP is used in situations where a mobile client (iOS or Android) may be involved it is highly recommended to use the HTTPS protocol, specifically TLS 1.2 or above.
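As a non-normative sketch of this best practice, the helper below builds an HTTPS POST carrying an encrypted envelope and a TLS context that refuses anything below TLS 1.2. The endpoint URL and the `application/didcomm-envelope-enc` content type are illustrative assumptions, not values mandated by this RFC.

```python
# Sketch only: building (not sending) an HTTPS request for a DIDComm envelope.
# The content type and endpoint here are illustrative assumptions.
import ssl
from urllib.request import Request

def build_didcomm_request(endpoint: str, envelope: bytes) -> Request:
    """Build an HTTPS POST request carrying an encrypted DIDComm envelope."""
    if not endpoint.startswith("https://"):
        # Mobile-friendly deployments should not offer plain HTTP endpoints.
        raise ValueError("endpoint must use HTTPS")
    return Request(
        endpoint,
        data=envelope,
        headers={"Content-Type": "application/didcomm-envelope-enc"},
        method="POST",
    )

def tls12_context() -> ssl.SSLContext:
    """TLS context that enforces TLS 1.2 or above, per the best practice."""
    ctx = ssl.create_default_context()
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2
    return ctx
```

Actual transmission would pass the request and context to `urllib.request.urlopen`; that step is omitted since it requires a live endpoint.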

    Other important notes on the subject of using HTTP(S) include:

    "},{"location":"aip2/0025-didcomm-transports/#known-implementations","title":"Known Implementations","text":"

    Aries Cloud Agent - Python Aries Framework - .NET

    "},{"location":"aip2/0025-didcomm-transports/#websocket","title":"Websocket","text":"

    Websockets are an efficient way to transmit multiple messages without the overhead of individual requests.

    "},{"location":"aip2/0025-didcomm-transports/#known-implementations_1","title":"Known Implementations","text":"

    Aries Cloud Agent - Python Aries Framework - .NET

    "},{"location":"aip2/0025-didcomm-transports/#xmpp","title":"XMPP","text":"

    XMPP is an effective transport for incoming DID-Communication messages directly to mobile agents, like smartphones.

    "},{"location":"aip2/0025-didcomm-transports/#known-implementations_2","title":"Known Implementations","text":"

    XMPP is implemented in the Openfire Server open source project. Integration with DID Communication agents is work-in-progress.

    "},{"location":"aip2/0025-didcomm-transports/#other-transports","title":"Other Transports","text":"

    Other transports may be used for Agent messaging. As they are developed, this RFC should be updated with appropriate standards for the transport method. A PR should be raised against this doc to facilitate discussion of the proposed additions and/or updates. New transports should highlight the common elements of the transport (such as an HTTP response code for the HTTP transport) and how they should be applied.

    "},{"location":"aip2/0025-didcomm-transports/#message-routing","title":"Message Routing","text":"

    The transports described here are used between two agents. In the case of message routing, a message will travel across multiple agent connections. Each intermediate agent (see Mediators and Relays) may use a different transport. These transport details are not made known to the sender, who only knows the keys of Mediators and the first endpoint of the route.

    "},{"location":"aip2/0025-didcomm-transports/#message-context","title":"Message Context","text":"

    The transport used from a previous agent can be recorded in the message trust context. This is particularly true of controlled network environments, where the transport may have additional security considerations not applicable on the public internet. The transport recorded in the message context only records the last transport used, and not any previous routing steps as described in the Message Routing section of this document.

    "},{"location":"aip2/0025-didcomm-transports/#transport-testing","title":"Transport Testing","text":"

    Transports which operate on IP based networks can be tested by an Agent Test Suite through a transport adapter. Some transports may be more difficult to test in a general sense, and may need specialized testing frameworks. An agent with a transport not yet supported by any testing suites may have non-transport testing performed by use of a routing agent.

    "},{"location":"aip2/0025-didcomm-transports/#drawbacks","title":"Drawbacks","text":"

    Setting transport standards may prevent some uses of each transport method.

    "},{"location":"aip2/0025-didcomm-transports/#rationale-and-alternatives","title":"Rationale and alternatives","text":""},{"location":"aip2/0025-didcomm-transports/#prior-art","title":"Prior art","text":"

    Several agent implementations already exist that follow similar conventions.

    "},{"location":"aip2/0025-didcomm-transports/#unresolved-questions","title":"Unresolved questions","text":""},{"location":"aip2/0025-didcomm-transports/#implementations","title":"Implementations","text":"

    The following lists the implementations (if any) of this RFC. Please do a pull request to add your implementation. If the implementation is open source, include a link to the repo or to the implementation within the repo. Please be consistent in the \"Name\" field so that a mechanical processing of the RFCs can generate a list of all RFCs supported by an Aries implementation.

    Name / Link Implementation Notes"},{"location":"aip2/0035-report-problem/","title":"Aries RFC 0035: Report Problem Protocol 1.0","text":""},{"location":"aip2/0035-report-problem/#summary","title":"Summary","text":"

    Describes how to report errors and warnings in a powerful, interoperable way. All implementations of SSI agent or hub technology SHOULD implement this RFC.

    "},{"location":"aip2/0035-report-problem/#motivation","title":"Motivation","text":"

    Effective reporting of errors and warnings is difficult in any system, and particularly so in decentralized systems such as remotely collaborating agents. We need to surface problems, and their supporting context, to people who want to know about them (and perhaps separately, to people who can actually fix them). This is especially challenging when a problem is detected well after and well away from its cause, and when multiple parties may need to cooperate on a solution.

    Interoperability is perhaps more crucial with problem reporting than with any other aspect of DIDComm, since an agent written by one developer MUST be able to understand an error reported by an entirely different team. Notice how different this is from normal enterprise software development, where developers only need to worry about understanding their own errors.

    The goal of this RFC is to provide agents with tools and techniques to address these challenges. It makes two key contributions:

    "},{"location":"aip2/0035-report-problem/#tutorial","title":"Tutorial","text":""},{"location":"aip2/0035-report-problem/#error-vs-warning-vs-problem","title":"\"Error\" vs. \"Warning\" vs. \"Problem\"","text":"

    The distinction between \"error\" and \"warning\" is often thought of as one of severity -- errors are really bad, and warnings are only somewhat bad. This is reinforced by the way logging platforms assign numeric constants to ERROR vs. WARN log events, and by the way compilers let warnings be suppressed but refuse to ignore errors.

    However, any cybersecurity professional will tell you that warnings sometimes signal deep and scary problems that should not be ignored, and most veteran programmers can tell war stories that reinforce this wisdom. A deeper analysis of warnings reveals that what truly differentiates them from errors is not their lesser severity, but rather their greater ambiguity. Warnings are problems that require human judgment to evaluate, whereas errors are unambiguously bad.

    The mechanism for reporting problems in DIDComm cannot make a simplistic assumption that all agents are configured to run with a particular verbosity or debug level. Each agent must let other agents decide for themselves, based on policy or user preference, what to do about various issues. For this reason, we use the generic term \"problem\" instead of the more specific and semantically opinionated term \"error\" (or \"warning\") to describe the general situation we're addressing. \"Problem\" includes any deviation from the so-called \"happy path\" of an interaction. This could include situations where the severity is unknown and must be evaluated by a human, as well as surprising events (e.g., a decision by a human to alter the basis for in-flight messaging by moving from one device to another).

    "},{"location":"aip2/0035-report-problem/#specific-challenges","title":"Specific Challenges","text":"

    All of the following challenges need to be addressed.

    1. Report problems to external parties interacting with us. For example, AliceCorp has to be able to tell Bob that it can\u2019t issue the credential he requested because his payment didn\u2019t go through.
    2. Report problems to other entities inside our own domain. For example, AliceCorp\u2019s agent #1 has to be able to report to AliceCorp agent #2 that it is out of disk space.
    3. Report in a way that provides human beings with useful context and guidance to troubleshoot. Most developers know of cases where error reporting was technically correct but completely useless. Bad communication about problems is one of the most common causes of UX debacles. Humans using agents will speak different languages, have differing degrees of technical competence, and have different software and hardware resources. They may lack context about what their agents are doing, such as when a DIDComm interaction occurs as a result of scheduled or policy-driven actions. This makes context and guidance crucial.
    4. Map a problem backward in time, space, and circumstances, so when it is studied, its original context is available. This is particularly difficult in DIDComm, which is transport-agnostic and inherently asynchronous, and which takes place on an inconsistently connected digital landscape.
    5. Support localization using techniques in the l10n RFC.
    6. Provide consistent, locale-independent problem codes, not just localized text, so problems can be researched in knowledge bases, on Stack Overflow, and in other internet forums, regardless of the natural language in which a message displays. This also helps meaning remain stable as wording is tweaked.
    7. Provide a registry of well known problem codes that are carefully defined and localized, to maximize shared understanding. Maintaining an exhaustive list of all possible things that can go wrong with all possible agents in all possible interactions is completely unrealistic. However, it may be possible to maintain a curated subset. While we can't enumerate everything that can go wrong in a financial transaction, a code for \"insufficient funds\" might have near-universal usefulness. Compare the POSIX error inventory in errno.h.
    8. Facilitate automated problem handling by agents, not just manual handling by humans. Perfect automation may be impossible, but high levels of automation should be doable.
    9. Clarify how the problem affects an in-progress interaction. Does a failure to process payment reset the interaction to the very beginning of the protocol, or just back to the previous step, where payment was requested? This requires problems to be matched in a formal way to the state machine of a protocol underway.
    "},{"location":"aip2/0035-report-problem/#the-report-problem-protocol","title":"The report-problem protocol","text":"

    Reporting problems uses a simple one-step notification protocol. Its official PIURI is:

    https://didcomm.org/report-problem/1.0\n

    The protocol includes the standard notifier and notified roles. It defines a single message type problem-report, introduced here. It also adopts the ack message from the ACK 1.0 protocol, to accommodate the possibility that the ~please_ack decorator may be used on the notification.

    A problem-report communicates about a problem when an agent-to-agent message is possible and a recipient for the problem report is known. This covers, for example, cases where a Sender's message gets to an intended Recipient, but the Recipient is unable to process the message for some reason and wants to notify the Sender. It may also be relevant in cases where the recipient of the problem-report is not a message Sender. Of course, a reporting technique that depends on message delivery doesn't apply when the error reporter can't identify or communicate with the proper recipient.

    "},{"location":"aip2/0035-report-problem/#the-problem-report-message-type","title":"The problem-report message type","text":"

    Only description.code is required, but a maximally verbose problem-report could contain all of the following:

    {\n  \"@type\"            : \"https://didcomm.org/report-problem/1.0/problem-report\",\n  \"@id\"              : \"an identifier that can be used to discuss this error message\",\n  \"~thread\"          : \"info about the threading context in which the error occurred (if any)\",\n  \"description\"      : { \"en\": \"localized message\", \"code\": \"symbolic-name-for-error\" },\n  \"problem_items\"    : [ {\"<item descrip>\": \"value\"} ],\n  \"who_retries\"      : \"enum: you | me | both | none\",\n  \"fix_hint\"         : { \"en\": \"localized error-instance-specific hint of how to fix issue\"},\n  \"impact\"           : \"enum: message | thread | connection\",\n  \"where\"            : \"enum: you | me | other - enum: cloud | edge | wire | agency | ..\",\n  \"noticed_time\"     : \"<time>\",\n  \"tracking_uri\"     : \"\",\n  \"escalation_uri\"   : \"\"\n}\n
    "},{"location":"aip2/0035-report-problem/#field-reference","title":"Field Reference","text":"

    Some fields will be relevant and useful in many use cases, but not always. Including empty or null fields is discouraged; best practice is to include as many fields as you can fill with useful data, and to omit the others.

    @id: An identifier for this message, as described in the message threading RFC. This decorator is STRONGLY recommended, because it enables a dialog about the problem itself in a branched thread (e.g., suggest a retry, report a resolution, ask for more information).

    ~thread: A thread decorator that places the problem-report into a thread context. If the problem was triggered in the processing of a message, then the triggering message is the head of a new thread of which the problem report is the second member (~thread.sender_order = 0). In such cases, the ~thread.pthid (parent thread id) here would be the @id of the triggering message. If the problem-report is unrelated to a message, the thread decorator is mostly redundant, as ~thread.thid must equal @id.

    description: Contains human-readable, localized alternative string(s) that explain the problem. It is highly recommended that the message follow the guidance in the l10n RFC, allowing the error to be searched on the web and documented formally.

    description.code: Required. Contains the code that indicates the problem being communicated. Codes are described in protocol RFCs and other relevant places. New Codes SHOULD follow the Problem Code naming convention detailed in the DIDComm v2 spec.

    problem_items: A list of one or more key/value pairs that are parameters about the problem. Some examples might be:

    All items should have in common the fact that they exemplify the problem described by the code (e.g., each is an invalid param, or each is an unresponsive URL, or each is an unrecognized crypto algorithm, etc).

    Each item in the list must be a tagged pair (a JSON {key:value}), where the key names the parameter or item, and the value is the actual problem text/number/value. For example, to report that two different endpoints listed in party B\u2019s DID Doc failed to respond when they were contacted, the code might contain \"endpoint-not-responding\", and the problem_items property might contain:

    [\n  {\"endpoint1\": \"http://agency.com/main/endpoint\"},\n  {\"endpoint2\": \"http://failover.agency.com/main/endpoint\"}\n]\n

    who_retries: value is the string \"you\", the string \"me\", the string \"both\", or the string \"none\". This property tells whether a problem is considered permanent and who the sender of the problem report believes should have the responsibility to resolve it by retrying. Rules about how many times to retry, and who does the retry, and under what circumstances, are not enforceable and not expressed in the message text. This property is thus not a strong commitment to retry--only a recommendation of who should retry, with the assumption that retries will often occur if they make sense.

    [TODO: figure out how to identify parties > 2 in n-wise interaction]

    fix_hint: Contains human-readable, localized suggestions about how to fix this instance of the problem. If present, this should be viewed as overriding general hints found in a message catalog.

    impact: A string describing the breadth of impact of the problem. An enumerated type:

    where: A string that describes where the error happened, from the perspective of the reporter, and that uses the \"you\" or \"me\" or \"other\" prefix, followed by a suffix like \"cloud\", \"edge\", \"wire\", \"agency\", etc.

    noticed_time: Standard time entry (ISO-8601 UTC with at least day precision and up to millisecond precision) of when the problem was detected.

    [TODO: should we refer to timestamps in a standard way (\"date\"? \"time\"? \"timestamp\"? \"when\"?)]

    tracking_uri: Provides a URI that allows the recipient to track the status of the error. For example, if the error is related to a service that is down, the URI could be used to monitor the status of the service, so its return to operational status could be automatically discovered.

    escalation_uri: Provides a URI where additional help on the issue can be received. For example, this might be a \"mailto\" and email address for the Help Desk associated with a currently down service.

    "},{"location":"aip2/0035-report-problem/#sample","title":"Sample","text":"
    {\n  \"@type\": \"https://didcomm.org/notification/1.0/problem-report\",\n  \"@id\": \"7c9de639-c51c-4d60-ab95-103fa613c805\",\n  \"~thread\": {\n    \"pthid\": \"1e513ad4-48c9-444e-9e7e-5b8b45c5e325\",\n    \"sender_order\": 1\n  },\n  \"~l10n\"            : {\"catalog\": \"https://didcomm.org/error-codes\"},\n  \"description\"      : \"Unable to find a route to the specified recipient.\",\n  \"description~l10n\" : {\"code\": \"cant-find-route\" },\n  \"problem_items\"    : [\n      { \"recipient\": \"did:sov:C805sNYhMrjHiqZDTUASHg\" }\n  ],\n  \"who_retries\"      : \"you\",\n  \"impact\"           : \"message\",\n  \"noticed_time\"     : \"2019-05-27 18:23:06Z\"\n}\n
    "},{"location":"aip2/0035-report-problem/#categorized-examples-of-errors-and-current-best-practice-handling","title":"Categorized Examples of Errors and (current) Best Practice Handling","text":"

    The following is a categorization of a number of examples of errors and (current) Best Practice handling for those types of errors. The new problem-report message type is used for some of these categories, but not all.

    "},{"location":"aip2/0035-report-problem/#unknown-error","title":"Unknown Error","text":"

    Errors with a known error code will be processed according to the meaning of that code. Support of a protocol includes support and proper processing of the error codes detailed within that protocol.

    Any unknown error code that starts with w. in the DIDComm v2 style may be considered a warning, and the flow of the active protocol SHOULD continue. All other unknown error codes SHOULD be considered to be an end to the active protocol.
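The default handling of unknown codes described above can be sketched as a small dispatch helper. This is a non-normative illustration of the SHOULD-level rules; the return strings are hypothetical labels, not protocol values.

```python
# Sketch of the default handling rule for unrecognized problem codes:
# DIDComm v2-style codes beginning with "w." are warnings and the active
# protocol SHOULD continue; all other unknown codes SHOULD end it.
def unknown_code_action(code):
    """Return the default action for an unknown problem code.

    The labels "continue" and "abort-protocol" are illustrative only.
    """
    if code.startswith("w."):
        return "continue"
    return "abort-protocol"
```

Known codes, by contrast, are handled per the definition in their protocol's RFC rather than by this default rule.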

    "},{"location":"aip2/0035-report-problem/#error-while-processing-a-received-message","title":"Error While Processing a Received Message","text":"

    An Agent Message sent by a Sender and received by its intended Recipient cannot be processed.

    "},{"location":"aip2/0035-report-problem/#examples","title":"Examples:","text":""},{"location":"aip2/0035-report-problem/#recommended-handling","title":"Recommended Handling","text":"

    The Recipient should send the Sender a problem-report Agent Message detailing the issue.

    The last example deserves an additional comment about whether there should be a response sent at all. Particularly in cases where trust in the message sender is low (e.g. when establishing the connection), an Agent may not want to send any response to a rejected message as even a negative response could reveal correlatable information. That said, if a response is provided, the problem-report message type should be used.

    "},{"location":"aip2/0035-report-problem/#error-while-routing-a-message","title":"Error While Routing A Message","text":"

    An Agent in the routing flow of getting a message from a Sender to the Agent Message Recipient cannot route the message.

    "},{"location":"aip2/0035-report-problem/#examples_1","title":"Examples:","text":""},{"location":"aip2/0035-report-problem/#recommended-handling_1","title":"Recommended Handling","text":"

    If the Sender is known to the Agent having the problem, send a problem-report Agent Message detailing at least that a blocking issue occurred, and if relevant (such as in the first example), some details about the issue. If the message is valid, and the problem is related to a lack of resources (e.g. the second issue), also send a problem-report message to an escalation point within the domain.

    Alternatively, the capabilities described in 0034: Message Tracing could be used to inform others of the fact that an issue occurred.

    "},{"location":"aip2/0035-report-problem/#messages-triggered-about-a-transaction","title":"Messages Triggered about a Transaction","text":""},{"location":"aip2/0035-report-problem/#examples_2","title":"Examples:","text":""},{"location":"aip2/0035-report-problem/#recommended-handling_2","title":"Recommended Handling","text":"

    These types of error scenarios represent a gray area in handling between using the generic problem-report message format or a message type that is part of the current transaction's message family. For example, the \"Your credential has been revoked\" might well be included as a part of the (TBD) standard Credentials Exchange message family. The \"more information\" example might be a generic error across a number of message families (and so should trigger a problem-report), or might be specific to the ongoing thread (e.g. Credential Exchange) and so be better handled by a defined message within that thread and that message family.

    The current advice on which to use in a given scenario is to consider how the recipient will handle the message. If the handler will need to process the response in a specific way for the transaction, then a message family-specific message type should be used. If the error is cross-cutting such that a common handler can be used across transaction contexts, then a generic problem-report should be used.

    \"Current advice\" implies that as we gain more experience with Agent To Agent messaging, the recommendations could get more precise.

    "},{"location":"aip2/0035-report-problem/#messaging-channel-settings","title":"Messaging Channel Settings","text":""},{"location":"aip2/0035-report-problem/#examples_3","title":"Examples","text":""},{"location":"aip2/0035-report-problem/#recommended-handling_3","title":"Recommended Handling","text":"

    These types of messages might or might not be triggered during the receipt and processing of a message, but either way, they are unrelated to the message and are really about the communication channel between the entities. In such cases, the recommended approach is to use a (TBD) standard message family to notify and rectify the issue (e.g. change the attributes of a connection). The definition of that message family is outside the scope of this RFC.

    "},{"location":"aip2/0035-report-problem/#timeouts","title":"Timeouts","text":"

    A special generic class of errors that deserves mention is the timeout, where a Sender sends out a message and does not receive back a response in a given time. In a distributed environment such as Agent to Agent messaging, these are particularly likely - and particularly difficult to handle gracefully. The potential reasons for timeouts are numerous:

    "},{"location":"aip2/0035-report-problem/#recommended-handling_4","title":"Recommended Handling","text":"

    Appropriate timeout handling is extremely contextual, with two key parameters driving the handling - the length of the waiting period before triggering the timeout and the response to a triggered timeout.

    The time to wait for a response should be dynamic by at least type of message, and ideally learned through experience. Messages requiring human interaction should have an inherently longer timeout period than a message expected to be handled automatically. Beyond that, it would be good for Agents to track response times by message type (and perhaps other parameters) and adjust timeouts to match observed patterns.
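As a non-normative sketch of learning timeouts from experience, the class below tracks observed response times per message type and derives a timeout from them. The three-sample minimum, the 3x-median margin, and the 30-second default are illustrative choices, not recommendations from this RFC.

```python
# Sketch: per-message-type timeouts learned from observed response times.
# The default, sample minimum, and 3x-median margin are illustrative.
from collections import defaultdict
import statistics

class TimeoutEstimator:
    def __init__(self, default_timeout=30.0):
        self.default = default_timeout
        self.samples = defaultdict(list)  # message type -> response times (s)

    def observe(self, msg_type, seconds):
        """Record how long a response to this message type took."""
        self.samples[msg_type].append(seconds)

    def timeout_for(self, msg_type):
        """Timeout to use for the next message of this type."""
        times = self.samples[msg_type]
        if len(times) < 3:  # not enough history yet; fall back to default
            return self.default
        return 3 * statistics.median(times)
```

A message type requiring human interaction would naturally accumulate longer samples and thus a longer timeout than one handled automatically.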

    When a timeout is received there are three possible responses, handled automatically or based on feedback from the user:

    An automated \"wait longer\" response might be used when first interacting with a particular message type or identity, as the response cadence is learned.

    If the decision is to retry, it would be good to have support in areas covered by other RFCs. First, it would be helpful (and perhaps necessary) for the threading decorator to support the concept of retries, so that a Recipient would know when a message is a retry of an already sent message. Next, on \"forward\" message types, Agents might want to know that a message was a retry such that they can consider refreshing DIDDoc/encryption key cache before sending the message along. It could also be helpful for a retry to interact with the Tracing facility so that more information could be gathered about why messages are not getting to their destination.

    Excessive retrying can exacerbate an existing system issue. If the reason for the timeout is because there is a \"too many messages to be processed\" situation, then sending retries simply makes the problem worse. As such, a reasonable backoff strategy should be used (e.g. exponentially increasing times between retries). As well, a strategy used at Uber is to flag and handle retries differently from regular messages. The analogy with Uber is not pure - that is a single-vendor system - but the notion of flagging retries such that retry messages can be handled differently is a good approach.
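The backoff and retry-flagging ideas above can be sketched as follows. The `~retry` decorator name is hypothetical (no such decorator is standardized), and the base, factor, and cap values are illustrative.

```python
# Sketch: exponentially increasing retry delays, plus flagging a resent
# message so it can be handled differently from a first send. The "~retry"
# decorator name is hypothetical; base/factor/cap values are illustrative.
def backoff_delay(attempt, base=1.0, factor=2.0, cap=60.0):
    """Seconds to wait before retry number `attempt` (0-based)."""
    return min(cap, base * factor ** attempt)

def mark_retry(message, attempt):
    """Return a copy of the message flagged as a retry."""
    marked = dict(message)
    marked["~retry"] = {"attempt": attempt}
    return marked
```

Adding random jitter to each delay would further spread retries when many agents back off from the same overloaded endpoint at once.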

    "},{"location":"aip2/0035-report-problem/#caveat-problem-report-loops","title":"Caveat: Problem Report Loops","text":"

    Implementers should consider and mitigate the risk of an endless loop of error messages. For example:

    "},{"location":"aip2/0035-report-problem/#recommended-handling_5","title":"Recommended Handling","text":"

    How agents mitigate the risk of this problem is implementation specific, balancing loop-tracking overhead versus the likelihood of occurrence. For example, an agent implementation might have a counter on a connection object that is incremented when certain types of Problem Report messages are sent on that connection, and reset when any other message is sent. The agent could stop sending those types of Problem Report messages after the counter reaches a given value.
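The counter-based mitigation described above might look like this; the limit of three consecutive reports per connection is an illustrative choice, and real implementations may scope the counter to particular problem-report types.

```python
# Sketch: suppressing problem-report loops with a per-connection counter
# that any other outbound message resets. The limit of 3 is illustrative.
from collections import defaultdict

class ProblemReportThrottle:
    def __init__(self, limit=3):
        self.limit = limit
        self.counts = defaultdict(int)  # connection id -> consecutive reports

    def allow_problem_report(self, connection_id):
        """True if another problem-report may be sent on this connection."""
        if self.counts[connection_id] >= self.limit:
            return False
        self.counts[connection_id] += 1
        return True

    def on_other_message(self, connection_id):
        """Any non-problem-report outbound message resets the counter."""
        self.counts[connection_id] = 0
```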

    "},{"location":"aip2/0035-report-problem/#reference","title":"Reference","text":"

    TBD

    "},{"location":"aip2/0035-report-problem/#drawbacks","title":"Drawbacks","text":"

    In many cases, a specific problem-report message is necessary, so formalizing the format of the message is also preferred over leaving it to individual implementations. There is no drawback to specifying that format now.

    As experience is gained with handling distributed errors, the recommendations provided in this RFC will have to evolve.

    "},{"location":"aip2/0035-report-problem/#rationale-and-alternatives","title":"Rationale and alternatives","text":"

    The error type specification mechanism builds on the same approach used by the message type specifications. It's possible that additional capabilities could be gained by making runtime use of the error type specification - e.g. for the broader internationalization of the error messages.

    The main alternative to a formally defined error type format is leaving it to individual implementations to handle error notifications, which will not lead to an effective solution.

    "},{"location":"aip2/0035-report-problem/#prior-art","title":"Prior art","text":"

    A brief search was done for error handling in messaging systems with few useful results found. Perhaps the best was the Uber article referenced in the \"Timeout\" section above.

    "},{"location":"aip2/0035-report-problem/#unresolved-questions","title":"Unresolved questions","text":""},{"location":"aip2/0035-report-problem/#implementations","title":"Implementations","text":"

    The following lists the implementations (if any) of this RFC. Please do a pull request to add your implementation. If the implementation is open source, include a link to the repo or to the implementation within the repo. Please be consistent in the \"Name\" field so that a mechanical processing of the RFCs can generate a list of all RFCs supported by an Aries implementation.

    Name / Link Implementation Notes RFC 0036: Issue Credential Protocol The problem-report message is adopted by this protocol. MISSING test results RFC 0037: Present Proof Protocol The problem-report message is adopted by this protocol. MISSING test results Trinsic.id Commercial mobile and web app built using Aries Framework - .NET MISSING test results"},{"location":"aip2/0044-didcomm-file-and-mime-types/","title":"Aries RFC 0044: DIDComm File and MIME Types","text":""},{"location":"aip2/0044-didcomm-file-and-mime-types/#summary","title":"Summary","text":"

    Defines the media (MIME) types and file types that hold DIDComm messages in encrypted, signed, and plaintext forms. Covers DIDComm V1, plus a little of V2 to clarify how DIDComm versions are detected.

    "},{"location":"aip2/0044-didcomm-file-and-mime-types/#motivation","title":"Motivation","text":"

    Most work on DIDComm so far has assumed HTTP as a transport. However, we know that DID communication is transport-agnostic. We should be able to say the same thing no matter which channel we use.

    An incredibly important channel or transport for messages is digital files. Files can be attached to messages in email or chat, can be carried around on a thumb drive, can be backed up, can be distributed via CDN, can be replicated on distributed file systems like IPFS, can be inserted in an object store or in content-addressable storage, can be viewed and modified in editors, and support a million other uses.

    We need to define how files and attachments can contain DIDComm messages, and what the semantics of processing such files will be.

    "},{"location":"aip2/0044-didcomm-file-and-mime-types/#tutorial","title":"Tutorial","text":""},{"location":"aip2/0044-didcomm-file-and-mime-types/#media-types","title":"Media Types","text":"

    Media types are based on the conventions of RFC6838. Similar to RFC7515, the application/ prefix MAY be omitted and the recipient MUST treat media types not containing / as having the application/ prefix present.
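The prefix rule above reduces to a one-line normalization on receipt; this sketch is non-normative and assumes the value is otherwise a well-formed media type.

```python
# Sketch of the RFC6838/RFC7515-style convention described above: a media
# type value containing no "/" is treated as having the "application/" prefix.
def normalize_media_type(value):
    """Apply the implied "application/" prefix if none is present."""
    return value if "/" in value else "application/" + value
```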

    "},{"location":"aip2/0044-didcomm-file-and-mime-types/#didcomm-v1-encrypted-envelope-dee","title":"DIDComm v1 Encrypted Envelope (*.dee)","text":"

    The raw bytes of an encrypted envelope may be persisted to a file without any modifications whatsoever. In such a case, the data will be encrypted and packaged such that only specific receiver(s) can process it. However, the file will contain a JOSE-style header that can be used by magic bytes algorithms to detect its type reliably.

    The file extension associated with this filetype is dee, giving a globbing pattern of *.dee; this should be read as \"STAR DOT D E E\" or as \"D E E\" files.

    The name of this file format is \"DIDComm V1 Encrypted Envelope.\" We expect people to say, \"I am looking at a DIDComm V1 Encrypted Envelope\", or \"This file is in DIDComm V1 Encrypted Envelope format\", or \"Does my editor have a DIDComm V1 Encrypted Envelope plugin?\"

    Although the format of encrypted envelopes is derived from JSON and the JWT/JWE family of specs, no useful processing of these files will take place by viewing them as JSON, and viewing them as generic JWEs will greatly constrain which semantics are applied. Therefore, the recommended MIME type for *.dee files is application/didcomm-envelope-enc, with application/jwe as a fallback, and application/json as an even less desirable fallback. (In this, we are making a choice similar to the one that views *.docx files primarily as application/msword instead of application/xml.) If format evolution takes place, the version could become a parameter as described in RFC 1341: application/didcomm-envelope-enc;v=2.

    A recipient using the media type value MUST treat it as if \u201capplication/\u201d were prepended to any \"typ\" or \"cty\" value not containing a \u2018/\u2019 in compliance with the JWE/JWS family of specs.

    The default action for DIDComm V1 Encrypted Envelopes (what happens when a user double-clicks one) should be Handle (that is, process the message as if it had just arrived by some other transport), if the software handling the message is an agent. In other types of software, the default action might be to view the file. Other useful actions might include Send, Attach (to email, chat, etc), Open with agent, and Decrypt to *.dm.

    NOTE: The analog to this content type in DIDComm v2 is called a \"DIDComm Encrypted Message.\" Its format is slightly different. For more info, see Detecting DIDComm Versions below.

    "},{"location":"aip2/0044-didcomm-file-and-mime-types/#didcomm-v1-signed-envelopes-dse","title":"DIDComm V1 Signed Envelopes (*.dse)","text":"

    When DIDComm messages are signed, the signing uses a JWS signing envelope. Often signing is unnecessary, since authenticated encryption proves the sender of the message to the recipient(s), but sometimes when non-repudiation is required, this envelope is used. It is also required when the recipient of a message is unknown, but tamper-evidence is still required, as in the case of a public invitation.

    By convention, DIDComm Signed Envelopes contain plaintext; if encryption is used in combination with signing, the DSE goes inside the DEE.

    The file extension associated with this filetype is dse, giving a globbing pattern of *.dse; this should be read as \"STAR DOT D S E\" or as \"D S E\" files.

    The name of this file format is \"DIDComm V1 Signed Envelope.\" We expect people to say, \"I am looking at a DIDComm V1 Signed Envelope\", or \"This file is in DIDComm V1 Signed Envelope format\", or \"Does my editor have a DIDComm V1 Signed Envelope plugin?\"

    As with *.dee files, the best way to handle *.dse files is to map them to a custom MIME type. The recommendation is application/didcomm-sig-env, with application/jws as a fallback, and application/json as an even less desirable fallback.

    A recipient using the media type value MUST treat it as if \u201capplication/\u201d were prepended to any \"typ\" or \"cty\" value not containing a \u2018/\u2019 in compliance with the JWE/JWS family of specs.

    The default action for DIDComm V1 Signed Envelopes (what happens when a user double-clicks one) should be Validate (that is, process the signature to see if it is valid).

    NOTE: The analog to this content type in DIDComm v2 is called a \"DIDComm Signed Message.\" Its format is slightly different. For more info, see Detecting DIDComm Versions below.

    "},{"location":"aip2/0044-didcomm-file-and-mime-types/#didcomm-v1-messages-dm","title":"DIDComm V1 Messages (*.dm)","text":"

    The plaintext representation of a DIDComm message--something like a credential offer, a proof request, a connection invitation, or anything else worthy of a DIDComm protocol--is JSON. As such, it should be editable by anything that expects JSON.

    However, all such files have some additional conventions, over and above the simple requirements of JSON. For example, key decorators have special meaning (@id, ~thread, @trace, etc.). Nonces may be especially significant. The format of particular values such as DID and DID+key references is important. Therefore, we refer to these messages generically as JSON, but we also define a file format for tools that are aware of the additional semantics.
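A hypothetical *.dm payload can make these conventions concrete. The message type URI, IDs, and field values below are illustrative only, not drawn from a specific protocol RFC:

```python
import json

# Hypothetical DIDComm V1 plaintext message (*.dm). Note the decorated
# keys (@type, @id, ~thread) layered on top of ordinary JSON.
msg = {
    "@type": "did:sov:BzCbsNYhMrjHiqZDTUASHg;spec/basicmessage/1.0/message",
    "@id": "57bf42a2-8bd2-4db5-9b78-0c8f57a3f050",
    "~thread": {"thid": "7d9bd387-3c39-4b64-ba08-8e34d6e1018b"},
    "content": "Hello, Bob!",
}

serialized = json.dumps(msg)  # still plain JSON to any ordinary tool
```

Any JSON-aware editor can open such a file; only DIDComm-aware tools attach meaning to the decorated keys.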

    The file extension associated with this filetype is *.dm, and should be read as \"STAR DOT D M\" or \"D M\" files. If a format evolution takes place, a subsequent version could be noted by appending a digit, as in *.dm2 for second-generation dm files.

    The name of this file format is \"DIDComm V1 Message.\" We expect people to say, \"I am looking at a DIDComm V1 Message\", or \"This file is in DIDComm V1 Message format\", or \"Does my editor have a DIDComm V1 Message plugin?\" For extra clarity, it is acceptable to add the adjective \"plaintext\", as in \"DIDComm V1 Plaintext Message.\"

    The most specific MIME type of *.dm files is application/json;flavor=didcomm-msg--or, if more generic handling is appropriate, just application/json.

    A recipient using the media type value MUST treat it as if \u201capplication/\u201d were prepended to any \"typ\" or \"cty\" value not containing a \u2018/\u2019 in compliance with the JWE/JWS family of specs.

    The default action for DIDComm V1 Messages should be to View or Validate them. Other interesting actions might be Encrypt to *.dee, Sign to *.dse, and Find definition of protocol.

    NOTE: The analog to this content type in DIDComm v2 is called a \"DIDComm Plaintext Message.\" Its format is slightly different. For more info, see Detecting DIDComm Versions below.

    As a general rule, DIDComm messages that are being sent in production use cases of DID communication should be stored in encrypted form (*.dee) at rest. There are cases where this might not be preferred, e.g., providing documentation of the format of message or during a debugging scenario using message tracing. However, these are exceptional cases. Storing meaningful *.dm files decrypted is not a security best practice, since it replaces all the privacy and security guarantees provided by the DID communication mechanism with only the ACLs and other security barriers that are offered by the container.

    "},{"location":"aip2/0044-didcomm-file-and-mime-types/#native-object-representation","title":"Native Object representation","text":"

    This is not a file format, but rather an in-memory form of a DIDComm Message using whatever object hierarchy is natural for a programming language to map to and from JSON. For example, in python, the natural Native Object format is a dict that contains properties indexed by strings. This is the representation that python's json library expects when converting to JSON, and the format it produces when converting from JSON. In Java, Native Object format might be a bean. In C++, it might be a std::map<std::string, variant>...

    There can be more than one Native Object representation for a given programming language.

    Native Object forms are never rendered directly to files; rather, they are serialized to DIDComm Plaintext Format and then persisted (likely after also encrypting to DIDComm V1 Encrypted Envelope).
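The round trip between Native Object form and DIDComm plaintext format can be shown in Python, where the Native Object representation is a dict (field names here are illustrative):

```python
import json

# A DIDComm message arrives as text, is parsed into the language's
# Native Object form (a dict in Python), manipulated in memory, then
# serialized back to plaintext before persistence or encryption.
text = '{"@type": "example/1.0/ping", "@id": "abc123", "comment": "hi"}'
native = json.loads(text)           # Native Object representation
native["comment"] = "hi there"      # work with it as ordinary objects
round_tripped = json.dumps(native)  # back to plaintext JSON
```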

    "},{"location":"aip2/0044-didcomm-file-and-mime-types/#negotiating-compatibility","title":"Negotiating Compatibility","text":"

    When parties want to communicate via DIDComm, a number of mechanisms must align. These include:

    1. The type of service endpoint used by each party
    2. The key types used for encryption and/or signing
    3. The format of the encryption and/or signing envelopes
    4. The encoding of plaintext messages
    5. The protocol used to forward and route
    6. The protocol embodied in the plaintext messages

    Although DIDComm allows flexibility in each of these choices, it is not expected that a given DIDComm implementation will support many permutations. Rather, we expect a few sets of choices that commonly go together. We call a set of choices that work well together a profile. Profiles are identified by a string that matches the conventions of IANA media types, but they express choices about plaintext, encryption, signing, and routing in a single value. The following profile identifiers are defined in this version of the RFC:

    "},{"location":"aip2/0044-didcomm-file-and-mime-types/#defined-profiles","title":"Defined Profiles","text":"

    Profiles are named in the accept section of a DIDComm service endpoint and in an out-of-band message. When Alice declares that she accepts didcomm/aip2;env=rfc19, she is making a declaration about more than her own endpoint. She is saying that all publicly visible steps in an inbound route to her will use the didcomm/aip2;env=rfc19 profile, such that a sender only has to use didcomm/aip2;env=rfc19 choices to get the message from Alice's outermost mediator to Alice's edge. It is up to Alice to select and configure mediators and internal routing in such a way that this is true for the sender.
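A service entry naming a profile in its accept section might look like the following sketch. The field names follow common Aries DID Document conventions, but the values are hypothetical:

```python
# Illustrative DID Document service entry declaring a supported profile.
# All identifiers and the endpoint URL are made up for this example.
service = {
    "id": "did:example:alice#didcomm",
    "type": "did-communication",
    "serviceEndpoint": "https://agents-r-us.example/inbox",
    "accept": ["didcomm/aip2;env=rfc19"],  # profile for the whole inbound route
    "routingKeys": ["did:example:mediator#key-1"],
}
```

A sender consults the accept list and uses only the declared profile's choices all the way from the outermost mediator to Alice's edge.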

    "},{"location":"aip2/0044-didcomm-file-and-mime-types/#detecting-didcomm-versions","title":"Detecting DIDComm Versions","text":"

    Because media types differ from DIDComm V1 to V2, and because media types are easy to communicate in headers and message fields, they are a convenient way to detect which version of DIDComm applies in a given context:

    Nature of Content / V1 / V2: encrypted: application/didcomm-envelope-enc (DIDComm V1 Encrypted Envelope, *.dee) in V1; application/didcomm-encrypted+json (DIDComm Encrypted Message, *.dcem) in V2. signed: application/didcomm-sig-env (DIDComm V1 Signed Envelope, *.dse) in V1; application/didcomm-signed+json (DIDComm Signed Message, *.dcsm) in V2. plaintext: application/json;flavor=didcomm-msg (DIDComm V1 Message, *.dm) in V1; application/didcomm-plain+json (DIDComm Plaintext Message, *.dcpm) in V2.
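In code, the mapping above lets a recipient route incoming content by version. A minimal non-normative sketch (function and variable names are ours):

```python
# Media types taken from the version table above.
V1_TYPES = {
    "application/didcomm-envelope-enc",
    "application/didcomm-sig-env",
    "application/json;flavor=didcomm-msg",
}
V2_TYPES = {
    "application/didcomm-encrypted+json",
    "application/didcomm-signed+json",
    "application/didcomm-plain+json",
}

def didcomm_version(media_type: str) -> int:
    """Return the DIDComm major version implied by a media type."""
    if media_type in V1_TYPES:
        return 1
    if media_type in V2_TYPES:
        return 2
    raise ValueError(f"not a known DIDComm media type: {media_type}")
```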

    It is also recommended that agents implementing Discover Features Protocol v2 respond to queries about supported DIDComm versions using the didcomm-version feature name. This allows queries about what an agent is willing to support, whereas the media type mechanism describes what is in active use. The values that should be returned from such a query are URIs that tell where DIDComm versions are developed:

    Version / URI: V1: https://github.com/hyperledger/aries-rfcs; V2: https://github.com/decentralized-identity/didcomm-messaging"},{"location":"aip2/0044-didcomm-file-and-mime-types/#what-it-means-to-implement-this-rfc","title":"What it means to \"implement\" this RFC","text":"

    For the purposes of Aries Interop Profiles, an agent \"implements\" this RFC when:

    "},{"location":"aip2/0044-didcomm-file-and-mime-types/#reference","title":"Reference","text":"

    The file extensions and MIME types described here are also accompanied by suggested graphics. Vector forms of these graphics are available.

    "},{"location":"aip2/0044-didcomm-file-and-mime-types/#implementations","title":"Implementations","text":"

    The following lists the implementations (if any) of this RFC. Please do a pull request to add your implementation. If the implementation is open source, include a link to the repo or to the implementation within the repo. Please be consistent in the \"Name\" field so that a mechanical processing of the RFCs can generate a list of all RFCs supported by an Aries implementation.

    Name / Link Implementation Notes"},{"location":"aip2/0046-mediators-and-relays/","title":"Aries RFC 0046: Mediators and Relays","text":""},{"location":"aip2/0046-mediators-and-relays/#summary","title":"Summary","text":"

    The mental model for agent-to-agent (A2A) messaging includes two important communication primitives that have a meaning unique to our ecosystem: mediator and relay.

    A mediator is a participant in agent-to-agent message delivery that must be modeled by the sender. It has its own keys and will deliver messages only after decrypting an outer envelope to reveal a forward request. Many types of mediators may exist, but two important ones should be widely understood, as they commonly manifest in DID Docs:

    1. A service that hosts many cloud agents at a single endpoint to provide herd privacy (an \"agency\") is a mediator.
    2. A cloud-based agent that routes between/among the edges of a sovereign domain is a mediator.

    A relay is an entity that passes along agent-to-agent messages, but that can be ignored when the sender considers encryption choices. It does not decrypt anything. Relays can be used to change the transport for a message (e.g., accept an HTTP POST, then turn around and emit an email; accept a Bluetooth transmission, then turn around and emit something in a message queue). Mix networks like TOR are an important type of relay.

    Read on to explore how agent-to-agent communication can model complex topologies and flows using these two primitives.

    "},{"location":"aip2/0046-mediators-and-relays/#motivation","title":"Motivation","text":"

    When we describe agent-to-agent communication, it is convenient to think of an interaction only in terms of Alice and Bob and their agents. We say things like: \"Alice's agent sends a message to Bob's agent\" -- or perhaps \"Alice's edge agent sends a message to Bob's cloud agent, which forwards it to Bob's edge agent\".

    Such statements adopt a useful level of abstraction--one that's highly recommended for most discussions. However, they make a number of simplifications. By modeling the roles of mediators and relays in routing, we can support routes that use multiple transports, routes that are not fully known (or knowable) to the sender, routes that pass through mix networks, and other advanced and powerful concepts.

    "},{"location":"aip2/0046-mediators-and-relays/#tutorial","title":"Tutorial","text":""},{"location":"aip2/0046-mediators-and-relays/#key-concepts","title":"Key Concepts","text":"

    Let's define mediators and relays by exploring how they manifest in a series of communication scenarios between Alice and Bob.

    "},{"location":"aip2/0046-mediators-and-relays/#scenario-1-base","title":"Scenario 1 (base)","text":"

    Alice and Bob are both employees of a large corporation. They work in the same office, but have never met. The office has a rule that all messages between employees must be encrypted. They use paper messages and physical delivery as the transport. Alice writes a note, encrypts it so only Bob can read it, puts it in an envelope addressed to Bob, and drops the envelope on a desk that she has been told belongs to Bob. This desk is in fact Bob's, and he later picks up the message, decrypts it, and reads it.

    In this scenario, there is no mediator, and no relay.

    "},{"location":"aip2/0046-mediators-and-relays/#scenario-2-a-gatekeeper","title":"Scenario 2: a gatekeeper","text":"

    Imagine that Bob hires an executive assistant, Carl, to filter his mail. Bob won't open any mail unless Carl looks at it and decides that it's worthy of Bob's attention.

    Alice has to change her behavior. She continues to package a message for Bob, but now she must account for Carl as well. She take the envelope for Bob, and places it inside a new envelope addressed to Carl. Inside the outer envelope, and next to the envelope destined for Bob, Alice writes Carl an encrypted note: \"This inner envelope is for Bob. Please forward.\"

    Here, Carl is acting as a mediator. He is mostly just passing messages along. But because he is processing a message himself, and because Carl is interposed between Alice and Bob, he affects the behavior of the sender. He is a known entity in the route.

    "},{"location":"aip2/0046-mediators-and-relays/#scenario-3-transparent-indirection","title":"Scenario 3: transparent indirection","text":"

    All is the same as the base scenario (Carl has been fired), except that Bob is working from home when Alice's message lands on his desk. Bob has previously arranged with his friend Darla, who lives near him, to pick up any mail that's on his desk and drop it off at his house at the end of the work day. Darla sees Alice's note and takes it home to Bob.

    In this scenario, Darla is acting as a relay. Note that Bob arranges for Darla to do this without notifying Alice, and that Alice does not need to adjust her behavior in any way for the relay to work.

    "},{"location":"aip2/0046-mediators-and-relays/#scenario-4-more-indirection","title":"Scenario 4: more indirection","text":"

    Like scenario 3, Darla brings Bob his mail at home. However, Bob isn't at home when his mail arrives. He's had to rush out on an errand, but he's left instructions with his son, Emil, to open any work mail, take a photo of the letter, and text him the photo. Emil intends to do this, but the camera on his phone misfires, so he convinces his sister, Francis, to take the picture on her phone and email it to him. Then he texts the photo to Bob, as arranged.

    Here, Emil and Francis are also acting as relays. Note that nobody knows about the full route. Alice thinks she's delivering directly to Bob. So does Darla. Bob knows about Darla and Emil, but not about Francis.

    Note, too, how the transport is changing from physical mail to email to text.

    To the party immediately upstream (closer to the sender), a relay is indistinguishable from the next party downstream (closer to the recipient). A party anywhere in the chain can insert one or more relays upstream from themselves, as long as those relays are not upstream of another named party (sender or mediator).

    "},{"location":"aip2/0046-mediators-and-relays/#more-scenarios","title":"More Scenarios","text":"

    Mediators and relays can be combined in any order and any amount in variations on our fictional scenario. Bob could employ Carl as a mediator, and Carl could work from home and arrange delivery via George, then have his daughter Hannah run messages back to Bob's desk at work. Carl could hire his own mediator. Darla could arrange for Ivan to substitute for her when she goes on vacation. And so forth.

    "},{"location":"aip2/0046-mediators-and-relays/#more-traditional-usage","title":"More Traditional Usage","text":"

    The scenarios used above are somewhat artificial. Our most familiar agent-to-agent scenarios involve edge agents running on mobile devices and accessible through bluetooth or push notification, and cloud agents that use electronic protocols as their transport. Let's see how relays and mediators apply there.

    "},{"location":"aip2/0046-mediators-and-relays/#scenario-5-traditional-base","title":"Scenario 5 (traditional base)","text":"

    Alice's cloud agent wants to talk to Bob's cloud agent. Bob's cloud agent is listening at http://bob.com/agent. Alice encrypts a message for Bob and posts it to that URL.

    In this scenario, we are using a direct transport with neither a mediator nor a relay.

    If you are familiar with common routing patterns and you are steeped in HTTP, you are likely objecting at this point, pointing out ways that this description diverges from best practice, including what's prescribed in other RFCs. You may be eager to explain why this is a privacy problem, for example.

    You are not wrong, exactly. But please suspend those concerns and hang with me. This is about what's theoretically possible in the mental model. Besides, I would note that virtually the same diagram could be used for a Bluetooth agent conversation:

    "},{"location":"aip2/0046-mediators-and-relays/#scenario-6-herd-hosting","title":"Scenario 6: herd hosting","text":"

    Let's tweak Scenario 5 slightly by saying that Bob's agent is one of thousands that are hosted at the same URL. Maybe the URL is now http://agents-r-us.com/inbox. Now if Alice wants to talk to Bob's cloud agent, she has to cope with a mediator. She wraps the encrypted message for Bob's cloud agent inside a forward message that's addressed to and encrypted for the agent of agents-r-us that functions as a gatekeeper.
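Alice's wrapping step can be pictured as data. This sketch is modeled on the routing protocol's forward message; the field values are assumptions for illustration, and in a real exchange msg would carry the encrypted envelope for Bob's agent, with the whole forward message itself encrypted for the mediator:

```python
# Non-normative sketch of the envelope-in-envelope wrapping.
# In practice this inner payload is opaque ciphertext, shown here as a
# placeholder dict for readability.
inner_for_bob = {"ciphertext": "<encrypted payload for Bob's cloud agent>"}

forward = {
    "@type": "did:sov:BzCbsNYhMrjHiqZDTUASHg;spec/routing/1.0/forward",
    "to": "did:example:bob#agent",  # hypothetical addressee reference
    "msg": inner_for_bob,
}
```

The gatekeeper agent at agents-r-us decrypts only the outer layer, reads the forward request, and passes the still-encrypted inner payload along.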

    This scenario is one that highlights an external mediator--so-called because the mediator lives outside the sovereign domain of the final recipient.

    "},{"location":"aip2/0046-mediators-and-relays/#scenario-7-intra-domain-dispatch","title":"Scenario 7: intra-domain dispatch","text":"

    Now let's subtract agents-r-us. We're back to Bob's cloud agent listening directly at http://bob.com/agent. However, let's say that Alice has a different goal--now she wants to talk to the edge agent running on Bob's mobile device. This agent doesn't have a permanent IP address, so Bob uses his own cloud agent as a mediator. He tells Alice that his mobile device agent can only be reached via his cloud agent.

    Once again, this causes Alice to modify her behavior. Again, she wraps her encrypted message. The inner message is enclosed in an outer envelope, and the outer envelope is passed to the mediator.

    This scenario highlights an internal mediator. Internal and external mediators introduce similar features and similar constraints; the relevant difference is that internal mediators live within the sovereign domain of the recipient, and may thus be worthy of greater trust.

    "},{"location":"aip2/0046-mediators-and-relays/#scenario-8-double-mediation","title":"Scenario 8: double mediation","text":"

    Now let's combine. Bob's cloud agent is hosted at agents-r-us, AND Alice wants to reach Bob's mobile:

    This is a common pattern with HTTP-based cloud agents plus mobile edge agents, which is the most common deployment pattern we expect for many users of self-sovereign identity. Note that the properties of the agency and the routing agent are not particularly special--they are just an external and an internal mediator, respectively.

    "},{"location":"aip2/0046-mediators-and-relays/#related-concepts","title":"Related Concepts","text":""},{"location":"aip2/0046-mediators-and-relays/#routes-are-one-way-not-duplex","title":"Routes are One-Way (not duplex)","text":"

    In all of this discussion, note that we are analyzing only a flow from Alice to Bob. How Bob gets a message back to Alice is a completely separate question. Just because Carl, Darla, Emil, Francis, and Agents-R-Us may be involved in how messages flow from Alice to Bob, does not mean they are involved in how messages flow in the opposite direction.

    Note how this breaks the simple assumptions of pure request-response technologies like HTTP, that assume the channel in (request) is also the channel out (response). Duplex request-response can be modeled with A2A, but doing so requires support that may not always be available, plus cooperative behavior governed by the ~thread decorator.
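Correlating a reply to its request across independent one-way routes rests on the ~thread decorator. A hedged sketch, with illustrative message types and IDs:

```python
# The response travels over a completely separate route from the request;
# the ~thread decorator's thid is what ties the two together.
request = {
    "@type": "example/1.0/question",  # hypothetical message type
    "@id": "req-001",
    "text": "ready?",
}
response = {
    "@type": "example/1.0/answer",
    "@id": "resp-777",
    "~thread": {"thid": request["@id"]},  # correlates reply to request
    "text": "yes",
}
```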

    "},{"location":"aip2/0046-mediators-and-relays/#conventions-on-direction","title":"Conventions on Direction","text":"

    For any given one-way route, the direction of flow is always from sender to receiver. We could use many different metaphors to talk about the \"closer to sender\" and \"closer to receiver\" directions -- upstream and downstream, left and right, before and after, in and out. We've chosen to standardize on two:

    "},{"location":"aip2/0046-mediators-and-relays/#implementations","title":"Implementations","text":"

    The following lists the implementations (if any) of this RFC. Please do a pull request to add your implementation. If the implementation is open source, include a link to the repo or to the implementation within the repo. Please be consistent in the \"Name\" field so that a mechanical processing of the RFCs can generate a list of all RFCs supported by an Aries implementation.

    Name / Link Implementation Notes Connect.Me Free mobile app from Evernym. Installed via app store on iOS and Android. Verity Commercially licensed enterprise agent, SaaS or on-prem. DIDComm mediator Open source cloud-based mediator with Firebase support."},{"location":"aip2/0047-json-ld-compatibility/","title":"Aries RFC 0047: JSON-LD Compatibility","text":""},{"location":"aip2/0047-json-ld-compatibility/#summary","title":"Summary","text":"

    Explains the goals of DID Communication with respect to JSON-LD, and how Aries proposes to accomplish them.

    "},{"location":"aip2/0047-json-ld-compatibility/#motivation","title":"Motivation","text":"

    JSON-LD is a familiar body of conventions that enriches the expressive power of plain JSON. It is natural for people who arrive in the DID Communication (DIDComm) ecosystem to wonder whether we are using JSON-LD--and if so, how. We need a coherent answer that clarifies our intentions and that keeps us true to those intentions as the ecosystem evolves.

    "},{"location":"aip2/0047-json-ld-compatibility/#tutorial","title":"Tutorial","text":"

    The JSON-LD spec is a recommendation work product of the W3C RDF Working Group. Since it was formally recommended as version 1.0 in 2014, the JSON for Linking Data Community Group has taken up not-yet-standards-track work on a 1.1 update.

    JSON-LD has significant gravitas in identity circles. It gives to JSON some capabilities that are sorely needed to model the semantic web, including linking, namespacing, datatyping, signing, and a strong story for schema (partly through the use of JSON-LD on schema.org).

    However, JSON-LD also comes with some conceptual and technical baggage. It can be hard for developers to master its subtleties; it requires very flexible parsing behavior after built-in JSON support is used to deserialize; it references a family of related specs that have their own learning curve; the formality of its test suite and libraries may get in the way of a developer who just wants to read and write JSON and \"get stuff done.\"

    In addition, the problem domain of DIDComm is somewhat different from the places where JSON-LD has the most traction. The sweet spot for DIDComm is small, relatively simple JSON documents where code behavior is strongly bound to the needs of a specific interaction. DIDComm needs to work with extremely simple agents on embedded platforms. Such agents may experience full JSON-LD support as an undue burden when they don't even have a familiar desktop OS. They don't need arbitrary semantic complexity.

    If we wanted to use email technology to send a verifiable credential, we would model the credential as an attachment, not enrich the schema of raw email message bodies. DIDComm invites a similar approach.

    "},{"location":"aip2/0047-json-ld-compatibility/#goal","title":"Goal","text":"

    The DIDComm messaging effort that began in the Indy community wants to benefit from the accessibility of ordinary JSON, but leave an easy path for more sophisticated JSON-LD-driven patterns when the need arises. We therefore set for ourselves this goal:

    Be compatible with JSON-LD, such that advanced use cases can take advantage of it where it makes sense, but impose no dependencies on the mental model or the tooling of JSON-LD for the casual developer.

    "},{"location":"aip2/0047-json-ld-compatibility/#what-the-casual-developer-needs-to-know","title":"What the Casual Developer Needs to Know","text":"

    That's it.

    "},{"location":"aip2/0047-json-ld-compatibility/#details","title":"Details","text":"

    Compatibility with JSON-LD was evaluated against version 1.1 of the JSON-LD spec, current in early 2019. If material changes in the spec are forthcoming, a new analysis may be worthwhile. Our current understanding follows.

    "},{"location":"aip2/0047-json-ld-compatibility/#type","title":"@type","text":"

    The type of a DIDComm message, and its associated route or handler in dispatching code, is given by the JSON-LD @type property at the root of a message. JSON-LD requires this value to be an IRI. DIDComm DID references are fully compliant. Instances of @type on any node other than a message root have JSON-LD meaning, but no predefined relevance in DIDComm.

    "},{"location":"aip2/0047-json-ld-compatibility/#id","title":"@id","text":"

    The identifier for a DIDComm message is given by the JSON-LD @id property at the root of a message. JSON-LD requires this value to be an IRI. DIDComm message IDs are relative IRIs, and can be converted to absolute form as described in RFC 0217: Linkable Message Paths. Instances of @id on any node other than a message root have JSON-LD meaning, but no predefined relevance in DIDComm.
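Putting the two properties together: @type is an absolute IRI, while @id is a relative IRI, commonly a UUID. The values below are illustrative:

```python
# Minimal sketch of the two required JSON-LD-compatible fields at a
# message root. The type URI and UUID are made up for this example.
ping = {
    "@type": "https://didcomm.org/trust_ping/1.0/ping",
    "@id": "518be002-de8e-456e-b3d5-8fe472477a86",
}
```

Dispatching code routes on @type; threading and linking code relies on @id.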

    "},{"location":"aip2/0047-json-ld-compatibility/#context","title":"@context","text":"

    This is JSON-LD\u2019s namespacing mechanism. It is active in DIDComm messages, but can be ignored for simple processing, in the same way namespaces in XML are often ignored for simple tasks.

    Every DIDComm message has an associated @context, but we have chosen to follow the procedure described in section 6 of the JSON-LD spec, which focuses on how ordinary JSON can be interpreted as JSON-LD by communicating @context out of band.

    DIDComm messages communicate the context out of band by specifying it in the protocol definition (e.g., RFC) for the associated message type; thus, the value of @type indirectly gives the relevant @context. In advanced use cases, @context may appear in a DIDComm message, supplementing this behavior.

    "},{"location":"aip2/0047-json-ld-compatibility/#ordering","title":"Ordering","text":"

    JSON-LD specifies that the order of items in arrays is NOT significant, and notes (correctly) that this is the opposite of the standard assumption for plain JSON. This makes sense when viewed through the lens of JSON-LD\u2019s role as a transformation of RDF.

    Since we want to violate as few assumptions as possible for a developer with general knowledge of JSON, DIDComm messages reverse this default, making arrays an ordered construct, as if all DIDComm message @contexts contained something like:

    \"each field\": { \"@container\": \"@list\"}\n
    To contravene the default, use a JSON-LD construction like this in @context:

    \"myfield\": { \"@container\": \"@set\"}\n
    "},{"location":"aip2/0047-json-ld-compatibility/#decorators","title":"Decorators","text":"

    Decorators are JSON fragments that can be included in any DIDComm message. They enter the formally defined JSON-LD namespace via a JSON-LD fragment that is automatically imputed to every DIDComm message:

    \"@context\": {\n  \"@vocab\": \"https://github.com/hyperledger/aries-rfcs/\"\n}\n

    All decorators use the reserved prefix char ~ (tilde). For more on decorators, see the Decorator RFC.

    "},{"location":"aip2/0047-json-ld-compatibility/#signing","title":"Signing","text":"

    JSON-LD is associated but not strictly bound to a signing mechanism, LD-Signatures. It\u2019s a good mechanism, but it comes with some baggage: you must canonicalize, which means you must resolve every \u201cterm\u201d (key name) to its fully qualified form by expanding contexts before signing. This raises the bar for JSON-LD sophistication and library dependencies.

    The DIDComm community is not opposed to using LD Signatures for problems that need them, but has decided not to adopt the mechanism across the board. There is another signing mechanism that is far simpler, and adequate for many scenarios. We\u2019ll use whichever scheme is best suited to circumstances.

    "},{"location":"aip2/0047-json-ld-compatibility/#type-coercion","title":"Type Coercion","text":"

    DIDComm messages generally do not need this feature of JSON-LD, because there are well understood conventions around date-time datatypes, and individual RFCs that define each message type can further clarify such subtleties. However, it is available on a message-type-definition basis (not ad hoc).

    "},{"location":"aip2/0047-json-ld-compatibility/#node-references","title":"Node References","text":"

    JSON-LD lets one field reference another. See example 93 (note that the ref could have just been \u201c#me\u201d instead of the fully qualified IRI). We may need this construct at some point in DIDComm, but it is not in active use yet.

    "},{"location":"aip2/0047-json-ld-compatibility/#internationalization-and-localization","title":"Internationalization and Localization","text":"

    JSON-LD describes a mechanism for this. It has approximately the same features as the one described in Aries RFC 0043, with a few exceptions:

    Because of these misalignments, the DIDComm ecosystem plans to use its own solution to this problem.

    "},{"location":"aip2/0047-json-ld-compatibility/#additional-json-ld-constructs","title":"Additional JSON-LD Constructs","text":"

    The following JSON-LD keywords may be useful in DIDComm at some point in the future: @base, @index, @container (cf @list and @set), @nest, @value, @graph, @prefix, @reverse, @version.

    "},{"location":"aip2/0047-json-ld-compatibility/#drawbacks","title":"Drawbacks","text":"

    By attempting compatibility but only lightweight usage of JSON-LD, we are neither all-in on JSON-LD, nor all-out. This could cause confusion. We are making the bet that most developers won't need to know or care about the details; they'll simply learn that @type and @id are special, required fields on messages. Designers of protocols will need to know a bit more.

    "},{"location":"aip2/0047-json-ld-compatibility/#rationale-and-alternatives","title":"Rationale and alternatives","text":""},{"location":"aip2/0047-json-ld-compatibility/#unresolved-questions","title":"Unresolved questions","text":""},{"location":"aip2/0047-json-ld-compatibility/#implementations","title":"Implementations","text":"

    The following lists the implementations (if any) of this RFC. Please do a pull request to add your implementation. If the implementation is open source, include a link to the repo or to the implementation within the repo. Please be consistent in the \"Name\" field so that a mechanical processing of the RFCs can generate a list of all RFCs supported by an Aries implementation.

    Name / Link Implementation Notes"},{"location":"aip2/0048-trust-ping/","title":"Aries RFC 0048: Trust Ping Protocol 1.0","text":""},{"location":"aip2/0048-trust-ping/#summary","title":"Summary","text":"

    Describe a standard way for agents to test connectivity, responsiveness, and security of a pairwise channel.

    "},{"location":"aip2/0048-trust-ping/#motivation","title":"Motivation","text":"

    Agents are distributed. They are not guaranteed to be connected or running all the time. They support a variety of transports, speak a variety of protocols, and run software from many different vendors.

    This can make it very difficult to prove that two agents have a functional pairwise channel. Troubleshooting connectivity, responsiveness, and security is vital.

    "},{"location":"aip2/0048-trust-ping/#tutorial","title":"Tutorial","text":"

    This protocol is analogous to the familiar ping command in networking--but because it operates over agent-to-agent channels, it is transport agnostic and asynchronous, and it can produce insights into privacy and security that a regular ping cannot.

    "},{"location":"aip2/0048-trust-ping/#roles","title":"Roles","text":"

    There are two parties in a trust ping: the sender and the receiver. The sender initiates the trust ping. The receiver responds. If the receiver wants to do a ping of their own, they can, but this is a new interaction in which they become the sender.

    "},{"location":"aip2/0048-trust-ping/#messages","title":"Messages","text":"

    The trust ping interaction begins when sender creates a ping message like this:

    {\n  \"@type\": \"https://didcomm.org/trust_ping/1.0/ping\",\n  \"@id\": \"518be002-de8e-456e-b3d5-8fe472477a86\",\n  \"~timing\": {\n    \"out_time\": \"2018-12-15 04:29:23Z\",\n    \"expires_time\": \"2018-12-15 05:29:23Z\",\n    \"delay_milli\": 0\n  },\n  \"comment\": \"Hi. Are you listening?\",\n  \"response_requested\": true\n}\n

    Only @type and @id are required; ~timing.out_time, ~timing.expires_time, and ~timing.delay_milli are optional message timing decorators, and comment follows the conventions of localizable message fields. If present, it may be used to display a human-friendly description of the ping to a user that gives approval to respond. (Whether an agent responds to a trust ping is a decision for each agent owner to make, per policy and/or interaction with their agent.)

    The response_requested field deserves special mention. The normal expectation of a trust ping is that it elicits a response. However, it may be desirable to do a unilateral trust ping at times--communicate information without any expectation of a reaction. In this case, \"response_requested\": false may be used. This might be useful, for example, to defeat correlation between request and response (to generate noise). Or agents A and B might agree that periodically A will ping B without a response, as a way of evidencing that A is up and functional. If response_requested is false, then the receiver MUST NOT respond.

    When the message arrives at the receiver, assuming that response_requested is not false, the receiver should reply as quickly as possible with a ping_response message that looks like this:

    {\n  \"@type\": \"https://didcomm.org/trust_ping/1.0/ping_response\",\n  \"@id\": \"e002518b-456e-b3d5-de8e-7a86fe472847\",\n  \"~thread\": { \"thid\": \"518be002-de8e-456e-b3d5-8fe472477a86\" },\n  \"~timing\": { \"in_time\": \"2018-12-15 04:29:28Z\", \"out_time\": \"2018-12-15 04:31:00Z\"},\n  \"comment\": \"Hi yourself. I'm here.\"\n}\n

    Here, @type and ~thread are required, and the rest is optional.
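The receiver-side logic described above can be sketched in a few lines. This is an illustrative Python sketch, not code from any Aries framework; the handler name and message-dict shape are assumptions for the example.

```python
import uuid
from datetime import datetime, timezone
from typing import Optional

PING_TYPE = "https://didcomm.org/trust_ping/1.0/ping"
PING_RESPONSE_TYPE = "https://didcomm.org/trust_ping/1.0/ping_response"

def handle_ping(ping: dict) -> Optional[dict]:
    """Build a ping_response for an incoming ping, honoring response_requested."""
    # If the sender explicitly set response_requested to false, we MUST NOT respond.
    if ping.get("response_requested") is False:
        return None
    now = datetime.now(timezone.utc).strftime("%Y-%m-%d %H:%M:%SZ")
    return {
        "@type": PING_RESPONSE_TYPE,
        "@id": str(uuid.uuid4()),
        # ~thread.thid links the response back to the original ping's @id.
        "~thread": {"thid": ping["@id"]},
        "~timing": {"in_time": now, "out_time": now},
        "comment": "Hi yourself. I'm here.",
    }
```

Whether to respond at all (per owner policy) would be checked before calling a handler like this.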

    "},{"location":"aip2/0048-trust-ping/#trust","title":"Trust","text":"

    This is the \"trust ping protocol\", not just the \"ping protocol.\" The \"trust\" in its name comes from several features that the interaction gains by virtue of its use of standard agent-to-agent conventions:

    1. Messages should be associated with a message trust context that allows sender and receiver to evaluate how much trust can be placed in the channel. For example, both sender and receiver can check whether messages are encrypted with suitable algorithms and keys.

    2. Messages may be targeted at any known agent in the other party's sovereign domain, using cross-domain routing conventions, and may be encrypted and packaged to expose exactly and only the information desired, at each hop along the way. This allows two parties to evaluate the completeness of a channel and the alignment of all agents that maintain it.

    3. This interaction may be traced using the general message tracing mechanism.

    "},{"location":"aip2/0048-trust-ping/#implementations","title":"Implementations","text":"

    The following lists the implementations (if any) of this RFC. Please do a pull request to add your implementation. If the implementation is open source, include a link to the repo or to the implementation within the repo. Please be consistent in the \"Name\" field so that a mechanical processing of the RFCs can generate a list of all RFCs supported by an Aries implementation.

    Name / Link Implementation Notes Indy Cloud Agent - Python Reference agent implementation contributed by Sovrin Foundation and Community; MISSING test results Aries Framework - .NET .NET framework for building agents of all types; MISSING test results Streetcred.id Commercial mobile and web app built using Aries Framework - .NET; MISSING test results Aries Cloud Agent - Python Contributed by the government of British Columbia.; MISSING test results Aries Static Agent - Python Useful for cron jobs and other simple, automated use cases.; MISSING test results Aries Protocol Test Suite MISSING test results"},{"location":"aip2/0050-wallets/","title":"Aries RFC 0050: Wallets","text":""},{"location":"aip2/0050-wallets/#summary","title":"Summary","text":"

    Specify the external interfaces of identity wallets in the Indy ecosystem, as well as some background concepts, theory, tradeoffs, and internal implementation guidelines.

    "},{"location":"aip2/0050-wallets/#motivation","title":"Motivation","text":"

    Wallets are a familiar component metaphor that SSI has adopted from the world of cryptocurrencies. The translation isn't perfect, though; crypto wallets have only a subset of the features that an identity wallet needs. This causes problems, as coders may approach wallets in Indy with assumptions that are more narrow than our actual design target.

    Since wallets are a major vector for hacking and cybersecurity issues, casual or fuzzy wallet requirements are a recipe for frustration or disaster. Divergent and substandard implementations could undermine security more broadly. This argues for as much design guidance and implementation help as possible.

    Wallets are also a unit of identity portability--if an identity owner doesn't like how her software is working, she should be able to exercise her self- sovereignty by taking the contents of her wallet to a new service. This implies that wallets need certain types of interoperability in the ecosystem, if they are to avoid vendor lock-in.

    All of these reasons--to clarify design scope, to provide uniform high security, and to guarantee interop--suggest that we need a formal RFC to document wallet architecture.

    "},{"location":"aip2/0050-wallets/#tutorial","title":"Tutorial","text":"

    (For a slide deck that gives a simplified overview of all the content in this RFC, please see http://bit.ly/2JUcIiT. The deck also includes a link to a recorded presentation, if you prefer something verbal and interactive.)

    "},{"location":"aip2/0050-wallets/#what-is-an-identity-wallet","title":"What Is an Identity Wallet?","text":"

    Informally, an identity wallet (preferably not just \"wallet\") is a digital container for data that's needed to control a self-sovereign identity. We borrow this metaphor from physical wallets:

    Notice that we do not carry around in a physical wallet every document, key, card, photo, piece of currency, or credential that we possess. A wallet is a mechanism of convenient control, not an exhaustive repository. A wallet is portable. A wallet is worth safeguarding. Good wallets are organized so we can find things easily. A wallet has a physical location.

    What does this suggest about identity wallets?

    "},{"location":"aip2/0050-wallets/#types-of-sovereign-data","title":"Types of Sovereign Data","text":"

    Before we give a definitive answer to that question, let's take a detour for a moment to consider digital data. Actors in a self-sovereign identity ecosystem may own or control many different types of data:

    ...and much more. Different subsets of data may be worthy of different protection efforts:

    The data can also show huge variety in its size and in its richness:

    Because of the sensitivity difference, the size and richness difference, joint ownership, and different needs for access in different circumstances, we may store digital data in many different locations, with different backup regimes, different levels of security, and different cost profiles.

    "},{"location":"aip2/0050-wallets/#whats-out-of-scope","title":"What's Out of Scope","text":""},{"location":"aip2/0050-wallets/#not-a-vault","title":"Not a Vault","text":"

    This variety suggests that an identity wallet as a loose grab-bag of all our digital \"stuff\" will give us a poor design. We won't be able to make good tradeoffs that satisfy everybody; some will want rigorous, optimized search; others will want to minimize storage footprint; others will be concerned about maximizing security.

    We reserve the term vault to refer to the complex collection of all an identity owner's data:

    Note that a vault can contain an identity wallet. A vault is an important construct, and we may want to formalize its interface. But that is not the subject of this spec.

    "},{"location":"aip2/0050-wallets/#not-a-cryptocurrency-wallet","title":"Not A Cryptocurrency Wallet","text":"

    The cryptocurrency community has popularized the term \"wallet\"--and because identity wallets share with crypto wallets both high-tech crypto and a need to store secrets, it is tempting to equate these two concepts. However, an identity wallet can hold more than just cryptocurrency keys, just as a physical wallet can hold more than paper currency. Also, identity wallets may need to manage hundreds of millions of relationships (in the case of large organizations), whereas most crypto wallets manage a small number of keys:

    "},{"location":"aip2/0050-wallets/#not-a-gui","title":"Not a GUI","text":"

    As used in this spec, an identity wallet is not a visible application, but rather a data store. Although user interfaces (superb ones!) can and should be layered on top of wallets, from Indy's perspective the wallet itself consists of a container and its data; its friendly face is a separate construct. We may casually refer to an application as a \"wallet\", but what we really mean is that the application provides an interface to the underlying wallet.

    This is important because if a user changes which app manages his identity, he should be able to retain the wallet data itself. We are aiming for a better portability story than browsers offer (where if you change browsers, you may be able to export+import your bookmarks, but you have to rebuild all sessions and logins from scratch).

    "},{"location":"aip2/0050-wallets/#personas","title":"Personas","text":"

    Wallets have many stakeholders. However, three categories of wallet users are especially impactful on design decisions, so we define a persona for each.

    "},{"location":"aip2/0050-wallets/#alice-individual-identity-owner","title":"Alice (individual identity owner)","text":"

    Alice owns several devices, and she has an agent in the cloud. She has a thousand relationships--some with institutions, some with other people. She has a couple hundred credentials. She owns three different types of cryptocurrency. She doesn\u2019t issue or revoke credentials--she just uses them. She receives proofs from other entities (people and orgs). Her main tool for exercising a self-sovereign identity is an app on a mobile device.

    "},{"location":"aip2/0050-wallets/#faber-intitutional-identity-owner","title":"Faber (institutional identity owner)","text":"

    Faber College has an on-prem data center as well as many resources and processes in public and private clouds. It has relationships with a million students, alumni, staff, former staff, applicants, business partners, suppliers, and so forth. Faber issues credentials and must manage their revocation. Faber may use crypto tokens to sell and buy credentials and proofs.

    "},{"location":"aip2/0050-wallets/#the-org-book-trust-hub","title":"The Org Book (trust hub)","text":"

    The Org Book holds credentials (business licenses, articles of incorporation, health permits, etc) issued by various government agencies, about millions of other business entities. It needs to index and search credentials quickly. Its data is public. It serves as a reference for many relying parties--thus its trust hub role.

    "},{"location":"aip2/0050-wallets/#use-cases","title":"Use Cases","text":"

    The specific use cases for an identity wallet are too numerous to fully list, but we can summarize them as follows:

    As an identity owner (any of the personas above), I want to manage identity and its relationships in a way that guarantees security and privacy:

    "},{"location":"aip2/0050-wallets/#managing-secrets","title":"Managing Secrets","text":"

    Certain sensitive things require special handling. We would never expect to casually lay an ebola zaire sample on the counter in our bio lab; rather, it must never leave a special controlled isolation chamber.

    Cybersecurity in wallets can be greatly enhanced if we take a similar tack with high-value secrets. We prefer to generate such secrets in their final resting place, possibly using a seed if we need determinism. We only use such secrets in their safe place, instead of passing them out to untrusted parties.

    TPMs, HSMs, and so forth follow these rules. Indy\u2019s current wallet interface does, too. You can\u2019t get private keys out.

    "},{"location":"aip2/0050-wallets/#composition","title":"Composition","text":"

    The foregoing discussions about cybersecurity, the desirability of design guidance and careful implementation, and wallet data that includes but is not limited to secrets motivate the following logical organization of identity wallets in Indy:

    The world outside a wallet interfaces with the wallet through a public interface provided by indy-sdk, and implemented only once. This is the block labeled encryption, query (wallet core) in the diagram. The implementation in this layer guarantees proper encryption and secret-handling. It also provides some query features. Records (items) to be stored in a wallet are referenced by a public handle if they are secrets. This public handle might be a public key in a key pair, for example. Records that are not secrets can be returned directly across the API boundary.

    Underneath, this common wallet code in libindy is supplemented with pluggable storage-- a technology that provides persistence and query features. This pluggable storage could be a file system, an object store, an RDBMS, a NoSQL DB, a Graph DB, a key-value store, or almost anything similar. The pluggable storage is registered with the wallet layer by providing a series of C-callable functions (callbacks). The storage layer doesn't have to worry about encryption at all; by the time data reaches it, it is encrypted robustly, and the layer above the storage takes care of translating queries to and from encrypted form for external consumers of the wallet.
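The division of labor between the wallet core and a storage plugin can be sketched as follows. This is a hypothetical Python analogue of the callback set a plugin registers, not libindy's actual C signatures; all method names here are illustrative.

```python
from typing import Dict, Iterator, Optional, Tuple

class WalletStorage:
    """Illustrative storage-plugin interface (hypothetical names, not libindy's
    real C callbacks). The plugin only ever sees ciphertext: the wallet core
    encrypts before add_record and decrypts after get_record."""

    def __init__(self) -> None:
        # (type, id) -> (value, tags); a file system or RDBMS would work equally well.
        self._records: Dict[Tuple[bytes, bytes], Tuple[bytes, Dict[str, str]]] = {}

    def add_record(self, type_: bytes, id_: bytes, value: bytes,
                   tags: Dict[str, str]) -> None:
        # type_, id_, and value arrive already encrypted by the wallet core.
        self._records[(type_, id_)] = (value, dict(tags))

    def get_record(self, type_: bytes, id_: bytes) -> Optional[bytes]:
        rec = self._records.get((type_, id_))
        return rec[0] if rec else None

    def search_records(self, type_: bytes,
                       tag_filter: Dict[str, str]) -> Iterator[bytes]:
        # Return values whose tags contain every key=value pair in the filter;
        # no decryption happens here -- data is searched as persisted.
        for (t, _), (value, tags) in self._records.items():
            if t == type_ and all(tags.get(k) == v for k, v in tag_filter.items()):
                yield value
```

The key design point the sketch illustrates: because encryption lives above this interface, swapping the dict for SQLite or an object store changes nothing about the wallet's security guarantees.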

    "},{"location":"aip2/0050-wallets/#tags-and-queries","title":"Tags and Queries","text":"

    Searchability in wallets is facilitated with a tagging mechanism. Each item in a wallet can be associated with zero or more tags, where a tag is a key=value pair. Items can be searched based on the tags associated with them, and tag values can be strings or numbers. With a good inventory of tags in a wallet, searching can be robust and efficient--but there is no support for joins, subqueries, and other RDBMS-like constructs, as this would constrain the type of storage plugin that could be written.

    An example of the tags on a wallet item that is a credential might be:

      item-name = \"My Driver's License\"\n  date-issued = \"2018-05-23\"\n  issuer-did = \"ABC\"\n  schema = \"DEF\"\n

    Tag names and tag values are both case-sensitive.

    Because tag values are normally encrypted, most tag values can only be tested using the $eq, $neq or $in operators (see Wallet Query Language, next). However, it is possible to force a tag to be stored in the wallet as plain text by naming it with a special prefix, ~ (tilde). This enables operators like $gt, $lt, and $like. Such tags lose their security guarantees but provide for richer queries; it is up to applications and their users to decide whether the tradeoff is appropriate.

    "},{"location":"aip2/0050-wallets/#wallet-query-language","title":"Wallet Query Language","text":"

    Wallets can be searched and filtered using a simple, JSON-based query language. We call this Wallet Query Language (WQL). WQL is designed to require no fancy parsing by storage plugins, and to be easy enough for developers to learn in just a few minutes. It is inspired by MongoDB's query syntax, and can be mapped to SQL, GraphQL, and other query languages supported by storage backends, with minimal effort.

    Formal definition of WQL language is the following:

    query = {subquery}\nsubquery = {subquery, ..., subquery} // means subquery AND ... AND subquery\nsubquery = $or: [{subquery},..., {subquery}] // means subquery OR ... OR subquery\nsubquery = $not: {subquery} // means NOT (subquery)\nsubquery = \"tagName\": tagValue // means tagName == tagValue\nsubquery = \"tagName\": {$neq: tagValue} // means tagName != tagValue\nsubquery = \"tagName\": {$gt: tagValue} // means tagName > tagValue\nsubquery = \"tagName\": {$gte: tagValue} // means tagName >= tagValue\nsubquery = \"tagName\": {$lt: tagValue} // means tagName < tagValue\nsubquery = \"tagName\": {$lte: tagValue} // means tagName <= tagValue\nsubquery = \"tagName\": {$like: tagValue} // means tagName LIKE tagValue\nsubquery = \"tagName\": {$in: [tagValue, ..., tagValue]} // means tagName IN (tagValue, ..., tagValue)\n
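The grammar above is simple enough to evaluate directly against an item's tags. The following is a minimal sketch of such an evaluator (string comparisons only, `$like` mapped onto shell-style wildcards); it is illustrative, not how any particular storage plugin implements WQL.

```python
import fnmatch

def wql_match(tags: dict, query: dict) -> bool:
    """Evaluate a WQL query (per the grammar above) against one item's tags."""
    for key, cond in query.items():
        if key == "$or":
            if not any(wql_match(tags, sub) for sub in cond):
                return False
        elif key == "$not":
            if wql_match(tags, cond):
                return False
        elif isinstance(cond, dict):
            op, want = next(iter(cond.items()))
            have = tags.get(key)
            if have is None:
                return False
            if op == "$neq":
                ok = have != want
            elif op == "$gt":
                ok = have > want
            elif op == "$gte":
                ok = have >= want
            elif op == "$lt":
                ok = have < want
            elif op == "$lte":
                ok = have <= want
            elif op == "$in":
                ok = have in want
            elif op == "$like":
                # Map SQL-style % wildcards onto fnmatch's * wildcards.
                ok = fnmatch.fnmatch(have, want.replace("%", "*"))
            else:
                ok = False
            if not ok:
                return False
        else:
            # Plain "tagName": tagValue means equality.
            if tags.get(key) != cond:
                return False
    return True
```

Note that a real wallet evaluates most operators over encrypted tag values; only tilde-prefixed (plaintext) tags support range and `$like` operators.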
    "},{"location":"aip2/0050-wallets/#sample-wql-query-1","title":"Sample WQL Query 1","text":"

    Get all credentials where subject like \u2018Acme%\u2019 and issue_date > last week. (Note here that the name of the issue date tag begins with a tilde, telling the wallet to store its value unencrypted, which makes the $gt operator possible.)

    {\n  \"~subject\": {\"$like\": \"Acme%\"},\n  \"~issue_date\": {\"$gt\": \"2018-06-01\"}\n}\n
    "},{"location":"aip2/0050-wallets/#sample-wql-query-2","title":"Sample WQL Query 2","text":"

    Get all credentials about me where schema in (a, b, c) and issuer in (d, e, f).

    {\n  \"schema_id\": {\"$in\": [\"a\", \"b\", \"c\"]},\n  \"issuer_id\": {\"$in\": [\"d\", \"e\", \"f\"]},\n  \"holder_role\": \"self\"\n}\n
    "},{"location":"aip2/0050-wallets/#encryption","title":"Encryption","text":"

    Wallets need very robust encryption. However, they must also be searchable, and the encryption must be equally strong regardless of which storage technology is used. We want to be able to hide data patterns in the encrypted data, such that an attacker cannot see common prefixes on keys, or common fragments of data in encrypted values. And we want to rotate the key that protects a wallet without having to re-encrypt all its content. This suggests that a trivial encryption scheme, where we pick a symmetric key and encrypt everything with it, is not adequate.

    Instead, wallet encryption takes the following approach:

    The 7 \"column\" keys are concatenated and encrypted with a wallet master key, then saved into the metadata of the wallet. This allows the master key to be rotated without re-encrypting all the items in the wallet.

    Today, all encryption is done using ChaCha20-Poly1305, with HMAC-SHA256. This is a solid, secure encryption algorithm, well tested and widely supported. However, we anticipate the desire to use different cipher suites, so in the future we will make the cipher suite pluggable.

    The way the individual fields are encrypted is shown in the following diagram. Here, data is shown as if stored in a relational database with tables. Wallet storage may or may not use tables, but regardless of how the storage distributes and divides the data, the logical relationships and the encryption shown in the diagram apply.

    "},{"location":"aip2/0050-wallets/#pluggable-storage","title":"Pluggable Storage","text":"

    Although Indy infrastructure will provide only one wallet implementation, it will allow different storage backends to be plugged in to cover different use cases. The default storage shipped with libindy will be SQLite-based and well suited for agents running on edge devices. The register_wallet_storage API endpoint will allow Indy developers to register a custom storage implementation as a set of handlers.

    A storage implementation does not need any special security features. It stores data that was already encrypted by libindy (or data that needs no encryption/protection, in the case of unencrypted tag values). It searches data in whatever form it is persisted, without any translation. It returns data as persisted, and lets the common wallet infrastructure in libindy decrypt it before returning it to the user.

    "},{"location":"aip2/0050-wallets/#secure-enclaves","title":"Secure Enclaves","text":"

    Secure Enclaves are purposely designed to manage, generate, and securely store cryptographic material. Enclaves can be either specially designed hardware (e.g. HSM, TPM) or trusted execution environments (TEE) that isolate code and data from operating systems (e.g. Intel SGX, AMD SEV, ARM TrustZone). Enclaves can replace common cryptographic operations that wallets perform (e.g. encryption, signing). Some secrets cannot be stored in the wallet itself, such as the key that encrypts the wallet or keys that must be backed up. Nor can they live directly in an enclave, because keys stored in enclaves cannot be extracted. Enclaves can still protect these secrets via a mechanism called wrapping.

    "},{"location":"aip2/0050-wallets/#enclave-wrapping","title":"Enclave Wrapping","text":"

    Suppose I have a secret, X, that needs maximum protection. However, I can\u2019t store X in my secure enclave because I need to use it for operations that the enclave can\u2019t do for me; I need direct access. So how do I extend enclave protections to encompass my secret?

    I ask the secure enclave to generate a key, Y, that will be used to protect X. Y is called a wrapping key. I give X to the secure enclave and ask that it be encrypted with wrapping key Y. The enclave returns X\u2019 (ciphertext of X, now called a wrapped secret), which I can leave on disk with confidence; it cannot be decrypted to X without involving the secure enclave. Later, when I want to decrypt, I give wrapped secret X\u2019 to the secure enclave and ask it to give me back X by decrypting with wrapping key Y.

    You could ask whether this really increases security. If you can get into the enclave, you can wrap or unwrap at will.

    The answer is that an unwrapped secret is protected by only one thing--whatever ACLs exist on the filesystem or storage where it resides. A wrapped secret is protected by two things--the ACLs and the enclave. OS access may breach either one, but pulling a hard drive out of a device will not breach the enclave.
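The wrap/unwrap flow described above can be simulated in a few lines. This is a sketch with a toy XOR cipher, not a real enclave API; the essential property it models is that the wrapping key Y lives only inside the enclave object and is never returned to the caller.

```python
import hashlib, os

class ToyEnclave:
    """Simulated secure enclave: the wrapping key Y never leaves this object.
    Toy XOR cipher for illustration only -- not real enclave cryptography."""

    def __init__(self) -> None:
        self._wrapping_key = os.urandom(32)  # Y: generated in place, never exported

    def _xor(self, nonce: bytes, data: bytes) -> bytes:
        out, ctr = bytearray(), 0
        while len(out) < len(data):
            out += hashlib.sha256(
                self._wrapping_key + nonce + ctr.to_bytes(8, "big")).digest()
            ctr += 1
        return bytes(b ^ k for b, k in zip(data, out))

    def wrap(self, secret: bytes) -> bytes:
        nonce = os.urandom(16)
        return nonce + self._xor(nonce, secret)  # X': wrapped secret, safe on disk

    def unwrap(self, wrapped: bytes) -> bytes:
        nonce, ct = wrapped[:16], wrapped[16:]
        return self._xor(nonce, ct)              # recover X via the enclave
```

The wrapped secret X' can sit on ordinary storage; decrypting it requires asking this object (the "enclave") to unwrap, which models the second layer of protection the text describes.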

    "},{"location":"aip2/0050-wallets/#paper-wallets","title":"Paper Wallets","text":"

    It is possible to persist wallet data to physical paper (or, for that matter, to etched metal or other physical media) instead of a digital container. Such data has attractive storage properties (e.g., may survive natural disasters, power outages, and other challenges that would destroy digital data). Of course, by leaving the digital realm, the data loses its accessibility over standard APIs.

    We anticipate that paper wallets will play a role in backup and recovery, and possibly in enabling SSI usage by populations that lack easy access to smartphones or the internet. Our wallet design should be friendly to such usage, but physical persistence of data is beyond the scope of Indy's plugin storage model and thus not explored further in this RFC.

    "},{"location":"aip2/0050-wallets/#backup-and-recovery","title":"Backup and Recovery","text":"

    Wallets need a backup and recovery feature, and also a way to export data and import it. Indy's wallet API includes an export function and an import function that may be helpful in such use cases. Today, the export is unfiltered--all data is exported. The import is also all-or-nothing and must be to an empty wallet; it is not possible to import selectively or to update existing records during import.

    A future version of import and export may add filtering, overwrite, and progress callbacks. It may also allow supporting or auxiliary data (other than what the wallet directly persists) to be associated with the export/import payload.

    For technical details on how export and import work, please see the internal design docs.

    "},{"location":"aip2/0050-wallets/#reference","title":"Reference","text":""},{"location":"aip2/0050-wallets/#rationale-and-alternatives","title":"Rationale and alternatives","text":"

    We could implement wallets purely as built already in the cryptocurrency world. This would give us great security (except for crypto wallets that are cloud based), and perhaps moderately good usability.

    However, it would also mean we could not store credentials in wallets. Indy would then need an alternate mechanism to scan some sort of container when trying to satisfy a proof request. And it would mean that a person's identity would not be portable via a single container; rather, if you wanted to take your identity to a new place, you'd have to copy all crypto keys in your crypto wallet, plus copy all your credentials using some other mechanism. It would also fragment the places where you could maintain an audit trail of your SSI activities.

    "},{"location":"aip2/0050-wallets/#prior-art","title":"Prior art","text":"

    See comment about crypto wallets, above.

    "},{"location":"aip2/0050-wallets/#unresolved-questions","title":"Unresolved questions","text":""},{"location":"aip2/0050-wallets/#implementations","title":"Implementations","text":"

    The following lists the implementations (if any) of this RFC. Please do a pull request to add your implementation. If the implementation is open source, include a link to the repo or to the implementation within the repo. Please be consistent in the \"Name\" field so that a mechanical processing of the RFCs can generate a list of all RFCs supported by an Aries implementation.

    Name / Link Implementation Notes Indy SDK Most agents that implement wallets get their wallet support from Indy SDK. These are not listed separately."},{"location":"aip2/0092-transport-return-route/","title":"Aries RFC 0092: Transports Return Route","text":""},{"location":"aip2/0092-transport-return-route/#summary","title":"Summary","text":"

    Agents can indicate that an inbound message transmission may also be used as a return route for messages. This allows for transports of increased efficiency as well as agents without an inbound route.

    "},{"location":"aip2/0092-transport-return-route/#motivation","title":"Motivation","text":"

    Inbound HTTP and Websockets are used only for receiving messages by default. Return messages are sent using their own outbound connections. Including a decorator allows the receiving agent to know that using the inbound connection as a return route is acceptable. This allows two way communication with agents that may not have an inbound route available. Agents without an inbound route include mobile agents, and agents that use a client (and not a server) for communication.

    This decorator is intended to facilitate message communication between a client based agent (an agent that can only operate as a client, not a server) and the server based agents they communicate directly with. Use on messages that will be forwarded is not allowed.

    "},{"location":"aip2/0092-transport-return-route/#tutorial","title":"Tutorial","text":"

    When you send a message through a connection, you can use the ~transport decorator on the message and specify return_route. The value of return_route is discussed in the Reference section of this document.

    {\n    \"~transport\": {\n        \"return_route\": \"all\"\n    }\n}\n
    "},{"location":"aip2/0092-transport-return-route/#reference","title":"Reference","text":"

    The ~transport decorator should be processed after unpacking and prior to routing the message to a message handler.

    For HTTP transports, the presence of this message decorator indicates that the receiving agent MAY hold onto the connection and use it to return messages as designated. HTTP transports will only be able to receive at most one message at a time. Websocket transports are capable of receiving multiple messages.

    Compliance with this indicator is optional for agents generally, but required for agents wishing to connect with client based agents.
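    The processing order above (unpack first, honor the ~transport decorator, then route) can be sketched as follows. This is a minimal illustration, not a normative implementation; the `session` dict is a hypothetical stand-in for the transport-level connection state.

```python
# Sketch of inbound processing: after unpacking and before routing to a
# message handler, inspect ~transport and decide whether to hold the
# inbound connection open as a return route. `session` is hypothetical.
def process_transport_decorator(message: dict, session: dict) -> dict:
    decorator = message.get("~transport", {})
    return_route = decorator.get("return_route", "none")
    # "all": the receiving agent MAY hold the connection and use it to
    # return messages; otherwise fall back to ordinary outbound delivery.
    session["use_return_route"] = return_route == "all"
    return session
```

An agent that does not comply would simply ignore the decorator and always send return messages over its own outbound connections.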

    "},{"location":"aip2/0092-transport-return-route/#drawbacks","title":"Drawbacks","text":""},{"location":"aip2/0092-transport-return-route/#rationale-and-alternatives","title":"Rationale and alternatives","text":""},{"location":"aip2/0092-transport-return-route/#prior-art","title":"Prior art","text":"

    The Decorators RFC describes the scope of decorators. Transport isn't one of the scopes listed.

    "},{"location":"aip2/0092-transport-return-route/#implementations","title":"Implementations","text":"

    The following lists the implementations (if any) of this RFC. Please do a pull request to add your implementation. If the implementation is open source, include a link to the repo or to the implementation within the repo. Please be consistent in the \"Name\" field so that a mechanical processing of the RFCs can generate a list of all RFCs supported by an Aries implementation.

    Name / Link Implementation Notes Aries Cloud Agent - Python Contributed by the government of British Columbia. Aries Protocol Test Suite Used in Tests"},{"location":"aip2/0094-cross-domain-messaging/","title":"Aries RFC 0094: Cross-Domain Messaging","text":""},{"location":"aip2/0094-cross-domain-messaging/#summary","title":"Summary","text":"

    There are two layers of messages that combine to enable interoperable self-sovereign identity DIDcomm (formerly called Agent-to-Agent) communication. At the highest level are Agent Messages - messages sent between Identities to accomplish some shared goal. For example, establishing a connection between identities, issuing a Verifiable Credential from an Issuer to a Holder or even the simple delivery of a text Instant Message from one person to another. Agent Messages are delivered via the second, lower layer of messaging - encryption envelopes. An encryption envelope is a wrapper (envelope) around an Agent Message to enable the secure delivery of a message from one Agent directly to another Agent. An Agent Message going from its Sender to its Receiver may be passed through a number of Agents, and an encryption envelope is used for each hop of the journey.

    This RFC addresses Cross Domain messaging to enable interoperability. This is one of a series of related RFCs that address interoperability, including DIDDoc Conventions, Agent Messages and Encryption Envelope. Those RFCs should be considered together in understanding DIDcomm messaging.

    In order to send a message from one Identity to another, the sending Identity must know something about the Receiver's domain - the Receiver's configuration of Agents. This RFC outlines how a domain MUST present itself to enable the Sender to know enough to be able to send a message to an Agent in the domain. In support of that, a DIDcomm protocol (currently consisting of just one Message Type) is introduced to route messages through a network of Agents in both the Sender and Receiver's domain. This RFC provides the specification of the \"Forward\" Agent Message Type - an envelope that indicates the destination of a message without revealing anything about the message.

    The goal of this RFC is to define the rules that domains MUST follow to enable the delivery of Agent messages from a Sending Agent to a Receiver Agent in a secure and privacy-preserving manner.

    "},{"location":"aip2/0094-cross-domain-messaging/#motivation","title":"Motivation","text":"

    The purpose of this RFC and its related RFCs is to define a layered messaging protocol such that we can ignore the delivery of messages as we discuss the much richer Agent Messaging types and interactions. That is, we can assume that there is no need to include in an Agent message anything about how to route the message to the Receiver - it just magically happens. Alice (via her App Agent) sends a message to Bob, and (because of implementations based on this series of RFCs) we can ignore how the actual message got to Bob's App Agent.

    Put another way - these RFCs are about envelopes. They define a way to put a message - any message - into an envelope, put it into an outbound mailbox and have it magically appear in the Receiver's inbound mailbox in a secure and privacy-preserving manner. Once we have that, we can focus on letters and not how letters are sent.

    Most importantly for Agent to Agent interoperability, this RFC clearly defines the assumptions necessary to deliver a message from one domain to another - e.g. what exactly does Alice have to know about Bob's domain to send Bob a message?

    "},{"location":"aip2/0094-cross-domain-messaging/#tutorial","title":"Tutorial","text":""},{"location":"aip2/0094-cross-domain-messaging/#core-messaging-goals","title":"Core Messaging Goals","text":"

    These are vital design goals for this RFC:

    1. Sender Encapsulation: We SHOULD minimize what the Receiver has to know about the domain (routing tree or agent infrastructure) of the Sender in order for them to communicate.
    2. Receiver Encapsulation: We SHOULD minimize what the Sender has to know about the domain (routing tree or agent infrastructure) of the Receiver in order for them to communicate.
    3. Independent Keys: Private signing keys SHOULD NOT be shared between agents; each agent SHOULD be separately identifiable for accounting and authorization/revocation purposes.
    4. Need To Know Information Sharing: Information made available to intermediary agents between the Sender and Receiver SHOULD be minimized to what is needed to perform the agent's role in the process.
    "},{"location":"aip2/0094-cross-domain-messaging/#assumptions","title":"Assumptions","text":"

    The following are assumptions upon which this RFC is predicated.

    "},{"location":"aip2/0094-cross-domain-messaging/#terminology","title":"Terminology","text":"

    The following terms are used in this RFC with the following meanings:

    "},{"location":"aip2/0094-cross-domain-messaging/#diddoc","title":"DIDDoc","text":"

    The term \"DIDDoc\" is used in this RFC as it is defined in the DID Specification:

    A DID can be resolved to get its corresponding DIDDoc by any Agent that needs access to the DIDDoc. This is true whether talking about a DID on a Public Ledger, or a pairwise DID (using the did:peer method) persisted only to the parties of the relationship. In the case of pairwise DIDs, it's the (implementation specific) domain's responsibility to ensure such resolution is available to all Agents requiring it within the domain.

    "},{"location":"aip2/0094-cross-domain-messaging/#messages-are-private","title":"Messages are Private","text":"

    Agent Messages sent from a Sender to a Receiver SHOULD be private. That is, the Sender SHOULD encrypt the message with a public key for the Receiver. Any agent in between the Sender and Receiver will know only to whom the message is intended (by DID and possibly keyname within the DID), not anything about the message.

    "},{"location":"aip2/0094-cross-domain-messaging/#the-sender-knows-the-receiver","title":"The Sender Knows The Receiver","text":"

    This RFC assumes that the Sender knows the Receiver's DID and, within the DIDDoc for that DID, the keyname to use for the Receiver's Agent. How the Sender knows the DID and keyname to send the message is not defined within this RFC - that is a higher level concern.

    The Receiver's DID MAY be a public or pairwise DID, and MAY be on a Public Ledger or only shared between the parties of the relationship.

    "},{"location":"aip2/0094-cross-domain-messaging/#example-domain-and-diddoc","title":"Example: Domain and DIDDoc","text":"

    The following is an example of an arbitrary pair of domains that will be helpful in defining the requirements in this RFC.

    In the diagram above:

    "},{"location":"aip2/0094-cross-domain-messaging/#bobs-did-for-his-relationship-with-alice","title":"Bob's DID for his Relationship with Alice","text":"

    Bob\u2019s domain has 3 devices he uses for processing messages - two phones (4 and 5) and a cloud-based agent (6). However, in Bob's relationship with Alice, he ONLY uses one phone (4) and the cloud-based agent (6). Thus the key for device 5 is left out of the DIDDoc (see below).

    Note that the keyname for the Routing Agent (3) is called \"routing\". This is an example of the kind of convention needed to allow the Sender's agents to know the keys for Agents with a designated role in the receiving domain - as defined in the DIDDoc Conventions RFC.

    {\n  \"@context\": \"https://w3id.org/did/v1\",\n  \"id\": \"did:sov:1234abcd\",\n  \"publicKey\": [\n    {\"id\": \"routing\", \"type\": \"RsaVerificationKey2018\",  \"owner\": \"did:sov:1234abcd\",\"publicKeyPem\": \"-----BEGIN PUBLIC X\u2026\"},\n    {\"id\": \"4\", \"type\": \"RsaVerificationKey2018\",  \"owner\": \"did:sov:1234abcd\",\"publicKeyPem\": \"-----BEGIN PUBLIC 9\u2026\"},\n    {\"id\": \"6\", \"type\": \"RsaVerificationKey2018\",  \"owner\": \"did:sov:1234abcd\",\"publicKeyPem\": \"-----BEGIN PUBLIC A\u2026\"}\n  ],\n  \"authentication\": [\n    {\"type\": \"RsaSignatureAuthentication2018\", \"publicKey\": \"did:sov:1234abcd#4\"}\n  ],\n  \"service\": [\n    {\n      \"id\": \"did:example:123456789abcdefghi;did-communication\",\n      \"type\": \"did-communication\",\n      \"priority\" : 0,\n      \"recipientKeys\" : [ \"did:example:1234abcd#4\" ],\n      \"routingKeys\" : [ \"did:example:1234abcd#3\" ],\n      \"serviceEndpoint\" : \"did:example:xd45fr567794lrzti67;did-communication\"\n    }\n  ]\n}\n

    For the purposes of this discussion we are defining the message flow to be:

    1 \u2192 2 \u2192 8 \u2192 9 \u2192 3 \u2192 4

    However, that flow is arbitrary and only one hop is actually required:

    "},{"location":"aip2/0094-cross-domain-messaging/#encryption-envelopes","title":"Encryption Envelopes","text":"

    An encryption envelope is used to transport any Agent Message from one Agent directly to another. In our example message flow above, there are five encryption envelopes sent, one for each hop in the flow. The separate Encryption Envelope RFC covers those details.

    "},{"location":"aip2/0094-cross-domain-messaging/#agent-message-format","title":"Agent Message Format","text":"

    An Agent Message defines the format of messages processed by Agents. Details about the general form of Agent Messages can be found in the Agent Messages RFC.

    This RFC specifies (below) the \"Forward\" message type, a part of the \"Routing\" family of Agent Messages.

    "},{"location":"aip2/0094-cross-domain-messaging/#did-diddoc-and-routing","title":"DID, DIDDoc and Routing","text":"

    A DID owned by the Receiver is resolvable by the Sender as a DIDDoc using either a Public Ledger or using pairwise DIDs based on the did:peer method. The related DIDcomm DIDDoc Conventions RFC defines the required contents of a DIDDoc created by the receiving entity. Notably, the DIDDoc given to the Sender by the Receiver specifies the required routing of the message through an optional set of mediators.

    "},{"location":"aip2/0094-cross-domain-messaging/#cross-domain-interoperability","title":"Cross Domain Interoperability","text":"

    A key goal for interoperability is that we want other domains to know just enough about the configuration of a domain to which they are delivering a message, but no more. The following walks through those minimum requirements.

    "},{"location":"aip2/0094-cross-domain-messaging/#required-the-did-and-diddoc","title":"Required: The DID and DIDDoc","text":"

    As noted above, the Sender of an Agent to Agent Message has the DID of the Receiver, and knows the key(s) from the DIDDoc to use for the Receiver's Agent(s).

    Example: Alice wants to send a message from her phone (1) to Bob's phone (4). She has Bob's B:did@A:B, the DID/DIDDoc Bob created and gave to Alice to use for their relationship. Alice created A:did@A:B and gave that to Bob, but we don't need to use that in this example. The content of the DIDDoc for B:did@A:B is presented above.

    "},{"location":"aip2/0094-cross-domain-messaging/#required-end-to-end-encryption-of-the-agent-message","title":"Required: End-to-End encryption of the Agent Message","text":"

    The Agent Message from the Sender SHOULD be hidden from all Agents other than the Receiver. Thus, it SHOULD be encrypted with the public key of the Receiver. Based on our assumptions, the Sender can get the public key of the Receiver agent because they know the DID#keyname string, can resolve the DID to the DIDDoc and find the public key associated with DID#keyname in the DIDDoc. In our example above, that is the key associated with \"did:sov:1234abcd#4\".

    Most Sender-to-Receiver messages will be sent between parties that have shared pairwise DIDs (using the did:peer method). When that is true, the Sender will (usually) AuthCrypt the message. If that is not the case, or for some other reason the Sender does not want to AuthCrypt the message, AnonCrypt will be used. In either case, the Indy-SDK pack() function handles the encryption.

    If there are mediators specified in the DID service endpoint for the Receiver agent, the Sender must wrap the message for the Receiver in a 'Forward' message for each mediator. It is assumed that the Receiver can determine the from did based on the to DID (or the sender's verkey) using their pairwise relationship.

    {\n  \"@type\" : \"https://didcomm.org/routing/1.0/forward\",\n  \"@id\": \"54ad1a63-29bd-4a59-abed-1c5b1026e6fd\",\n  \"to\"   : \"did:sov:1234abcd#4\",\n  \"msg\"  : { json object from <pack(AgentMessage,valueOf(did:sov:1234abcd#4), privKey(A.did@A:B#1))> }\n}\n

    Notes

    The bullet above about the unpack() function returning the signer's public key deserves some additional attention. The Receiver of the message knows from the \"to\" field the DID to which the message was sent. From that, the Receiver is expected to be able to determine the DID of the Sender, and from that, access the Sender's DIDDoc. However, knowing the DIDDoc is not enough to know from whom the message was sent - which key was used to send the message, and hence, which Agent controls the Sending private key. This information MUST be made known to the Receiver (from unpack()) when AuthCrypt is used so that the Receiver knows which key was used to send the message and can, for example, use that key in responding to the arriving Message.

    The Sender can now send the Forward Agent Message on its way via the first of the encryption envelopes. In our example, the Sender sends the Agent Message to 2 (in the Sender's domain), who in turn sends it to 8. That, of course, is arbitrary - the Sender's Domain could have any configuration of Agents for outbound messages. The Agent Message above is passed unchanged, with each Agent able to see the @type, to and msg fields as described above. This continues until the outer forward message gets to the Receiver's first mediator or the Receiver's agent (if there are no mediators). Each agent decrypts the received encryption envelope and either forwards it (if a mediator) or processes it (if the Receiver Agent). Per the Encryption Envelope RFC, between Agents the Agent Message is pack()'d and unpack()'d as appropriate or required.

    The diagram below shows an example use of the forward messages to encrypt the message all the way to the Receiver with two mediators in between - a shared domain endpoint (aka https://agents-r-us.com) and a routing agent owned by the receiving entity.

    "},{"location":"aip2/0094-cross-domain-messaging/#required-cross-domain-encryption","title":"Required: Cross Domain Encryption","text":"

    While within a domain the Agents MAY choose to use encryption or not when sending messages from Agent to Agent, encryption MUST be used when sending a message into the Receiver's domain. The endpoint agent unpack()'s the encryption envelope and processes the message - usually a forward. Note that within a domain, the agents may use arbitrary relays for messages, unknown to the sender. How the agents within the domain know where to send the message is implementation specific - likely some sort of dynamic DID-to-Agent routing table. If the path to the receiving agent includes mediators, the message must go through those mediators in order (for example, through 3 in our example) as the message being forwarded has been encrypted for the mediators.

    "},{"location":"aip2/0094-cross-domain-messaging/#required-mediators-process-forward-messages","title":"Required: Mediators Process Forward Messages","text":"

    When a mediator (eventually) receives the message, it determines it is the target of the (current) outer forward Agent Message and so decrypts the message's msg value to reveal the inner \"Forward\" message. Mediators use their (implementation specific) knowledge to map from the to field to deliver the message to the physical endpoint of the next agent to process the message on its way to the Receiver.
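    A minimal sketch of that mediator-side lookup, assuming the implementation-specific knowledge is a simple in-memory routing table (a plain dict here, mapping the to value to a next-hop endpoint):

```python
# Sketch of mediator forward handling: after decrypting the outer envelope
# to reveal the forward message, map its `to` field to the next agent's
# physical endpoint and pass the opaque `msg` payload along unchanged.
def handle_forward(forward: dict, routing_table: dict):
    next_hop = routing_table.get(forward["to"])
    if next_hop is None:
        raise KeyError(f"no route for {forward['to']}")
    return next_hop, forward["msg"]
```

The mediator never needs to decrypt the msg payload itself; it only reads the to field.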

    "},{"location":"aip2/0094-cross-domain-messaging/#required-the-receiver-app-agent-decryptsprocesses-the-agent-message","title":"Required: The Receiver App Agent Decrypts/Processes the Agent Message","text":"

    When the Receiver Agent receives the message, it determines it is the target of the forward message, decrypts the payload and processes the message.

    "},{"location":"aip2/0094-cross-domain-messaging/#exposed-data","title":"Exposed Data","text":"

    The following summarizes the information needed by the Sender's agents:

    The DIDDoc will have a public key entry for each additional Agent message Receiver and each mediator.

    In many cases, the entry for the endpoint agent should be a public DID, as it will likely be operated by an agency (for example, https://agents-r-us.com) rather than by the Receiver entity (for example, a person). By making that a public DID in that case, the agency can rotate its public key(s) for receiving messages in a single operation, rather than having to notify each identity owner and in turn having them update the public key in every pairwise DID that uses that endpoint.

    "},{"location":"aip2/0094-cross-domain-messaging/#data-not-exposed","title":"Data Not Exposed","text":"

    Given the sequence specified above, the following data is NOT exposed to the Sender's agents:

    "},{"location":"aip2/0094-cross-domain-messaging/#message-types","title":"Message Types","text":"

    The following Message Types are defined in this RFC.

    "},{"location":"aip2/0094-cross-domain-messaging/#corerouting10forward","title":"Core:Routing:1.0:Forward","text":"

    The core message type \"forward\", version 1.0 of the \"routing\" family is defined in this RFC. An example of the message is the following:

    {\n  \"@type\" : \"https://didcomm.org/routing/1.0/forward\",\n  \"@id\": \"54ad1a63-29bd-4a59-abed-1c5b1026e6fd\",\n  \"to\"   : \"did:sov:1234abcd#4\",\n  \"msg\"  : { json object from <pack(AgentMessage,valueOf(did:sov:1234abcd#4), privKey(A.did@A:B#1))> }\n}\n

    The to field is required and takes one of two forms:

    The first form is used when sending forward messages across one or more agents that do not need to know the details of a domain. The Receiver of the message is the designated Routing Agent in the Receiver Domain, as it controls the key used to decrypt messages sent to the domain, but not to a specific Agent.

    The second form is used when the precise key (and hence, the Agent controlling that key) is used to encrypt the Agent Message placed in the msg field.

    The msg field contains the output of the Indy-SDK pack() function, which encrypts the Agent Message to be forwarded. The Sender calls pack() with the suitable arguments to AnonCrypt or AuthCrypt the message. The pack() and unpack() functions are described in more detail in the Encryption Envelope RFC.

    "},{"location":"aip2/0094-cross-domain-messaging/#reference","title":"Reference","text":"

    See the other RFCs referenced in this document:

    "},{"location":"aip2/0094-cross-domain-messaging/#drawbacks","title":"Drawbacks","text":"

    N/A

    "},{"location":"aip2/0094-cross-domain-messaging/#rationale-and-alternatives","title":"Rationale and alternatives","text":"

    A number of discussions were held about this RFC. In those discussions, the rationale for the RFC evolved into the text, and the alternatives were eliminated. See prior versions of the superseded HIPE (in status section, above) for details.

    A suggestion was made that the following optional parameters could be defined in the \"routing/1.0/forward\" message type:

    The optional parameters have been left off for now, but could be added in this RFC or to a later version of the message type.

    "},{"location":"aip2/0094-cross-domain-messaging/#prior-art","title":"Prior art","text":"

    N/A

    "},{"location":"aip2/0094-cross-domain-messaging/#unresolved-questions","title":"Unresolved questions","text":"

    N/A

    "},{"location":"aip2/0094-cross-domain-messaging/#implementations","title":"Implementations","text":"

    The following lists the implementations (if any) of this RFC. Please do a pull request to add your implementation. If the implementation is open source, include a link to the repo or to the implementation within the repo. Please be consistent in the \"Name\" field so that a mechanical processing of the RFCs can generate a list of all RFCs supported by an Aries implementation.

    Name / Link Implementation Notes"},{"location":"aip2/0095-basic-message/","title":"Aries RFC 0095: Basic Message Protocol 1.0","text":""},{"location":"aip2/0095-basic-message/#summary","title":"Summary","text":"

    The BasicMessage protocol describes a stateless, easy-to-support user message protocol. It has a single message type used to communicate.

    "},{"location":"aip2/0095-basic-message/#motivation","title":"Motivation","text":"

    It is a useful feature to be able to communicate human-written messages. BasicMessage is the most basic form of this written message communication, explicitly excluding advanced features to make implementation easier.

    "},{"location":"aip2/0095-basic-message/#tutorial","title":"Tutorial","text":""},{"location":"aip2/0095-basic-message/#roles","title":"Roles","text":"

    There are two roles in this protocol: sender and receiver. It is anticipated that both roles are supported by agents that provide an interface for humans, but it is possible for an agent to act only as a sender (never processing received messages) or only as a receiver (never sending messages).

    "},{"location":"aip2/0095-basic-message/#states","title":"States","text":"

    There are no real states in this protocol, as sending a message leaves both parties in the same state they were in before.

    "},{"location":"aip2/0095-basic-message/#out-of-scope","title":"Out of Scope","text":"

    There are many useful features of user messaging systems that we will not be adding to this protocol. We anticipate the development of more advanced and full-featured message protocols to fill these needs. Features that are considered out of scope for this protocol include:

    "},{"location":"aip2/0095-basic-message/#reference","title":"Reference","text":"

    Protocol: https://didcomm.org/basicmessage/1.0/

    message

    Example:

    {\n    \"@id\": \"123456780\",\n    \"@type\": \"https://didcomm.org/basicmessage/1.0/message\",\n    \"~l10n\": { \"locale\": \"en\" },\n    \"sent_time\": \"2019-01-15 18:42:01Z\",\n    \"content\": \"Your hovercraft is full of eels.\"\n}\n
    "},{"location":"aip2/0095-basic-message/#drawbacks","title":"Drawbacks","text":""},{"location":"aip2/0095-basic-message/#rationale-and-alternatives","title":"Rationale and alternatives","text":""},{"location":"aip2/0095-basic-message/#prior-art","title":"Prior art","text":"

    BasicMessage has parallels to SMS, which led to the later creation of MMS and even the still-under-development RCS.

    "},{"location":"aip2/0095-basic-message/#unresolved-questions","title":"Unresolved questions","text":""},{"location":"aip2/0095-basic-message/#implementations","title":"Implementations","text":"

    The following lists the implementations (if any) of this RFC. Please do a pull request to add your implementation. If the implementation is open source, include a link to the repo or to the implementation within the repo. Please be consistent in the \"Name\" field so that a mechanical processing of the RFCs can generate a list of all RFCs supported by an Aries implementation.

    Name / Link Implementation Notes Indy Cloud Agent - Python Reference agent implementation contributed by Sovrin Foundation and Community; MISSING test results Aries Framework - .NET .NET framework for building agents of all types; MISSING test results Streetcred.id Commercial mobile and web app built using Aries Framework - .NET; MISSING test results Aries Cloud Agent - Python Contributed by the government of British Columbia.; MISSING test results Aries Static Agent - Python Useful for cron jobs and other simple, automated use cases.; MISSING test results Aries Protocol Test Suite ; MISSING test results"},{"location":"aip2/0183-revocation-notification/","title":"Aries RFC 0183: Revocation Notification 1.0","text":""},{"location":"aip2/0183-revocation-notification/#summary","title":"Summary","text":"

    This RFC defines the message format which an issuer uses to notify a holder that a previously issued credential has been revoked.

    "},{"location":"aip2/0183-revocation-notification/#motivation","title":"Motivation","text":"

    We need a standard protocol for an issuer to notify a holder that a previously issued credential has been revoked.

    For example, suppose a passport agency revokes Alice's passport. The passport agency (an issuer) may want to notify Alice (a holder) that her passport has been revoked so that she knows that she will be unable to use her passport to travel.

    "},{"location":"aip2/0183-revocation-notification/#tutorial","title":"Tutorial","text":"

    The Revocation Notification protocol is a very simple protocol consisting of a single message:

    This simple protocol allows an issuer to choose to notify a holder that a previously issued credential has been revoked.

    It is the issuer's prerogative whether or not to notify the holder that a credential has been revoked. It is not a security risk if the issuer does not notify the holder that the credential has been revoked, nor if the message is lost. The holder will still be unable to use a revoked credential without this notification.

    "},{"location":"aip2/0183-revocation-notification/#roles","title":"Roles","text":"

    There are two parties involved in a Revocation Notification: issuer and holder. The issuer sends the revoke message to the holder.

    "},{"location":"aip2/0183-revocation-notification/#messages","title":"Messages","text":"

    The revoke message sent by the issuer to the holder is as follows:

    {\n  \"@type\": \"https://didcomm.org/revocation_notification/1.0/revoke\",\n  \"@id\": \"<uuid-revocation-notification>\",\n  \"~please_ack\": [\"RECEIPT\",\"OUTCOME\"],\n  \"thread_id\": \"<thread_id>\",\n  \"comment\": \"Some comment\"\n}\n

    Description of fields:

    "},{"location":"aip2/0183-revocation-notification/#reference","title":"Reference","text":""},{"location":"aip2/0183-revocation-notification/#drawbacks","title":"Drawbacks","text":"

    If we later added support for more general event subscription and notification message flows, this would be redundant.

    "},{"location":"aip2/0183-revocation-notification/#rationale-and-alternatives","title":"Rationale and alternatives","text":""},{"location":"aip2/0183-revocation-notification/#prior-art","title":"Prior art","text":""},{"location":"aip2/0183-revocation-notification/#unresolved-questions","title":"Unresolved questions","text":""},{"location":"aip2/0183-revocation-notification/#implementations","title":"Implementations","text":"

    The following lists the implementations (if any) of this RFC. Please do a pull request to add your implementation. If the implementation is open source, include a link to the repo or to the implementation within the repo. Please be consistent in the \"Name\" field so that a mechanical processing of the RFCs can generate a list of all RFCs supported by an Aries implementation.

    Name / Link Implementation Notes"},{"location":"aip2/0211-route-coordination/","title":"0211: Mediator Coordination Protocol","text":""},{"location":"aip2/0211-route-coordination/#summary","title":"Summary","text":"

    A protocol to coordinate mediation configuration between a mediating agent and the recipient.

    "},{"location":"aip2/0211-route-coordination/#application-scope","title":"Application Scope","text":"

    This protocol is needed when using an edge agent and a mediator agent from different vendors. Edge agents and mediator agents from the same vendor may use whatever protocol they wish without sacrificing interoperability.

    "},{"location":"aip2/0211-route-coordination/#motivation","title":"Motivation","text":"

    Use of the forward message in the Routing Protocol requires an exchange of information. The Recipient must know which endpoint and routing key(s) to share, and the Mediator needs to know which keys should be routed via this relationship.

    "},{"location":"aip2/0211-route-coordination/#protocol","title":"Protocol","text":"

    Name: coordinate-mediation

    Version: 1.0

    Base URI: https://didcomm.org/coordinate-mediation/1.0/

    "},{"location":"aip2/0211-route-coordination/#roles","title":"Roles","text":"

    mediator - The agent that will be receiving forward messages on behalf of the recipient.

    recipient - The agent for whom the forward message payload is intended.

    "},{"location":"aip2/0211-route-coordination/#flow","title":"Flow","text":"

    A recipient may discover an agent capable of routing using the Feature Discovery Protocol. If the protocol is supported with the mediator role, a recipient may send a mediate-request to initiate a routing relationship.

    First, the recipient sends a mediate-request message to the mediator. If the mediator is willing to route messages, it will respond with a mediate-grant message. The recipient will share the routing information in the grant message with other contacts.

    When a new key is used by the recipient, it must be registered with the mediator to enable route identification. This is done with a keylist-update message.

    The keylist-update and keylist-query methods are used over time to identify and remove keys that are no longer in use by the recipient.
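    The recipient side of this flow can be sketched as simple message constructors; the base URI is as defined in the Protocol section, and how the messages are actually sent is out of scope for this sketch.

```python
# Sketch: recipient-side constructors for coordinate-mediation messages.
# The base URI follows the protocol definition above.
BASE = "https://didcomm.org/coordinate-mediation/1.0"

def mediate_request(msg_id: str) -> dict:
    return {"@id": msg_id, "@type": f"{BASE}/mediate-request"}

def keylist_update(msg_id: str, key: str, action: str = "add") -> dict:
    return {
        "@id": msg_id,
        "@type": f"{BASE}/keylist-update",
        "updates": [{"recipient_key": key, "action": action}],
    }
```

The recipient would send mediate_request first, then, once a mediate-grant arrives, register each new key it uses with keylist_update.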

    "},{"location":"aip2/0211-route-coordination/#reference","title":"Reference","text":"

    Note on terms: Early versions of this protocol included the concept of terms for mediation. This concept has been removed from this version due to a need for further discussion on representing terms in DIDComm in general and lack of use of these terms in current implementations.

    "},{"location":"aip2/0211-route-coordination/#mediation-request","title":"Mediation Request","text":"

    This message serves as a request from the recipient to the mediator, asking for the permission (and routing information) to publish the endpoint as a mediator.

    {\n    \"@id\": \"123456781\",\n    \"@type\": \"<baseuri>/mediate-request\",\n}\n
    "},{"location":"aip2/0211-route-coordination/#mediation-deny","title":"Mediation Deny","text":"

    This message serves as notification of the mediator denying the recipient's request for mediation.

    {\n    \"@id\": \"123456781\",\n    \"@type\": \"<baseuri>/mediate-deny\"\n}\n
    "},{"location":"aip2/0211-route-coordination/#mediation-grant","title":"Mediation Grant","text":"

    A route grant message is a signal from the mediator to the recipient that permission is given to distribute the included information as an inbound route.

    {\n    \"@id\": \"123456781\",\n    \"@type\": \"<baseuri>/mediate-grant\",\n    \"endpoint\": \"http://mediators-r-us.com\",\n    \"routing_keys\": [\"did:key:z6Mkfriq1MqLBoPWecGoDLjguo1sB9brj6wT3qZ5BxkKpuP6\"]\n}\n

    endpoint: The endpoint reported to mediation client connections.

    routing_keys: List of keys in the intended routing order. The keys are used as recipients of forward messages.

    "},{"location":"aip2/0211-route-coordination/#keylist-update","title":"Keylist Update","text":"

    Used to notify the mediator of keys in use by the recipient.

    {\n    \"@id\": \"123456781\",\n    \"@type\": \"<baseuri>/keylist-update\",\n    \"updates\":[\n        {\n            \"recipient_key\": \"did:key:z6MkpTHR8VNsBxYAAWHut2Geadd9jSwuBV8xRoAnwWsdvktH\",\n            \"action\": \"add\"\n        }\n    ]\n}\n

    recipient_key: Key subject of the update.

    action: One of add or remove.

    "},{"location":"aip2/0211-route-coordination/#keylist-update-response","title":"Keylist Update Response","text":"

    Confirmation of requested keylist updates.

    {\n    \"@id\": \"123456781\",\n    \"@type\": \"<baseuri>/keylist-update-response\",\n    \"updated\": [\n        {\n            \"recipient_key\": \"did:key:z6MkpTHR8VNsBxYAAWHut2Geadd9jSwuBV8xRoAnwWsdvktH\",\n            \"action\": \"\", // \"add\" or \"remove\"\n            \"result\": \"\" // [client_error | server_error | no_change | success]\n        }\n    ]\n}\n

    recipient_key: Key subject of the update.

    action: One of add or remove.

    result: One of client_error, server_error, no_change, success; describes the resulting state of the keylist update.
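The result semantics above suggest how a recipient might reconcile its local key registry after a keylist-update-response. This is an illustrative sketch, not part of the protocol; the helper name and the choice to collect error entries for the caller are assumptions:

```python
def apply_keylist_update_response(local_keys, response):
    """Apply confirmed updates from a keylist-update-response to a local
    set of registered keys. Only entries whose result is 'success' change
    local state; 'no_change' is ignored, and error results are returned
    for the caller to inspect or retry."""
    problems = []
    for item in response["updated"]:
        key, action, result = item["recipient_key"], item["action"], item["result"]
        if result == "success":
            if action == "add":
                local_keys.add(key)
            elif action == "remove":
                local_keys.discard(key)
        elif result != "no_change":
            problems.append(item)  # client_error or server_error
    return problems
```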

    "},{"location":"aip2/0211-route-coordination/#key-list-query","title":"Key List Query","text":"

    Query mediator for a list of keys registered for this connection.

    {\n    \"@id\": \"123456781\",\n    \"@type\": \"<baseuri>/keylist-query\",\n    \"paginate\": {\n        \"limit\": 30,\n        \"offset\": 0\n    }\n}\n

    paginate is optional.

    "},{"location":"aip2/0211-route-coordination/#key-list","title":"Key List","text":"

    Response to key list query, containing retrieved keys.

    {\n    \"@id\": \"123456781\",\n    \"@type\": \"<baseuri>/keylist\",\n    \"keys\": [\n        {\n            \"recipient_key\": \"did:key:z6MkpTHR8VNsBxYAAWHut2Geadd9jSwuBV8xRoAnwWsdvktH\"\n        }\n    ],\n    \"pagination\": {\n        \"count\": 30,\n        \"offset\": 30,\n        \"remaining\": 100\n    }\n}\n

    pagination is optional.
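A recipient can page through its registered keys using the paginate/pagination fields shown above. The sketch below assumes, based on the example values, that the response's pagination offset is the position to use for the next query; `query_mediator` is a stand-in for sending a keylist-query and awaiting the keylist response:

```python
def fetch_all_keys(query_mediator, page_size=30):
    """Collect all registered recipient keys by paging keylist-query
    requests until the mediator reports nothing remaining."""
    keys, offset = [], 0
    while True:
        resp = query_mediator({"paginate": {"limit": page_size, "offset": offset}})
        keys.extend(entry["recipient_key"] for entry in resp["keys"])
        page = resp.get("pagination")
        # pagination is optional; stop when absent or exhausted.
        if not page or page["remaining"] == 0:
            break
        offset = page["offset"]  # assumed: offset of the next page
    return keys
```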

    "},{"location":"aip2/0211-route-coordination/#encoding-of-keys","title":"Encoding of keys","text":"

    All keys are encoded using the did:key method as per RFC0360.

    "},{"location":"aip2/0211-route-coordination/#prior-art","title":"Prior art","text":"

    There was an Indy HIPE that never made it past the PR process that described a similar approach. That HIPE led to a partial implementation of this protocol inside the Aries Cloud Agent Python.

    "},{"location":"aip2/0211-route-coordination/#future-considerations","title":"Future Considerations","text":""},{"location":"aip2/0211-route-coordination/#unresolved-questions","title":"Unresolved questions","text":""},{"location":"aip2/0211-route-coordination/#implementations","title":"Implementations","text":"

    The following lists the implementations (if any) of this RFC. Please do a pull request to add your implementation. If the implementation is open source, include a link to the repo or to the implementation within the repo. Please be consistent in the \"Name\" field so that a mechanical processing of the RFCs can generate a list of all RFCs supported by an Aries implementation.

    Name / Link | Implementation Notes
    --- | ---
    Aries Cloud Agent - Python | Added in ACA-Py 0.6.0 (test results missing)
    DIDComm mediator | Open source cloud-based mediator."},{"location":"aip2/0360-use-did-key/","title":"Aries RFC 0360: did:key Usage","text":""},{"location":"aip2/0360-use-did-key/#summary","title":"Summary","text":"

    A number of RFCs that have been defined reference what amounts to a \"naked\" public key, such that the sender relies on the receiver knowing what type the key is and how it can be used. The application of this RFC will result in the replacement of \"naked\" verkeys (public keys) in some DIDComm/Aries protocols with the did:key ledgerless DID method, a format that concisely conveys useful information about the use of the key, including the public key type. While did:key is less a DID method than a transformation from a public key and type to an opinionated DIDDoc, it provides a versioning mechanism for supporting new/different cryptographic formats and its use makes clear how a public key is intended to be used. The method also enables support for using standard DID resolution mechanisms that may simplify the use of the key. The use of a DID to represent a public key is seen as odd by some in the community. Should a representation be found that has better properties than a plain public key but is constrained to being \"just a key\", then we will consider changing from the did:key representation.

    To Do: Update link DID Key Method link (above) from Digital Bazaar to W3C repositories when they are created and populated.

    While it is well known in the Aries community that did:key is fundamentally different from the did:peer method that is the basis of Aries protocols, it must be re-emphasized here. This RFC does NOT imply any changes to the use of did:peer in Aries, nor does it change the content of a did:peer DIDDoc. This RFC only changes references to plain public keys in the JSON of some RFCs to use did:key in place of a plain text string.

    Should this RFC be ACCEPTED, a community coordinated update will be used to apply updates to the agent code bases and impacted RFCs.

    "},{"location":"aip2/0360-use-did-key/#motivation","title":"Motivation","text":"

    When one Aries agent inserts a public key into the JSON of an Aries message (for example, the ~service decorator), it assumes that the recipient agent will use the key in the intended way. At the time this RFC is being written, this is easy because only one key type is in use by all agents. However, in order to enable the use of different cryptography algorithms, the public key references must be extended to at least include the key type. The preferred and concise way to do that is the use of the multicodec mechanism, which provides a registry of encodings for known key types that are prefixed to the public key in a standard and concise way. did:key extends that mechanism by providing a templated way to transform the combination of public key and key type into a DID-standard DIDDoc.

    At the cost of adding/building a did:key resolver we get a DID standard way to access the key and key type, including specific information on how the key can be used. The resolver may be trivial or complex. In a trivial version, the key type is assumed, and the key can be easily extracted from the string. In a more complete implementation, the key type can be checked, and standard DID URL handling can be used to extract parts of the DIDDoc for specific purposes. For example, in the ed25519 did:key DIDDoc, the existence of the keyAgreement entry implies that the key can be used in a Diffie-Hellman exchange, without the developer guessing, or using the key incorrectly.

    Note that simply knowing the key type is not necessarily sufficient to be able to use the key. The cryptography supporting the processing of data using the key must also be available in the agent. However, the multicodec and did:key capabilities will simplify adding support for new key types in the future.

    "},{"location":"aip2/0360-use-did-key/#tutorial","title":"Tutorial","text":"

    An example of the use of the replacement of a verkey with did:key can be found in the ~service decorator RFC. Notably in the example at the beginning of the tutorial section, the verkeys in the recipientKeys and routingKeys items would be changed from native keys to use did:key as follows:

    {\n    \"@type\": \"somemessagetype\",\n    \"~service\": {\n        \"recipientKeys\": [\"did:key:z6MkmjY8GnV5i9YTDtPETC2uUAW6ejw3nk5mXF5yci5ab7th\"],\n        \"routingKeys\": [\"did:key:z6MkmjY8GnV5i9YTDtPETC2uUAW6ejw3nk5mXF5yci5ab7th\"],\n        \"serviceEndpoint\": \"https://example.com/endpoint\"\n    }\n}\n

    Thus, 8HH5gYEeNc3z7PYXmd54d4x6qAfCNrqQqEB3nS7Zfu7K becomes did:key:z6MkmjY8GnV5i9YTDtPETC2uUAW6ejw3nk5mXF5yci5ab7th using the following transformations:

    The transformation above is only for illustration within this RFC. The did:key specification is the definitive source for the appropriate transformations.

    The did:key method uses the strings that are the DID, public key and key type to construct (\"resolve\") a DIDDoc based on a template defined by the did:key specification. Further, the did:key resolver generates, in the case of an ed25519 public signing key, a key that can be used as part of a Diffie-Hellman exchange appropriate for encryption in the keyAgreement section of the DIDDoc. Presumably, as the did:key method supports other key types, similar DIDDoc templates will become part of the specification. Key types that don't support a signing/key exchange transformation would not have a keyAgreement entry in the resolved DIDDoc.
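The multicodec-plus-base58btc transformation described above can be sketched in Python. This is illustrative only (the did:key specification is the definitive source); base58btc is implemented inline to keep the sketch dependency-free:

```python
# The base58btc alphabet (no 0, O, I, or l).
B58_ALPHABET = "123456789ABCDEFGHJKLMNPQRSTUVWXYZabcdefghijkmnopqrstuvwxyz"

def b58encode(data: bytes) -> str:
    n = int.from_bytes(data, "big")
    out = ""
    while n:
        n, r = divmod(n, 58)
        out = B58_ALPHABET[r] + out
    pad = len(data) - len(data.lstrip(b"\x00"))  # leading zero bytes -> '1'
    return "1" * pad + out

def b58decode(text: str) -> bytes:
    n = 0
    for ch in text:
        n = n * 58 + B58_ALPHABET.index(ch)
    body = n.to_bytes((n.bit_length() + 7) // 8, "big")
    pad = len(text) - len(text.lstrip("1"))
    return b"\x00" * pad + body

# 0xed is the multicodec code for an ed25519 public key; varint-encoded it
# becomes the two bytes 0xed 0x01, which are prefixed to the raw key bytes.
ED25519_PREFIX = bytes([0xED, 0x01])

def verkey_to_did_key(verkey_b58: str) -> str:
    raw = b58decode(verkey_b58)
    # 'z' is the multibase prefix indicating base58btc encoding.
    return "did:key:z" + b58encode(ED25519_PREFIX + raw)
```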

    The following currently implemented RFCs would be affected by acceptance of this RFC. In these RFCs, the JSON items that currently contain naked public keys (mostly the items recipientKeys and routingKeys) would be changed to use did:key references where applicable. Note that in these items public DIDs could also be used if applicable for a given use case.

    Service entries in did:peer DIDDocs (such as in RFCs 0094-cross-domain-messaging and 0067-didcomm-diddoc-conventions) should NOT use a did:key public key representation. Instead, service entries in the DIDDoc should reference keys defined internally in the DIDDoc where appropriate.

    To Do: Discuss the use of did:key (or not) in the context of encryption envelopes. This will be part of the ongoing discussion about JWEs and the upcoming discussions about JWMs\u2014a soon-to-be-proposed specification. That conversation will likely go on in the DIF DIDComm Working Group.

    "},{"location":"aip2/0360-use-did-key/#reference","title":"Reference","text":"

    See the did:key specification. Note that the specification is still evolving.

    "},{"location":"aip2/0360-use-did-key/#drawbacks","title":"Drawbacks","text":"

    The did:key standard is not finalized.

    The DIDDoc \"resolved\" from a did:key probably has more entries in it than are needed for DIDComm. That said, the entries in the DIDDoc make it clear to a developer how they can use the public key.

    "},{"location":"aip2/0360-use-did-key/#rationale-and-alternatives","title":"Rationale and alternatives","text":"

    We should not stick with the status quo and assume that all agents will always know the type of keys being used and how to use them.

    We should at minimum move to a scheme like multicodecs such that the key is self-documenting and supports the versioning of cryptographic algorithms. However, even if we do that, we still have to document for developers how they should (and should not) use the public key.

    Another logical alternative is to use a JWK. However, that representation only adds the type of the key (same as multicodecs) at the cost of being significantly more verbose.

    "},{"location":"aip2/0360-use-did-key/#prior-art","title":"Prior art","text":"

    To do - there are other instances of this pattern being used. Insert those here.

    "},{"location":"aip2/0360-use-did-key/#unresolved-questions","title":"Unresolved questions","text":""},{"location":"aip2/0360-use-did-key/#implementations","title":"Implementations","text":"

    The following lists the implementations (if any) of this RFC. Please do a pull request to add your implementation. If the implementation is open source, include a link to the repo or to the implementation within the repo. Please be consistent in the \"Name\" field so that a mechanical processing of the RFCs can generate a list of all RFCs supported by an Aries implementation.


    Name / Link Implementation Notes"},{"location":"aip2/0434-outofband/","title":"Aries RFC 0434: Out-of-Band Protocol 1.1","text":""},{"location":"aip2/0434-outofband/#summary","title":"Summary","text":"

    The Out-of-band protocol is used when you wish to engage with another agent and you don't have a DIDComm connection to use for the interaction.

    "},{"location":"aip2/0434-outofband/#motivation","title":"Motivation","text":"

    The use of the invitation in the Connection and DID Exchange protocols has been relatively successful, but has some shortcomings, as follows.

    "},{"location":"aip2/0434-outofband/#connection-reuse","title":"Connection Reuse","text":"

    A common pattern we have seen in the early days of Aries agents is a user with a browser getting to a point where a connection is needed between the website's (enterprise) agent and the user's mobile agent. A QR invitation is displayed, scanned and a protocol is executed to establish a connection. Life is good!

    However, with the current invitation processes, when the same user returns to the same page, the same process is executed (QR code, scan, etc.) and a new connection is created between the two agents. There is no way for the user's agent to say \"Hey, I've already got a connection with you. Let's use that one!\"

    We need the ability to reuse a connection.

    "},{"location":"aip2/0434-outofband/#connection-establishment-versioning","title":"Connection Establishment Versioning","text":"

    In the existing Connections and DID Exchange invitation handling, the inviter dictates what connection establishment protocol all invitees will use. A more sustainable approach is for the inviter to offer the invitee a list of supported protocols and allow the invitee to use one that it supports.

    "},{"location":"aip2/0434-outofband/#handling-of-all-out-of-band-messages","title":"Handling of all Out-of-Band Messages","text":"

    We currently have two sets of out-of-band messages that cannot be delivered via DIDComm because there is no channel. We'd like to align those messages into a single \"out-of-band\" protocol so that their handling can be harmonized inside an agent, and a common QR code handling mechanism can be used.

    "},{"location":"aip2/0434-outofband/#urls-and-qr-code-handling","title":"URLs and QR Code Handling","text":"

    We'd like to have the specification of QR handling harmonized into a single RFC (this one).

    "},{"location":"aip2/0434-outofband/#tutorial","title":"Tutorial","text":""},{"location":"aip2/0434-outofband/#key-concepts","title":"Key Concepts","text":"

    The Out-of-band protocol is used when an agent doesn't know if it has a connection with another agent. This could be because you are trying to establish a new connection with that agent, because you have connections but don't know who the other party is, or because you want to have a connection-less interaction. Since there is no DIDComm connection to use for the messages of this protocol, the messages are plaintext and sent out-of-band, such as via a QR code, in an email message or any other available channel. Since the delivery of out-of-band messages will often be via QR codes, this RFC also covers the use of QR codes.

    Two well known use cases for using an out-of-band protocol are:

    In both cases, there is only a single out-of-band protocol message sent. The message responding to the out-of-band message is a DIDComm message from an appropriate protocol.

    Note that the website-to-agent model is not the only such interaction enabled by the out-of-band protocol, and a QR code is not the only delivery mechanism for out-of-band messages. However, they are useful as examples of the purpose of the protocol.

    "},{"location":"aip2/0434-outofband/#roles","title":"Roles","text":"

    The out-of-band protocol has two roles: sender and receiver.

    "},{"location":"aip2/0434-outofband/#sender","title":"sender","text":"

    The agent that generates the out-of-band message and makes it available to the other party.

    "},{"location":"aip2/0434-outofband/#receiver","title":"receiver","text":"

    The agent that receives the out-of-band message and decides how to respond. There is no out-of-band protocol message with which the receiver will respond. Rather, if they respond, they will use a message from another protocol that the sender understands.

    "},{"location":"aip2/0434-outofband/#states","title":"States","text":"

    The state machines for the sender and receiver are a bit odd for the out-of-band protocol because it consists of a single message that kicks off a co-protocol and ends when evidence of the co-protocol's launch is received, in the form of some response. In the following state machine diagrams we generically describe the response message from the receiver as being a DIDComm message.

    The sender state machine is as follows:

    Note the \"optional\" reference under the second event in the await-response state. That is to indicate that an out-of-band message might be a single use message with a transition to done, or reusable message (received by many receivers) with a transition back to await-response.

    The receiver state machine is as follows:

    Worth noting is the first event of the done state, where the receiver may receive the message multiple times. This represents, for example, an agent returning to the same website and being greeted with instances of the same QR code each time.

    "},{"location":"aip2/0434-outofband/#messages","title":"Messages","text":"

    The out-of-band protocol consists of a single message that is sent by the sender.

    "},{"location":"aip2/0434-outofband/#invitation-httpsdidcommorgout-of-bandverinvitation","title":"Invitation: https://didcomm.org/out-of-band/%VER/invitation","text":"
    {\n  \"@type\": \"https://didcomm.org/out-of-band/%VER/invitation\",\n  \"@id\": \"<id used for context as pthid>\",\n  \"label\": \"Faber College\",\n  \"goal_code\": \"issue-vc\",\n  \"goal\": \"To issue a Faber College Graduate credential\",\n  \"accept\": [\n    \"didcomm/aip2;env=rfc587\",\n    \"didcomm/aip2;env=rfc19\"\n  ],\n  \"handshake_protocols\": [\n    \"https://didcomm.org/didexchange/1.0\",\n    \"https://didcomm.org/connections/1.0\"\n  ],\n  \"requests~attach\": [\n    {\n      \"@id\": \"request-0\",\n      \"mime-type\": \"application/json\",\n      \"data\": {\n        \"json\": \"<json of protocol message>\"\n      }\n    }\n  ],\n  \"services\": [\"did:sov:LjgpST2rjsoxYegQDRm7EL\"]\n}\n

    The items in the message are:

    If only the handshake_protocols item is included, the initial interaction will complete with the establishment (or reuse) of the connection. Either side may then use that connection for any purpose. A common use case (but not required) would be for the sender to initiate another protocol after the connection is established to accomplish some shared goal.

    If only the requests~attach item is included, no new connection is expected to be created, although one could be used if the receiver knows such a connection already exists. The receiver responds to one of the messages in the requests~attach array. The requests~attach item might include the first message of a protocol from the sender, or might be a please-play-the-role message requesting the receiver initiate a protocol. If the protocol requires a further response from the sender to the receiver, the receiver must include a ~service decorator for the sender to use in responding.

    If both the handshake_protocols and requests~attach items are included in the message, the receiver should first establish a connection and then respond (using that connection) to one of the messages in the requests~attach message. If a connection already exists between the parties, the receiver may respond immediately to the request-attach message using the established connection.

    "},{"location":"aip2/0434-outofband/#reuse-messages","title":"Reuse Messages","text":"

    While the receiver is expected to respond with an initiating message from a handshake_protocols or requests~attach item using an offered service, the receiver may be able to respond by reusing an existing connection. Specifically, if a connection they have was created from an out-of-band invitation from the same services DID as a new invitation message, the connection MAY be reused. The receiver may choose not to reuse the existing connection for privacy purposes and repeat a handshake protocol to receive a redundant connection.

    If a message has a service block instead of a DID in the services list, you may enable reuse by encoding the key and endpoint of the service block in a Peer DID numalgo 2 and using that DID instead of a service block.

    If the receiver desires to reuse the existing connection and a requests~attach item is included in the message, the receiver SHOULD respond to one of the attached messages using the existing connection.

    If the receiver desires to reuse the existing connection and no requests~attach item is included in the message, the receiver SHOULD attempt to do so with the reuse and reuse-accepted messages. This will notify the inviter that the existing connection should be used, along with the context that can be used for follow-on interactions.

    While the invitation message is passed unencrypted and out-of-band, both the handshake-reuse and handshake-reuse-accepted messages MUST be encrypted and transmitted as normal DIDComm messages.

    "},{"location":"aip2/0434-outofband/#reuse-httpsdidcommorgout-of-bandverhandshake-reuse","title":"Reuse: https://didcomm.org/out-of-band/%VER/handshake-reuse","text":"
    {\n  \"@type\": \"https://didcomm.org/out-of-band/%VER/handshake-reuse\",\n  \"@id\": \"<id>\",\n  \"~thread\": {\n    \"thid\": \"<same as @id>\",\n    \"pthid\": \"<The @id of the Out-of-Band invitation>\"\n  }\n}\n

    The items in the message are:

    Sending or receiving this message does not change the state of the existing connection.

    When the inviter receives the handshake-reuse message, they MUST respond with a handshake-reuse-accepted message to notify the invitee that the request to reuse the existing connection was successful.

    "},{"location":"aip2/0434-outofband/#reuse-accepted-httpsdidcommorgout-of-bandverhandshake-reuse-accepted","title":"Reuse Accepted: https://didcomm.org/out-of-band/%VER/handshake-reuse-accepted","text":"
    {\n  \"@type\": \"https://didcomm.org/out-of-band/%VER/handshake-reuse-accepted\",\n  \"@id\": \"<id>\",\n  \"~thread\": {\n    \"thid\": \"<The Message @id of the reuse message>\",\n    \"pthid\": \"<The @id of the Out-of-Band invitation>\"\n  }\n}\n

    The items in the message are:

    If this message is not received by the invitee, they should use the regular process. This message is a mechanism by which the invitee can detect a situation where the inviter no longer has a record of the connection and is unable to decrypt and process the handshake-reuse message.

    After sending this message, the inviter may continue any desired protocol interactions based on the context matched by the pthid present in the handshake-reuse message.

    "},{"location":"aip2/0434-outofband/#responses","title":"Responses","text":"

    The following table summarizes the different forms of the out-of-band invitation message depending on the presence (or not) of the handshake_protocols item, the requests~attach item and whether or not a connection between the agents already exists.

    handshake_protocols Present? | requests~attach Present? | Existing connection? | Receiver action(s)
    --- | --- | --- | ---
    No | No | No | Impossible
    Yes | No | No | Uses the first supported protocol from handshake_protocols to make a new connection using the first supported services entry.
    No | Yes | No | Send a response to the first supported request message using the first supported services entry. Include a ~service decorator if the sender is expected to respond.
    No | No | Yes | Impossible
    Yes | Yes | No | Use the first supported protocol from handshake_protocols to make a new connection using the first supported services entry, and then send a response message to the first supported attachment message using the new connection.
    Yes | No | Yes | Send a handshake-reuse message.
    No | Yes | Yes | Send a response message to the first supported request message using the existing connection.
    Yes | Yes | Yes | Send a response message to the first supported request message using the existing connection.
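The combinations above can be sketched as a small decision function. This is purely illustrative; the return strings are invented labels for the receiver actions, and a real agent would additionally check which protocols and services entries it actually supports:

```python
def receiver_action(has_handshake: bool, has_requests: bool, has_connection: bool) -> str:
    """Mirror of the out-of-band response table: pick the receiver's next move
    from the presence of handshake_protocols, requests~attach, and an
    existing connection."""
    if not has_handshake and not has_requests:
        return "impossible"  # an invitation must offer at least one of the two
    if has_connection:
        if has_requests:
            return "respond-to-request-on-existing-connection"
        return "send-handshake-reuse"
    if has_handshake and has_requests:
        return "connect-then-respond-to-request"
    if has_handshake:
        return "connect"
    # requests~attach only, no connection: a ~service decorator is needed
    # if the sender is expected to respond.
    return "respond-to-request-via-services-entry"
```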

    Both the goal_code and goal fields SHOULD be used with the localization service decorator. The two fields are to enable both human and machine handling of the out-of-band message. goal_code is to specify a generic, protocol level outcome for sending the out-of-band message (e.g. issue verifiable credential, request proof, etc.) that is suitable for machine handling and possibly human display, while goal provides context specific guidance, targeting mainly a person controlling the receiver's agent. The list of goal_code values is provided in the Message Catalog section of this RFC.

    "},{"location":"aip2/0434-outofband/#the-services-item","title":"The services Item","text":"

    As mentioned in the description above, the services item array is intended to be analogous to the service block of a DIDDoc. When not reusing an existing connection, the receiver scans the array and selects (according to the rules described below) a service entry to use for the response to the out-of-band message.

    There are two forms of entries in the services item array:

    The following is an example of a two entry array, one of each form:

    {\n  \"@type\": \"https://didcomm.org/out-of-band/%VER/invitation\",\n  \"@id\": \"<id used for context as pthid>\",\n  \"label\": \"Faber College\",\n  \"handshake_protocols\": [\"https://didcomm.org/didexchange/1.0\"],\n  \"services\": [\n    {\n      \"id\": \"#inline\",\n      \"type\": \"did-communication\",\n      \"recipientKeys\": [\"did:key:z6MkpTHR8VNsBxYAAWHut2Geadd9jSwuBV8xRoAnwWsdvktH\"],\n      \"routingKeys\": [],\n      \"serviceEndpoint\": \"https://example.com:5000\"\n    },\n    \"did:sov:LjgpST2rjsoxYegQDRm7EL\"\n  ]\n}\n

    The processing rules for the services block are:

    The attributes in the inline form parallel the attributes of a DID Document for increased meaning. The recipientKeys and routingKeys within the inline block MUST be did:key references.

    As defined in the DIDComm Cross Domain Messaging RFC, if routingKeys is present and non-empty, additional forward message wrapping is necessary in the response message.

    When considering routing and options for out-of-band messages, keep in mind that the more detail in the message, the longer the URL will be and (if used) the more dense (and harder to scan) the QR code will be.

    "},{"location":"aip2/0434-outofband/#service-endpoint","title":"Service Endpoint","text":"

    The service endpoint used to transmit the response is either present in the out-of-band message or available in the DID Document of a presented DID. If the endpoint is itself a DID, the serviceEndpoint in the DIDDoc of the resolved DID MUST be a URI, and the recipientKeys MUST contain a single key. That key is appended to the end of the list of routingKeys for processing. For more information about message forwarding and routing, see RFC 0094 Cross Domain Messaging.
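The endpoint rule above can be sketched as follows. The `resolve_did_service` callable is a hypothetical stand-in for a DID resolver that returns the DIDDoc's service entry; the function name and error handling are assumptions for illustration:

```python
def resolve_service_endpoint(service, resolve_did_service):
    """If serviceEndpoint is itself a DID, use the resolved DIDDoc's URI
    endpoint instead, and append the DIDDoc service's single recipient key
    to the end of the routing keys for forward-message processing."""
    endpoint = service["serviceEndpoint"]
    routing_keys = list(service.get("routingKeys", []))
    if endpoint.startswith("did:"):
        resolved = resolve_did_service(endpoint)
        recipient_keys = resolved["recipientKeys"]
        if len(recipient_keys) != 1:
            raise ValueError("resolved endpoint DID must expose exactly one recipient key")
        routing_keys.append(recipient_keys[0])
        endpoint = resolved["serviceEndpoint"]  # MUST be a URI per the rule above
    return endpoint, routing_keys
```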

    "},{"location":"aip2/0434-outofband/#adoption-messages","title":"Adoption Messages","text":"

    The problem_report message MAY be adopted by the out-of-band protocol if the agent wants to respond with problem reports to invalid messages, such as attempting to reuse a single-use invitation.

    "},{"location":"aip2/0434-outofband/#constraints","title":"Constraints","text":"

    An existing connection can only be reused based on a DID in the services list in an out-of-band message.

    "},{"location":"aip2/0434-outofband/#reference","title":"Reference","text":""},{"location":"aip2/0434-outofband/#messages-reference","title":"Messages Reference","text":"

    The full description of the message in this protocol can be found in the Tutorial section of this RFC.

    "},{"location":"aip2/0434-outofband/#localization","title":"Localization","text":"

    The goal_code and goal fields SHOULD have localization applied. See the purpose of those fields in the message type definitions section and the message catalog section (immediately below).

    "},{"location":"aip2/0434-outofband/#message-catalog","title":"Message Catalog","text":""},{"location":"aip2/0434-outofband/#goal_code","title":"goal_code","text":"

    The following values are defined for the goal_code field:

    Code (cd) | English (en)
    --- | ---
    issue-vc | To issue a credential
    request-proof | To request a proof
    create-account | To create an account with a service
    p2p-messaging | To establish a peer-to-peer messaging relationship"},{"location":"aip2/0434-outofband/#goal","title":"goal","text":"

    The goal localization values are use case specific and localization is left to the agent implementor to enable using the techniques defined in the ~l10n RFC.

    "},{"location":"aip2/0434-outofband/#roles-reference","title":"Roles Reference","text":"

    The roles are defined in the Tutorial section of this RFC.

    "},{"location":"aip2/0434-outofband/#states-reference","title":"States Reference","text":""},{"location":"aip2/0434-outofband/#initial","title":"initial","text":"

    No out-of-band messages have been sent.

    "},{"location":"aip2/0434-outofband/#await-response","title":"await-response","text":"

    The sender has shared an out-of-band message with the intended receiver(s), and the sender has not yet received all of the responses. For a single-use out-of-band message, there will be only one response; for a multi-use out-of-band message, there is no defined limit on the number of responses.

    "},{"location":"aip2/0434-outofband/#prepare-response","title":"prepare-response","text":"

    The receiver has received the out-of-band message and is preparing a response. The response will not be an out-of-band protocol message, but a message from another protocol chosen based on the contents of the out-of-band message.

    "},{"location":"aip2/0434-outofband/#done","title":"done","text":"

    The out-of-band protocol has been completed. Note that if the out-of-band message was intended to be available to many receivers (a multiple use message), the sender returns to the await-response state rather than going to the done state.

    "},{"location":"aip2/0434-outofband/#errors","title":"Errors","text":"

    There is an optional courtesy error message stemming from an out-of-band message that the sender could provide if they have sufficient recipient information. If the out-of-band message is a single use message and the sender receives multiple responses and each receiver's response includes a way for the sender to respond with a DIDComm message, all but the first MAY be answered with a problem_report.

    "},{"location":"aip2/0434-outofband/#error-message-example","title":"Error Message Example","text":"
    {\n  \"@type\": \"https://didcomm.org/out-of-band/%VER/problem_report\",\n  \"@id\": \"5678876542345\",\n  \"~thread\": { \"pthid\": \"<@id of the OutofBand message>\" },\n  \"description\": {\n    \"en\": \"The invitation has expired.\",\n    \"code\": \"expired-invitation\"\n  },\n  \"impact\": \"thread\"\n}\n

    See the problem-report protocol for details on the items in the example.

    "},{"location":"aip2/0434-outofband/#flow-overview","title":"Flow Overview","text":"

    In an out-of-band message the sender gives information to the receiver about the kind of DIDComm protocol response messages it can handle and how to deliver the response. The receiver uses that information to determine what DIDComm protocol/message to use in responding to the sender, and (from the service item or an existing connection) how to deliver the response to the sender.

    The handling of the response is specified by the protocol used.

    To Do: Make sure that the following remains in the DID Exchange/Connections RFCs

    Any Published DID that expresses support for DIDComm by defining a service that follows the DIDComm conventions serves as an implicit invitation. If an invitee wishes to connect to any Published DID, they need not wait for an out-of-band invitation message. Rather, they can designate their own label and initiate the appropriate protocol (e.g. 0160-Connections or 0023-DID-Exchange) for establishing a connection.

    "},{"location":"aip2/0434-outofband/#standard-out-of-band-message-encoding","title":"Standard Out-of-Band Message Encoding","text":"

Using a standard out-of-band message encoding allows for easier interoperability between multiple projects and software platforms. Using a URL for that standard encoding provides a built-in fallback flow for users who are unable to automatically process the message. Those new users will load the URL in a browser as a default behavior, and may be presented with instructions on how to install software capable of processing the message. Already onboarded users will be able to process the message without loading in a browser via mobile app URL capture, or via capability detection after being loaded in a browser.

    The standard out-of-band message format is a URL with a Base64Url encoded json object as a query parameter.

    Please note the difference between Base64Url and Base64 encoding.

    The URL format is as follows, with some elements described below:

    https://<domain>/<path>?oob=<outofbandMessage>\n

    <domain> and <path> should be kept as short as possible, and the full URL SHOULD return human readable instructions when loaded in a browser. This is intended to aid new users. The oob query parameter is required and is reserved to contain the out-of-band message string. Additional path elements or query parameters are allowed, and can be leveraged to provide coupons or other promise of payment for new users.

To do: We need to rationalize this https:// approach with the use of a special protocol (e.g. didcomm://) that will enable handling of the URL on mobile devices to automatically invoke an installed app on both Android and iOS. A user must be able to process the out-of-band message on the device of the agent (e.g. when the mobile device can't scan the QR code because it is on a web page on the device).

    The <outofbandMessage> is an agent plaintext message (not a DIDComm message) that has been Base64Url encoded such that the resulting string can be safely used in a URL.

    outofband_message = base64UrlEncode(<outofbandMessage>)\n

    During Base64Url encoding, whitespace from the JSON string SHOULD be eliminated to keep the resulting out-of-band message string as short as possible.
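The encoding steps above can be sketched as follows. This is a minimal illustration, assuming a Node.js runtime (for the `Buffer` "base64url" codec); the function name is hypothetical, not part of the RFC.

```typescript
// Sketch: build an out-of-band URL from an invitation object.
// Assumes Node.js >= 16, where Buffer supports the "base64url" encoding
// (which produces an unpadded, URL-safe string).
function encodeOobUrl(invitation: object, baseUrl: string): string {
  // JSON.stringify with no spacing arguments removes whitespace,
  // keeping the resulting out-of-band message string as short as possible.
  const json = JSON.stringify(invitation);
  const encoded = Buffer.from(json, "utf-8").toString("base64url");
  return `${baseUrl}?oob=${encoded}`;
}
```

Note that `JSON.stringify` already emits the whitespace-free form shown in the example below, so no separate minification step is needed.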

    "},{"location":"aip2/0434-outofband/#example-out-of-band-message-encoding","title":"Example Out-of-Band Message Encoding","text":"

    Invitation:

    {\n  \"@type\": \"https://didcomm.org/out-of-band/1.0/invitation\",\n  \"@id\": \"69212a3a-d068-4f9d-a2dd-4741bca89af3\",\n  \"label\": \"Faber College\",\n  \"goal_code\": \"issue-vc\",\n  \"goal\": \"To issue a Faber College Graduate credential\",\n  \"handshake_protocols\": [\"https://didcomm.org/didexchange/1.0\", \"https://didcomm.org/connections/1.0\"],\n  \"services\": [\"did:sov:LjgpST2rjsoxYegQDRm7EL\"]\n}\n

    Whitespace removed:

    {\"@type\":\"https://didcomm.org/out-of-band/1.0/invitation\",\"@id\":\"69212a3a-d068-4f9d-a2dd-4741bca89af3\",\"label\":\"Faber College\",\"goal_code\":\"issue-vc\",\"goal\":\"To issue a Faber College Graduate credential\",\"handshake_protocols\":[\"https://didcomm.org/didexchange/1.0\",\"https://didcomm.org/connections/1.0\"],\"services\":[\"did:sov:LjgpST2rjsoxYegQDRm7EL\"]}\n

    Base64Url encoded:

    eyJAdHlwZSI6Imh0dHBzOi8vZGlkY29tbS5vcmcvb3V0LW9mLWJhbmQvMS4wL2ludml0YXRpb24iLCJAaWQiOiI2OTIxMmEzYS1kMDY4LTRmOWQtYTJkZC00NzQxYmNhODlhZjMiLCJsYWJlbCI6IkZhYmVyIENvbGxlZ2UiLCJnb2FsX2NvZGUiOiJpc3N1ZS12YyIsImdvYWwiOiJUbyBpc3N1ZSBhIEZhYmVyIENvbGxlZ2UgR3JhZHVhdGUgY3JlZGVudGlhbCIsImhhbmRzaGFrZV9wcm90b2NvbHMiOlsiaHR0cHM6Ly9kaWRjb21tLm9yZy9kaWRleGNoYW5nZS8xLjAiLCJodHRwczovL2RpZGNvbW0ub3JnL2Nvbm5lY3Rpb25zLzEuMCJdLCJzZXJ2aWNlcyI6WyJkaWQ6c292OkxqZ3BTVDJyanNveFllZ1FEUm03RUwiXX0=\n

    Example URL with Base64Url encoded message:

    http://example.com/ssi?oob=eyJAdHlwZSI6Imh0dHBzOi8vZGlkY29tbS5vcmcvb3V0LW9mLWJhbmQvMS4wL2ludml0YXRpb24iLCJAaWQiOiI2OTIxMmEzYS1kMDY4LTRmOWQtYTJkZC00NzQxYmNhODlhZjMiLCJsYWJlbCI6IkZhYmVyIENvbGxlZ2UiLCJnb2FsX2NvZGUiOiJpc3N1ZS12YyIsImdvYWwiOiJUbyBpc3N1ZSBhIEZhYmVyIENvbGxlZ2UgR3JhZHVhdGUgY3JlZGVudGlhbCIsImhhbmRzaGFrZV9wcm90b2NvbHMiOlsiaHR0cHM6Ly9kaWRjb21tLm9yZy9kaWRleGNoYW5nZS8xLjAiLCJodHRwczovL2RpZGNvbW0ub3JnL2Nvbm5lY3Rpb25zLzEuMCJdLCJzZXJ2aWNlcyI6WyJkaWQ6c292OkxqZ3BTVDJyanNveFllZ1FEUm03RUwiXX0=\n

    Out-of-band message URLs can be transferred via any method that can send text, including an email, SMS, posting on a website, or QR Code.

    Example URL encoded as a QR Code:

    Example Email Message:

    To: alice@alum.faber.edu\nFrom: studentrecords@faber.edu\nSubject: Your request to connect and receive your graduate verifiable credential\n\nDear Alice,\n\nTo receive your Faber College graduation certificate, click here to [connect](http://example.com/ssi?oob=eyJAdHlwZSI6Imh0dHBzOi8vZGlkY29tbS5vcmcvb3V0LW9mLWJhbmQvMS4wL2ludml0YXRpb24iLCJAaWQiOiI2OTIxMmEzYS1kMDY4LTRmOWQtYTJkZC00NzQxYmNhODlhZjMiLCJsYWJlbCI6IkZhYmVyIENvbGxlZ2UiLCJnb2FsX2NvZGUiOiJpc3N1ZS12YyIsImdvYWwiOiJUbyBpc3N1ZSBhIEZhYmVyIENvbGxlZ2UgR3JhZHVhdGUgY3JlZGVudGlhbCIsImhhbmRzaGFrZV9wcm90b2NvbHMiOlsiaHR0cHM6Ly9kaWRjb21tLm9yZy9kaWRleGNoYW5nZS8xLjAiLCJodHRwczovL2RpZGNvbW0ub3JnL2Nvbm5lY3Rpb25zLzEuMCJdLCJzZXJ2aWNlcyI6WyJkaWQ6c292OkxqZ3BTVDJyanNveFllZ1FEUm03RUwiXX0=) with us, or paste the following into your browser:\n\nhttp://example.com/ssi?oob=eyJAdHlwZSI6Imh0dHBzOi8vZGlkY29tbS5vcmcvb3V0LW9mLWJhbmQvMS4wL2ludml0YXRpb24iLCJAaWQiOiI2OTIxMmEzYS1kMDY4LTRmOWQtYTJkZC00NzQxYmNhODlhZjMiLCJsYWJlbCI6IkZhYmVyIENvbGxlZ2UiLCJnb2FsX2NvZGUiOiJpc3N1ZS12YyIsImdvYWwiOiJUbyBpc3N1ZSBhIEZhYmVyIENvbGxlZ2UgR3JhZHVhdGUgY3JlZGVudGlhbCIsImhhbmRzaGFrZV9wcm90b2NvbHMiOlsiaHR0cHM6Ly9kaWRjb21tLm9yZy9kaWRleGNoYW5nZS8xLjAiLCJodHRwczovL2RpZGNvbW0ub3JnL2Nvbm5lY3Rpb25zLzEuMCJdLCJzZXJ2aWNlcyI6WyJkaWQ6c292OkxqZ3BTVDJyanNveFllZ1FEUm03RUwiXX0=\n\nIf you don't have an identity agent for holding credentials, you will be given instructions on how you can get one.\n\nThanks,\n\nFaber College\nKnowledge is Good\n
    "},{"location":"aip2/0434-outofband/#url-shortening","title":"URL Shortening","text":"

It seems inevitable that some out-of-band messages will be too long to produce a usable QR code. Techniques to avoid unusable QR codes have been presented above, including using attachment links for requests, minimizing the routing of the response and eliminating unnecessary whitespace in the JSON. However, at some point a sender may need to generate a very long URL. In that case, a DIDComm-specific URL shortener redirection should be implemented by the sender as follows:

It will always be possible to generate a usable QR code from the shortened form of the URL.

    "},{"location":"aip2/0434-outofband/#url-shortening-caveats","title":"URL Shortening Caveats","text":"

Some HTTP libraries don't support stopping redirects from occurring on reception of a 301 or 302; in this instance the redirect is automatically followed, resulting in a response that MAY have a status of 200 and MAY contain a URL that can be processed as a normal out-of-band message.

If the agent performs an HTTP GET with an Accept header requesting the application/json MIME type, the response can either contain the message in JSON or result in a redirect. Processing of the response should determine which response type was received and process the message accordingly.
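The shortened-URL handling described above can be sketched as below. This is an illustrative outline, not a normative implementation: the function name, the injectable fetch parameter, and the minimal response shape are all assumptions introduced here so the logic can be shown without a network.

```typescript
// Minimal response shape so the resolver can be exercised without a network.
interface MinimalResponse {
  status: number;
  headers: { get(name: string): string | null };
  text(): Promise<string>;
}
type FetchLike = (
  url: string,
  init: { redirect: "manual"; headers: Record<string, string> },
) => Promise<MinimalResponse>;

// Sketch: resolve a possibly shortened out-of-band URL. A 301/302 yields the
// full URL from the Location header; a JSON body is the message itself.
async function resolveShortUrl(
  url: string,
  fetchImpl: FetchLike,
): Promise<{ url?: string; message?: string }> {
  const res = await fetchImpl(url, {
    redirect: "manual",
    headers: { Accept: "application/json" },
  });
  if (res.status === 301 || res.status === 302) {
    return { url: res.headers.get("location") ?? url };
  }
  const contentType = res.headers.get("content-type") ?? "";
  if (contentType.includes("application/json")) {
    // The body contains the out-of-band message in JSON.
    return { message: await res.text() };
  }
  // Some libraries follow the redirect automatically (status 200); the final
  // URL may then itself be processable as a normal out-of-band URL.
  return { url };
}
```

A real agent would pass the platform `fetch` (configured with `redirect: "manual"` where supported) as `fetchImpl`.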

    "},{"location":"aip2/0434-outofband/#out-of-band-message-publishing","title":"Out-of-Band Message Publishing","text":"

The sender will publish or transmit the out-of-band message URL in a manner available to the intended receiver. After publishing, the sender is in the await-response state, while the receiver is in the prepare-response state.

    "},{"location":"aip2/0434-outofband/#out-of-band-message-processing","title":"Out-of-Band Message Processing","text":"

    If the receiver receives an out-of-band message in the form of a QR code, the receiver should attempt to decode the QR code to an out-of-band message URL for processing.

    When the receiver receives the out-of-band message URL, there are two possible user flows, depending on whether the individual has an Aries agent. If the individual is new to Aries, they will likely load the URL in a browser. The resulting page SHOULD contain instructions on how to get started by installing an Aries agent. That install flow will transfer the out-of-band message to the newly installed software.

A user who has already completed those steps will have the URL received by software directly. That software will attempt to base64URL decode the string and can read the out-of-band message directly out of the oob query parameter, without loading the URL. If this process fails, the software should attempt the steps to process a shortened URL.

    NOTE: In receiving the out-of-band message, the base64url decode implementation used MUST correctly decode padded and unpadded base64URL encoded data.
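Decoding on the receiver side can be sketched as below. This assumes a Node.js runtime; the function name is illustrative. Stripping trailing `=` before decoding makes the behavior identical for padded and unpadded input, as the NOTE above requires.

```typescript
// Sketch: extract and decode the out-of-band message from a URL.
// Handles both padded and unpadded base64url input.
function decodeOobUrl(url: string): Record<string, unknown> | null {
  const oob = new URL(url).searchParams.get("oob");
  if (oob === null) return null; // not an out-of-band URL
  // Strip any "=" padding so padded and unpadded forms decode identically.
  const unpadded = oob.replace(/=+$/, "");
  const json = Buffer.from(unpadded, "base64url").toString("utf-8");
  return JSON.parse(json) as Record<string, unknown>;
}
```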

    If the receiver wants to respond to the out-of-band message, they will use the information in the message to prepare the request, including:

    "},{"location":"aip2/0434-outofband/#correlating-responses-to-out-of-band-messages","title":"Correlating responses to Out-of-Band messages","text":"

    The response to an out-of-band message MUST set its ~thread.pthid equal to the @id property of the out-of-band message.

    Example referencing an explicit invitation:

    {\n  \"@id\": \"a46cdd0f-a2ca-4d12-afbf-2e78a6f1f3ef\",\n  \"@type\": \"https://didcomm.org/didexchange/1.0/request\",\n  \"~thread\": { \"pthid\": \"032fbd19-f6fd-48c5-9197-ba9a47040470\" },\n  \"label\": \"Bob\",\n  \"did\": \"B.did@B:A\",\n  \"did_doc~attach\": {\n    \"base64\": \"eyJ0eXAiOiJKV1Qi... (bytes omitted)\",\n    \"jws\": {\n      \"header\": {\n        \"kid\": \"did:key:z6MkmjY8GnV5i9YTDtPETC2uUAW6ejw3nk5mXF5yci5ab7th\"\n      },\n      \"protected\": \"eyJhbGciOiJFZERTQSIsImlhdCI6MTU4Mzg4... (bytes omitted)\",\n      \"signature\": \"3dZWsuru7QAVFUCtTd0s7uc1peYEijx4eyt5... (bytes omitted)\"\n    }\n  }\n}\n
    "},{"location":"aip2/0434-outofband/#response-transmission","title":"Response Transmission","text":"

    The response message from the receiver is encoded according to the standards of the DIDComm encryption envelope, using the service block present in (or resolved from) the out-of-band invitation.

    "},{"location":"aip2/0434-outofband/#reusing-connections","title":"Reusing Connections","text":"

    If an out-of-band invitation has a DID in the services block, and the receiver determines it has previously established a connection with that DID, the receiver MAY send its response on the established connection. See Reuse Messages for details.

    "},{"location":"aip2/0434-outofband/#receiver-error-handling","title":"Receiver Error Handling","text":"

If the receiver is unable to process the out-of-band message, the receiver may respond with a Problem Report identifying the problem using a DIDComm message. As with any response, the pthid of the ~thread decorator MUST be the @id of the out-of-band message. The problem report MUST be in the protocol of an expected response. An example of an error that might come up is that the receiver is not able to handle any of the proposed protocols in the out-of-band message. The receiver MAY include in the problem report a ~service decorator that allows the sender to respond to the out-of-band message with a DIDComm message.

    "},{"location":"aip2/0434-outofband/#response-processing","title":"Response processing","text":"

    The sender MAY look up the corresponding out-of-band message identified in the response's ~thread.pthid to determine whether it should accept the response. Information about the related out-of-band message protocol may be required to provide the sender with context about processing the response and what to do after the protocol completes.

    "},{"location":"aip2/0434-outofband/#sender-error-handling","title":"Sender Error Handling","text":"

    If the sender receives a Problem Report message from the receiver, the sender has several options for responding. The sender will receive the message as part of an offered protocol in the out-of-band message.

    If the receiver did not include a ~service decorator in the response, the sender can only respond if it is still in session with the receiver. For example, if the sender is a website that displayed a QR code for the receiver to scan, the sender could create a new, presumably adjusted, out-of-band message, encode it and present it to the user in the same way as before.

    If the receiver included a ~service decorator in the response, the sender can provide a new message to the receiver, even a new version of the original out-of-band message, and send it to the receiver. The new message MUST include a ~thread decorator with the thid set to the @id from the problem report message.

    "},{"location":"aip2/0434-outofband/#drawbacks","title":"Drawbacks","text":""},{"location":"aip2/0434-outofband/#prior-art","title":"Prior art","text":""},{"location":"aip2/0434-outofband/#unresolved-questions","title":"Unresolved questions","text":""},{"location":"aip2/0434-outofband/#implementations","title":"Implementations","text":"

    The following lists the implementations (if any) of this RFC. Please do a pull request to add your implementation. If the implementation is open source, include a link to the repo or to the implementation within the repo. Please be consistent in the \"Name\" field so that a mechanical processing of the RFCs can generate a list of all RFCs supported by an Aries implementation.

    Name / Link Implementation Notes"},{"location":"aip2/0441-present-proof-best-practices/","title":"0441: Prover and Verifier Best Practices for Proof Presentation","text":""},{"location":"aip2/0441-present-proof-best-practices/#summary","title":"Summary","text":"

This work prescribes best practices for provers in credential selection (toward proof presentation), for verifiers in proof acceptance, and for both regarding non-revocation interval semantics in fulfilment of the Present Proof protocol RFC0037. Of particular interest is behaviour regarding presentation requests and presentations in their various non-revocation interval profiles.

    "},{"location":"aip2/0441-present-proof-best-practices/#motivation","title":"Motivation","text":"

Agents should behave consistently in automatically selecting credentials and constructing presentations.

    "},{"location":"aip2/0441-present-proof-best-practices/#tutorial","title":"Tutorial","text":"

    The subsections below introduce constructs and outline best practices for provers and verifiers.

    "},{"location":"aip2/0441-present-proof-best-practices/#presentation-requests-and-non-revocation-intervals","title":"Presentation Requests and Non-Revocation Intervals","text":"

    This section prescribes norms and best practices in formulating and interpreting non-revocation intervals on proof requests.

    "},{"location":"aip2/0441-present-proof-best-practices/#semantics-of-non-revocation-interval-presence-and-absence","title":"Semantics of Non-Revocation Interval Presence and Absence","text":"

    The presence of a non-revocation interval applicable to a requested item (see below) in a presentation request signifies that the verifier requires proof of non-revocation status of the credential providing that item.

    The absence of any non-revocation interval applicable to a requested item signifies that the verifier has no interest in its credential's non-revocation status.

    A revocable or non-revocable credential may satisfy a presentation request with or without a non-revocation interval. The presence of a non-revocation interval conveys that if the prover presents a revocable credential, the presentation must include proof of non-revocation. Its presence does not convey any restriction on the revocability of the credential to present: in many cases the verifier cannot know whether a prover's credential is revocable or not.

    "},{"location":"aip2/0441-present-proof-best-practices/#non-revocation-interval-applicability-to-requested-items","title":"Non-Revocation Interval Applicability to Requested Items","text":"

    A requested item in a presentation request is an attribute or a predicate, proof of which the verifier requests presentation. A non-revocation interval within a presentation request is specifically applicable, generally applicable, or inapplicable to a requested item.

    Within a presentation request, a top-level non-revocation interval is generally applicable to all requested items. A non-revocation interval defined particularly for a requested item is specifically applicable to that requested attribute or predicate but inapplicable to all others.

    A non-revocation interval specifically applicable to a requested item overrides any generally applicable non-revocation interval: no requested item may have both.

    For example, in the following (indy) proof request

    {\n    \"name\": \"proof-request\",\n    \"version\": \"1.0\",\n    \"nonce\": \"1234567890\",\n    \"requested_attributes\": {\n        \"legalname\": {\n            \"name\": \"legalName\",\n            \"restrictions\": [\n                {\n                    \"issuer_did\": \"WgWxqztrNooG92RXvxSTWv\"\n                }\n            ]\n        },\n        \"regdate\": {\n            \"name\": \"regDate\",\n            \"restrictions\": [\n                {\n                    \"issuer_did\": \"WgWxqztrNooG92RXvxSTWv\"\n                }\n            ],\n            \"non_revoked\": {\n                \"from\": 1600001000,\n                \"to\": 1600001000\n            }\n        }\n    },\n    \"requested_predicates\": {\n    },\n    \"non_revoked\": {\n        \"from\": 1600000000,\n        \"to\": 1600000000\n    }\n}\n

the non-revocation interval on 1600000000 is generally applicable to the referent \"legalname\" while the non-revocation interval on 1600001000 is specifically applicable to the referent \"regdate\".

    "},{"location":"aip2/0441-present-proof-best-practices/#semantics-of-non-revocation-interval-endpoints","title":"Semantics of Non-Revocation Interval Endpoints","text":"

    A non-revocation interval contains \"from\" and \"to\" (integer) EPOCH times. For historical reasons, any timestamp within this interval is technically acceptable in a non-revocation subproof. However, these semantics allow for ambiguity in cases where revocation occurs within the interval, and in cases where the ledger supports reinstatement. These best practices require the \"from\" value, should the prover specify it, to equal the \"to\" value: this approach fosters deterministic outcomes.

    A missing \"from\" specification defaults to the same value as the interval's \"to\" value. In other words, the non-revocation intervals

    {\n    \"to\": 1234567890\n}\n

    and

    {\n    \"from\": 1234567890,\n    \"to\": 1234567890\n}\n

    are semantically equivalent.
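The defaulting rule above can be expressed as a small normalization step. This is an illustrative sketch; the interface and function names are assumptions, not RFC-defined identifiers.

```typescript
// Sketch: normalize a non-revocation interval, defaulting a missing "from"
// to the interval's "to" value, per the best practice above.
interface NonRevokedInterval {
  from?: number; // EPOCH seconds; optional in presentation requests
  to: number;    // EPOCH seconds
}

function normalizeInterval(
  interval: NonRevokedInterval,
): Required<NonRevokedInterval> {
  return { from: interval.from ?? interval.to, to: interval.to };
}
```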

    "},{"location":"aip2/0441-present-proof-best-practices/#verifier-non-revocation-interval-formulation","title":"Verifier Non-Revocation Interval Formulation","text":"

    The verifier MUST specify, as current INDY-HIPE 11 notes, the same integer EPOCH time for both ends of the interval, or else omit the \"from\" key and value. In effect, where the presentation request specifies a non-revocation interval, the verifier MUST request a non-revocation instant.

    "},{"location":"aip2/0441-present-proof-best-practices/#prover-non-revocation-interval-processing","title":"Prover Non-Revocation Interval Processing","text":"

    In querying the nodes for revocation status, given a revocation interval on a single instant (i.e., on \"from\" and \"to\" the same, or \"from\" absent), the prover MUST query the ledger for all germane revocation updates from registry creation through that instant (i.e., from zero through \"to\" value): if the credential has been revoked prior to the instant, the revocation necessarily will appear in the aggregate delta.

    "},{"location":"aip2/0441-present-proof-best-practices/#provers-presentation-proposals-and-presentation-requests","title":"Provers, Presentation Proposals, and Presentation Requests","text":"

    In fulfilment of the RFC0037 Present Proof protocol, provers may initiate with a presentation proposal or verifiers may initiate with a presentation request. In the former case, the prover has both a presentation proposal and a presentation request; in the latter case, the prover has only a presentation request.

    "},{"location":"aip2/0441-present-proof-best-practices/#credential-selection-best-practices","title":"Credential Selection Best Practices","text":"

    This section specifies a prover's best practices in matching a credential to a requested item. The specification pertains to automated credential selection: obviously, a human user may select any credential in response to a presentation request; it is up to the verifier to verify the resulting presentation as satisfactory or not.

    Note that where a prover selects a revocable credential for inclusion in response to a requested item with a non-revocation interval in the presentation request, the prover MUST create a corresponding sub-proof of non-revocation at a timestamp within that non-revocation interval (insofar as possible; see below).

    "},{"location":"aip2/0441-present-proof-best-practices/#with-presentation-proposal","title":"With Presentation Proposal","text":"

If the prover initiated the protocol with a presentation proposal specifying a value (or predicate threshold) for an attribute, and the presentation request does not require a different value for it, then the prover MUST select a credential matching the presentation proposal, in addition to following the best practices below regarding the presentation request.

    "},{"location":"aip2/0441-present-proof-best-practices/#preference-for-irrevocable-credentials","title":"Preference for Irrevocable Credentials","text":"

    In keeping with the specification above, presentation of an irrevocable credential ipso facto constitutes proof of non-revocation. Provers MUST always prefer irrevocable credentials to revocable credentials, when the wallet has both satisfying a requested item, whether the requested item has an applicable non-revocation interval or not. Note that if a non-revocation interval is applicable to a credential's requested item in the presentation request, selecting an irrevocable credential for presentation may lead to a missing timestamp at the verifier (see below).

    If only revocable credentials are available to satisfy a requested item with no applicable non-revocation interval, the prover MUST present such for proof. As per above, the absence of a non-revocation interval signifies that the verifier has no interest in its revocation status.
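The selection preference above can be sketched as follows. The credential shape and function name are hypothetical simplifications; real wallets hold much richer credential records.

```typescript
// Sketch: automated credential selection for one requested item,
// preferring irrevocable credentials per the best practice above.
interface CredCandidate {
  id: string;
  revocable: boolean;
}

function selectCredential(
  candidates: CredCandidate[], // credentials already known to satisfy the item
): CredCandidate | undefined {
  // Irrevocable credentials are always preferred when both kinds satisfy
  // the requested item; otherwise fall back to any satisfying credential.
  return candidates.find((c) => !c.revocable) ?? candidates[0];
}
```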

    "},{"location":"aip2/0441-present-proof-best-practices/#verifiers-presentations-and-timestamps","title":"Verifiers, Presentations, and Timestamps","text":"

This section prescribes verifier best practices for evaluating a received presentation's timestamps against the corresponding presentation request's non-revocation intervals.

    "},{"location":"aip2/0441-present-proof-best-practices/#timestamp-for-irrevocable-credential","title":"Timestamp for Irrevocable Credential","text":"

    A presentation's inclusion of a timestamp pertaining to an irrevocable credential evinces tampering: the verifier MUST reject such a presentation.

    "},{"location":"aip2/0441-present-proof-best-practices/#missing-timestamp","title":"Missing Timestamp","text":"

    A presentation with no timestamp for a revocable credential purporting to satisfy a requested item in the corresponding presentation request, where the requested item has an applicable non-revocation interval, evinces tampering: the verifier MUST reject such a presentation.

    It is licit for a presentation to have no timestamp for an irrevocable credential: the applicable non-revocation interval is superfluous in the presentation request.

    "},{"location":"aip2/0441-present-proof-best-practices/#timestamp-outside-non-revocation-interval","title":"Timestamp Outside Non-Revocation Interval","text":"

A presentation may include a timestamp outside of the non-revocation interval applicable to the requested item that a presented credential purports to satisfy. If the latest timestamp from the ledger for a presented credential's revocation registry predates the non-revocation interval, but the timestamp is not in the future (relative to the instant of presentation proof, with a reasonable allowance for clock skew), the verifier MUST log and continue the proof verification process.

    Any timestamp in the future (relative to the instant of presentation proof, with a reasonable allowance for clock skew) evinces tampering: the verifier MUST reject a presentation with a future timestamp. Similarly, any timestamp predating the creation of its corresponding credential's revocation registry on the ledger evinces tampering: the verifier MUST reject a presentation with such a timestamp.
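The timestamp rules above can be sketched as a single check for one presented revocable credential. This is an illustrative outline: the skew constant, parameter names, and result labels are assumptions introduced here, and the interval is assumed already normalized to a single instant.

```typescript
// Reasonable allowance for clock skew, in seconds (illustrative value).
const CLOCK_SKEW_S = 300;

type TimestampCheck = "reject" | "log-and-continue" | "ok";

// Sketch: verifier timestamp checks for one revocable credential's
// non-revocation subproof, following the rules above.
function checkTimestamp(
  timestamp: number,       // EPOCH time from the presentation
  interval: { from: number; to: number }, // applicable interval (single instant)
  registryCreated: number, // EPOCH time the revocation registry hit the ledger
  now: number,             // verifier's current EPOCH time
): TimestampCheck {
  // A future timestamp evinces tampering: reject.
  if (timestamp > now + CLOCK_SKEW_S) return "reject";
  // A timestamp predating the registry's creation evinces tampering: reject.
  if (timestamp < registryCreated) return "reject";
  // Latest ledger state may predate the interval: log and continue.
  if (timestamp < interval.from) return "log-and-continue";
  return "ok";
}
```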

    "},{"location":"aip2/0441-present-proof-best-practices/#dates-and-predicates","title":"Dates and Predicates","text":"

    This section prescribes issuer and verifier best practices concerning representing dates for use in predicate proofs (eg proving Alice is over 21 without revealing her birth date).

    "},{"location":"aip2/0441-present-proof-best-practices/#dates-in-credentials","title":"Dates in Credentials","text":"

In order for dates to be used in a predicate proof they MUST be expressed as an Int32. While unix timestamps could work for this, they have several drawbacks: they can't represent dates outside the years 1901-2038, aren't human readable, and are overly precise, in that birth time down to the second is generally not needed for an age check. To address these issues, date attributes SHOULD be represented as integers in the form YYYYMMDD (eg 19991231). This addresses the issues with unix timestamps (or any seconds-since-epoch system) while still allowing date values to be compared with < > operators. Note that this system won't work for any general date math (eg adding or subtracting days), but it will work for predicate proofs, which just require comparisons. In order to make it clear that this format is being used, the attribute name SHOULD have the suffix _dateint. Since most datetime libraries don't include this format, here are some examples of helper functions written in typescript.
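The RFC's own typescript helper examples are not reproduced in this extract; the following is a sketch of the kind of conversion helpers described, with illustrative names.

```typescript
// Sketch: convert between Date and the YYYYMMDD integer ("dateint") form.
// UTC accessors are used so the result does not depend on local timezone.
function dateToDateInt(d: Date): number {
  return d.getUTCFullYear() * 10000 + (d.getUTCMonth() + 1) * 100 + d.getUTCDate();
}

function dateIntToDate(n: number): Date {
  const year = Math.floor(n / 10000);
  const month = Math.floor((n % 10000) / 100);
  const day = n % 100;
  return new Date(Date.UTC(year, month - 1, day));
}
```

Because the digits are ordered most-significant first, numeric `<` and `>` comparisons on dateints agree with chronological order, which is all a predicate proof needs.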

    "},{"location":"aip2/0441-present-proof-best-practices/#dates-in-presentations","title":"Dates in Presentations","text":"

When constructing a proof request, the verifier SHOULD express the minimum/maximum date as an integer in the form YYYYMMDD. For example, if today is Jan 1, 2021, then the verifier would request that birthdate_dateint be on or before Jan 1, 2000, so <= 20000101. The holder MUST construct a predicate proof with a YYYYMMDD-represented birth date less than or equal to that value to satisfy the proof request.

    "},{"location":"aip2/0441-present-proof-best-practices/#reference","title":"Reference","text":""},{"location":"aip2/0441-present-proof-best-practices/#implementations","title":"Implementations","text":"

    The following lists the implementations (if any) of this RFC. Please do a pull request to add your implementation. If the implementation is open source, include a link to the repo or to the implementation within the repo. Please be consistent in the \"Name\" field so that a mechanical processing of the RFCs can generate a list of all RFCs supported by an Aries implementation.

    Implementation Notes may need to include a link to test results.

    Name / Link Implementation Notes"},{"location":"aip2/0453-issue-credential-v2/","title":"Aries RFC 0453: Issue Credential Protocol 2.0","text":""},{"location":"aip2/0453-issue-credential-v2/#version-change-log","title":"Version Change Log","text":"

For a period of time, versions 2.1 and 2.2 were defined in this RFC. Those definitions were added prior to any implementations, and to date, there are no known implementations available or planned. An attempt at implementing version 2.1 was not merged into the main branch of Aries Cloud Agent Python, as it was deemed overly complicated and not worth the effort for what amounts to an edge case (issuing multiple credentials of the same type in a single protocol instance). Further, there is a version 3.0 of this protocol that has been specified and implemented that does not include these capabilities. Thus, a decision was made that versions 2.1 and 2.2 be removed, as they were not accepted by the community and were overly complicated to both implement and migrate from. Those interested in seeing how those capabilities were specified can look at this protocol before they were removed.

    "},{"location":"aip2/0453-issue-credential-v2/#20propose-credential-and-identifiers","title":"2.0/propose-credential and identifiers","text":"

Version 2.0 of the protocol is introduced because of a breaking change in the propose-credential message, replacing the (indy-specific) filtration criteria with a generalized filter attachment to align with the rest of the messages in the protocol. The previous version is 1.1/propose-credential. Version 2.0 also uses <angle brackets> explicitly to mark all values that may vary between instances, such as identifiers and comments.

    The \"formats\" field is added to all the messages to enable the linking the specific attachment IDs with the the format (credential format and version) of the attachment.

The details that are part of each message type about the different attachment formats serve as a registry of the known formats and versions.

    "},{"location":"aip2/0453-issue-credential-v2/#summary","title":"Summary","text":"

    Formalizes messages used to issue a credential--whether the credential is JWT-oriented, JSON-LD-oriented, or ZKP-oriented. The general flow is similar, and this protocol intends to handle all of them. If you are using a credential type that doesn't fit this protocol, please raise a Github issue.

    "},{"location":"aip2/0453-issue-credential-v2/#motivation","title":"Motivation","text":"

    We need a standard protocol for issuing credentials. This is the basis of interoperability between Issuers and Holders.

    "},{"location":"aip2/0453-issue-credential-v2/#tutorial","title":"Tutorial","text":""},{"location":"aip2/0453-issue-credential-v2/#name-and-version","title":"Name and Version","text":"

    issue-credential, version 2.0

    "},{"location":"aip2/0453-issue-credential-v2/#roles","title":"Roles","text":"

    There are two roles in this protocol: Issuer and Holder. Technically, the latter role is only potential until the protocol completes; that is, the second party becomes a Holder of a credential by completing the protocol. However, we will use the term Holder throughout, to keep things simple.

    Note: When a holder of credentials turns around and uses those credentials to prove something, they become a Prover. In the sister RFC to this one, 0454: Present Proof Protocol 2.0, the Holder is therefore renamed to Prover. Sometimes in casual conversation, the Holder role here might be called \"Prover\" as well, but more formally, \"Holder\" is the right term at this phase of the credential lifecycle.

    "},{"location":"aip2/0453-issue-credential-v2/#goals","title":"Goals","text":"

    When the goals of each role are not evident from context, goal codes may be explicitly included in protocol messages. This is particularly helpful to differentiate between credentials passed between the same parties for several different reasons. An included goal code should be considered to apply to the entire thread and need not be repeated on each message. The goal code may be changed by including the new code in a message. All goal codes are optional and have no default.

    "},{"location":"aip2/0453-issue-credential-v2/#states","title":"States","text":"

    The choreography diagram below details how state evolves in this protocol, in a \"happy path.\" The states include

    "},{"location":"aip2/0453-issue-credential-v2/#issuer-states","title":"Issuer States","text":""},{"location":"aip2/0453-issue-credential-v2/#holder-states","title":"Holder States","text":"

    Errors might occur in various places. For example, an Issuer might offer a credential for a price that the Holder is unwilling to pay. All errors are modeled with a problem-report message. Easy-to-anticipate errors reset the flow as shown in the diagrams, and use the code issuance-abandoned; more exotic errors (e.g., server crashed at Issuer headquarters in the middle of a workflow) may have different codes but still cause the flow to be abandoned in the same way. That is, in this version of the protocol, all errors cause the state of both parties (the sender and the receiver of the problem-report) to revert to null (meaning it is no longer engaged in the protocol at all). Future versions of the protocol may allow more granular choices (e.g., requesting and receiving a (re-)send of the issue-credential message if the Holder times out while waiting in the request-sent state).

    The state table outlines the protocol states and transitions.

    "},{"location":"aip2/0453-issue-credential-v2/#messages","title":"Messages","text":"

    The Issue Credential protocol consists of these messages:

    In addition, the ack and problem-report messages are adopted into the protocol for confirmation and error handling.

    "},{"location":"aip2/0453-issue-credential-v2/#message-attachments","title":"Message Attachments","text":"

    This protocol is about the messages that must be exchanged to issue verifiable credentials, NOT about the specifics of particular verifiable credential schemes. DIDComm attachments are deliberately used in messages to isolate the protocol flow/semantics from the credential artifacts themselves as separate constructs. Attachments allow credential formats and this protocol to evolve through versioning milestones independently instead of in lockstep. Links are provided in the message descriptions below, to describe how the protocol adapts to specific verifiable credential implementations.

    The attachment items in the messages are arrays. The arrays are provided to support the issuing of different credential formats (e.g. ZKP, JSON-LD JWT, or other) containing the same data (claims). The arrays are not to be used for issuing credentials with different claims. The formats field of each message associates each attachment with the format (and version) of the attachment.
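A minimal sketch of how the formats field links each attachment to its format, assuming two hypothetical filter payloads (the attach_id values and format strings follow the registry entries named in this RFC):

```python
import base64
import json
import uuid

def make_filter_attachment(filter_data: dict, fmt: str) -> tuple:
    """Build one filters~attach entry plus its matching formats entry.

    The same claims may be attached more than once, once per credential
    format; each attachment is linked to its format via attach_id.
    """
    attach_id = str(uuid.uuid4())
    attachment = {
        "@id": attach_id,
        "mime-type": "application/json",
        "data": {
            "base64": base64.b64encode(json.dumps(filter_data).encode()).decode()
        },
    }
    fmt_entry = {"attach_id": attach_id, "format": fmt}
    return attachment, fmt_entry

# Two attachments carrying the same claims expressed in different
# credential formats (payload contents here are illustrative only).
a1, f1 = make_filter_attachment({"cred_def_id": "<cred-def-id>"},
                                "hlindy/cred-filter@v2.0")
a2, f2 = make_filter_attachment({"credential": {"type": ["VerifiableCredential"]}},
                                "aries/ld-proof-vc-detail@v1.0")

message = {
    "@type": "https://didcomm.org/issue-credential/2.0/propose-credential",
    "@id": str(uuid.uuid4()),
    "formats": [f1, f2],
    "filters~attach": [a1, a2],
}
```

The key invariant is that every formats entry's attach_id matches the @id of exactly one attachment in the array.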

    A registry of attachment formats is provided in this RFC within the message type sections. A sub-section should be added for each attachment format type (and optionally, each version). Updates to the attachment type formats does NOT impact the versioning of the Issue Credential protocol. Formats are flexibly defined. For example, the first definitions are for hlindy/cred-abstract@v2.0 et al., assuming that all Hyperledger Indy implementations and ledgers will use a common format. However, if a specific instance of Indy uses a different format, another format value can be documented as a new registry entry.

    Any of the embedded inline attachment forms from the 0017-attachments RFC can be used. In the examples below, base64 is used in most cases, but implementations MUST expect any of the forms.

    "},{"location":"aip2/0453-issue-credential-v2/#choreography-diagram","title":"Choreography Diagram","text":"

    Note: This diagram was made in draw.io. To make changes:

    The protocol has 3 alternative beginnings:

    1. The Issuer can begin with an offer.
    2. The Holder can begin with a proposal.
    3. The Holder can begin with a request.

    The offer and proposal messages are part of an optional negotiation phase and may trigger back-and-forth counters. A request is not subject to negotiation; it can only be accepted or rejected.

    "},{"location":"aip2/0453-issue-credential-v2/#propose-credential","title":"Propose Credential","text":"

    An optional message sent by the potential Holder to the Issuer to initiate the protocol or in response to an offer-credential message when the Holder wants some adjustments made to the credential data offered by Issuer.

    Note: In Hyperledger Indy, where the `request-credential` message can **only** be sent in response to an `offer-credential` message, the `propose-credential` message is the only way for a potential Holder to initiate the workflow.

    Message format:

    {\n    \"@type\": \"https://didcomm.org/issue-credential/%VER/propose-credential\",\n    \"@id\": \"<uuid of propose-message>\",\n    \"goal_code\": \"<goal-code>\",\n    \"comment\": \"<some comment>\",\n    \"credential_preview\": <json-ld object>,\n    \"formats\" : [\n        {\n            \"attach_id\" : \"<attach@id value>\",\n            \"format\" : \"<format-and-version>\"\n        }\n    ],\n    \"filters~attach\": [\n        {\n            \"@id\": \"<attachment identifier>\",\n            \"mime-type\": \"application/json\",\n            \"data\": {\n                \"base64\": \"<bytes for base64>\"\n            }\n        }\n    ]\n}\n

    Description of attributes:

    "},{"location":"aip2/0453-issue-credential-v2/#propose-attachment-registry","title":"Propose Attachment Registry","text":"Credential Format Format Value Link to Attachment Format Comment DIF Credential Manifest dif/credential-manifest@v1.0 propose-credential attachment format Linked Data Proof VC Detail aries/ld-proof-vc-detail@v1.0 ld-proof-vc-detail attachment format Hyperledger Indy Credential Filter hlindy/cred-filter@v2.0 cred filter format Hyperledger AnonCreds Credential Filter anoncreds/credential-filter@v1.0 Credential Filter format"},{"location":"aip2/0453-issue-credential-v2/#offer-credential","title":"Offer Credential","text":"

    A message sent by the Issuer to the potential Holder, describing the credential they intend to offer and possibly the price they expect to be paid. In Hyperledger Indy, this message is required, because it forces the Issuer to make a cryptographic commitment to the set of fields in the final credential and thus prevents Issuers from inserting spurious data. In credential implementations where this message is optional, an Issuer can use the message to negotiate the issuing following receipt of a request-credential message.

    Message Format:

    {\n    \"@type\": \"https://didcomm.org/issue-credential/%VER/offer-credential\",\n    \"@id\": \"<uuid of offer message>\",\n    \"goal_code\": \"<goal-code>\",\n    \"replacement_id\": \"<issuer unique id>\",\n    \"comment\": \"<some comment>\",\n    \"credential_preview\": <json-ld object>,\n    \"formats\" : [\n        {\n            \"attach_id\" : \"<attach@id value>\",\n            \"format\" : \"<format-and-version>\",\n        }\n    ],\n    \"offers~attach\": [\n        {\n            \"@id\": \"<attach@id value>\",\n            \"mime-type\": \"application/json\",\n            \"data\": {\n                \"base64\": \"<bytes for base64>\"\n            }\n        }\n    ]\n}\n

    Description of fields:

    It is possible for an Issuer to add a ~timing.expires_time decorator to this message to convey the idea that the offer will expire at a particular point in the future. Such behavior is not a special part of this protocol, and support for it is not a requirement of conforming implementations; the ~timing decorator is simply a general possibility for any DIDComm message. We mention it here just to note that the protocol can be enriched in composable ways.
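A sketch of an offer decorated this way, assuming an Issuer that wants the offer to lapse after 24 hours (the surrounding field values are placeholders; the timestamp format follows the ISO 8601 convention used by DIDComm decorators):

```python
import uuid
from datetime import datetime, timedelta, timezone

# Offer lapses 24 hours from now; ~timing is a general-purpose DIDComm
# decorator, not a special part of the Issue Credential protocol.
expires = (datetime.now(timezone.utc) + timedelta(hours=24)).strftime(
    "%Y-%m-%dT%H:%M:%SZ"
)

offer = {
    "@type": "https://didcomm.org/issue-credential/2.0/offer-credential",
    "@id": str(uuid.uuid4()),
    "~timing": {"expires_time": expires},
    "formats": [],        # populated as shown in the schema above
    "offers~attach": [],  # populated as shown in the schema above
}
```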

    "},{"location":"aip2/0453-issue-credential-v2/#offer-attachment-registry","title":"Offer Attachment Registry","text":"Credential Format Format Value Link to Attachment Format Comment DIF Credential Manifest dif/credential-manifest@v1.0 offer-credential attachment format Hyperledger Indy Credential Abstract hlindy/cred-abstract@v2.0 cred abstract format Linked Data Proof VC Detail aries/ld-proof-vc-detail@v1.0 ld-proof-vc-detail attachment format Hyperledger AnonCreds Credential Offer anoncreds/credential-offer@v1.0 Credential Offer format W3C VC - Data Integrity Proof Credential Offer didcomm/w3c-di-vc-offer@v0.1 Credential Offer format"},{"location":"aip2/0453-issue-credential-v2/#request-credential","title":"Request Credential","text":"

    This is a message sent by the potential Holder to the Issuer, to request the issuance of a credential. Where circumstances do not require a preceding Offer Credential message (e.g., there is no cost to issuance that the Issuer needs to explain in advance, and there is no need for cryptographic negotiation), this message initiates the protocol. When using the Hyperledger Indy AnonCreds verifiable credential format, this message can only be sent in response to an offer-credential message.

    Message Format:

    {\n    \"@type\": \"https://didcomm.org/issue-credential/%VER/request-credential\",\n    \"@id\": \"<uuid of request message>\",\n    \"goal_code\": \"<goal-code>\",\n    \"comment\": \"<some comment>\",\n    \"formats\" : [\n        {\n            \"attach_id\" : \"<attach@id value>\",\n            \"format\" : \"<format-and-version>\",\n        }\n    ],\n    \"requests~attach\": [\n        {\n            \"@id\": \"<attachment identifier>\",\n            \"mime-type\": \"application/json\",\n            \"data\": {\n                \"base64\": \"<bytes for base64>\"\n            }\n        },\n    ]\n}\n

    Description of Fields:

    "},{"location":"aip2/0453-issue-credential-v2/#request-attachment-registry","title":"Request Attachment Registry","text":"Credential Format Format Value Link to Attachment Format Comment DIF Credential Manifest dif/credential-manifest@v1.0 request-credential attachment format Hyperledger Indy Credential Request hlindy/cred-req@v2.0 cred request format Linked Data Proof VC Detail aries/ld-proof-vc-detail@v1.0 ld-proof-vc-detail attachment format Hyperledger AnonCreds Credential Request anoncreds/credential-request@v1.0 Credential Request format W3C VC - Data Integrity Proof Credential Request didcomm/w3c-di-vc-request@v0.1 Credential Request format"},{"location":"aip2/0453-issue-credential-v2/#issue-credential","title":"Issue Credential","text":"

    This message contains a verifiable credential being issued as an attached payload. It is sent in response to a valid Request Credential message.

    Message Format:

    {\n    \"@type\": \"https://didcomm.org/issue-credential/%VER/issue-credential\",\n    \"@id\": \"<uuid of issue message>\",\n    \"goal_code\": \"<goal-code>\",\n    \"replacement_id\": \"<issuer unique id>\",\n    \"comment\": \"<some comment>\",\n    \"formats\" : [\n        {\n            \"attach_id\" : \"<attach@id value>\",\n            \"format\" : \"<format-and-version>\",\n        }\n    ],\n    \"credentials~attach\": [\n        {\n            \"@id\": \"<attachment-id>\",\n            \"mime-type\": \"application/json\",\n            \"data\": {\n                \"base64\": \"<bytes for base64>\"\n            }\n        }\n    ]\n}\n

    Description of fields:

    If the Issuer wants an acknowledgement that the issued credential was accepted, this message must be decorated with the ~please-ack decorator using the OUTCOME acknowledgement request. Outcome in the context of this protocol means the acceptance of the credential in whole, i.e. the credential is verified and the contents of the credential are acknowledged. Note that this is different from the default behavior as described in 0317: Please ACK Decorator. It is then best practice for the new Holder to respond with an explicit ack message as described in the please ack decorator RFC.

    "},{"location":"aip2/0453-issue-credential-v2/#credentials-attachment-registry","title":"Credentials Attachment Registry","text":"Credential Format Format Value Link to Attachment Format Comment Linked Data Proof VC aries/ld-proof-vc@v1.0 ld-proof-vc attachment format Hyperledger Indy Credential hlindy/cred@v2.0 credential format Hyperledger AnonCreds Credential anoncreds/credential@v1.0 Credential format W3C VC - Data Integrity Proof Credential didcomm/w3c-di-vc@v0.1 Credential format"},{"location":"aip2/0453-issue-credential-v2/#adopted-problem-report","title":"Adopted Problem Report","text":"

    The problem-report message is adopted by this protocol. problem-report messages can be used by either party to indicate an error in the protocol.

    "},{"location":"aip2/0453-issue-credential-v2/#preview-credential","title":"Preview Credential","text":"

    This is not a message but an inner object for other messages in this protocol. It is used to construct a preview of the data for the credential that is to be issued. Its schema follows:

    {\n    \"@type\": \"https://didcomm.org/issue-credential/%VER/credential-preview\",\n    \"attributes\": [\n        {\n            \"name\": \"<attribute name>\",\n            \"mime-type\": \"<type>\",\n            \"value\": \"<value>\"\n        },\n        // more attributes\n    ]\n}\n

    The main element is attributes. It is an array of (object) attribute specifications; the subsections below outline their semantics.

    "},{"location":"aip2/0453-issue-credential-v2/#attribute-name","title":"Attribute Name","text":"

    The mandatory \"name\" key maps to the attribute name as a string.

    "},{"location":"aip2/0453-issue-credential-v2/#mime-type-and-value","title":"MIME Type and Value","text":"

    The optional mime-type advises the issuer how to render a binary attribute, to judge its content for applicability before issuing a credential containing it. Its value parses case-insensitively in keeping with MIME type semantics of RFC 2045. If mime-type is missing, its value is null.

    The mandatory value holds the attribute value:
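A small helper sketch for building preview attributes under these rules: name and value are mandatory, mime-type is optional and compared case-insensitively per RFC 2045 (the attribute names and values here are made up for illustration):

```python
from typing import Optional

def preview_attribute(name: str, value: str,
                      mime_type: Optional[str] = None) -> dict:
    """One entry for the credential-preview "attributes" array."""
    attr = {"name": name, "value": value}
    if mime_type is not None:
        # Normalize so comparisons are case-insensitive per RFC 2045.
        attr["mime-type"] = mime_type.lower()
    return attr

preview = {
    "@type": "https://didcomm.org/issue-credential/2.0/credential-preview",
    "attributes": [
        preview_attribute("legal_name", "Alice Example"),
        preview_attribute("photo", "<base64 bytes>", "Image/PNG"),
    ],
}
```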

    "},{"location":"aip2/0453-issue-credential-v2/#threading","title":"Threading","text":"

    Threading can be used to initiate a sub-protocol during an issue credential protocol instance. For example, during credential issuance, the Issuer may initiate a child message thread to execute the Present Proof sub-protocol to have the potential Holder (now acting as a Prover) prove attributes about themselves before issuing the credential. Depending on circumstances, this might be a best practice for preventing credential fraud at issuance time.

    If threading were added to all of the above messages, a ~thread decorator would be present, and later messages in the flow would reference the @id of earlier messages to stitch the flow into a single coherent sequence. Details about threading can be found in the 0008: Message ID and Threading RFC.
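A sketch of that stitching, assuming (per RFC 0008) that a later message carries a ~thread decorator whose thid references the @id of an earlier message in the flow:

```python
import uuid

# @id of the earlier request-credential message in this flow
request_id = str(uuid.uuid4())

issue = {
    "@type": "https://didcomm.org/issue-credential/2.0/issue-credential",
    "@id": str(uuid.uuid4()),
    # ~thread.thid ties this message into the same protocol instance
    "~thread": {"thid": request_id},
    "formats": [],            # populated as shown in the schema above
    "credentials~attach": [], # populated as shown in the schema above
}
```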

    "},{"location":"aip2/0453-issue-credential-v2/#limitations","title":"Limitations","text":"

    The ecosystem may lack smart contracts, so the operation \"issue credential after payment received\" is not atomic. A malicious Issuer could charge first and then fail to issue the credential. However, such behavior should be easy to detect, and an appropriate penalty should be applied in networks of this type.

    "},{"location":"aip2/0453-issue-credential-v2/#negotiation-and-preview","title":"Negotiation and Preview","text":"

    Negotiation prior to issuing the credential can be done using the offer-credential and propose-credential messages. A common negotiation use case would be about the data to go into the credential. For that, the credential_preview element is used.

    "},{"location":"aip2/0453-issue-credential-v2/#drawbacks","title":"Drawbacks","text":"

    None documented

    "},{"location":"aip2/0453-issue-credential-v2/#rationale-and-alternatives","title":"Rationale and alternatives","text":""},{"location":"aip2/0453-issue-credential-v2/#prior-art","title":"Prior art","text":"

    See RFC 0036 Issue Credential, v1.x.

    "},{"location":"aip2/0453-issue-credential-v2/#unresolved-questions","title":"Unresolved questions","text":""},{"location":"aip2/0453-issue-credential-v2/#implementations","title":"Implementations","text":"

    The following lists the implementations (if any) of this RFC. Please do a pull request to add your implementation. If the implementation is open source, include a link to the repo or to the implementation within the repo. Please be consistent in the \"Name\" field so that a mechanical processing of the RFCs can generate a list of all RFCs supported by an Aries implementation.

    Name / Link Implementation Notes"},{"location":"aip2/0454-present-proof-v2/","title":"Aries RFC 0454: Present Proof Protocol 2.0","text":""},{"location":"aip2/0454-present-proof-v2/#version-change-log","title":"Version Change Log","text":"

    For a period of time, versions 2.1 and 2.2 were defined in this RFC. Those definitions were added prior to any implementations, and to date, there are no known implementations available or planned. An attempt at implementing version 2.1 of the associated \"issue multiple credentials\" capability was not merged into the main branch of Aries Cloud Agent Python, as it was deemed overly complicated and not worth the effort for what amounts to an edge case (presenting multiple presentations of the same type in a single protocol instance). Further, there is a version 3.0 of this protocol that has been specified and implemented that does not include these capabilities. Thus, a decision was made to remove versions 2.1 and 2.2, as they were not accepted by the community and were overly complicated to both implement and migrate from. Those interested in seeing how those capabilities were specified can look at this protocol before they were removed.

    "},{"location":"aip2/0454-present-proof-v2/#20-alignment-with-rfc-0453-issue-credential","title":"2.0 - Alignment with RFC 0453 Issue Credential","text":""},{"location":"aip2/0454-present-proof-v2/#summary","title":"Summary","text":"

    A protocol supporting a general purpose verifiable presentation exchange regardless of the specifics of the underlying verifiable presentation request and verifiable presentation format.

    "},{"location":"aip2/0454-present-proof-v2/#motivation","title":"Motivation","text":"

    We need a standard protocol for a verifier to request a presentation from a prover, and for the prover to respond by presenting a proof to the verifier. When doing that exchange, we want to provide a mechanism for the participants to negotiate the underlying format and content of the proof.

    "},{"location":"aip2/0454-present-proof-v2/#tutorial","title":"Tutorial","text":""},{"location":"aip2/0454-present-proof-v2/#name-and-version","title":"Name and Version","text":"

    present-proof, version 2.0

    "},{"location":"aip2/0454-present-proof-v2/#key-concepts","title":"Key Concepts","text":"

    This protocol is about the messages to support the presentation of verifiable claims, not about the specifics of particular verifiable presentation formats. DIDComm attachments are deliberately used in messages to make the protocol agnostic to specific verifiable presentation format payloads. Links are provided in the message data element descriptions to details of specific verifiable presentation implementation data structures.

    Diagrams in this protocol were made in draw.io. To make changes:

    "},{"location":"aip2/0454-present-proof-v2/#roles","title":"Roles","text":"

    The roles are verifier and prover. The verifier requests the presentation of a proof and verifies the presentation, while the prover prepares the proof and presents it to the verifier. Optionally, although unlikely from a business sense, the prover may initiate an instance of the protocol using the propose-presentation message.

    "},{"location":"aip2/0454-present-proof-v2/#goals","title":"Goals","text":"

    When the goals of each role are not evident from context, goal codes may be explicitly included in protocol messages. This is particularly helpful to differentiate between credentials passed between the same parties for several different reasons. An included goal code should be considered to apply to the entire thread and need not be repeated on each message. The goal code may be changed by including the new code in a message. All goal codes are optional and have no default.

    "},{"location":"aip2/0454-present-proof-v2/#states","title":"States","text":"

    The following states are defined and included in the state transition table below.

    "},{"location":"aip2/0454-present-proof-v2/#states-for-verifier","title":"States for Verifier","text":""},{"location":"aip2/0454-present-proof-v2/#states-for-prover","title":"States for Prover","text":"

    For the most part, these states map onto the transitions shown in both the state transition table above and the choreography diagram (below) in obvious ways. However, a few subtleties are worth highlighting:

    "},{"location":"aip2/0454-present-proof-v2/#choreography-diagram","title":"Choreography Diagram","text":""},{"location":"aip2/0454-present-proof-v2/#messages","title":"Messages","text":"

    The present proof protocol consists of these messages:

    In addition, the ack and problem-report messages are adopted into the protocol for confirmation and error handling.

    The messages that include ~attach attachments may use any form of the embedded attachment. In the examples below, the forms of the attachment are arbitrary.

    The ~attach array is to be used to enable a single presentation to be requested/delivered in different verifiable presentation formats. The ability to have multiple attachments must not be used to request/deliver multiple different presentations in a single instance of the protocol.

    "},{"location":"aip2/0454-present-proof-v2/#propose-presentation","title":"Propose Presentation","text":"

    An optional message sent by the prover to the verifier to initiate a proof presentation process, or in response to a request-presentation message when the prover wants to propose using a different presentation format or request. Schema:

    {\n    \"@type\": \"https://didcomm.org/present-proof/%VER/propose-presentation\",\n    \"@id\": \"<uuid-propose-presentation>\",\n    \"goal_code\": \"<goal-code>\",\n    \"comment\": \"some comment\",\n    \"formats\" : [\n        {\n            \"attach_id\" : \"<attach@id value>\",\n            \"format\" : \"<format-and-version>\",\n        }\n    ],\n    \"proposals~attach\": [\n        {\n            \"@id\": \"<attachment identifier>\",\n            \"mime-type\": \"application/json\",\n            \"data\": {\n                \"json\": \"<json>\"\n            }\n        }\n    ]\n}\n

    Description of fields:

    If the proposals~attach is not provided, the attach_id item in the formats array should not be provided. That form of the propose-presentation message is to indicate the presentation formats supported by the prover, independent of the verifiable presentation request content.
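A sketch of that format-support-only form: with no proposals~attach array, the formats entries omit attach_id and simply advertise the formats the prover supports (format values here are taken from the registry tables in this RFC):

```python
# Format-support-only proposal: no proposals~attach, so each formats
# entry carries only "format", not "attach_id".
proposal = {
    "@type": "https://didcomm.org/present-proof/2.0/propose-presentation",
    "@id": "<uuid-propose-presentation>",
    "formats": [
        {"format": "hlindy/proof-req@v2.0"},
        {"format": "dif/presentation-exchange/definitions@v1.0"},
    ],
}
```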

    "},{"location":"aip2/0454-present-proof-v2/#negotiation-and-preview","title":"Negotiation and Preview","text":"

    Negotiation prior to the delivery of the presentation can be done using the propose-presentation and request-presentation messages. The common negotiation use cases would be about the claims to go into the presentation and the format of the verifiable presentation.

    "},{"location":"aip2/0454-present-proof-v2/#propose-attachment-registry","title":"Propose Attachment Registry","text":"Presentation Format Format Value Link to Attachment Format Comment Hyperledger Indy Proof Req hlindy/proof-req@v2.0 proof request format Used to propose as well as request proofs. DIF Presentation Exchange dif/presentation-exchange/definitions@v1.0 propose-presentation attachment format Hyperledger AnonCreds Proof Request anoncreds/proof-request@v1.0 Proof Request format Used to propose as well as request proofs."},{"location":"aip2/0454-present-proof-v2/#request-presentation","title":"Request Presentation","text":"

    From a verifier to a prover, the request-presentation message describes values that need to be revealed and predicates that need to be fulfilled. Schema:

    {\n    \"@type\": \"https://didcomm.org/present-proof/%VER/request-presentation\",\n    \"@id\": \"<uuid-request>\",\n    \"goal_code\": \"<goal-code>\",\n    \"comment\": \"some comment\",\n    \"will_confirm\": true,\n    \"formats\" : [\n        {\n            \"attach_id\" : \"<attach@id value>\",\n            \"format\" : \"<format-and-version>\",\n        }\n    ],\n    \"request_presentations~attach\": [\n        {\n            \"@id\": \"<attachment identifier>\",\n            \"mime-type\": \"application/json\",\n            \"data\":  {\n                \"base64\": \"<base64 data>\"\n            }\n        }\n    ]\n}\n

    Description of fields:

    "},{"location":"aip2/0454-present-proof-v2/#presentation-request-attachment-registry","title":"Presentation Request Attachment Registry","text":"Presentation Format Format Value Link to Attachment Format Comment Hyperledger Indy Proof Req hlindy/proof-req@v2.0 proof request format Used to propose as well as request proofs. DIF Presentation Exchange dif/presentation-exchange/definitions@v1.0 propose-presentation attachment format Hyperledger AnonCreds Proof Request anoncreds/proof-request@v1.0 Proof Request format Used to propose as well as request proofs."},{"location":"aip2/0454-present-proof-v2/#presentation","title":"Presentation","text":"

    This message is a response to a Presentation Request message and contains signed presentations. Schema:

    {\n    \"@type\": \"https://didcomm.org/present-proof/%VER/presentation\",\n    \"@id\": \"<uuid-presentation>\",\n    \"goal_code\": \"<goal-code>\",\n    \"comment\": \"some comment\",\n    \"formats\" : [\n        {\n            \"attach_id\" : \"<attach@id value>\",\n            \"format\" : \"<format-and-version>\",\n        }\n    ],\n    \"presentations~attach\": [\n        {\n            \"@id\": \"<attachment identifier>\",\n            \"mime-type\": \"application/json\",\n            \"data\": {\n                \"sha256\": \"f8dca1d901d18c802e6a8ce1956d4b0d17f03d9dc5e4e1f618b6a022153ef373\",\n                \"links\": [\"https://ibb.co/TtgKkZY\"]\n            }\n        }\n    ]\n}\n

    Description of fields:

    If the prover wants an acknowledgement that the presentation was accepted, this message may be decorated with the ~please-ack decorator using the OUTCOME acknowledgement request. This is not necessary if the verifier has indicated it will send an ack-presentation using the will_confirm property. Outcome in the context of this protocol is the definition of \"successful\" as described in Ack Presentation. Note that this is different from the default behavior as described in 0317: Please ACK Decorator. It is then best practice for the new Verifier to respond with an explicit ack message as described in the please ack decorator RFC.
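A Prover's decision logic can be sketched as follows (a hypothetical helper, not part of the protocol itself):

```python
def needs_please_ack(request_presentation: dict) -> bool:
    """A Prover need not request an OUTCOME ack when the Verifier has
    already promised an ack-presentation via will_confirm."""
    return not request_presentation.get("will_confirm", False)
```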

    "},{"location":"aip2/0454-present-proof-v2/#presentations-attachment-registry","title":"Presentations Attachment Registry","text":"Presentation Format Format Value Link to Attachment Format Comment Hyperledger Indy Proof hlindy/proof@v2.0 proof format DIF Presentation Exchange dif/presentation-exchange/submission@v1.0 propose-presentation attachment format Hyperledger AnonCreds Proof anoncreds/proof@v1.0 Proof format"},{"location":"aip2/0454-present-proof-v2/#ack-presentation","title":"Ack Presentation","text":"

    A message from the verifier to the prover that the Present Proof protocol was completed successfully and is now in the done state. The message is an adopted ack from the RFC 0015 acks protocol. The definition of \"successful\" in this protocol means the acceptance of the presentation in whole, i.e. the proof is verified and the contents of the proof are acknowledged.

    "},{"location":"aip2/0454-present-proof-v2/#problem-report","title":"Problem Report","text":"

    A message from the verifier to the prover that follows the presentation message to indicate that the Present Proof protocol was completed unsuccessfully and is now in the abandoned state. The message is an adopted problem-report from the RFC 0015 report-problem protocol. The definition of \"unsuccessful\" from a business sense is up to the verifier. The elements of the problem-report message can provide information to the prover about why the protocol instance was unsuccessful.

    Either party may send a problem-report message earlier in the flow to terminate the protocol before its normal conclusion.

    "},{"location":"aip2/0454-present-proof-v2/#reference","title":"Reference","text":"

    Details are covered in the Tutorial section.

    "},{"location":"aip2/0454-present-proof-v2/#drawbacks","title":"Drawbacks","text":"

    The Indy format of the proposal attachment as proposed above does not allow nesting of logic along the lines of \"A and either B or C if D, otherwise A and B\", nor cross-credential options such as proposing a legal name issued by either (for example) a specific financial institution or government entity.

    The verifiable presentation standardization work being conducted in parallel to this in DIF and the W3C Credentials Community Group (CCG) should be included in at least the Registry tables of this document, and ideally used to eliminate the need for presentation format-specific options.

    "},{"location":"aip2/0454-present-proof-v2/#rationale-and-alternatives","title":"Rationale and alternatives","text":""},{"location":"aip2/0454-present-proof-v2/#prior-art","title":"Prior art","text":"

    The previous major version of this protocol is RFC 0037 Present Proof protocol and implementations.

    "},{"location":"aip2/0454-present-proof-v2/#unresolved-questions","title":"Unresolved questions","text":""},{"location":"aip2/0454-present-proof-v2/#implementations","title":"Implementations","text":"

    The following lists the implementations (if any) of this RFC. Please do a pull request to add your implementation. If the implementation is open source, include a link to the repo or to the implementation within the repo. Please be consistent in the \"Name\" field so that a mechanical processing of the RFCs can generate a list of all RFCs supported by an Aries implementation.

    Name / Link Implementation Notes"},{"location":"aip2/0510-dif-pres-exch-attach/","title":"Aries RFC 0510: Presentation-Exchange Attachment format for requesting and presenting proofs","text":""},{"location":"aip2/0510-dif-pres-exch-attach/#summary","title":"Summary","text":"

    This RFC registers three attachment formats for use in the present-proof V2 protocol based on the Decentralized Identity Foundation's (DIF) Presentation Exchange specification (P-E). Two of these formats define containers for a presentation-exchange request object and another options object carrying additional parameters, while the third format is just a vessel for the final presentation_submission verifiable presentation transferred from the Prover to the Verifier.

    Presentation Exchange defines a data format capable of articulating a rich set of proof requirements from Verifiers, and also provides a means of describing the formats in which Provers must submit those proofs.

    A Verifier defines their requirements in a presentation_definition containing input_descriptors that describe the credential(s) the proof(s) must be derived from, as well as a rich set of operators that place constraints on those proofs (e.g. \"must be issued from issuer X\" or \"age over X\", etc.).

    The Verifiable Presentation format of Presentation Submissions is used as opposed to OIDC tokens or CHAPI objects. For an alternative on how to tunnel OIDC messages over DIDComm, see HTTP-Over-DIDComm. CHAPI is an alternative transport to DIDComm.

    "},{"location":"aip2/0510-dif-pres-exch-attach/#motivation","title":"Motivation","text":"

    The Presentation Exchange specification (P-E) possesses a rich language for expressing a Verifier's criteria.

    P-E lends itself well to several transport mediums due to its limited scope as a data format, and is easily transported over DIDComm.

    It is furthermore desirable to make use of specifications developed in an open standards body.

    "},{"location":"aip2/0510-dif-pres-exch-attach/#tutorial","title":"Tutorial","text":"

    Complete examples of messages are provided in the reference section.

    The Verifier sends a request-presentation to the Prover containing a presentation_definition, along with a domain and challenge the Prover must sign over in the proof.

    The Prover can optionally respond to the Verifier's request-presentation with a propose-presentation message containing \"Input Descriptors\" that describe the proofs they can provide. The contents of the attachment is just the input_descriptors attribute of the presentation_definition object.

    The Prover responds with a presentation message containing a presentation_submission.
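    As an illustrative sketch of the proposal step above (the helper name is hypothetical, not part of the protocol), a Prover holding a Verifier's presentation_definition can derive the proposal attachment body by keeping only its input_descriptors:

```python
# Hypothetical helper: build the propose-presentation attachment body
# from a presentation_definition by extracting its input_descriptors.
def make_proposal_attachment(presentation_definition: dict) -> dict:
    return {"input_descriptors": presentation_definition["input_descriptors"]}

definition = {
    "input_descriptors": [{"id": "citizenship_input", "name": "US Passport"}],
    "format": {"ldp_vp": {"proof_type": ["Ed25519Signature2018"]}},
}
proposal = make_proposal_attachment(definition)
```

    Only the input descriptors travel in the proposal; the Verifier's format constraints are not echoed back.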

    "},{"location":"aip2/0510-dif-pres-exch-attach/#reference","title":"Reference","text":""},{"location":"aip2/0510-dif-pres-exch-attach/#propose-presentation-attachment-format","title":"propose-presentation attachment format","text":"

    Format identifier: dif/presentation-exchange/definitions@v1.0

    "},{"location":"aip2/0510-dif-pres-exch-attach/#examples-propose-presentation","title":"Examples: propose-presentation","text":"Complete message example
    {\n    \"@type\": \"https://didcomm.org/present-proof/%VER/propose-presentation\",\n    \"@id\": \"fce30ed1-96f8-44c9-95cf-b274288009dc\",\n    \"comment\": \"some comment\",\n    \"formats\" : [{\n        \"attach_id\" : \"143c458d-1b1c-40c7-ab85-4d16808ddf0a\",\n        \"format\" : \"dif/presentation-exchange/definitions@v1.0\"\n    }],\n    \"proposal~attach\": [{\n        \"@id\": \"143c458d-1b1c-40c7-ab85-4d16808ddf0a\",\n        \"mime-type\": \"application/json\",\n        \"data\": {\n            \"json\": {\n                \"input_descriptors\": [{\n                    \"id\": \"citizenship_input\",\n                    \"name\": \"US Passport\",\n                    \"group\": [\"A\"],\n                    \"schema\": [{\n                        \"uri\": \"hub://did:foo:123/Collections/schema.us.gov/passport.json\"\n                    }],\n                    \"constraints\": {\n                        \"fields\": [{\n                            \"path\": [\"$.credentialSubject.birth_date\", \"$.vc.credentialSubject.birth_date\", \"$.birth_date\"],\n                            \"filter\": {\n                                \"type\": \"date\",\n                                \"minimum\": \"1999-5-16\"\n                            }\n                        }]\n                    }\n                }]\n            }\n        }\n    }]\n}\n
    "},{"location":"aip2/0510-dif-pres-exch-attach/#request-presentation-attachment-format","title":"request-presentation attachment format","text":"

    Format identifier: dif/presentation-exchange/definitions@v1.0

    Since the format identifier defined above is the same as the one used in the propose-presentation message, it's recommended to consider both the message @type and the format to accurately understand the contents of the attachment.
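    A minimal sketch of that recommendation (the function name and return labels are illustrative, not protocol-defined): dispatch on the pair of message @type and attachment format rather than on the format alone.

```python
# Sketch only: labels are illustrative, not protocol-defined.
FORMAT_ID = "dif/presentation-exchange/definitions@v1.0"

def classify_attachment(msg_type: str, attachment_format: str) -> str:
    if attachment_format != FORMAT_ID:
        return "unknown"
    if msg_type.endswith("/propose-presentation"):
        # Proposal attachments carry only input_descriptors.
        return "input_descriptors"
    if msg_type.endswith("/request-presentation"):
        # Request attachments carry a presentation_definition plus options.
        return "definition_with_options"
    return "unknown"
```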

    The contents of the attachment is a JSON object containing the Verifier's presentation definition and an options object with proof options:

    {\n    \"options\": {\n        \"challenge\": \"...\",\n        \"domain\": \"...\",\n    },\n    \"presentation_definition\": {\n        // presentation definition object\n    }\n}\n
    "},{"location":"aip2/0510-dif-pres-exch-attach/#the-options-object","title":"The options object","text":"

    options is a container of additional parameters required for the Prover to fulfill the Verifier's request.

    Available options are:

    Name Status Description challenge RECOMMENDED (for LD proofs) Random seed provided by the Verifier for LD Proofs. domain RECOMMENDED (for LD proofs) The operational domain of the requested LD proof."},{"location":"aip2/0510-dif-pres-exch-attach/#examples-request-presentation","title":"Examples: request-presentation","text":"Complete message example requesting a verifiable presentation with proof type Ed25519Signature2018
    {\n    \"@type\": \"https://didcomm.org/present-proof/%VER/request-presentation\",\n    \"@id\": \"0ac534c8-98ed-4fe3-8a41-3600775e1e92\",\n    \"comment\": \"some comment\",\n    \"formats\" : [{\n        \"attach_id\" : \"ed7d9b1f-9eed-4bde-b81c-3aa7485cf947\",\n        \"format\" : \"dif/presentation-exchange/definitions@v1.0\"\n    }],\n    \"request_presentations~attach\": [{\n        \"@id\": \"ed7d9b1f-9eed-4bde-b81c-3aa7485cf947\",\n        \"mime-type\": \"application/json\",\n        \"data\":  {\n            \"json\": {\n                \"options\": {\n                    \"challenge\": \"23516943-1d79-4ebd-8981-623f036365ef\",\n                    \"domain\": \"us.gov/DriversLicense\"\n                },\n                \"presentation_definition\": {\n                    \"input_descriptors\": [{\n                        \"id\": \"citizenship_input\",\n                        \"name\": \"US Passport\",\n                        \"group\": [\"A\"],\n                        \"schema\": [{\n                            \"uri\": \"hub://did:foo:123/Collections/schema.us.gov/passport.json\"\n                        }],\n                        \"constraints\": {\n                            \"fields\": [{\n                                \"path\": [\"$.credentialSubject.birth_date\", \"$.birth_date\"],\n                                \"filter\": {\n                                    \"type\": \"date\",\n                                    \"minimum\": \"1999-5-16\"\n                                }\n                            }]\n                        }\n                    }],\n                    \"format\": {\n                        \"ldp_vp\": {\n                            \"proof_type\": [\"Ed25519Signature2018\"]\n                        }\n                    }\n                }\n            }\n        }\n    }]\n}\n
    The same example but requesting the verifiable presentation with proof type BbsBlsSignatureProof2020 instead
    {\n    \"@type\": \"https://didcomm.org/present-proof/%VER/request-presentation\",\n    \"@id\": \"0ac534c8-98ed-4fe3-8a41-3600775e1e92\",\n    \"comment\": \"some comment\",\n    \"formats\" : [{\n        \"attach_id\" : \"ed7d9b1f-9eed-4bde-b81c-3aa7485cf947\",\n        \"format\" : \"dif/presentation-exchange/definitions@v1.0\"\n    }],\n    \"request_presentations~attach\": [{\n        \"@id\": \"ed7d9b1f-9eed-4bde-b81c-3aa7485cf947\",\n        \"mime-type\": \"application/json\",\n        \"data\":  {\n            \"json\": {\n                \"options\": {\n                    \"challenge\": \"23516943-1d79-4ebd-8981-623f036365ef\",\n                    \"domain\": \"us.gov/DriversLicense\"\n                },\n                \"presentation_definition\": {\n                    \"input_descriptors\": [{\n                        \"id\": \"citizenship_input\",\n                        \"name\": \"US Passport\",\n                        \"group\": [\"A\"],\n                        \"schema\": [{\n                            \"uri\": \"hub://did:foo:123/Collections/schema.us.gov/passport.json\"\n                        }],\n                        \"constraints\": {\n                            \"fields\": [{\n                                \"path\": [\"$.credentialSubject.birth_date\", \"$.vc.credentialSubject.birth_date\", \"$.birth_date\"],\n                                \"filter\": {\n                                    \"type\": \"date\",\n                                    \"minimum\": \"1999-5-16\"\n                                }\n                            }],\n                            \"limit_disclosure\": \"required\"\n                        }\n                    }],\n                    \"format\": {\n                        \"ldp_vc\": {\n                            \"proof_type\": [\"BbsBlsSignatureProof2020\"]\n                        }\n                    }\n                }\n            }\n        }\n    }]\n}\n
    "},{"location":"aip2/0510-dif-pres-exch-attach/#presentation-attachment-format","title":"presentation attachment format","text":"

    Format identifier: dif/presentation-exchange/submission@v1.0

    The contents of the attachment is a Presentation Submission in a standard Verifiable Presentation format containing the proofs requested.
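    As a hedged illustration of consuming such an attachment, a Verifier can resolve the simple JSONPath expressions used in a descriptor_map (such as $.verifiableCredential.[0]) to locate each credential inside the presentation; the helper below supports only that narrow path shape and is not a full JSONPath implementation.

```python
import re

def resolve_descriptor_path(vp: dict, path: str):
    # Supports only the "$.<field>.[<index>]" shape seen in descriptor_map
    # entries; a real agent would use a full JSONPath library.
    m = re.fullmatch(r"\$\.(\w+)\.\[(\d+)\]", path)
    if m is None:
        raise ValueError(f"unsupported descriptor path: {path}")
    field, index = m.group(1), int(m.group(2))
    return vp[field][index]

vp = {
    "presentation_submission": {
        "descriptor_map": [
            {"id": "citizenship_input", "path": "$.verifiableCredential.[0]"}
        ]
    },
    "verifiableCredential": [{"type": ["EUDriversLicense"]}],
}
entry = vp["presentation_submission"]["descriptor_map"][0]
credential = resolve_descriptor_path(vp, entry["path"])
```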

    "},{"location":"aip2/0510-dif-pres-exch-attach/#examples-presentation","title":"Examples: presentation","text":"Complete message example
    {\n    \"@type\": \"https://didcomm.org/present-proof/%VER/presentation\",\n    \"@id\": \"f1ca8245-ab2d-4d9c-8d7d-94bf310314ef\",\n    \"comment\": \"some comment\",\n    \"formats\" : [{\n        \"attach_id\" : \"2a3f1c4c-623c-44e6-b159-179048c51260\",\n        \"format\" : \"dif/presentation-exchange/submission@v1.0\"\n    }],\n    \"presentations~attach\": [{\n        \"@id\": \"2a3f1c4c-623c-44e6-b159-179048c51260\",\n        \"mime-type\": \"application/ld+json\",\n        \"data\": {\n            \"json\": {\n                \"@context\": [\n                    \"https://www.w3.org/2018/credentials/v1\",\n                    \"https://identity.foundation/presentation-exchange/submission/v1\"\n                ],\n                \"type\": [\n                    \"VerifiablePresentation\",\n                    \"PresentationSubmission\"\n                ],\n                \"presentation_submission\": {\n                    \"descriptor_map\": [{\n                        \"id\": \"citizenship_input\",\n                        \"path\": \"$.verifiableCredential.[0]\"\n                    }]\n                },\n                \"verifiableCredential\": [{\n                    \"@context\": \"https://www.w3.org/2018/credentials/v1\",\n                    \"id\": \"https://eu.com/claims/DriversLicense\",\n                    \"type\": [\"EUDriversLicense\"],\n                    \"issuer\": \"did:foo:123\",\n                    \"issuanceDate\": \"2010-01-01T19:73:24Z\",\n                    \"credentialSubject\": {\n                        \"id\": \"did:example:ebfeb1f712ebc6f1c276e12ec21\",\n                        \"license\": {\n                            \"number\": \"34DGE352\",\n                            \"dob\": \"07/13/80\"\n                        }\n                    },\n                    \"proof\": {\n                        \"type\": \"RsaSignature2018\",\n                        \"created\": \"2017-06-18T21:19:10Z\",\n                    
    \"proofPurpose\": \"assertionMethod\",\n                        \"verificationMethod\": \"https://example.edu/issuers/keys/1\",\n                        \"jws\": \"...\"\n                    }\n                }],\n                \"proof\": {\n                    \"type\": \"RsaSignature2018\",\n                    \"created\": \"2018-09-14T21:19:10Z\",\n                    \"proofPurpose\": \"authentication\",\n                    \"verificationMethod\": \"did:example:ebfeb1f712ebc6f1c276e12ec21#keys-1\",\n                    \"challenge\": \"1f44d55f-f161-4938-a659-f8026467f126\",\n                    \"domain\": \"4jt78h47fh47\",\n                    \"jws\": \"...\"\n                }\n            }\n        }\n    }]\n}\n
    "},{"location":"aip2/0510-dif-pres-exch-attach/#supported-features-of-presentation-exchange","title":"Supported Features of Presentation-Exchange","text":"

    Level of support for Presentation-Exchange features:

    Feature Notes presentation_definition.input_descriptors.id presentation_definition.input_descriptors.name presentation_definition.input_descriptors.purpose presentation_definition.input_descriptors.schema.uri URI for the credential's schema. presentation_definition.input_descriptors.constraints.fields.path Array of JSONPath string expressions as defined in section 8. REQUIRED as per the spec. presentation_definition.input_descriptors.constraints.fields.filter JSONSchema descriptor. presentation_definition.input_descriptors.constraints.limit_disclosure preferred or required as defined in the spec and as supported by the Holder and Verifier proof mechanisms.Note that the Holder MUST have credentials with cryptographic proof suites that are capable of selective disclosure in order to respond to a request with limit_disclosure: \"required\".See RFC0593 for appropriate crypto suites. presentation_definition.input_descriptors.constraints.is_holder preferred or required as defined in the spec.Note that this feature allows the Holder to present credentials with a different subject identifier than the DID used to establish the DIDComm connection with the Verifier. presentation_definition.format For JSONLD-based credentials: ldp_vc and ldp_vp. presentation_definition.format.proof_type For JSONLD-based credentials: Ed25519Signature2018, BbsBlsSignature2020, and JsonWebSignature2020. When specifying ldp_vc, BbsBlsSignatureProof2020 may also be used."},{"location":"aip2/0510-dif-pres-exch-attach/#proof-formats","title":"Proof Formats","text":""},{"location":"aip2/0510-dif-pres-exch-attach/#constraints","title":"Constraints","text":"

    Verifiable Presentations MUST be produced and consumed using the JSON-LD syntax.

    The proof types defined below MUST be registered in the Linked Data Cryptographic Suite Registry.

    The value of any credentialSubject.id in a credential MUST be a Decentralized Identifier (DID) conforming to the DID Syntax if present. This allows the Holder to authenticate as the credential's subject if required by the Verifier (see the is_holder property above). The Holder authenticates as the credential's subject by attaching an LD Proof on the enclosing Verifiable Presentation.
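    A sketch of that constraint as a check (the regex here is a rough simplification of the DID Syntax ABNF, not a complete implementation):

```python
import re

# Simplified approximation of DID Syntax: "did:<method>:<method-specific-id>".
DID_PATTERN = re.compile(r"^did:[a-z0-9]+:[A-Za-z0-9._:%\-]+$")

def subject_id_ok(credential: dict) -> bool:
    # credentialSubject.id is optional, but when present it must be a DID.
    subject = credential.get("credentialSubject", {})
    sid = subject.get("id")
    return sid is None or bool(DID_PATTERN.match(sid))
```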

    "},{"location":"aip2/0510-dif-pres-exch-attach/#proof-formats-on-credentials","title":"Proof Formats on Credentials","text":"

    Aries agents implementing this RFC MUST support the formats outlined in RFC0593 for proofs on Verifiable Credentials.

    "},{"location":"aip2/0510-dif-pres-exch-attach/#proof-formats-on-presentations","title":"Proof Formats on Presentations","text":"

    Aries agents implementing this RFC MUST support the formats outlined below for proofs on Verifiable Presentations.

    "},{"location":"aip2/0510-dif-pres-exch-attach/#ed25519signature2018","title":"Ed25519Signature2018","text":"

    Specification.

    Request Parameters:

    Result:

    A Verifiable Presentation of type Presentation Submission containing the credentials requested under the verifiableCredential property and a proof property of type Ed25519Signature2018.

    Example
    {\n    \"@context\": [\n        \"https://www.w3.org/2018/credentials/v1\",\n        \"https://identity.foundation/presentation-exchange/submission/v1\"\n    ],\n    \"type\": [\n        \"VerifiablePresentation\",\n        \"PresentationSubmission\"\n    ],\n    \"presentation_submission\": {\n        \"descriptor_map\": [{\n           \"id\": \"citizenship_input\",\n           \"path\": \"$.verifiableCredential.[0]\"\n        }]\n    },\n    \"verifiableCredential\": [{\n        \"@context\": \"https://www.w3.org/2018/credentials/v1\",\n        \"id\": \"https://eu.com/claims/DriversLicense\",\n        \"type\": [\n            \"EUDriversLicense\"\n        ],\n        \"issuer\": \"did:foo:123\",\n        \"issuanceDate\": \"2010-01-01T19:73:24Z\",\n        \"credentialSubject\": {\n            \"id\": \"did:example:ebfeb1f712ebc6f1c276e12ec21\",\n            \"license\": {\n            \"number\": \"34DGE352\",\n            \"dob\": \"07/13/80\"\n          }\n        },\n        \"proof\": {\n            \"type\": \"RsaSignature2018\",\n            \"created\": \"2017-06-18T21:19:10Z\",\n            \"proofPurpose\": \"assertionMethod\",\n            \"verificationMethod\": \"https://example.edu/issuers/keys/1\",\n            \"jws\": \"...\"\n        }\n    }],\n    \"proof\": {\n      \"type\": \"Ed25519Signature2018\",\n      \"proofPurpose\": \"authentication\",\n      \"created\": \"2017-09-23T20:21:34Z\",\n      \"verificationMethod\": \"did:example:123456#key1\",\n      \"challenge\": \"2bbgh3dgjg2302d-d2b3gi423d42\",\n      \"domain\": \"example.org\",\n      \"jws\": \"eyJ0eXAiOiJK...gFWFOEjXk\"\n  }\n}\n
    "},{"location":"aip2/0510-dif-pres-exch-attach/#bbsblssignature2020","title":"BbsBlsSignature2020","text":"

    Specification.

    Associated RFC: RFC0646.

    Request Parameters: * presentation_definition.format: ldp_vp * presentation_definition.format.proof_type: BbsBlsSignature2020 * options.challenge: (Optional) a random string value generated by the Verifier * options.domain: (Optional) a string value set by the Verifier

    Result:

    A Verifiable Presentation of type Presentation Submission containing the credentials requested under the verifiableCredential property and a proof property of type BbsBlsSignature2020.

    Example
    {\n    \"@context\": [\n        \"https://www.w3.org/2018/credentials/v1\",\n        \"https://w3id.org/security/v2\",\n        \"https://w3id.org/security/bbs/v1\",\n        \"https://identity.foundation/presentation-exchange/submission/v1\"\n    ],\n    \"type\": [\n        \"VerifiablePresentation\",\n        \"PresentationSubmission\"\n    ],\n    \"presentation_submission\": {\n        \"descriptor_map\": [{\n            \"id\": \"citizenship_input\",\n            \"path\": \"$.verifiableCredential.[0]\"\n        }]\n    },\n    \"verifiableCredential\": [{\n        \"@context\": \"https://www.w3.org/2018/credentials/v1\",\n        \"id\": \"https://eu.com/claims/DriversLicense\",\n        \"type\": [\"EUDriversLicense\"],\n        \"issuer\": \"did:foo:123\",\n        \"issuanceDate\": \"2010-01-01T19:73:24Z\",\n        \"credentialSubject\": {\n            \"id\": \"did:example:ebfeb1f712ebc6f1c276e12ec21\",\n            \"license\": {\n                \"number\": \"34DGE352\",\n                \"dob\": \"07/13/80\"\n            }\n       },\n       \"proof\": {\n           \"type\": \"BbsBlsSignatureProof2020\",\n           \"created\": \"2020-04-25\",\n           \"verificationMethod\": \"did:example:489398593#test\",\n           \"proofPurpose\": \"assertionMethod\",\n           \"signature\": \"F9uMuJzNBqj4j+HPTvWjUN/MNoe6KRH0818WkvDn2Sf7kg1P17YpNyzSB+CH57AWDFunU13tL8oTBDpBhODckelTxHIaEfG0rNmqmjK6DOs0/ObksTZh7W3OTbqfD2h4C/wqqMQHSWdXXnojwyFDEg==\"\n       }\n    }],\n    \"proof\": {\n        \"type\": \"BbsBlsSignature2020\",\n        \"created\": \"2020-04-25\",\n        \"verificationMethod\": \"did:example:489398593#test\",\n        \"proofPurpose\": \"authentication\",\n        \"proofValue\": \"F9uMuJzNBqj4j+HPTvWjUN/MNoe6KRH0818WkvDn2Sf7kg1P17YpNyzSB+CH57AWDFunU13tL8oTBDpBhODckelTxHIaEfG0rNmqmjK6DOs0/ObksTZh7W3OTbqfD2h4C/wqqMQHSWdXXnojwyFDEg==\",\n        \"requiredRevealStatements\": [ 4, 5 ]\n    }\n}\n

    Note: The above example is for illustrative purposes. In particular, note that whether a Verifier requests a proof_type of BbsBlsSignature2020 has no bearing on whether the Holder is required to present credentials with proofs of type BbsBlsSignatureProof2020. The choice of proof types on the credentials is constrained by a) the available types registered in RFC0593 and b) additional constraints placed on them due to other aspects of the proof requested by the Verifier, such as requiring limited disclosure with the limit_disclosure property. In such a case, a proof type of Ed25519Signature2018 in the credentials is not appropriate whereas BbsBlsSignatureProof2020 is capable of selective disclosure.

    "},{"location":"aip2/0510-dif-pres-exch-attach/#jsonwebsignature2020","title":"JsonWebSignature2020","text":"

    Specification.

    Request Parameters:

    Result:

    A Verifiable Presentation of type Presentation Submission containing the credentials requested under the verifiableCredential property and a proof property of type JsonWebSignature2020.

    Example
    {\n    \"@context\": [\n        \"https://www.w3.org/2018/credentials/v1\",\n        \"https://identity.foundation/presentation-exchange/submission/v1\"\n    ],\n    \"type\": [\n        \"VerifiablePresentation\",\n        \"PresentationSubmission\"\n    ],\n    \"presentation_submission\": {\n        \"descriptor_map\": [{\n           \"id\": \"citizenship_input\",\n           \"path\": \"$.verifiableCredential.[0]\"\n        }]\n    },\n    \"verifiableCredential\": [{\n        \"@context\": \"https://www.w3.org/2018/credentials/v1\",\n        \"id\": \"https://eu.com/claims/DriversLicense\",\n        \"type\": [\n            \"EUDriversLicense\"\n        ],\n        \"issuer\": \"did:foo:123\",\n        \"issuanceDate\": \"2010-01-01T19:73:24Z\",\n        \"credentialSubject\": {\n            \"id\": \"did:example:ebfeb1f712ebc6f1c276e12ec21\",\n            \"license\": {\n            \"number\": \"34DGE352\",\n            \"dob\": \"07/13/80\"\n          }\n        },\n        \"proof\": {\n            \"type\": \"RsaSignature2018\",\n            \"created\": \"2017-06-18T21:19:10Z\",\n            \"proofPurpose\": \"assertionMethod\",\n            \"verificationMethod\": \"https://example.edu/issuers/keys/1\",\n            \"jws\": \"...\"\n        }\n    }],\n    \"proof\": {\n      \"type\": \"JsonWebSignature2020\",\n      \"proofPurpose\": \"authentication\",\n      \"created\": \"2017-09-23T20:21:34Z\",\n      \"verificationMethod\": \"did:example:123456#key1\",\n      \"challenge\": \"2bbgh3dgjg2302d-d2b3gi423d42\",\n      \"domain\": \"example.org\",\n      \"jws\": \"eyJ0eXAiOiJK...gFWFOEjXk\"\n  }\n}\n

    Available JOSE key types are:

    kty crv signature EC P-256 ES256 EC P-384 ES384"},{"location":"aip2/0510-dif-pres-exch-attach/#drawbacks","title":"Drawbacks","text":"

    N/A

    "},{"location":"aip2/0510-dif-pres-exch-attach/#rationale-and-alternatives","title":"Rationale and alternatives","text":""},{"location":"aip2/0510-dif-pres-exch-attach/#prior-art","title":"Prior art","text":""},{"location":"aip2/0510-dif-pres-exch-attach/#unresolved-questions","title":"Unresolved questions","text":"

    TODO: it is assumed the Verifier will initiate the protocol with a request-presentation message, possibly delivered via an Out-of-Band invitation (see RFC0434), if they can transmit their presentation definition via an out-of-band channel (e.g. it is published on their website). For now, the Prover sends propose-presentation as a response to request-presentation.

    "},{"location":"aip2/0510-dif-pres-exch-attach/#implementations","title":"Implementations","text":"

    The following lists the implementations (if any) of this RFC. Please do a pull request to add your implementation. If the implementation is open source, include a link to the repo or to the implementation within the repo. Please be consistent in the \"Name\" field so that a mechanical processing of the RFCs can generate a list of all RFCs supported by an Aries implementation.

    Name / Link Implementation Notes"},{"location":"aip2/0519-goal-codes/","title":"0519: Goal Codes","text":""},{"location":"aip2/0519-goal-codes/#summary","title":"Summary","text":"

    Explain how different parties in an SSI ecosystem can communicate about their intentions in a way that is understandable by humans and by automated software.

    "},{"location":"aip2/0519-goal-codes/#motivation","title":"Motivation","text":"

    Agents exist to achieve the intents of their owners. Those intents largely unfold through protocols. Sometimes intelligent action in these protocols depends on a party declaring their intent. We need a standard way to do that.

    "},{"location":"aip2/0519-goal-codes/#tutorial","title":"Tutorial","text":"

    Our early learnings in SSI focused on VC-based proving with a very loose, casual approach to context. We did demos where Alice connects with a potential employer, Acme Corp -- and we assumed that each of the interacting parties had a shared understanding of one another's needs and purposes.

    But in a mature SSI ecosystem, where unknown agents can contact one another for arbitrary reasons, this context is not always easy to deduce. Acme Corp's agent may support many different protocols, and Alice may interact with Acme in the capacity of customer or potential employee or vendor. Although we have feature discovery to learn what's possible, and we have machine-readable governance frameworks to tell us what rules might apply in a given context, we haven't had a way to establish the context in the first place. When Alice contacts Acme, a context is needed before a governance framework is selectable, and before we know which features are desirable.

    The key ingredient in context is intent. If Alice says to Acme, \"I'd like to connect,\" Acme wants to be able to trigger different behavior depending on whether Alice's intent is to be a customer, apply for a job, or audit Acme's taxes. This is the purpose of a goal code.

    "},{"location":"aip2/0519-goal-codes/#the-goal-code-datatype","title":"The goal code datatype","text":"

    To express intent, this RFC formally introduces the goal code datatype. When a field in a DIDComm message contains a goal code, its semantics and format match the description given here. (Goal codes are often declared via the ~thread decorator, but may also appear in ordinary message fields. See the Scope section below. Convention is to name this field \"goal_code\" where possible; however, this is only a convention, and individual protocols may adapt to it however they wish.)

    TODO: should we make a decorator out of this, so protocols don't have to declare it, and so any message can have a goal code? Or should we just let protocols declare a field in whatever message makes sense?

    Protocols use fields of this type as a way to express the intent of the message sender, thus coloring the larger context. In a sense, goal codes are to DIDComm what the subject: field is to email -- except that goal codes have formalized meanings to make them recognizable to automation.

    Goal codes use a standard format. They are lower-cased, kebab-punctuated strings. ASCII and English are recommended, as they are intended to be read by the software developer community, not the general public; however, full UTF-8 is allowed. They support hierarchical dotted notation, where more general categories are to the left of a dot, and more specific categories are to the right. Some example goal codes might be:

    Goals are inherently self-attested. Thus, goal codes don't represent objective fact that a recipient can rely upon in a strong sense; subsequent interactions can always yield surprises. Even so, goal codes let agents triage interactions and find misalignments early; there's no point in engaging if their goals are incompatible. This has significant benefits for spam prevention, among other things.
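    The format rules above can be captured in a small validator (an ASCII-only sketch; full UTF-8 segments would need a looser pattern):

```python
import re

# ASCII-only approximation: lower-cased kebab segments joined by dots.
SEGMENT = re.compile(r"[a-z0-9]+(-[a-z0-9]+)*")

def is_well_formed_goal_code(code: str) -> bool:
    # Every dotted segment must be a non-empty lower-cased kebab string.
    return all(SEGMENT.fullmatch(part) for part in code.split("."))
```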

    "},{"location":"aip2/0519-goal-codes/#verbs","title":"Verbs","text":"

    Notice the verbs in the examples: sell, date, hire, and arrange. Goals typically involve action; a complete goal code should have one or more verbs in it somewhere. Turning verbs into nouns (e.g., employment.references instead of employment.check-references) is considered bad form. (Some namespaces may put the verbs at the end; some may put them in the middle. That's a purely stylistic choice.)

    "},{"location":"aip2/0519-goal-codes/#directionality","title":"Directionality","text":"

    Notice, too, that the verbs may imply directionality. A goal with the sell verb implies that the person announcing the goal is a would-be seller, not a buyer. We could imagine a more general verb like engage-in-commerce that would allow either behavior. However, that would often be a mistake. The value of goal codes is that they let agents align around intent; announcing that you want to engage in general commerce without clarifying whether you intend to sell or buy may be too vague to help the other party make decisions.

    It is conceivable that this would lead to parallel branches of a goal ontology that differ only in the direction of their verb. Thus, we could imagine sell.A and sell.B being shadowed by buy.A and buy.B. This might be necessary if a family of protocols allow either party to initiate an interaction and declare the goal, and if both parties view the goals as perfect mirror images. However, practical considerations may make this kind of parallelism unlikely. A random party contacting an individual to sell something may need to be quite clear about the type of selling they intend, to make it past a spam filter. In contrast, a random individual arriving at the digital storefront of a mega retailer may be quite vague about the type of buying they intend. Thus, the buy.* side of the namespace may need much less detail than the sell.* side.

    "},{"location":"aip2/0519-goal-codes/#goals-for-others","title":"Goals for others","text":"

    Related to directionality, it may occasionally be desirable to propose goals to others, rather than advocating your own: \"Let <parties = us = Alice, Bob, and Carol> <goal = hold an auction> -- I nominate Carol to be the <role = auctioneer> and get us started.\" The difference between a normal message and an unusual one like this is not visible in the goal code; it should be exposed in additional fields that associate the goal with a particular identifier+role pair. Essentially, you are proposing a goal to another party, and these extra fields clarify who should receive the proposal, and what role/perspective they might take with respect to the goal.

    Making proposals like this may be a feature in some protocols. Where it is, the protocols determine the message field names for the goal code, the role, and the DID associated with the role and goal.

    "},{"location":"aip2/0519-goal-codes/#matching","title":"Matching","text":"

    The goal code cci.healthcare is considered a more general form of the code cci.healthcare.procedure, which is more general than cci.healthcare.procedure.schedule. Because these codes are hierarchical, wildcards and fuzzy matching are possible for either a sender or a recipient of a message. Filename-style globbing semantics are used.

    A sender agent can specify that their owner's goal is just meetupcorp.personal without clarifying more; this is like specifying that a file is located under a folder named \"meetupcorp/personal\" without specifying where; any file \"under\" that folder -- or the folder itself -- would match the pattern. A recipient agent can have a policy that says, \"Reject any attempts to connect if the goal code of the other party is aries.sell.*.\" Notice how this differs from aries.sell*; the former looks for things \"inside\" aries.sell; the latter looks for things \"inside\" aries that have names beginning with sell.
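    A sketch of that filename-style globbing using Python's fnmatch (the extra prefix branch makes a bare code like meetupcorp.personal match the category itself as well as anything under it, per the folder analogy):

```python
from fnmatch import fnmatchcase

def goal_matches(code: str, pattern: str) -> bool:
    # A pattern without wildcards also matches everything "under" itself,
    # mirroring the folder-or-contents behavior described in the text.
    return fnmatchcase(code, pattern) or fnmatchcase(code, pattern + ".*")
```

    With this helper, aries.sell.* matches aries.sell.goods but not aries.sellout, whereas aries.sell* matches both.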

    "},{"location":"aip2/0519-goal-codes/#scope","title":"Scope","text":"

    When is a declared goal known to color interactions, and when is it undefined?

    We previously noted that goal codes are a bit like the subject: header on an email; they contextualize everything that follows in that thread. We don't generally want to declare a goal outside of a thread context, because that would prevent an agent from engaging in two goals at the same time.

    Given these two observations, we can say that a goal applies as soon as it is declared, and it continues to apply to all messages in the same thread. It is also inherited by implication through a thread's pthid field; that is, a parent thread's goal colors the child thread unless/until overridden.
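The inheritance rule can be sketched as a walk up the pthid chain. This is illustrative only; threads is a hypothetical in-memory map from thread id to its parent thread and declared goal:

```python
def effective_goal(thread_id, threads):
    # A thread is colored by its own declared goal, or, failing that,
    # by the nearest ancestor's goal reached through pthid.
    while thread_id is not None:
        thread = threads[thread_id]
        if thread.get("goal_code"):
            return thread["goal_code"]
        thread_id = thread.get("pthid")
    return None  # no goal declared anywhere in the chain

threads = {
    "t1": {"pthid": None, "goal_code": "aries.sell"},
    "t2": {"pthid": "t1", "goal_code": None},        # inherits aries.sell
    "t3": {"pthid": "t1", "goal_code": "aries.vc"},  # overrides the parent
}
```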

    "},{"location":"aip2/0519-goal-codes/#namespacing","title":"Namespacing","text":"

    To avoid collision and ambiguity in code values, we need to support namespacing in our goal codes. Since goals are only a coarse-grained alignment mechanism, however, we don't need perfect decentralized precision. Confusion isn't much more than an annoyance; the worst that could happen is that two agents discover one or two steps into a protocol that they're not as aligned as they supposed. They need to be prepared to tolerate that outcome in any case.

Thus, we follow the same general approach that's used in Java's packaging system, where organizations and communities use a self-declared prefix for their ecosystem as the leftmost segment or segments of a family of identifiers (goal codes) they manage. Unlike Java, though, these need not be tied to DNS in any way. We recommend a single-segment namespace that is a unique string, and that is an alias for a URI identifying the origin ecosystem. (In other words, you don't need to start with \"com.yourcorp.yourproduct\" -- \"yourcorp\" is probably fine.)

    The aries namespace alias is reserved for goal codes defined in Aries RFCs. The URI aliased by this name is TBD. See the Reference section for more details.

    "},{"location":"aip2/0519-goal-codes/#versioning","title":"Versioning","text":"

Semver-style semantics don't map to goals in a simple way; it is not obvious what constitutes a \"major\" versus a \"minor\" difference in a goal, or a difference that's not worth tracking at all. The content of a goal \u2014 the only thing that might vary across versions \u2014 is simply its free-form description, and that varies according to human judgment. Many different versions of a protocol are likely to share the goal to make a payment or to introduce two strangers. A goal is likely to be far more stable than the details of how it is accomplished.

    Because of these considerations, goal codes do not impose an explicit versioning mechanism. However, one is reserved for use, in the unusual cases where it may be helpful. It is to append -v plus a numeric suffix: my-goal-code-v1, my-goal-code-v2, etc. Goal codes that vary only by this suffix should be understood as ordered-by-numeric-suffix evolutions of one another, and goal codes that do not intend to express versioning should not use this convention for something else. A variant of the goal code without any version suffix is equivalent to a variant with the -v1 suffix. This allows human intuition about the relatedness of different codes, and it allows useful wildcard matching across versions. It also treats all version-like changes to a goal as breaking (semver \"major\") changes, which is probably a safe default.
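A sketch of parsing the reserved suffix convention (the helper name is mine, not part of the RFC):

```python
import re

def split_goal_version(code):
    # Split a goal code into (base, version); a code without a
    # -v<N> suffix is equivalent to the same code with -v1.
    m = re.fullmatch(r"(.+)-v(\d+)", code)
    if m:
        return m.group(1), int(m.group(2))
    return code, 1
```

Under this parsing, my-goal-code and my-goal-code-v1 compare as the same version of the same goal.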

    Families of goal codes are free to use this convention if they need it, or to invent a non-conflicting one of their own. However, we repeat our observation that versioning in goal codes is often inappropriate and unnecessary.

    "},{"location":"aip2/0519-goal-codes/#declaring-goal-codes","title":"Declaring goal codes","text":""},{"location":"aip2/0519-goal-codes/#standalone-rfcs-or-similar-sources","title":"Standalone RFCs or Similar Sources","text":"

Any URI-referenceable document can declare families or ontologies of goal codes. In the context of Aries, we encourage standalone RFCs for this purpose if the goals seem likely to be relevant in many contexts. Other communities may of course document goal codes in their own specs -- either dedicated to goal codes, or as part of larger topics. The following block is a sample of how we recommend that such goal codes be declared. Note that each code is individually hyperlink-able, and each is associated with a brief human-friendly description in one or more languages. This description may be used in menuing mechanisms such as the one described in Action Menu Protocol.

    "},{"location":"aip2/0519-goal-codes/#goal-codes","title":"goal codes","text":""},{"location":"aip2/0519-goal-codes/#ariessell","title":"aries.sell","text":"

    en: Sell something. Assumes two parties (buyer/seller). es: Vender algo. Asume que dos partes participan (comprador/vendedor).

    "},{"location":"aip2/0519-goal-codes/#ariessellgoodsconsumer","title":"aries.sell.goods.consumer","text":"

    en: Sell tangible goods of interest to general consumers.

    "},{"location":"aip2/0519-goal-codes/#ariessellservicesconsumer","title":"aries.sell.services.consumer","text":"

    en: Sell services of interest to general consumers.

    "},{"location":"aip2/0519-goal-codes/#ariessellservicesenterprise","title":"aries.sell.services.enterprise","text":"

    en: Sell services of interest to enterprises.

    "},{"location":"aip2/0519-goal-codes/#in-didcomm-based-protocol-specs","title":"In DIDComm-based Protocol Specs","text":"

    Occasionally, goal codes may have meaning only within the context of a specific protocol. In such cases, it may be appropriate to declare the goal codes directly in a protocol spec. This can be done using a section of the RFC as described above.

    More commonly, however, a protocol will accomplish one or more goals (e.g., when the protocol is fulfilling a co-protocol interface), or will require a participant to identify a goal at one or more points in a protocol flow. In such cases, the goal codes are probably declared external to the protocol. If they can be enumerated, they should still be referenced (hyperlinked to their respective definitions) in the protocol RFC.

    "},{"location":"aip2/0519-goal-codes/#in-governance-frameworks","title":"In Governance Frameworks","text":"

    Goal codes can also be (re-)declared in a machine-readable governance framework.

    "},{"location":"aip2/0519-goal-codes/#reference","title":"Reference","text":""},{"location":"aip2/0519-goal-codes/#known-namespace-aliases","title":"Known Namespace Aliases","text":"

    No central registry of namespace aliases is maintained; you need not register with an authority to create a new one. Just pick an alias with good enough uniqueness, and socialize it within your community. For convenience of collision avoidance, however, we maintain a table of aliases that are typically used in global contexts, and welcome PRs from anyone who wants to update it.

alias | used by | URI\naries | Hyperledger Aries Community | TBD"},{"location":"aip2/0519-goal-codes/#well-known-goal-codes","title":"Well-known goal codes","text":"

    The following goal codes are defined here because they already have demonstrated utility, based on early SSI work in Aries and elsewhere.

    "},{"location":"aip2/0519-goal-codes/#ariesvc","title":"aries.vc","text":"

    Participate in some form of VC-based interaction.

    "},{"location":"aip2/0519-goal-codes/#ariesvcissue","title":"aries.vc.issue","text":"

    Issue a verifiable credential.

    "},{"location":"aip2/0519-goal-codes/#ariesvcverify","title":"aries.vc.verify","text":"

    Verify or validate VC-based assertions.

    "},{"location":"aip2/0519-goal-codes/#ariesvcrevoke","title":"aries.vc.revoke","text":"

    Revoke a VC.

    "},{"location":"aip2/0519-goal-codes/#ariesrel","title":"aries.rel","text":"

    Create, maintain, or end something that humans would consider a relationship. This may be accomplished by establishing, updating or deleting a DIDComm messaging connection that provides a secure communication channel for the relationship. The DIDComm connection itself is not the relationship, but would be used to carry out interactions between the parties to facilitate the relationship.

    "},{"location":"aip2/0519-goal-codes/#ariesrelbuild","title":"aries.rel.build","text":"

    Create a relationship. Carries the meaning implied today by a LinkedIn invitation to connect or a Facebook \"Friend\" request. Could be as limited as creating a DIDComm Connection.

    "},{"location":"aip2/0519-goal-codes/#implementations","title":"Implementations","text":"

    The following lists the implementations (if any) of this RFC. Please do a pull request to add your implementation. If the implementation is open source, include a link to the repo or to the implementation within the repo. Please be consistent in the \"Name\" field so that a mechanical processing of the RFCs can generate a list of all RFCs supported by an Aries implementation.

    Implementation Notes may need to include a link to test results.

    Name / Link Implementation Notes"},{"location":"aip2/0557-discover-features-v2/","title":"Aries RFC 0557: Discover Features Protocol v2.x","text":""},{"location":"aip2/0557-discover-features-v2/#summary","title":"Summary","text":"

Describes how one agent can query another to discover which features it supports, and to what extent.

    "},{"location":"aip2/0557-discover-features-v2/#motivation","title":"Motivation","text":"

Though some agents will support just one feature and will be statically configured to interact with just one other party, many exciting uses of agents are more dynamic and unpredictable. When Alice and Bob meet, they won't know in advance which features are supported by one another's agents. They need a way to find out.

    "},{"location":"aip2/0557-discover-features-v2/#tutorial","title":"Tutorial","text":"

This is version 2.0 of the Discover Features protocol; its fully qualified PIURI is:

    https://didcomm.org/discover-features/2.0\n

    This version is conceptually similar to version 1.0 of this protocol. It differs in its ability to ask about multiple feature types, and to ask multiple questions and receive multiple answers in a single round trip.

    "},{"location":"aip2/0557-discover-features-v2/#roles","title":"Roles","text":"

There are two roles in the discover-features protocol: requester and responder. Normally, the requester asks the responder about the features it supports, and the responder answers. Each role uses a single message type.

It is also possible to proactively disclose features; in this case a requester receives a response without asking for it. This may eliminate some chattiness in certain use cases (e.g., where two-way connectivity is limited).

    "},{"location":"aip2/0557-discover-features-v2/#states","title":"States","text":"

    The state progression is very simple. In the normal case, it is simple request-response; in a proactive disclosure, it's a simple one-way notification.

    "},{"location":"aip2/0557-discover-features-v2/#requester","title":"Requester","text":""},{"location":"aip2/0557-discover-features-v2/#responder","title":"Responder","text":""},{"location":"aip2/0557-discover-features-v2/#messages","title":"Messages","text":""},{"location":"aip2/0557-discover-features-v2/#queries-message-type","title":"queries Message Type","text":"

    A discover-features/queries message looks like this:

    {\n  \"@type\": \"https://didcomm.org/discover-features/2.0/queries\",\n  \"@id\": \"yWd8wfYzhmuXX3hmLNaV5bVbAjbWaU\",\n  \"queries\": [\n    { \"feature-type\": \"protocol\", \"match\": \"https://didcomm.org/tictactoe/1.*\" },\n    { \"feature-type\": \"goal-code\", \"match\": \"aries.*\" }\n  ]\n}\n

Queries messages contain one or more query objects in the queries array. Each query essentially says, \"Please tell me what features of type X you support, where the feature identifiers match this (potentially wildcarded) string.\" This particular example asks an agent if it supports any 1.x versions of the tictactoe protocol, and if it supports any goal codes that begin with \"aries.\".

    Implementations of this protocol must recognize the following values for feature-type: protocol, goal-code, gov-fw, didcomm-version, and decorator/header. (The concept known as decorator in DIDComm v1 approximately maps to the concept known as header in DIDComm v2. The two values should be considered synonyms and must both be recognized.) Additional values of feature-type may be standardized by raising a PR against this RFC that defines the new type and increments the minor protocol version number; non-standardized values are also valid, but there is no guarantee that their semantics will be recognized.

Identifiers for feature types vary. For protocols, identifiers are PIURIs. For goal codes, identifiers are goal code values. For governance frameworks, identifiers are URIs where the framework is published (typically the data_uri field, if machine-readable). For DIDComm versions, identifiers are the URIs where DIDComm versions are developed (https://github.com/hyperledger/aries-rfcs for V1 and https://github.com/decentralized-identity/didcomm-messaging for V2; see \"Detecting DIDComm Versions\" in RFC 0044 for more details).

The match field of a query descriptor may use the * wildcard. By itself, a match with just the wildcard says, \"I'm interested in anything you want to share with me.\" But usually, this wildcard will be used to match a prefix that's a little more specific, as in the example that matches any 1.x version.

Any agent may send another agent this message type at any time. Implementers of agents that intend to support dynamic relationships and rich features are strongly encouraged to implement support for this message, as it is likely to be among the first messages exchanged with a stranger.
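Assembling a queries message is mechanical. A minimal sketch (message id generation is illustrative):

```python
import json
import uuid

def build_queries_message(query_pairs):
    # Build a discover-features 2.0 queries message from a list of
    # (feature-type, match) pairs.
    return {
        "@type": "https://didcomm.org/discover-features/2.0/queries",
        "@id": str(uuid.uuid4()),
        "queries": [
            {"feature-type": ftype, "match": match}
            for ftype, match in query_pairs
        ],
    }

msg = build_queries_message([
    ("protocol", "https://didcomm.org/tictactoe/1.*"),
    ("goal-code", "aries.*"),
])
print(json.dumps(msg, indent=2))
```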

    "},{"location":"aip2/0557-discover-features-v2/#disclosures-message-type","title":"disclosures Message Type","text":"

    A discover-features/disclosures message looks like this:

    {\n  \"@type\": \"https://didcomm.org/discover-features/2.0/disclosures\",\n  \"~thread\": { \"thid\": \"yWd8wfYzhmuXX3hmLNaV5bVbAjbWaU\" },\n  \"disclosures\": [\n    {\n      \"feature-type\": \"protocol\",\n      \"id\": \"https://didcomm.org/tictactoe/1.0\",\n      \"roles\": [\"player\"]\n    },\n    {\n      \"feature-type\": \"goal-code\",\n      \"id\": \"aries.sell.goods.consumer\"\n    }\n  ]\n}\n

The disclosures field is a JSON array of zero or more disclosure objects that describe a feature. Each disclosure object has a feature-type field that contains data corresponding to feature-type in a query object, and an id field that unambiguously identifies a single item of that feature type. When the item is a protocol, the disclosure object may also contain a roles array that enumerates the roles the responding agent can play in the associated protocol. Future feature types may add additional optional fields, though no other fields are being standardized with this version of the RFC.

Disclosures messages say, \"Here are some features I support (that matched your queries).\"
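A responder's matching logic can be sketched with glob matching over a table of supported features (the table and names here are illustrative; a real agent would also apply a selective-disclosure policy before answering):

```python
from fnmatch import fnmatch

# Hypothetical feature table for the responding agent.
SUPPORTED = [
    {"feature-type": "protocol",
     "id": "https://didcomm.org/tictactoe/1.0", "roles": ["player"]},
    {"feature-type": "goal-code", "id": "aries.sell.goods.consumer"},
]

def build_disclosures(queries_msg):
    # Answer a queries message with a disclosures message on the same thread.
    matched = [
        feature for feature in SUPPORTED
        if any(q["feature-type"] == feature["feature-type"]
               and fnmatch(feature["id"], q["match"])
               for q in queries_msg["queries"])
    ]
    return {
        "@type": "https://didcomm.org/discover-features/2.0/disclosures",
        "~thread": {"thid": queries_msg["@id"]},
        "disclosures": matched,
    }
```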

    "},{"location":"aip2/0557-discover-features-v2/#sparse-disclosures","title":"Sparse Disclosures","text":"

    Disclosures do not have to contain exhaustive detail. For example, the following response omits the optional roles field but may be just as useful as one that includes it:

    {\n  \"@type\": \"https://didcomm.org/discover-features/2.0/disclosures\",\n  \"~thread\": { \"thid\": \"yWd8wfYzhmuXX3hmLNaV5bVbAjbWaU\" },\n  \"disclosures\": [\n    {\"feature-type\": \"protocol\", \"id\": \"https://didcomm.org/tictactoe/1.0\"}\n  ]\n}\n

    Less detail probably suffices because agents do not need to know everything about one another's implementations in order to start an interaction--usually the flow will organically reveal what's needed. For example, the outcome message in the tictactoe protocol isn't needed until the end, and is optional anyway. Alice can start a tictactoe game with Bob and will eventually see whether he has the right idea about outcome messages.

The missing roles field in this disclosure does not say, \"I support no roles in this protocol.\" It says, \"I support the protocol but I'm providing no detail about specific roles.\" Similar logic applies to any other omitted fields.

An empty disclosures array does not say, \"I support no features that match your query.\" It says, \"I'm not disclosing to you that I support any features (that match your query).\" An agent might not tell another that it supports a feature for various reasons, including: the trust that it imputes to the other party based on cumulative interactions so far, whether it's in the middle of upgrading a plugin, whether it's currently under high load, and so forth. And responses to a discover-features query are not guaranteed to be true forever; agents can be upgraded or downgraded, although they probably won't churn in their feature profiles from moment to moment.

    "},{"location":"aip2/0557-discover-features-v2/#privacy-considerations","title":"Privacy Considerations","text":"

    Because the wildcards in a queries message can be very inclusive, the discover-features protocol could be used to mine information suitable for agent fingerprinting, in much the same way that browser fingerprinting works. This is antithetical to the ethos of our ecosystem, and represents bad behavior. Agents should use discover-features to answer legitimate questions, and not to build detailed profiles of one another. However, fingerprinting may be attempted anyway.

    For agents that want to maintain privacy, several best practices are recommended:

    "},{"location":"aip2/0557-discover-features-v2/#follow-selective-disclosure","title":"Follow selective disclosure.","text":"

Only reveal supported features based on trust in the relationship. Even if you support a protocol, you may not wish to use it in every relationship. Don't tell others about features you do not plan to use with them.

    Patterns are easier to see in larger data samples. However, a pattern of ultra-minimal data is also a problem, so use good judgment about how forthcoming to be.

    "},{"location":"aip2/0557-discover-features-v2/#vary-the-format-of-responses","title":"Vary the format of responses.","text":"

Sometimes, you might prettify your agent's plaintext messages one way, sometimes another.

    "},{"location":"aip2/0557-discover-features-v2/#vary-the-order-of-items-in-the-disclosures-array","title":"Vary the order of items in the disclosures array.","text":"

    If more than one key matches a query, do not always return them in alphabetical order or version order. If you do return them in order, do not always return them in ascending order.
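A trivial way to implement this (a sketch):

```python
import random

def randomize_disclosures(disclosures):
    # Return matched disclosures in a nondeterministic order so that
    # ordering cannot serve as a fingerprinting signal.
    shuffled = list(disclosures)
    random.shuffle(shuffled)
    return shuffled
```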

    "},{"location":"aip2/0557-discover-features-v2/#consider-adding-some-spurious-details","title":"Consider adding some spurious details.","text":"

If a query could match multiple features, then occasionally you might add some made-up features as matches. If a wildcard allows multiple versions of a protocol, then sometimes you might use some made-up versions. And sometimes not. (Doing this too aggressively might reveal your agent implementation, so use sparingly.)

    "},{"location":"aip2/0557-discover-features-v2/#vary-how-you-query-too","title":"Vary how you query, too.","text":"

    How you ask questions may also be fingerprintable.

    "},{"location":"aip2/0557-discover-features-v2/#implementations","title":"Implementations","text":"

    The following lists the implementations (if any) of this RFC. Please do a pull request to add your implementation. If the implementation is open source, include a link to the repo or to the implementation within the repo. Please be consistent in the \"Name\" field so that a mechanical processing of the RFCs can generate a list of all RFCs supported by an Aries implementation.

    Name / Link Implementation Notes"},{"location":"aip2/0592-indy-attachments/","title":"Aries RFC 0592: Indy Attachment Formats for Requesting and Presenting Credentials","text":""},{"location":"aip2/0592-indy-attachments/#summary","title":"Summary","text":"

    This RFC registers attachment formats used with Hyperledger Indy-style ZKP-oriented credentials in Issue Credential Protocol 2.0 and Present Proof Protocol 2.0. These formats are generally considered v2 formats, as they align with the \"anoncreds2\" work in Hyperledger Ursa and are a second generation implementation. They began to be used in production in 2018 and are in active deployment in 2021.

    "},{"location":"aip2/0592-indy-attachments/#motivation","title":"Motivation","text":"

    Allows Indy-style credentials to be used with credential-related protocols that take pluggable formats as payloads.

    "},{"location":"aip2/0592-indy-attachments/#reference","title":"Reference","text":""},{"location":"aip2/0592-indy-attachments/#cred-filter-format","title":"cred filter format","text":"

    The potential holder uses this format to propose criteria for a potential credential for the issuer to offer.

    The identifier for this format is hlindy/cred-filter@v2.0. It is a base64-encoded version of the data structure specifying zero or more criteria from the following (non-base64-encoded) structure:

    {\n    \"schema_issuer_did\": \"<schema_issuer_did>\",\n    \"schema_name\": \"<schema_name>\",\n    \"schema_version\": \"<schema_version>\",\n    \"schema_id\": \"<schema_identifier>\",\n    \"issuer_did\": \"<issuer_did>\",\n    \"cred_def_id\": \"<credential_definition_identifier>\"\n}\n

    The potential holder may not know, and need not specify, all of these criteria. For example, the holder might only know the schema name and the (credential) issuer DID. Recall that the potential holder may specify target attribute values and MIME types in the credential preview.

    For example, the JSON (non-base64-encoded) structure might look like this:

    {\n    \"schema_issuer_did\": \"did:sov:4RW6QK2HZhHxa2tg7t1jqt\",\n    \"schema_name\": \"bcgov-mines-act-permit.bcgov-mines-permitting\",\n    \"issuer_did\": \"did:sov:4RW6QK2HZhHxa2tg7t1jqt\"\n}\n

    A complete propose-credential message from the Issue Credential protocol 2.0 embeds this format at /filters~attach/data/base64:

    {\n    \"@id\": \"<uuid of propose message>\",\n    \"@type\": \"https://didcomm.org/issue-credential/%VER/propose-credential\",\n    \"comment\": \"<some comment>\",\n    \"formats\" : [{\n        \"attach_id\": \"<attach@id value>\",\n        \"format\": \"hlindy/cred-filter@v2.0\"\n    }],\n    \"filters~attach\": [{\n        \"@id\": \"<attach@id value>\",\n        \"mime-type\": \"application/json\",\n        \"data\": {\n            \"base64\": \"ewogICAgInNjaGVtYV9pc3N1ZXJfZGlkIjogImRpZDpzb3Y... (clipped)... LMkhaaEh4YTJ0Zzd0MWpxdCIKfQ==\"\n        }\n    }]\n}\n
    "},{"location":"aip2/0592-indy-attachments/#cred-abstract-format","title":"cred abstract format","text":"

    This format is used to clarify the structure and semantics (but not the concrete data values) of a potential credential, in offers sent from issuer to potential holder.

    The identifier for this format is hlindy/cred-abstract@v2.0. It is a base64-encoded version of the data returned from indy_issuer_create_credential_offer().

    The JSON (non-base64-encoded) structure might look like this:

    {\n    \"schema_id\": \"4RW6QK2HZhHxa2tg7t1jqt:2:bcgov-mines-act-permit.bcgov-mines-permitting:0.2.0\",\n    \"cred_def_id\": \"4RW6QK2HZhHxa2tg7t1jqt:3:CL:58160:default\",\n    \"nonce\": \"57a62300-fbe2-4f08-ace0-6c329c5210e1\",\n    \"key_correctness_proof\" : <key_correctness_proof>\n}\n

    A complete offer-credential message from the Issue Credential protocol 2.0 embeds this format at /offers~attach/data/base64:

    {\n    \"@type\": \"https://didcomm.org/issue-credential/%VER/offer-credential\",\n    \"@id\": \"<uuid of offer message>\",\n    \"replacement_id\": \"<issuer unique id>\",\n    \"comment\": \"<some comment>\",\n    \"credential_preview\": <json-ld object>,\n    \"formats\" : [\n        {\n            \"attach_id\" : \"<attach@id value>\",\n            \"format\": \"hlindy/cred-abstract@v2.0\"\n        }\n    ],\n    \"offers~attach\": [\n        {\n            \"@id\": \"<attach@id value>\",\n            \"mime-type\": \"application/json\",\n            \"data\": {\n                \"base64\": \"ewogICAgInNjaGVtYV9pZCI6ICI0Ulc2UUsySFpoS... (clipped)... jb3JyZWN0bmVzc19wcm9vZj4KfQ==\"\n            }\n        }\n    ]\n}\n


    "},{"location":"aip2/0592-indy-attachments/#cred-request-format","title":"cred request format","text":"

    This format is used to formally request a credential. It differs from the credential abstract above in that it contains a cryptographic commitment to a link secret; an issuer can therefore use it to bind a concrete instance of an issued credential to the appropriate holder. (In contrast, the credential abstract describes the schema and cred def, but not enough information to actually issue to a specific holder.)

    The identifier for this format is hlindy/cred-req@v2.0. It is a base64-encoded version of the data returned from indy_prover_create_credential_req().

    The JSON (non-base64-encoded) structure might look like this:

    {\n    \"prover_did\" : \"did:sov:abcxyz123\",\n    \"cred_def_id\" : \"4RW6QK2HZhHxa2tg7t1jqt:3:CL:58160:default\",\n    // Fields below can depend on Cred Def type\n    \"blinded_ms\" : <blinded_master_secret>,\n    \"blinded_ms_correctness_proof\" : <blinded_ms_correctness_proof>,\n    \"nonce\": \"fbe22300-57a6-4f08-ace0-9c5210e16c32\"\n}\n

    A complete request-credential message from the Issue Credential protocol 2.0 embeds this format at /requests~attach/data/base64:

    {\n    \"@id\": \"cf3a9301-6d4a-430f-ae02-b4a79ddc9706\",\n    \"@type\": \"https://didcomm.org/issue-credential/%VER/request-credential\",\n    \"comment\": \"<some comment>\",\n    \"formats\": [{\n        \"attach_id\": \"7cd11894-838a-45c0-a9ec-13e2d9d125a1\",\n        \"format\": \"hlindy/cred-req@v2.0\"\n    }],\n    \"requests~attach\": [{\n        \"@id\": \"7cd11894-838a-45c0-a9ec-13e2d9d125a1\",\n        \"mime-type\": \"application/json\",\n        \"data\": {\n            \"base64\": \"ewogICAgInByb3Zlcl9kaWQiIDogImRpZDpzb3Y6YWJjeHl.. (clipped)... DAtNTdhNi00ZjA4LWFjZTAtOWM1MjEwZTE2YzMyIgp9\"\n        }\n    }]\n}\n
    "},{"location":"aip2/0592-indy-attachments/#credential-format","title":"credential format","text":"

A concrete, issued Indy credential may be transmitted over many protocols, but is specifically expected as the final message in Issue Credential Protocol 2.0. The identifier for its format is hlindy/cred@v2.0.

    This is a credential that's designed to be held but not shared directly. It is stored in the holder's wallet and used to derive a novel ZKP or W3C-compatible verifiable presentation just in time for each sharing of credential material.

    The encoded values of the credential MUST follow the encoding algorithm as described in Encoding Claims.

    This is the format emitted by libindy's indy_issuer_create_credential() function. It is JSON-based and might look like this:

    {\n    \"schema_id\": \"4RW6QK2HZhHxa2tg7t1jqt:2:bcgov-mines-act-permit.bcgov-mines-permitting:0.2.0\",\n    \"cred_def_id\": \"4RW6QK2HZhHxa2tg7t1jqt:3:CL:58160:default\",\n    \"rev_reg_id\", \"EyN78DDGHyok8qw6W96UBY:4:EyN78DDGHyok8qw6W96UBY:3:CL:56389:CardossierOrgPerson:CL_ACCUM:1-1000\",\n    \"values\": {\n        \"attr1\" : {\"raw\": \"value1\", \"encoded\": \"value1_as_int\" },\n        \"attr2\" : {\"raw\": \"value2\", \"encoded\": \"value2_as_int\" }\n    },\n    // Fields below can depend on Cred Def type\n    \"signature\": <signature>,\n    \"signature_correctness_proof\": <signature_correctness_proof>\n    \"rev_reg\": <revocation registry state>\n    \"witness\": <witness>\n}\n

    An exhaustive description of the format is out of scope here; it is more completely documented in white papers, source code, and other Indy materials.

    "},{"location":"aip2/0592-indy-attachments/#proof-request-format","title":"proof request format","text":"

This format is used to formally request a verifiable presentation (proof) derived from an Indy-style ZKP-oriented credential. It can also be used by a holder to propose a presentation.

    The identifier for this format is hlindy/proof-req@v2.0. It is a base64-encoded version of the data returned from indy_prover_search_credentials_for_proof_req().

    Here is a sample proof request that embodies the following: \"Using a government-issued ID, disclose the credential holder\u2019s name and height, hide the credential holder\u2019s sex, get them to self-attest their phone number, and prove that their age is at least 18\":

    {\n    \"nonce\": \u201c2934823091873049823740198370q23984710239847\u201d, \n    \"name\":\"proof_req_1\",\n    \"version\":\"0.1\",\n    \"requested_attributes\":{\n        \"attr1_referent\": {\"name\":\"sex\"},\n        \"attr2_referent\": {\"name\":\"phone\"},\n        \"attr3_referent\": {\"names\": [\"name\", \"height\"], \"restrictions\": <restrictions specifying government-issued ID>}\n    },\n    \"requested_predicates\":{\n        \"predicate1_referent\":{\"name\":\"age\",\"p_type\":\">=\",\"p_value\":18}\n    }\n}\n
    "},{"location":"aip2/0592-indy-attachments/#proof-format","title":"proof format","text":"

    This is the format of an Indy-style ZKP. It plays the same role as a W3C-style verifiable presentation (VP) and can be mapped to one.

    The raw values encoded in the presentation SHOULD be verified against the encoded values using the encoding algorithm as described below in Encoding Claims.

The identifier for this format is hlindy/proof@v2.0. It is a version of the (JSON-based) data emitted by libindy's indy_prover_create_proof() function. A proof that responds to the previous proof request sample looks like this:

    {\n  \"proof\":{\n    \"proofs\":[\n      {\n        \"primary_proof\":{\n          \"eq_proof\":{\n            \"revealed_attrs\":{\n              \"height\":\"175\",\n              \"name\":\"1139481716457488690172217916278103335\"\n            },\n            \"a_prime\":\"5817705...096889\",\n            \"e\":\"1270938...756380\",\n            \"v\":\"1138...39984052\",\n            \"m\":{\n              \"master_secret\":\"375275...0939395\",\n              \"sex\":\"3511483...897083518\",\n              \"age\":\"13430...63372249\"\n            },\n            \"m2\":\"1444497...2278453\"\n          },\n          \"ge_proofs\":[\n            {\n              \"u\":{\n                \"1\":\"152500...3999140\",\n                \"2\":\"147748...2005753\",\n                \"0\":\"8806...77968\",\n                \"3\":\"10403...8538260\"\n              },\n              \"r\":{\n                \"2\":\"15706...781609\",\n                \"3\":\"343...4378642\",\n                \"0\":\"59003...702140\",\n                \"DELTA\":\"9607...28201020\",\n                \"1\":\"180097...96766\"\n              },\n              \"mj\":\"134300...249\",\n              \"alpha\":\"827896...52261\",\n              \"t\":{\n                \"2\":\"7132...47794\",\n                \"3\":\"38051...27372\",\n                \"DELTA\":\"68025...508719\",\n                \"1\":\"32924...41082\",\n                \"0\":\"74906...07857\"\n              },\n              \"predicate\":{\n                \"attr_name\":\"age\",\n                \"p_type\":\"GE\",\n                \"value\":18\n              }\n            }\n          ]\n        },\n        \"non_revoc_proof\":null\n      }\n    ],\n    \"aggregated_proof\":{\n      \"c_hash\":\"108743...92564\",\n      \"c_list\":[ 6 arrays of 257 numbers between 0 and 255]\n    }\n  },\n  \"requested_proof\":{\n    \"revealed_attrs\":{\n      \"attr1_referent\":{\n        \"sub_proof_index\":0,\n        
\"raw\":\"Alex\",\n        \"encoded\":\"1139481716457488690172217916278103335\"\n      }\n    },\n    \"revealed_attr_groups\":{\n      \"attr4_referent\":{\n        \"sub_proof_index\":0,\n        \"values\":{\n          \"name\":{\n            \"raw\":\"Alex\",\n            \"encoded\":\"1139481716457488690172217916278103335\"\n          },\n          \"height\":{\n            \"raw\":\"175\",\n            \"encoded\":\"175\"\n          }\n        }\n      }\n    },\n    \"self_attested_attrs\":{\n      \"attr3_referent\":\"8-800-300\"\n    },\n    \"unrevealed_attrs\":{\n      \"attr2_referent\":{\n        \"sub_proof_index\":0\n      }\n    },\n    \"predicates\":{\n      \"predicate1_referent\":{\n        \"sub_proof_index\":0\n      }\n    }\n  },\n  \"identifiers\":[\n    {\n      \"schema_id\":\"NcYxiDXkpYi6ov5FcYDi1e:2:gvt:1.0\",\n      \"cred_def_id\":\"NcYxi...cYDi1e:2:gvt:1.0:TAG_1\",\n      \"rev_reg_id\":null,\n      \"timestamp\":null\n    }\n  ]\n}\n
    "},{"location":"aip2/0592-indy-attachments/#unrevealed-attributes","title":"Unrevealed Attributes","text":"

    AnonCreds supports a holder responding to a proof request with some of the requested claims included in an unrevealed_attrs array, as seen in the example above, with attr2_referent. Assuming the rest of the proof is valid, AnonCreds will indicate that a proof with unrevealed attributes has been successfully verified. It is the responsibility of the verifier to determine if the purpose of the verification has been met if some of the attributes are not revealed.

    There are at least a few valid use cases for this approach:

    "},{"location":"aip2/0592-indy-attachments/#encoding-claims","title":"Encoding Claims","text":"

    Claims in AnonCreds-based verifiable credentials are put into the credential in two forms, raw and encoded. raw is the actual data value, and encoded is the (possibly derived) integer value that is used in presentations. At this time, AnonCreds does not take a position on the method used for encoding the raw value.

    AnonCreds issuers and verifiers must agree on the encoding method so that the verifier can check that the raw value returned in a presentation corresponds to the proven encoded value. The following is the encoding algorithm that MUST be used by Issuers when creating credentials and SHOULD be verified by Verifiers receiving presentations:
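    The algorithm is short enough to sketch directly. The following condensed version (function name illustrative) assumes the convention commonly used by Indy/AnonCreds implementations: 32-bit integers pass through unchanged, and everything else is hashed with SHA-256 and rendered as a big-endian integer string:

```python
import hashlib

def encode_claim(raw):
    """Encode a raw claim value per the common Indy/AnonCreds convention:
    32-bit integers (or strings of them) pass through unchanged; any other
    value is stringified, UTF-8 encoded, hashed with SHA-256, and the digest
    rendered as a big-endian decimal integer string."""
    try:
        i = int(raw)
        # pass through only if it round-trips and fits in a signed int32
        if -(2**31) <= i < 2**31 and str(raw).strip() == str(i):
            return str(i)
    except (ValueError, TypeError):
        pass
    digest = hashlib.sha256(str(raw).encode("utf-8")).digest()
    return str(int.from_bytes(digest, "big"))
```

    The key property for verifiers: given the raw value from a presentation, re-running this function must reproduce the proven encoded value exactly.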

    An example implementation in Python can be found here.

    A gist of test value pairs can be found here.

    "},{"location":"aip2/0592-indy-attachments/#notes-on-encoding-claims","title":"Notes on Encoding Claims","text":""},{"location":"aip2/0592-indy-attachments/#implementations","title":"Implementations","text":"

    The following lists the implementations (if any) of this RFC. Please do a pull request to add your implementation. If the implementation is open source, include a link to the repo or to the implementation within the repo. Please be consistent in the \"Name\" field so that a mechanical processing of the RFCs can generate a list of all RFCs supported by an Aries implementation.

    Name / Link Implementation Notes"},{"location":"aip2/0593-json-ld-cred-attach/","title":"Aries RFC 0593: JSON-LD Credential Attachment format for requesting and issuing credentials","text":""},{"location":"aip2/0593-json-ld-cred-attach/#summary","title":"Summary","text":"

    This RFC registers an attachment format for use in the issue-credential V2 protocol based on JSON-LD credentials with Linked Data Proofs from the VC Data Model.

    It defines a minimal set of parameters needed to create a common understanding of the verifiable credential to issue. It is based on version 1.0 of the Verifiable Credentials Data Model which is a W3C recommendation since 19 November 2019.

    "},{"location":"aip2/0593-json-ld-cred-attach/#motivation","title":"Motivation","text":"

    The Issue Credential protocol needs an attachment format to be able to exchange JSON-LD credentials with Linked Data Proofs. It is desirable to make use of specifications developed in an open standards body, such as the Credential Manifest for which the attachment format is described in RFC 0511: Credential-Manifest Attachment format. However, the Credential Manifest is not finished and ready yet, and therefore there is a need to bridge the gap between standards.

    "},{"location":"aip2/0593-json-ld-cred-attach/#tutorial","title":"Tutorial","text":"

    Complete examples of messages are provided in the reference section.

    "},{"location":"aip2/0593-json-ld-cred-attach/#reference","title":"Reference","text":""},{"location":"aip2/0593-json-ld-cred-attach/#ld-proof-vc-detail-attachment-format","title":"ld-proof-vc-detail attachment format","text":"

    Format identifier: aries/ld-proof-vc-detail@v1.0

    This format is used to formally propose, offer, or request a credential. The credential property should contain the credential as it is going to be issued, without the proof and credentialStatus properties. Options for these properties are specified in the options object.

    The JSON structure might look like this:

    {\n  \"credential\": {\n    \"@context\": [\n      \"https://www.w3.org/2018/credentials/v1\",\n      \"https://www.w3.org/2018/credentials/examples/v1\"\n    ],\n    \"id\": \"urn:uuid:3978344f-8596-4c3a-a978-8fcaba3903c5\",\n    \"type\": [\"VerifiableCredential\", \"UniversityDegreeCredential\"],\n    \"issuer\": \"did:key:z6MkodKV3mnjQQMB9jhMZtKD9Sm75ajiYq51JDLuRSPZTXrr\",\n    \"issuanceDate\": \"2020-01-01T19:23:24Z\",\n    \"expirationDate\": \"2021-01-01T19:23:24Z\",\n    \"credentialSubject\": {\n      \"id\": \"did:key:z6MkpTHR8VNsBxYAAWHut2Geadd9jSwuBV8xRoAnwWsdvktH\",\n      \"degree\": {\n        \"type\": \"BachelorDegree\",\n        \"name\": \"Bachelor of Science and Arts\"\n      }\n    }\n  },\n  \"options\": {\n    \"proofPurpose\": \"assertionMethod\",\n    \"created\": \"2020-04-02T18:48:36Z\",\n    \"domain\": \"example.com\",\n    \"challenge\": \"9450a9c1-4db5-4ab9-bc0c-b7a9b2edac38\",\n    \"credentialStatus\": {\n      \"type\": \"CredentialStatusList2017\"\n    },\n    \"proofType\": \"Ed25519Signature2018\"\n  }\n}\n

    A complete request credential message from the Issue Credential protocol 2.0 might look like this:

    {\n  \"@id\": \"7293daf0-ed47-4295-8cc4-5beb513e500f\",\n  \"@type\": \"https://didcomm.org/issue-credential/%VER/request-credential\",\n  \"comment\": \"<some comment>\",\n  \"formats\": [\n    {\n      \"attach_id\": \"13a3f100-38ce-4e96-96b4-ea8f30250df9\",\n      \"format\": \"aries/ld-proof-vc-detail@v1.0\"\n    }\n  ],\n  \"requests~attach\": [\n    {\n      \"@id\": \"13a3f100-38ce-4e96-96b4-ea8f30250df9\",\n      \"mime-type\": \"application/json\",\n      \"data\": {\n        \"base64\": \"ewogICJjcmVkZW50aWFsIjogewogICAgIkBjb250...(clipped)...IkVkMjU1MTlTaWduYXR1cmUyMDE4IgogIH0KfQ==\"\n      }\n    }\n  ]\n}\n

    The format is closely related to the Verifiable Credentials HTTP API, but diverges in some places. The main differences are:

    "},{"location":"aip2/0593-json-ld-cred-attach/#ld-proof-vc-attachment-format","title":"ld-proof-vc attachment format","text":"

    Format identifier: aries/ld-proof-vc@v1.0

    This format is used to transmit a verifiable credential with a linked data proof. The content of the attachment is a standard JSON-LD verifiable credential object with a linked data proof, as defined by the Verifiable Credentials Data Model and the Linked Data Proofs specification.

    The JSON structure might look like this:

    {\n  \"@context\": [\n    \"https://www.w3.org/2018/credentials/v1\",\n    \"https://www.w3.org/2018/credentials/examples/v1\"\n  ],\n  \"id\": \"http://example.gov/credentials/3732\",\n  \"type\": [\"VerifiableCredential\", \"UniversityDegreeCredential\"],\n  \"issuer\": {\n    \"id\": \"did:web:vc.transmute.world\"\n  },\n  \"issuanceDate\": \"2020-03-10T04:24:12.164Z\",\n  \"credentialSubject\": {\n    \"id\": \"did:example:ebfeb1f712ebc6f1c276e12ec21\",\n    \"degree\": {\n      \"type\": \"BachelorDegree\",\n      \"name\": \"Bachelor of Science and Arts\"\n    }\n  },\n  \"proof\": {\n    \"type\": \"JsonWebSignature2020\",\n    \"created\": \"2020-03-21T17:51:48Z\",\n    \"verificationMethod\": \"did:web:vc.transmute.world#_Qq0UL2Fq651Q0Fjd6TvnYE-faHiOpRlPVQcY_-tA4A\",\n    \"proofPurpose\": \"assertionMethod\",\n    \"jws\": \"eyJiNjQiOmZhbHNlLCJjcml0IjpbImI2NCJdLCJhbGciOiJFZERTQSJ9..OPxskX37SK0FhmYygDk-S4csY_gNhCUgSOAaXFXDTZx86CmI5nU9xkqtLWg-f4cqkigKDdMVdtIqWAvaYx2JBA\"\n  }\n}\n

    A complete issue-credential message from the Issue Credential protocol 2.0 might look like this:

    {\n  \"@id\": \"284d3996-ba85-45d9-964b-9fd5805517b6\",\n  \"@type\": \"https://didcomm.org/issue-credential/%VER/issue-credential\",\n  \"comment\": \"<some comment>\",\n  \"formats\": [\n    {\n      \"attach_id\": \"5b38af88-d36f-4f77-bb7a-2f04ab806eb8\",\n      \"format\": \"aries/ld-proof-vc@v1.0\"\n    }\n  ],\n  \"credentials~attach\": [\n    {\n      \"@id\": \"5b38af88-d36f-4f77-bb7a-2f04ab806eb8\",\n      \"mime-type\": \"application/ld+json\",\n      \"data\": {\n        \"base64\": \"ewogICAgICAgICAgIkBjb250ZXogWwogICAgICAg...(clipped)...RNVmR0SXFXZhWXgySkJBIgAgfQogICAgICAgIH0=\"\n      }\n    }\n  ]\n}\n
    "},{"location":"aip2/0593-json-ld-cred-attach/#supported-proof-types","title":"Supported Proof Types","text":"

    Following are the Linked Data proof types on Verifiable Credentials that MUST be supported for compliance with this RFC. All suites listed in the following table MUST be registered in the Linked Data Cryptographic Suite Registry:

    | Suite | Spec | Enables Selective disclosure? | Enables Zero-knowledge proofs? | Optional |
    | --- | --- | --- | --- | --- |
    | Ed25519Signature2018 | Link | No | No | No |
    | BbsBlsSignature2020** | Link | Yes | No | No |
    | JsonWebSignature2020*** | Link | No | No | Yes |

    ** Note: see RFC0646 for details on how BBS+ signatures are to be produced and consumed by Aries agents.

    *** Note: P-256 and P-384 curves are supported.

    "},{"location":"aip2/0593-json-ld-cred-attach/#drawbacks","title":"Drawbacks","text":"

    N/A

    "},{"location":"aip2/0593-json-ld-cred-attach/#rationale-and-alternatives","title":"Rationale and alternatives","text":""},{"location":"aip2/0593-json-ld-cred-attach/#prior-art","title":"Prior art","text":"

    N/A

    "},{"location":"aip2/0593-json-ld-cred-attach/#unresolved-questions","title":"Unresolved questions","text":"

    N/A

    "},{"location":"aip2/0593-json-ld-cred-attach/#implementations","title":"Implementations","text":"

    The following lists the implementations (if any) of this RFC. Please do a pull request to add your implementation. If the implementation is open source, include a link to the repo or to the implementation within the repo. Please be consistent in the \"Name\" field so that a mechanical processing of the RFCs can generate a list of all RFCs supported by an Aries implementation.

    Name / Link Implementation Notes"},{"location":"concepts/0003-protocols/","title":"Aries RFC 0003: Protocols","text":""},{"location":"concepts/0003-protocols/#summary","title":"Summary","text":"

    Defines peer-to-peer application-level protocols in the context of interactions among agent-like things, and shows how they should be designed and documented.

    "},{"location":"concepts/0003-protocols/#table-of-contents","title":"Table of Contents","text":""},{"location":"concepts/0003-protocols/#motivation","title":"Motivation","text":"

    APIs in the style of Swagger are familiar to nearly all developers, and it's a common assumption that we should use them to solve the problems at hand in the decentralized identity space. However, to truly decentralize, we must think about interactions at a higher level of generalization. Protocols can model all APIs, but not the other way around. This matters. We need to explain why.

    We also need to show how a protocol is defined, so the analog to defining a Swagger API is demystified.

    "},{"location":"concepts/0003-protocols/#tutorial","title":"Tutorial","text":""},{"location":"concepts/0003-protocols/#what-is-a-protocol","title":"What is a Protocol?","text":"

    A protocol is a recipe for a stateful interaction. Protocols are all around us, and are so ordinary that we take them for granted. Each of the following interactions is stateful, and has conventions that constitute a sort of \"recipe\":

    In the context of decentralized identity, protocols manifest at many different levels of the stack: at the lowest levels of networking, in cryptographic algorithms like Diffie Hellman, in the management of DIDs, in the conventions of DIDComm, and in higher-level interactions that solve problems for people with only minimal interest in the technology they're using. However, this RFC focuses on the last of these layers, where use cases and personas are transformed into features with obvious social value like:

    When \"protocol\" is used in an Aries context without any qualifying adjective, it is referencing a recipe for a high-level interaction like these. Lower-level protocols are usually described more specifically and possibly with other verbiage: \"cryptographic algorithms\", \"DID management procedures\", \"DIDComm conventions\", \"transports\", and so forth. This helps us focus \"protocol\" on the place where application developers that consume Aries do most of the work that creates value.

    "},{"location":"concepts/0003-protocols/#relationship-to-apis","title":"Relationship to APIs","text":"

    The familiar world of web APIs is a world of protocols, but it comes with constraints antithetical to decentralized identity:

    Protocols impose none of these constraints. Web APIs can easily be modeled as protocols where the transport is HTTP and the payload is a message, and the Aries community actively does this. We are not opposed to APIs. We just want to describe and standardize the higher level abstraction so we don't have a web solution and a Bluetooth solution that diverge for no good reason.

    "},{"location":"concepts/0003-protocols/#decentralized","title":"Decentralized","text":"

    As used in the agent/DIDComm world, protocols are decentralized. This means there is not an overseer for the protocol, guaranteeing information flow, enforcing behaviors, and ensuring a coherent view. It is a subtle but important divergence from API-centric approaches, where a server holds state against which all other parties (clients) operate. Instead, all parties are peers, and they interact by mutual consent and with a (hopefully) shared understanding of the rules and goals. Protocols are like a dance\u2014not one that's choreographed or directed, but one where the parties make dynamic decisions and react to them.

    "},{"location":"concepts/0003-protocols/#types-of-protocols","title":"Types of Protocols","text":"

    The simplest protocol style is notification. This style involves two parties, but it is one-way: the notifier emits a message, and the protocol ends when the notified receives it. The basic message protocol uses this style.

    Slightly more complex is the request-response protocol style. This style involves two parties, with the requester making the first move, and the responder completing the interaction. The Discover Features Protocol uses this style. Note that with protocols as Aries models them (and unlike an HTTP request), the request-response messages are asynchronous.

    However, more complex protocols exist. The Introduce Protocol involves three parties, not two. The issue credential protocol includes up to six message types (including ack and problem_report), two of which (proposal and offer) can be used to interactively negotiate details of the elements of the subsequent messages in the protocol.

    See this subsection for definitions of the terms \"role\", \"participant\", and \"party\".

    "},{"location":"concepts/0003-protocols/#agent-design","title":"Agent Design","text":"

    Protocols are the key unit of interoperable extensibility in agents and agent-like things. To add a new interoperable feature to an agent, give it the ability to handle a new protocol.

    When agents receive messages, they map the messages to a protocol handler and possibly to an interaction state that was previously persisted. This is the analog to routes, route handlers, and sessions in web APIs, and could actually be implemented as such if the transport for the protocol is HTTP. The protocol handler is code that knows the rules of a particular protocol; the interaction state tracks progress through an interaction. For more information, see the agents explainer\u2014RFC 0004 and the DIDComm explainer\u2014RFC 0005.

    "},{"location":"concepts/0003-protocols/#composable","title":"Composable","text":"

    Protocols are composable--meaning that you can build complex ones from simple ones. The protocol for asking someone to repeat their last sentence can be part of the protocol for ordering food at a restaurant. It's common to ask a potential driver's license holder to prove their street address before issuing the license. In protocol terms, this is nicely modeled as the present proof protocol being invoked in the middle of an issue credential protocol.

    When we run one protocol inside another, we call the inner protocol a subprotocol, and the outer protocol a superprotocol. A given protocol may be a subprotocol in some contexts, and a standalone protocol in others. In some contexts, a protocol may be a subprotocol from one perspective, and a superprotocol from another (as when protocols are nested at least 3 deep).

    Commonly, protocols wait for subprotocols to complete, and then they continue. A good example of this is mentioned above\u2014starting an issue credential flow, but requiring the potential issuer and/or the potential holder to prove something to one another before completing the process.

    In other cases, a protocol B is not \"contained\" inside protocol A. Rather, A triggers B, then continues in parallel, without waiting for B to complete. This coprotocol relationship is analogous to relationship between coroutines in computer science. In the Introduce Protocol, the final step is to begin a connection protocol between the two introducees-- but the introduction coprotocol completes when the connect coprotocol starts, not when it completes.

    "},{"location":"concepts/0003-protocols/#message-types","title":"Message Types","text":"

    A protocol includes a number of message types that enable the execution of an instance of a protocol. Collectively, the message types of a protocol become the skeleton of its interface. Most of the message types are defined with the protocol, but several key message types, notably acks and problem reports, are defined in separate RFCs and adopted into a protocol. This ensures that the structure of such messages is standardized while still being used in the context of the protocol that adopts them.

    "},{"location":"concepts/0003-protocols/#handling-unrecognized-items-in-messages","title":"Handling Unrecognized Items in Messages","text":"

    In the semver section of this document there is discussion of the handling of mismatches in minor versions supported and received. Notably, a recipient that supports a given minor version of a protocol less than that of a received protocol message should ignore any unrecognized fields in the message. Such handling of unrecognized data items applies more generally than just minor version mismatches. A recipient of a message from a supported major version of a protocol should ignore any unrecognized items in a received message, even if the supported and received minor versions are the same. When items from the message are ignored, the recipient may want to send a warning problem-report message with code fields-ignored.

    "},{"location":"concepts/0003-protocols/#ingredients","title":"Ingredients","text":"

    A protocol has the following ingredients:

    "},{"location":"concepts/0003-protocols/#how-to-define-a-protocol","title":"How to Define a Protocol","text":"

    To define a protocol, write an RFC. Specific instructions for protocol RFCs, and a discussion about the theory behind detailed protocol concepts, are given in the instructions for protocol RFCs and in the protocol RFC template.

    The tictactoe protocol is attached to this RFC as an example.

    "},{"location":"concepts/0003-protocols/#security-considerations","title":"Security Considerations","text":""},{"location":"concepts/0003-protocols/#replay-attacks","title":"Replay Attacks","text":"

    It should be noted that when defining a protocol that has domain-specific requirements around preventing replay attacks, an @id property SHOULD be required. Since an @id field is most commonly set to a UUID, it provides randomness comparable to that of a nonce in preventing replay attacks. However, this means that care is needed when processing the @id field to make sure its value has not been used before. In some cases, nonces must also be unpredictable; in such cases, greater scrutiny should be applied to how the @id field is used in the domain-specific protocol. Where the @id field is not adequate for preventing replay attacks, it is recommended that the domain-specific protocol specification require an additional nonce field.
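    A minimal sketch of @id replay tracking (in-memory only for illustration; a production agent would persist the seen ids and eventually expire them):

```python
class ReplayGuard:
    """Reject any message whose @id has been seen before.
    In-memory sketch; real deployments need persistence and expiry."""

    def __init__(self):
        self._seen = set()

    def accept(self, msg_id):
        """Return True the first time an @id is seen, False on replay."""
        if msg_id in self._seen:
            return False
        self._seen.add(msg_id)
        return True
```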

    "},{"location":"concepts/0003-protocols/#reference","title":"Reference","text":""},{"location":"concepts/0003-protocols/#message-type-and-protocol-identifier-uris","title":"Message Type and Protocol Identifier URIs","text":"

    Message types and protocols are identified with URIs that match certain conventions.

    "},{"location":"concepts/0003-protocols/#mturi","title":"MTURI","text":"

    A message type URI (MTURI) identifies message types unambiguously. Standardizing its format is important because it is parsed by agents that will map messages to handlers--basically, code will look at this string and say, \"Do I have something that can handle this message type inside protocol X version Y?\"

    When this analysis happens, strings should be compared for byte-wise equality in all segments except version. This means that case, unicode normalization, and punctuation differences all matter. It is thus best practice to avoid protocol and message names that differ only in subtle, easy-to-mistake ways.

    Comparison of the version segment of an MTURI or PIURI should follow semver rules and is discussed in the semver section of this document.

    The URI MUST be composed as follows:

    message-type-uri  = doc-uri delim protocol-name\n    \"/\" protocol-version \"/\" message-type-name\ndelim             = \"?\" / \"/\" / \"&\" / \":\" / \";\" / \"=\"\nprotocol-name     = identifier\nprotocol-version  = semver\nmessage-type-name = identifier\nidentifier        = alpha *(*(alphanum / \"_\" / \"-\" / \".\") alphanum)\n

    It can be loosely matched and parsed with the following regex:

        (.*?)([a-z0-9._-]+)/(\\d[^/]*)/([a-z0-9._-]+)$\n

    A match will have captures groups of (1) = doc-uri, (2) = protocol-name, (3) = protocol-version, and (4) = message-type-name.
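    For example, applying the loose matcher to a message type URI from the tictactoe protocol (a sketch; the URI is illustrative):

```python
import re

# loose matcher from the MTURI section above
MTURI_RE = re.compile(r"(.*?)([a-z0-9._-]+)/(\d[^/]*)/([a-z0-9._-]+)$")

m = MTURI_RE.match("https://didcomm.org/tictactoe/1.0/move")
doc_uri, protocol_name, protocol_version, message_type_name = m.groups()
```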

    The goals of this URI are, in descending priority:

    The doc-uri portion is any URI that exposes documentation about protocols. A developer should be able to browse to that URI and use human intelligence to look up the named and versioned protocol. Optionally and preferably, the full URI may produce a page of documentation about the specific message type, with no human mediation involved.

    "},{"location":"concepts/0003-protocols/#piuri","title":"PIURI","text":"

    A shorter URI that follows the same conventions but lacks the message-type-name portion is called a protocol identifier URI (PIURI).

    protocol-identifier-uri  = doc-uri delim protocol-name\n    \"/\" semver\n

    Its loose matcher regex is:

        (.*?)([a-z0-9._-]+)/(\\d[^/]*)/?$\n

    The following are examples of valid MTURIs and PIURIs:

    "},{"location":"concepts/0003-protocols/#semver-rules-for-protocols","title":"Semver Rules for Protocols","text":"

    Semver rules apply to protocols, with the version of a protocol expressed in the semver portion of its identifying URI. The \"ingredients\" of a protocol combine to form a public API in the semver sense. Core Aries protocols specify only major and minor elements in a version; the patch component is not used. Non-core protocols may choose to use the patch element.

    The major and minor versions of protocols match semver semantics:

    Within a given major version of a protocol, an agent should:

    This leads to the following received message handling rules:

    Note: The deprecation of the \"warning\" problem-reports in cases of minor version mismatches is because the recipient of the response can detect the mismatch by looking at the PIURI, making the \"warning\" unnecessary, and because the problem-report message may be received after (and definitely at a different time than) the response message, and so the warning is of very little value to the recipient. Recipients should still be aware that minor version mismatch warning problem-report messages may be received and handle them appropriately, likely by quietly ignoring them.

    As documented in the semver documentation, these requirements are not applied when major version 0 is used. In that case, minor version increments are considered breaking.

    Agents may support multiple major versions and select which major version to use when initiating an instance of the protocol.

    An agent should reject messages from unsupported protocols or unsupported protocol major versions with a problem-report message with code version-not-supported. Agents that receive such a problem-report message may use the discover features protocol to resolve the mismatch.

    "},{"location":"concepts/0003-protocols/#semver-examples","title":"Semver Examples","text":""},{"location":"concepts/0003-protocols/#initiator","title":"Initiator","text":"

    Unless Alice's agent (the initiator of a protocol) knows from prior history that it should do something different, it should begin a protocol using the highest version number that it supports. For example, if A.1 supports versions 2.0 through 2.2 of protocol X, it should use 2.2 as the version in the message type of its first message.
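    A sketch of that selection rule (helper name illustrative). Comparing versions numerically rather than as strings matters once minor versions reach double digits:

```python
def choose_initial_version(supported):
    """Initiator rule: begin a protocol with the highest supported version.
    Versions are 'major.minor' strings; compare numerically, not lexically."""
    return max(supported, key=lambda v: tuple(int(part) for part in v.split(".")))
```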

    "},{"location":"concepts/0003-protocols/#recipient-rules","title":"Recipient Rules","text":"

    Agents for Bob (the recipient) should reject messages from protocols with major versions different from those they support. For major version 0, they should also reject protocols with minor versions they don't support, since semver stipulates that features are not stable before 1.0. For example, if B.1 supports only versions 2.0 and 2.1 of protocol X, it should reject any messages from versions 3, 1, or 0. In most cases, rejecting a message means sending a problem-report that the message is unsupported. The code field in such messages should be version-not-supported. Agents that receive such a problem-report can then use the Discover Features Protocol to resolve version problems.

    Recipient agents should accept messages that differ from their own supported version of a protocol only in the patch, prerelease, and/or build fields, whether these differences make the message earlier or later than the version the recipient prefers. These messages will be robustly compatible.

    For major version >= 1, recipients should also accept messages that differ only in that the message's minor version is earlier than their own preference. In such a case, the recipient should degrade gracefully to use the earlier version of the protocol. If the earlier version lacks important features, the recipient may optionally choose to send, in addition to a response, a problem-report with code version-with-degraded-features.

    If a recipient supports protocol X version 1.0, it should tentatively accept messages with later minor versions (e.g., 1.2). Message types that differ only in minor version are guaranteed to be compatible for the feature set of the earlier version. That is, a 1.0-capable agent can support 1.0 features using a 1.2 message, though of course it will lose any features that 1.2 added. Thus, accepting such a message could have two possible outcomes:

    1. The message at version 1.2 might look and behave exactly like it did at version 1.0, in which case the message will process without any trouble.

    2. The message might contain some fields that are unrecognized and need to be ignored.

    In case 2, it is best practice for the recipient to send a problem-report that is a warning, not an error, announcing that some fields could not be processed (code = fields-ignored-due-to-version-mismatch). Such a message is in addition to any response that the protocol demands of the recipient.

    If the recipient of a protocol's initial message generates a response, the response should use the latest major.minor protocol version that both parties support and know about. Generally, all messages after the first use only that major.minor version.
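    These recipient rules can be summarized in a small dispatch function. This is a sketch; the return labels are illustrative, and the comments name the problem-report codes defined in the rules above:

```python
def handle_received_version(supported, received):
    """Classify a received protocol version against the recipient's preferred
    supported version. Both arguments are (major, minor) tuples."""
    s_major, s_minor = supported
    r_major, r_minor = received
    if r_major != s_major:
        return "reject"  # problem-report code: version-not-supported
    if s_major == 0 and r_minor != s_minor:
        return "reject"  # pre-1.0, minor version increments are breaking
    if r_minor < s_minor:
        # degrade gracefully; optional warning version-with-degraded-features
        return "degrade"
    if r_minor > s_minor:
        # process, ignoring unrecognized fields; optional warning
        # fields-ignored-due-to-version-mismatch
        return "accept-ignore-unknown"
    return "accept"
```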

    "},{"location":"concepts/0003-protocols/#state-details-and-state-machines","title":"State Details and State Machines","text":"

    While some protocols have only one sequence of states to manage, in most, the different roles perceive the interaction differently. The sequence of states for each role needs to be described with care in the RFC.

    "},{"location":"concepts/0003-protocols/#state-machines","title":"State Machines","text":"

    By convention, protocol state and sequence rules are described using the concept of state machines, and we encourage developers who implement protocols to build them that way.

    Among other benefits, this helps with error handling: when one agent sends a problem-report message to another, the message can make it crystal clear which state it has fallen back to as a result of the error.

    Many developers will have encountered a formal definition of state machines as they wrote parsers or worked on other highly demanding tasks, and may worry that state machines are heavy and intimidating. But as they are used in Aries protocols, state machines are straightforward and elegant. They cleanly encapsulate logic that would otherwise be a bunch of conditionals scattered throughout agent code. The tictactoe protocol example includes a complete state machine in less than 50 lines of Python code, with tests.
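    As a toy illustration of the convention, here is a state machine for the requester role of a request-response style protocol (the state and event names are invented for the example, not taken from any RFC):

```python
class RequesterStateMachine:
    """Sketch of a protocol state machine: a transition table maps
    (current state, event) to the next state; anything else is illegal."""

    TRANSITIONS = {
        ("start", "send_request"): "awaiting-response",
        ("awaiting-response", "receive_response"): "done",
        ("awaiting-response", "receive_problem_report"): "start",
    }

    def __init__(self):
        self.state = "start"

    def handle(self, event):
        key = (self.state, event)
        if key not in self.TRANSITIONS:
            raise ValueError(f"event {event!r} is illegal in state {self.state!r}")
        self.state = self.TRANSITIONS[key]
        return self.state
```

    The table form makes the sequence rules explicit, and an illegal (state, event) pair is exactly the situation where an agent would send a problem-report and fall back to a known state.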

    For an extended discussion of how state machines can be used, including in nested protocols, and with hooks that let custom processing happen at each point in a flow, see https://github.com/dhh1128/distributed-state-machine.

    "},{"location":"concepts/0003-protocols/#processing-points","title":"Processing Points","text":"

    A protocol definition describes key points in the flow where business logic can attach. Some of these processing points are obvious, because the protocol makes calls for decisions to be made. Others are implicit. Some examples include:

    "},{"location":"concepts/0003-protocols/#roles-participants-parties-and-controllers","title":"Roles, Participants, Parties, and Controllers","text":""},{"location":"concepts/0003-protocols/#roles","title":"Roles","text":"

    The roles in a protocol are the perspectives (responsibilities, privileges) that parties take in an interaction.

    This perspective is manifested in three general ways:

    Like parties, roles are normally known at the start of the protocol but this is not a requirement.

    In an auction protocol, there are only two roles\u2014auctioneer and bidder\u2014even though there may be many parties involved.

    "},{"location":"concepts/0003-protocols/#participants","title":"Participants","text":"

    The participants in a protocol are the agents that send and/or receive plaintext application-level messages that embody the protocol's interaction. Alice, Bob, and Carol may each have a cloud agent, a laptop, and a phone; if they engage in an introduction protocol using phones, then the agents on their phones are the participants. If the phones talk directly over Bluetooth, this is particularly clear--but even if the phones leverage push notifications and HTTP such that cloud agents help with routing, only the phone agents are participants, because only they maintain state for the interaction underway. (The cloud agents would be facilitators, and the laptops would be bystanders). When a protocol is complete, the participant agents know about the outcome; they may need to synchronize or replicate their state before other agents of the parties are aware.

    "},{"location":"concepts/0003-protocols/#parties","title":"Parties","text":"

    The parties to a protocol are the entities directly responsible for achieving the protocol's goals. When a protocol is high-level, parties are typically people or organizations; as protocols become lower-level, parties may be specific agents tasked with detail work through delegation.

    Imagine a situation where Alice wants a vacation. She engages with a travel agent named Bob. Together, they begin an \"arrange a vacation\" protocol. Alice is responsible for expressing her parameters and proving her willingness to pay; Bob is responsible for running a bunch of subprotocols to work out the details. Alice and Bob--not software agents they use--are parties to this high-level protocol, since they share responsibility for its goals.

    As soon as Alice has provided enough direction and hangs up the phone, Bob begins a sub-protocol with a hotel to book a room for Alice. This sub-protocol has related but different goals--it is about booking a particular hotel room, not about the vacation as a whole. We can see the difference when we consider that Bob could abandon the booking and choose a different hotel entirely, without affecting the overarching \"arrange a vacation\" protocol.

    With the change in goal, the parties have now changed, too. Bob and a hotel concierge are the ones responsible for making the \"book a hotel room\" protocol progress. Alice is an approver and indirect stakeholder, but she is not doing the work. (In RACI terms, Alice is an \"accountable\" or \"approving\" entity, but only Bob and the concierge are \"responsible\" parties.)

    Now, as part of the hotel reservation, Bob tells the concierge that the guest would like access to a waverunner to play in the ocean on day 2. The concierge engages in a sub-sub-protocol to reserve the waverunner. The goal of this sub-sub-protocol is to reserve the equipment, not to book a hotel or arrange a vacation. The parties to this sub-sub-protocol are the concierge and the person or automated system that manages waverunners.

    Often, parties are known at the start of a protocol; however, that is not a requirement. Some protocols might commence with some parties not yet known or assigned.

    For many protocols, there are only two parties, and they are in a pairwise relationship. Other protocols are more complex. Introductions involves three; an auction may involve many.

Normally, the parties that are involved in a protocol also participate in the interaction, but this is not always the case. Consider a gossip protocol: two parties may be talking about a third party. In this case, the third party would not even know that the protocol was happening and would definitely not participate.

    "},{"location":"concepts/0003-protocols/#controllers","title":"Controllers","text":"

    The controllers in a protocol are entities that make decisions. They may or may not be direct parties.

    Imagine a remote chess game between Bob and Carol, conducted with software agents. The chess protocol isn't technically about how to select a wise chess move; it's about communicating the moves so parties achieve the shared goal of running a game to completion. Yet choices about moves are clearly made as the protocol unfolds. These choices are made by controllers--Bob and Carol--while the agents responsible for the work of moving the game forward wait with the protocol suspended.

In this case, Bob and Carol could be analyzed as parties to the protocol, as well as controllers. But in other cases, the concepts are distinct. For example, in a protocol to issue credentials, the issuing institution might use an AI and/or business automation as a controller.

    "},{"location":"concepts/0003-protocols/#instructions-for-protocol-rfcs","title":"Instructions for Protocol RFCs","text":"

    A protocol RFC conforms to general RFC patterns, but includes some specific substructure.

    Please see the special protocol RFC template for details.

    "},{"location":"concepts/0003-protocols/#drawbacks","title":"Drawbacks","text":"

    This RFC creates some formalism around defining protocols. It doesn't go nearly as far as SOAP or CORBA/COM did, but it is slightly more demanding of a protocol author than the familiar world of RESTful Swagger/OpenAPI.

    The extra complexity is justified by the greater demands that agent-to-agent communications place on the protocol definition. See notes in Prior Art section for details.

    "},{"location":"concepts/0003-protocols/#rationale-and-alternatives","title":"Rationale and alternatives","text":"

    Some of the simplest DIDComm protocols could be specified in a Swagger/OpenAPI style. This would give some nice tooling. However, not all fit into that mold. It may be desirable to create conversion tools that allow Swagger interop.

    "},{"location":"concepts/0003-protocols/#prior-art","title":"Prior art","text":""},{"location":"concepts/0003-protocols/#bpmn","title":"BPMN","text":"

BPMN (Business Process Model and Notation) is a graphical language for modeling flows of all types (plus things less like our protocols as well). BPMN is a mature standard sponsored by OMG (Object Management Group). It has a nice tool ecosystem (such as this). It also has an XML file format, so the visual diagrams have a two-way transformation to and from formal written language. And it has a code generation mode, where BPMN can be used to drive executable behavior if diagrams are sufficiently detailed and sufficiently standard. (Since BPMN supports various extensions and is often used at various levels of formality, execution is not its most common application.)

    BPMN began with a focus on centralized processes (those driven by a business entity), with diagrams organized around the goal of the point-of-view entity and what they experience in the interaction. This is somewhat different from a DIDComm protocol where any given entity may experience the goal and the scope of interaction differently; the state machine for a home inspector in the \"buy a home\" protocol is quite different, and somewhat separable, from the state machine of the buyer, and that of the title insurance company.

    BPMN 2.0 introduced the notion of a choreography, which is much closer to the concept of an A2A protocol, and which has quite an elegant and intuitive visual representation. However, even a BPMN choreography doesn't have a way to discuss interactions with decorators, adoption of generic messages, and other A2A-specific concerns. Thus, we may lean on BPMN for some diagramming tasks, but it is not a substitute for the RFC definition procedure described here.

    "},{"location":"concepts/0003-protocols/#wsdl","title":"WSDL","text":"

WSDL (Web Services Description Language) is a web-centric evolution of earlier, RPC-style interface definition languages like IDL in all its varieties and CORBA. These technologies describe a called interface, but they don't describe the caller, and they lack a formalism for capturing state changes, especially by the caller. They are also out of favor in the programmer community at present, as being too heavy, too fragile, or poorly supported by current tools.

    "},{"location":"concepts/0003-protocols/#swagger-openapi","title":"Swagger / OpenAPI","text":"

    Swagger / OpenAPI overlaps with some of the concerns of protocol definition in agent-to-agent interactions. We like the tools and the convenience of the paradigm offered by OpenAPI, but where these two do not overlap, we have impedance.

    Agent-to-agent protocols must support more than 2 roles, or two roles that are peers, whereas RESTful web services assume just client and server--and only the server has a documented API.

Agent-to-agent protocols are fundamentally asynchronous, whereas RESTful web services mostly assume synchronous request-response.

    Agent-to-agent protocols have complex considerations for diffuse trust, whereas RESTful web services centralize trust in the web server.

    Agent-to-agent protocols need to support transports beyond HTTP, whereas RESTful web services do not.

    Agent-to-agent protocols are nestable, while RESTful web services don't provide any special support for that construct.

    "},{"location":"concepts/0003-protocols/#other","title":"Other","text":""},{"location":"concepts/0003-protocols/#unresolved-questions","title":"Unresolved questions","text":""},{"location":"concepts/0003-protocols/#implementations","title":"Implementations","text":"

    The following lists the implementations (if any) of this RFC. Please do a pull request to add your implementation. If the implementation is open source, include a link to the repo or to the implementation within the repo. Please be consistent in the \"Name\" field so that a mechanical processing of the RFCs can generate a list of all RFCs supported by an Aries implementation.

    Name / Link Implementation Notes Indy Cloud Agent - Python several protocols, circa Feb 2019 Aries Framework - .NET several protocols, circa Feb 2019 Streetcred.id several protocols, circa Feb 2019 Aries Cloud Agent - Python numerous protocols plus extension mechanism for pluggable protocols Aries Static Agent - Python 2 or 3 protocols Aries Framework - Go DID Exchange Connect.Me mature but proprietary protocols; community protocols in process Verity mature but proprietary protocols; community protocols in process Aries Protocol Test Suite 2 or 3 core protocols; active work to implement all that are ACCEPTED, since this tests conformance of other agents Pico Labs implemented protocols: connections, trust_ping, basicmessage, routing"},{"location":"concepts/0003-protocols/roles-participants-etc/","title":"Roles participants etc","text":""},{"location":"concepts/0003-protocols/roles-participants-etc/#roles-participants-parties-and-controllers","title":"Roles, Participants, Parties, and Controllers","text":""},{"location":"concepts/0003-protocols/roles-participants-etc/#roles","title":"Roles","text":"

The roles in a protocol are the perspectives (responsibilities, privileges) that parties take in an interaction.

    This perspective is manifested in three general ways:

Like parties, roles are normally known at the start of the protocol, but this is not a requirement.

    In an auction protocol, there are only two roles\u2014auctioneer and bidder\u2014even though there may be many parties involved.

    "},{"location":"concepts/0003-protocols/roles-participants-etc/#participants","title":"Participants","text":"

    The participants in a protocol are the agents that send and/or receive plaintext application-level messages that embody the protocol's interaction. Alice, Bob, and Carol may each have a cloud agent, a laptop, and a phone; if they engage in an introduction protocol using phones, then the agents on their phones are the participants. If the phones talk directly over Bluetooth, this is particularly clear--but even if the phones leverage push notifications and HTTP such that cloud agents help with routing, only the phone agents are participants, because only they maintain state for the interaction underway. (The cloud agents would be facilitators, and the laptops would be bystanders). When a protocol is complete, the participant agents know about the outcome; they may need to synchronize or replicate their state before other agents of the parties are aware.

    "},{"location":"concepts/0003-protocols/roles-participants-etc/#parties","title":"Parties","text":"

    The parties to a protocol are the entities directly responsible for achieving the protocol's goals. When a protocol is high-level, parties are typically people or organizations; as protocols become lower-level, parties may be specific agents tasked with detail work through delegation.

    Imagine a situation where Alice wants a vacation. She engages with a travel agent named Bob. Together, they begin an \"arrange a vacation\" protocol. Alice is responsible for expressing her parameters and proving her willingness to pay; Bob is responsible for running a bunch of subprotocols to work out the details. Alice and Bob--not software agents they use--are parties to this high-level protocol, since they share responsibility for its goals.

    As soon as Alice has provided enough direction and hangs up the phone, Bob begins a sub-protocol with a hotel to book a room for Alice. This sub-protocol has related but different goals--it is about booking a particular hotel room, not about the vacation as a whole. We can see the difference when we consider that Bob could abandon the booking and choose a different hotel entirely, without affecting the overarching \"arrange a vacation\" protocol.

    With the change in goal, the parties have now changed, too. Bob and a hotel concierge are the ones responsible for making the \"book a hotel room\" protocol progress. Alice is an approver and indirect stakeholder, but she is not doing the work. (In RACI terms, Alice is an \"accountable\" or \"approving\" entity, but only Bob and the concierge are \"responsible\" parties.)

    Now, as part of the hotel reservation, Bob tells the concierge that the guest would like access to a waverunner to play in the ocean on day 2. The concierge engages in a sub-sub-protocol to reserve the waverunner. The goal of this sub-sub-protocol is to reserve the equipment, not to book a hotel or arrange a vacation. The parties to this sub-sub-protocol are the concierge and the person or automated system that manages waverunners.

    Often, parties are known at the start of a protocol; however, that is not a requirement. Some protocols might commence with some parties not yet known or assigned.

    For many protocols, there are only two parties, and they are in a pairwise relationship. Other protocols are more complex. Introductions involves three; an auction may involve many.

Normally, the parties that are involved in a protocol also participate in the interaction, but this is not always the case. Consider a gossip protocol: two parties may be talking about a third party. In this case, the third party would not even know that the protocol was happening and would definitely not participate.

    "},{"location":"concepts/0003-protocols/roles-participants-etc/#controllers","title":"Controllers","text":"

    The controllers in a protocol are entities that make decisions. They may or may not be direct parties.

    Imagine a remote chess game between Bob and Carol, conducted with software agents. The chess protocol isn't technically about how to select a wise chess move; it's about communicating the moves so parties achieve the shared goal of running a game to completion. Yet choices about moves are clearly made as the protocol unfolds. These choices are made by controllers--Bob and Carol--while the agents responsible for the work of moving the game forward wait with the protocol suspended.

    In this case, Bob and Carol could be analyzed as parties to the protocol, as well as controllers. But in other cases, the concepts are distinct. For example, in a protocol to issue credentials, the issuing institution might use an AI and/or business automation as a controller.

    "},{"location":"concepts/0003-protocols/tictactoe/","title":"Tic Tac Toe Protocol 1.0","text":""},{"location":"concepts/0003-protocols/tictactoe/#summary","title":"Summary","text":"

    Describes a simple protocol, already familiar to most developers, as a way to demonstrate how all protocols should be documented.

    "},{"location":"concepts/0003-protocols/tictactoe/#motivation","title":"Motivation","text":"

    Playing tic-tac-toe is a good way to test whether agents are working properly, since it requires two parties to take turns and to communicate reliably about state. However, it is also pretty simple, and it has a low bar for trust (it's not dangerous to play tic-tac-toe with a malicious stranger). Thus, we expect agent tic-tac-toe to be a good way to test basic plumbing and to identify functional gaps. The game also provides a way of testing interactions with the human owners of agents, or of hooking up an agent AI.

    "},{"location":"concepts/0003-protocols/tictactoe/#tutorial","title":"Tutorial","text":"

    Tic-tac-toe is a simple game where players take turns placing Xs and Os in a 3x3 grid, attempting to capture 3 cells of the grid in a straight line.

    "},{"location":"concepts/0003-protocols/tictactoe/#name-and-version","title":"Name and Version","text":"

    This defines the tictactoe protocol, version 1.x, as identified by the following PIURI:

    did:sov:SLfEi9esrjzybysFxQZbfq;spec/tictactoe/1.0\n
    "},{"location":"concepts/0003-protocols/tictactoe/#key-concepts","title":"Key Concepts","text":"

A tic-tac-toe game is an interaction where 2 parties take turns to make up to 9 moves. It starts when either party proposes the game, and ends when one of the parties wins, or when all cells in the grid are occupied but nobody has won (a draw).

    Note: Optionally, a Tic-Tac-Toe game can be preceded by a Coin Flip Protocol to decide who goes first. This is not a high-value enhancement, but we add it for illustration purposes. If used, the choice-id field in the initial propose message of the Coin Flip should have the value did:sov:SLfEi9esrjzybysFxQZbfq;spec/tictactoe/1.0/who-goes-first, and the caller-wins and flipper-wins fields should contain the DIDs of the two players.

    Illegal moves and moving out of turn are errors that trigger a complaint from the other player. However, they do not scuttle the interaction. A game can also be abandoned in an unfinished state by either player, for any reason. Games can last any amount of time.

    About the Key Concepts section: Here we describe the flow at a very\nhigh level. We identify preconditions, ways the protocol can start\nand end, and what can go wrong. We also talk about timing\nconstraints and other assumptions.\n
    "},{"location":"concepts/0003-protocols/tictactoe/#roles","title":"Roles","text":"

    There are two parties in a tic-tac-toe game, but only one role, player. One player places 'X' for the duration of a game; the other places 'O'. There are no special requirements about who can be a player. The parties do not need to be trusted or even known to one another, either at the outset or as the game proceeds. No prior setup is required, other than an ability to communicate.

    About the Roles section: Here we name the roles in the protocol,\nsay who and how many can play each role, and describe constraints.\nWe also explore qualifications for roles.\n
    "},{"location":"concepts/0003-protocols/tictactoe/#states","title":"States","text":"

    The states of each player in the protocol evolve according to the following state machine:

    When a player is in the my-move state, possible valid events include send move (the normal case), send outcome (if the player decides to abandon the game), and receive outcome (if the other player decides to abandon). A receive move event could conceivably occur, too-- but it would be an error on the part of the other player, and would trigger a problem-report message as described above, leaving the state unchanged.

    In the their-move state, send move is an impossible event for a properly behaving player. All 3 of the other events could occur, causing a state transition.

    In the wrap-up state, the game is over, but communication with the outcome message has not yet occurred. The logical flow is send outcome, whereupon the player transitions to the done state.
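The valid events per state, as enumerated above, can be tabulated in a small sketch (the names are ours, and the empty `done` entry is our assumption that the terminal state accepts no further events):

```python
# Which events are valid for a properly behaving player in each state,
# per the narrative above. A "receive move" in my-move is the other
# player's error and leaves the state unchanged, so it is not listed.
VALID_EVENTS = {
    "my-move": {"send move", "send outcome", "receive outcome"},
    "their-move": {"receive move", "send outcome", "receive outcome"},
    "wrap-up": {"send outcome"},
    "done": set(),  # assumed terminal: no further events
}

def is_valid_event(state, event):
    """True if the event is legal for a well-behaved player in this state."""
    return event in VALID_EVENTS.get(state, set())
```

This is only a lookup table for legality, not a full transition function; the reference `state_machine.py` mentioned under Collateral is the authoritative implementation.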

    About the States section: Here we explain which states exist for each\nrole. We also enumerate the events that can occur, including messages,\nerrors, or events triggered by surrounding context, and what should\nhappen to state as a result. In this protocol, we only have one role,\nand thus only one state machine matrix. But in many protocols, each\nrole may have a different state machine.\n
    "},{"location":"concepts/0003-protocols/tictactoe/#messages","title":"Messages","text":"

    All messages in this protocol are part of the \"tictactoe 1.0\" message family uniquely identified by this DID reference: did:sov:SLfEi9esrjzybysFxQZbfq;spec/tictactoe/1.0

NOTE 1: All the messages defined in a protocol should follow DIDComm best practices as far as how they name fields and define their data types and semantics. NOTE 2 about the \"DID Reference\" URI that appears here: DIDs can be resolved to a DID doc that contains an endpoint, to which everything after a semicolon can be appended. Thus, if this DID is publicly registered and its DID doc gives an endpoint of http://example.com, this URI would mean that anyone can find a formal definition of the protocol at http://example.com/spec/tictactoe/1.0. It is also possible to use a traditional URI here, such as http://example.com/spec/tictactoe/1.0. If that sort of URI is used, it is best practice for it to reference immutable content, as with a link to a specific commit on GitHub: https://github.com/hyperledger/aries-rfcs/blob/ab7a04f/concepts/0003-protocols/tictactoe/README.md#messages"},{"location":"concepts/0003-protocols/tictactoe/#move-message","title":"move message","text":"

    The protocol begins when one party sends a move message to the other. It looks like this:

    @id is required here, as it establishes a message thread that will govern the rest of the game.

    me tells which mark (X or O) the sender is placing. It is required.

    moves is optional in the first message of the interaction. If missing or empty, the sender of the first message is inviting the recipient to make the first move. If it contains a move, the sender is moving first.

    Moves are strings like \"X:B2\" that match the regular expression (?i)[XO]:[A-C][1-3]. They identify a mark to be placed (\"X\" or \"O\") and a position in the 3x3 grid. The grid's columns and rows are numbered like familiar spreadsheets, with columns A, B, and C, and rows 1, 2, and 3.
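As a quick sketch (not part of the RFC itself), the quoted pattern can be checked with Python's standard `re` module:

```python
import re

# Pattern from the text: case-insensitive mark (X or O), colon,
# column A-C, row 1-3; anchors reject trailing garbage.
MOVE_RE = re.compile(r"(?i)^[XO]:[A-C][1-3]$")

def is_valid_move(move):
    """Return True if the string is a well-formed move like 'X:B2'."""
    return MOVE_RE.match(move) is not None
```

For example, `is_valid_move("X:B2")` and `is_valid_move("o:a1")` are both true (the `(?i)` flag makes the match case-insensitive), while `"X:D2"` is rejected because column D is off the grid.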

    comment is optional and probably not used much, but could be a way for players to razz one another or chat as they play. It follows the conventions of localized messages.

    Other decorators could be placed on tic-tac-toe messages, such as those to enable message timing to force players to make a move within a certain period of time.

    "},{"location":"concepts/0003-protocols/tictactoe/#subsequent-moves","title":"Subsequent Moves","text":"

Once the initial move message has been sent, game play continues by each player taking turns sending responses, which are also move messages. With each new message the moves array inside the message grows by one, ensuring that the players agree on the current accumulated state of the game. The me field is still required and must accurately reflect the role of the message sender; it thus alternates values between X and O.

    Subsequent messages in the game use the message threading mechanism where the @id of the first move becomes the ~thread.thid for the duration of the game.
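A minimal sketch of this threading rule follows; the `/move` message-type suffix and the helper names are our assumptions for illustration, not normative values:

```python
import uuid

# Assumed message type: the family PIURI plus a "/move" suffix.
MOVE_TYPE = "did:sov:SLfEi9esrjzybysFxQZbfq;spec/tictactoe/1.0/move"

def first_move(mark, move=None):
    """Opening message; its @id will serve as the thread id for the game."""
    msg = {"@type": MOVE_TYPE, "@id": str(uuid.uuid4()), "me": mark, "moves": []}
    if move:  # an empty moves array invites the recipient to go first
        msg["moves"].append(move)
    return msg

def next_move(prev, mark, move):
    """Reply in the same thread; the moves array grows by exactly one."""
    thid = prev.get("~thread", {}).get("thid", prev["@id"])
    return {
        "@type": MOVE_TYPE,
        "@id": str(uuid.uuid4()),
        "~thread": {"thid": thid},
        "me": mark,
        "moves": prev["moves"] + [move],
    }
```

Every reply carries `~thread.thid` equal to the `@id` of the very first move, so both players can correlate all messages of one game.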

    An evolving sequence of move messages might thus look like this, suppressing all fields except what's required:

    "},{"location":"concepts/0003-protocols/tictactoe/#messagemove-2","title":"Message/Move 2","text":"

    This is the first message in the thread that's sent by the player placing \"O\"; hence it has myindex = 0.

    "},{"location":"concepts/0003-protocols/tictactoe/#messagemove-3","title":"Message/Move 3","text":"

    This is the second message in the thread by the player placing \"X\"; hence it has myindex = 1.

    "},{"location":"concepts/0003-protocols/tictactoe/#messagemove-4","title":"Message/Move 4","text":"

    ...and so forth.

    Note that the order of the items in the moves array is NOT significant. The state of the game at any given point of time is fully captured by the moves, regardless of the order in which they were made.
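A small sketch of this order-independence (the function name is ours): rebuilding the board as a cell-to-mark mapping yields the same state for any permutation of the array:

```python
def board_state(moves):
    """Map each occupied cell (e.g. 'B2') to the mark placed there."""
    state = {}
    for m in moves:
        mark, cell = m.upper().split(":")
        state[cell] = mark
    return state
```

Because the result is a mapping keyed by cell, `board_state(["X:B2", "O:A1"])` and `board_state(["O:A1", "X:B2"])` are identical, matching the rule that the moves fully capture the game state regardless of ordering.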

    If a player makes an illegal move or another error occurs, the other player can complain using a problem-report message, with explain.@l10n.code set to one of the values defined in the Message Catalog section (see below).

    "},{"location":"concepts/0003-protocols/tictactoe/#outcome-message","title":"outcome message","text":"

    Game play ends when one player sends a move message that manages to mark 3 cells in a row. Thereupon, it is best practice, but not strictly required, for the other player to send an acknowledgement in the form of an outcome message.

    The moves and me fields from a move message can also, optionally, be included to further document state. The winner field is required. Its value may be \"X\", \"O\", or--in the case of a draw--\"none\".

    This outcome message can also be used to document an abandoned game, in which case winner is null, and comment can be used to explain why (e.g., timeout, loss of interest).
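As a sketch of the three documented cases for `winner` (the `/outcome` type suffix and helper name are our assumptions for illustration):

```python
# Assumed message type: the family PIURI plus an "/outcome" suffix.
OUTCOME_TYPE = "did:sov:SLfEi9esrjzybysFxQZbfq;spec/tictactoe/1.0/outcome"

def outcome(thid, winner, comment=None):
    """winner: 'X' or 'O' for a win, 'none' for a draw, None if abandoned."""
    msg = {"@type": OUTCOME_TYPE, "~thread": {"thid": thid}, "winner": winner}
    if comment is not None:  # e.g. why an abandoned game ended
        msg["comment"] = comment
    return msg
```

The optional `moves` and `me` fields described above could be added to the dict in the same way to further document final state.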

    About the Messages section: Here we explain the message types, but\nalso which roles send which messages, what sequencing rules apply,\nand how errors may occur during the flow. The message begins with\nan announcement of the identifier and version of the message\nfamily, and also enumerates error codes to be used with problem\nreports. This protocol is simple enough that we document the\ndatatypes and validation rules for fields inline in the narrative;\nin more complex protocols, we'd move that text into the Reference\n> Messages section instead.\n
    "},{"location":"concepts/0003-protocols/tictactoe/#constraints","title":"Constraints","text":"

    Players do not have to trust one another. Messages do not have to be authcrypted, although anoncrypted messages still have to have a path back to the sender to be useful.

    About the Constraints section: Many protocols have rules\nor mechanisms that help parties build trust. For example, in buying\na house, the protocol includes such things as commission paid to\nrealtors to guarantee their incentives, title insurance, earnest\nmoney, and a phase of the process where a home inspection takes\nplace. If you are documenting a protocol that has attributes like\nthese, explain them here.\n
    "},{"location":"concepts/0003-protocols/tictactoe/#reference","title":"Reference","text":"
    About the Reference section: If the Tutorial > Messages section\nsuppresses details, we would add a Messages section here to\nexhaustively describe each field. We could also include an\nExamples section to show variations on the main flow.\n
    "},{"location":"concepts/0003-protocols/tictactoe/#collateral","title":"Collateral","text":"

    A reference implementation of the logic of a game is provided with this RFC as python 3.x code. See game.py. There is also a simple hand-coded AI that can play the game when plugged into an agent (see ai.py), and a set of unit tests that prove correctness (see test_tictactoe.py).

    A full implementation of the state machine is provided as well; see state_machine.py and test_state_machine.py.

    The game can be played interactively by running python game.py.

    "},{"location":"concepts/0003-protocols/tictactoe/#localization","title":"Localization","text":"

    The only localizable field in this message family is comment on both move and outcome messages. It contains ad hoc text supplied by the sender, instead of a value selected from an enumeration and identified by code for use with message catalogs. This means the only approach to localize move or outcome messages is to submit comment fields to an automated translation service. Because the locale of tictactoe messages is not predefined, each message must be decorated with ~l10n.locale to make automated translation possible.

    There is one other way that localization is relevant to this protocol: in error messages. Errors are communicated through the general problem-report message type rather than through a special message type that's part of the tictactoe family. However, we define a catalog of tictactoe-specific error codes below to make this protocol's specific error strings localizable.

    Thus, all instances of this message family carry localization metadata in the form of an implicit ~l10n decorator that looks like this:

    This JSON fragment is checked in next to the narrative content of this RFC as ~l10n.json, for easy machine parsing.

    Individual messages can use the ~l10n decorator to supplement or override these settings.

    For more information about localization concepts, see the RFC about localized messages.

    "},{"location":"concepts/0003-protocols/tictactoe/#message-catalog","title":"Message Catalog","text":"

    To facilitate localization of error messages, all instances of this message family assume the following catalog in their ~l10n data:

    When referencing this catalog, please be sure you have the correct version. The official, immutable URL to this version of the catalog file is:

    https://github.com/hyperledger/indy-hipe/blob/fc7a6028/text/tictactoe-protocol/catalog.json\n

    This JSON fragment is checked in next to the narrative content of this RFC as catalog.json, for easy machine parsing. The catalog currently contains localized alternatives only for English. Other language contributions would be welcome.

    For more information, see the Message Catalog section of the localization HIPE.

    "},{"location":"concepts/0003-protocols/tictactoe/#implementations","title":"Implementations","text":"

    The following lists the implementations (if any) of this RFC. Please do a pull request to add your implementation. If the implementation is open source, include a link to the repo or to the implementation within the repo. Please be consistent in the \"Name\" field so that a mechanical processing of the RFCs can generate a list of all RFCs supported by an Aries implementation.

    Name / Link Implementation Notes Verity Commercially licensed enterprise agent, SaaS or on-prem. Pico Labs Open source TicTacToe for Pico Agents"},{"location":"concepts/0004-agents/","title":"Aries RFC 0004: Agents","text":""},{"location":"concepts/0004-agents/#summary","title":"Summary","text":"

Provide a high-level introduction to the concepts of agents in the self-sovereign identity ecosystem.

    "},{"location":"concepts/0004-agents/#tutorial","title":"Tutorial","text":"

    Managing an identity is complex. We need tools to help us.

    In the physical world, we often delegate complexity to trusted proxies that can help. We hire an accountant to do our taxes, a real estate agent to help us buy a house, and a talent agent to help us pitch an album to a recording studio.

    On the digital landscape, humans and organizations (and sometimes, things) cannot directly consume and emit bytes, store and manage data, or perform the crypto that self-sovereign identity demands. They need delegates--agents--to help. Agents are a vital dimension across which we exercise sovereignty over identity.

    "},{"location":"concepts/0004-agents/#essential-characteristics","title":"Essential Characteristics","text":"

    When we use the term \"agent\" in the SSI community, we more properly mean \"an agent of self-sovereign identity.\" This means something more specific than just a \"user agent\" or a \"software agent.\" Such an agent has three defining characteristics:

    1. It acts as a fiduciary on behalf of a single identity owner (or, for agents of things like IoT devices, pets, and similar things, a single controller).
    2. It holds cryptographic keys that uniquely embody its delegated authorization.
    3. It interacts using interoperable DIDComm protocols.

    These characteristics don't tie an agent to any particular blockchain. It is possible to implement agents without any use of blockchain at all (e.g., with peer DIDs), and some efforts to do so are quite active.

    "},{"location":"concepts/0004-agents/#canonical-examples","title":"Canonical Examples","text":"

    Three types of agents are especially common:

    1. A mobile app that Alice uses to manage credentials and to connect to others is an agent for Alice.
    2. A cloud-based service that Alice uses to expose a stable endpoint where other agents can talk to her is an agent for Alice.
    3. A server run by Faber College, allowing it to issue credentials to its students, is an agent for Faber.

    Depending on your perspective, you might describe these agents in various ways. #1 can correctly be called a \"mobile\" or \"edge\" or \"rich\" agent. #2 can be called a \"cloud\" or \"routing\" agent. #3 can be called an \"on-prem\" or \"edge\" or \"advanced\" agent. See Categorizing Agents for a discussion about why multiple labels are correct.

    Agents can be other things as well. They can be big or small, complex or simple. They can interact and be packaged in various ways. They can be written in a host of programming languages. Some are more canonical than others. But all the ones we intend to interact with in the self-sovereign identity problem domain share the three essential characteristics described above.

    "},{"location":"concepts/0004-agents/#how-agents-talk","title":"How Agents Talk","text":"

    DID communication (DIDComm), and the protocols built atop it are each rich subjects unto themselves. Here, we will stay very high-level.

    Agents can use many different communication transports: HTTP(S) 1.x and 2.0, WebSockets, IRC, Bluetooth, AMQP, NFC, Signal, email, push notifications to mobile devices, ZMQ, and more. However, all agent-to-agent (A2A) communication is message-based, and is secured by modern, best-practice public key cryptography. How messages flow over a transport may vary--but their security and privacy toolset, their links to the DIDs and DID Docs of identity owners, and the ways their messages are packaged and handled are standard.

    Agents connect to one another through a standard connection protocol, discover one another's endpoints and keys through standard DID Docs, discover one another's features in a standard way, and maintain relationships in a standard way. All of these points of standardization are what makes them interoperable.

    Because agents speak so many different ways, and because many of them won't have a permanent, accessible point of presence on the network, they can't all be thought of as web servers with a Swagger-compatible API for request-response. The analog to an API construct in agent-land is protocols. These are patterns for stateful interactions. They specify things like, \"If you want to negotiate a sale with an agent, send it a message of type X. It will respond with a message of type Y or type Z, or with an error message of type W. Repeat until the negotiation finishes.\" Some interesting A2A protocols include the one where two parties connect to one another to build a relationship, the one where agents discover which protocols they each support, the one where credentials are issued, and the one where proof is requested and sent. Hundreds of other protocols are being defined.
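    The "send a message of type X, get back Y, Z, or error W" pattern described above amounts to typed message dispatch. The following is a minimal sketch of that idea; the message type URIs and the negotiation logic are invented for illustration and do not come from any real Aries protocol.

```python
from typing import Callable, Dict

Handler = Callable[[dict], dict]
handlers: Dict[str, Handler] = {}

def register(msg_type: str):
    """Plug a handler in for one message type within a protocol."""
    def deco(fn: Handler) -> Handler:
        handlers[msg_type] = fn
        return fn
    return deco

@register("example/1.0/propose-sale")
def handle_propose(msg: dict) -> dict:
    # Reply with a counter-offer (the "type Y" of the prose) or an error ("type W").
    if msg.get("price", 0) > 0:
        return {"@type": "example/1.0/counter-offer", "price": msg["price"] * 0.9}
    return {"@type": "example/1.0/problem-report", "reason": "missing price"}

def dispatch(msg: dict) -> dict:
    """Route an incoming plaintext message to the handler for its type."""
    handler = handlers.get(msg.get("@type", ""))
    if handler is None:
        return {"@type": "example/1.0/problem-report", "reason": "unsupported type"}
    return handler(msg)

reply = dispatch({"@type": "example/1.0/propose-sale", "price": 100})
```

    A real protocol would also track state across multiple turns; this sketch only shows the single request/response step.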

    "},{"location":"concepts/0004-agents/#how-to-get-an-agent","title":"How to Get an Agent","text":"

    As the ecosystem for self-sovereign identity matures, the average person or organization will get an agent by downloading it from the app store, installing it with their OS package manager, or subscribing to it as a service. However, the availability of quality pre-packaged agents is still limited today.

    Agent providers are emerging in the marketplace, though. Some are governments, NGOs, or educational institutions that offer agents for free; others are for-profit ventures. If you'd like suggestions about ready-to-use agent offerings, please describe your use case in #aries on chat.hyperledger.org.

    There is also intense activity in the SSI community around building custom agents and the tools and processes that enable them. A significant amount of early work occurred in the Indy Agent Community with some of those efforts materializing in the indy-agent repo on github.com and other code bases. The indy-agent repo is now deprecated but is still valuable in demonstrating the basics of agents. With the introduction of Hyperledger Aries, agent efforts are migrating from the Indy Agent community.

    Hyperledger Aries provides a number of code bases ranging from agent frameworks to tools to aid in development to ready-to-use agents.

    "},{"location":"concepts/0004-agents/#how-to-write-an-agent","title":"How to Write an Agent","text":"

    This is one of the most common questions that Aries newcomers ask. It's a challenging one to answer, because it's so open-ended. It's sort of like someone asking, \"Can you give me a recipe for dinner?\" The obvious follow-up question would be, \"What type of dinner did you have in mind?\"

    Here are some thought questions to clarify intent:

    "},{"location":"concepts/0004-agents/#general-patterns","title":"General Patterns","text":"

    We said it's hard to provide a recipe for an agent without specifics. However, the majority of agents do have two things in common: they listen to and process A2A messages, and they use a wallet to manage keys, credentials, and other sensitive material. Unless you have use cases that involve IoT, cron jobs, or web hooks, your agent is likely to fit this mold.

    The heart of such an agent is probably a message-handling loop, with pluggable protocols to give it new capabilities, and pluggable transports to let it talk in different ways. The pseudocode for its main function might look like this:

    "},{"location":"concepts/0004-agents/#pseudocode-for-main","title":"Pseudocode for main()","text":"
    1  While not done:\n2      Get next message.\n3      Verify it (decrypt, identify sender, check signature...).\n4      Look at the type of the plaintext message.\n5      Find a plugged-in protocol handler that matches that type.\n6      Give plaintext message and security metadata to handler.\n

    Line 2 can be done via standard HTTP dispatch, or by checking an email inbox, or in many other ways. Line 3 can be quite sophisticated--the sender will not be Alice, but rather one of the agents that she has authorized. Verification may involve consulting cached information and/or a blockchain where a DID and DID Doc are stored, among other things.
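    A minimal, runnable sketch of this main() loop follows. The transport queue, the verify() stub, and the handler wiring are hypothetical stand-ins for whatever your agent framework provides; real verification would decrypt and authenticate rather than just parse JSON.

```python
import json
import queue

def main_loop(inbox, verify, handlers, done):
    """Sketch of the pseudocode main(): the numbered steps are noted inline."""
    while not done():
        raw = inbox.get()                    # 2: get next message
        plaintext, meta = verify(raw)        # 3: decrypt, identify sender, check signature
        msg_type = plaintext.get("@type")    # 4: look at the plaintext message type
        handler = handlers.get(msg_type)     # 5: find a plugged-in handler for that type
        if handler is not None:
            handler(plaintext, meta)         # 6: hand off message plus security metadata

# Toy wiring: one queued message, then the loop stops.
inbox = queue.Queue()
inbox.put(b'{"@type": "trust_ping/1.0/ping"}')
seen = []

def verify(raw):
    # Stand-in for real decryption/authentication.
    return json.loads(raw), {"sender": "did:example:alice"}

handlers = {"trust_ping/1.0/ping": lambda msg, meta: seen.append(meta["sender"])}
main_loop(inbox, verify, handlers, done=lambda: inbox.empty() and bool(seen))
```

    In production the done() condition would come from a shutdown signal, and inbox.get() would block on a real transport.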

    The pseudocode for each protocol handler it loads might look like:

    "},{"location":"concepts/0004-agents/#pseudocode-for-protocol-handler","title":"Pseudocode for protocol handler","text":"
    1  Check authorization against metadata. Reject if needed.\n2  Read message header. Is it part of an ongoing interaction?\n3  If yes, load persisted state.\n4  Process the message and update interaction state.\n5  If a response is appropriate:\n6      Prepare response content.\n7      Ask my outbound comm module to package and send it.\n

    Line 4 is the workhorse. For example, if the interaction is about issuing credentials and this agent is doing the issuance, this would be where it looks up the material for the credential in internal databases, formats it appropriately, and records the fact that the credential has now been built. Line 6 might be where that credential is attached to an outgoing message for transmission to the recipient.
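    The handler pseudocode can likewise be sketched in a few lines. The thread-id convention, the in-memory state store, and the message shapes below are invented for illustration; a real agent persists state durably and follows its protocol's actual message family.

```python
state_store = {}  # thread id -> interaction state (a real agent persists this)

def handle(msg, metadata, outbound):
    """Sketch of the protocol-handler pseudocode; numbered steps noted inline."""
    if not metadata.get("authorized"):           # 1: check authorization, reject if needed
        return "rejected"
    thid = msg.get("~thread", msg["@id"])        # 2: part of an ongoing interaction?
    state = state_store.get(thid, {"step": 0})   # 3: load persisted state
    state["step"] += 1                           # 4: process message, update state
    state_store[thid] = state
    if msg["@type"].endswith("/ping"):           # 5: is a response appropriate?
        response = {"@type": "trust_ping/1.0/ping_response",
                    "~thread": thid}             # 6: prepare response content
        outbound(response)                       # 7: ask outbound module to package/send
    return "handled"

sent = []
result = handle({"@type": "trust_ping/1.0/ping", "@id": "t1"},
                {"authorized": True}, sent.append)
```

    Note how step 4 stays protocol-specific: for credential issuance it would build a credential instead of bumping a counter.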

    The pseudocode for the outbound communication module might be:

    "},{"location":"concepts/0004-agents/#pseudocode-for-outbound","title":"Pseudocode for outbound","text":"
    1  Iterate through all pluggable transports to find best one to use\n     with the intended recipient.\n2  Figure out how to route the message over the selected transport.\n3  Serialize the message content and encrypt it appropriately.\n4  Send the message.\n

    Line 2 can be complex. It involves looking up one or more endpoints in the DID Doc of the recipient, and finding an intersection between transports they use, and transports the sender can speak. Line 3 requires the keys of the sender, which would normally be held in a wallet.
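    The transport-intersection step can be sketched concretely. The DID Doc shape, the pack() callable, and the transport registry here are simplified assumptions, not a real DID Doc schema or wallet API.

```python
import json

def send_outbound(message, recipient_did_doc, my_transports, pack):
    """Sketch of the outbound pseudocode; numbered steps noted inline."""
    # 1: intersect the recipient's endpoints with transports we can speak
    usable = [svc for svc in recipient_did_doc["service"]
              if svc["type"] in my_transports]
    if not usable:
        raise RuntimeError("no mutually supported transport")
    endpoint = usable[0]["serviceEndpoint"]        # 2: route over the chosen transport
    wire = pack(json.dumps(message),
                recipient_did_doc["publicKey"])    # 3: serialize and encrypt
    return my_transports[usable[0]["type"]](endpoint, wire)  # 4: send

# Toy wiring with a fake HTTP transport and a fake pack() (real packing
# would use the sender's keys from a wallet).
my_transports = {"http": lambda endpoint, wire: ("POST", endpoint, wire)}
bob_doc = {"service": [{"type": "http", "serviceEndpoint": "https://bob.example/msg"}],
           "publicKey": "BOB_PUBLIC_KEY"}
toy_pack = lambda plaintext, key: f"enc[{key}]:{plaintext}"
result = send_outbound({"@type": "sell/1.0/offer"}, bob_doc, my_transports, toy_pack)
```

    Choosing usable[0] is the simplest policy; a real agent might rank transports by cost, latency, or privacy.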

    If you are building this sort of code using Aries technology, you will certainly want to use Aries Agent SDK. This gives you a ready-made, highly secure wallet that can be adapted to many requirements. It also provides easy functions to serialize and encrypt. Many of the operations you need to do are demonstrated in the SDK's /doc/how-tos folder, or in its Getting Started Guide.

    "},{"location":"concepts/0004-agents/#how-to-learn-more","title":"How to Learn More","text":""},{"location":"concepts/0004-agents/#reference","title":"Reference","text":""},{"location":"concepts/0004-agents/#categorizing-agents","title":"Categorizing Agents","text":"

    Agents can be categorized in various ways, and these categories lead to terms you're likely to encounter in RFCs and other documentation. Understanding the categories will help the definitions make sense.

    "},{"location":"concepts/0004-agents/#by-trust","title":"By Trust","text":"

    A trustable agent runs in an environment that's under the direct control of its owner; the owner can trust it without incurring much risk. A semi-trustable agent runs in an environment where others besides the owner may have access, so giving it crucial secrets is less advisable. (An untrustable delegate should never be an agent, by definition, so we don't use that term.)

    Note that these distinctions highlight what is advisable, not how much trust the owner actually extends.

    "},{"location":"concepts/0004-agents/#by-location","title":"By Location","text":"

    Two related but deprecated terms are edge agent and cloud agent. You will probably hear these terms in the community or read them in docs. The problem with them is that they suggest location, but were formally defined to imply levels of trust. When they were chosen, location and levels of trust were seen as going together--you trust your edge more, and your cloud less. We've since realized that a trustable agent could exist in the cloud, if it is directly controlled by the owner, and a semi-trustable agent could be on-prem, if the owner's control is indirect. Thus we are trying to correct usage and make \"edge\" and \"cloud\" about location instead.

    "},{"location":"concepts/0004-agents/#by-platform","title":"By Platform","text":""},{"location":"concepts/0004-agents/#by-complexity","title":"By Complexity","text":"

    We can arrange agents on a continuum, from simple to complex. The simplest agents are static--they are preconfigured for a single relationship. Thin agents are somewhat fancier. Thick agents are fancier still, and rich agents exhibit the most sophistication and flexibility:

    A nice visualization of several dimensions of agent category has been built by Michael Herman:

    "},{"location":"concepts/0004-agents/#the-agent-ness-continuum","title":"The Agent-ness Continuum","text":"

    The tutorial above gives three essential characteristics of agents, and lists some canonical examples. This may make it feel like agent-ness is pretty binary. However, we've learned that reality is more fuzzy.

    Having a tight definition of an agent may not matter in all cases. However, it is important when we are trying to understand interoperability goals. We want agents to be able to interact with one another. Does that mean they must interact with every piece of software that is even marginally agent-like? Probably not.

    Some attributes that are not technically necessary in agents include:

    Agents that lack these characteristics can still be fully interoperable.

    Some interesting examples of less prototypical agents or agent-like things include:

    "},{"location":"concepts/0004-agents/#dif-hubs","title":"DIF Hubs","text":"

    A DIF Identity Hub is a construct that resembles agents in some ways, but that focuses on the data-sharing aspects of identity. Currently DIF Hubs do not use the protocols known to the Aries community, and vice versa. However, there are efforts to bridge that gap.

    "},{"location":"concepts/0004-agents/#identity-wallets","title":"Identity Wallets","text":"

    \"Identity wallet\" is a term that's carefully defined in our ecosystem, and in strict, technical usage it maps to a concept much closer to \"database\" than \"agent\". This is because it is an inert storage container, not an active interacter. However, in casual usage, it may mean the software that uses a wallet to do identity work--in which case it is definitely an agent.

    "},{"location":"concepts/0004-agents/#crypto-wallets","title":"Crypto Wallets","text":"

    Cryptocurrency wallets are quite agent-like in that they hold keys and represent a user. However, they diverge from the agent definition in that they talk proprietary protocols to blockchains, rather than A2A to other agents.

    "},{"location":"concepts/0004-agents/#uport","title":"uPort","text":"

    The uPort app is an edge agent. Here, too, there are efforts to bridge a protocol gap.

    "},{"location":"concepts/0004-agents/#learning-machine","title":"Learning Machine","text":"

    The credential issuance technology offered by Learning Machine, and the app used to share those credentials, are agents of institutions and individuals, respectively. Again, there is a protocol gap to bridge.

    "},{"location":"concepts/0004-agents/#cron-jobs","title":"Cron Jobs","text":"

    A cron job that runs once a night at Faber, scanning a database and revoking credentials that have changed status during the day, is an agent for Faber. This is true even though it doesn't listen for incoming messages (it only talks revocation protocol to the ledger). In order to talk that protocol, it must hold keys delegated by Faber, and it is surely Faber's fiduciary.

    "},{"location":"concepts/0004-agents/#operating-systems","title":"Operating Systems","text":"

    The operating system on a laptop could be described as agent-like, in that it works for a single owner and may have a keystore. However, it doesn't talk A2A to other agents--at least not yet. (OSes that service multiple users fit the definition less.)

    "},{"location":"concepts/0004-agents/#devices","title":"Devices","text":"

    A device can be thought of as an agent (e.g., Alice's phone as an edge agent). However, strictly speaking, one device might run multiple agents, so this is only casually correct.

    "},{"location":"concepts/0004-agents/#sovrin-mainnet","title":"Sovrin MainNet","text":"

    The Sovrin MainNet can be thought of as an agent for the Sovrin community (but NOT the Sovrin Foundation, which codifies the rules but leaves operation of the network to its stewards). Certainly, the blockchain holds keys, uses A2A protocols, and acts in a fiduciary capacity toward the community to further its interests. The only challenge with this perspective is that the Sovrin community has a very fuzzy identity.

    "},{"location":"concepts/0004-agents/#validators","title":"Validators","text":"

    Validator nodes on a particular blockchain are agents of the stewards that operate them.

    "},{"location":"concepts/0004-agents/#digital-assistants","title":"Digital Assistants","text":"

    Digital assistants like Alexa and Google Home are somewhat agent-like. However, the Alexa in the home of the Jones family is probably not an agent for either the Jones family or Amazon. It accepts delegated work from anybody who talks to it (instead of a single controlling identity), and all current implementations are totally antithetical to the ethos of privacy and security required by self-sovereign identity. Although it interfaces with Amazon to download data and features, it isn't Amazon's fiduciary, either. It doesn't hold keys that allow it to represent its owner. The protocols it uses are not interactions with other agents, but with non-agent entities. Perhaps agents and digital assistants will converge in the future.

    "},{"location":"concepts/0004-agents/#doorbell","title":"Doorbell","text":"

    A doorbell that emits a simple signal each time it is pressed is not an agent. It doesn't represent a fiduciary or hold keys. (However, a fancy IoT doorbell that reports to Alice's mobile agent using an A2A protocol would be an agent.)

    "},{"location":"concepts/0004-agents/#microservices","title":"Microservices","text":"

    A microservice run by AcmeCorp to integrate with its vendors is not an agent for Acme's vendors. Depending on whether it holds keys and uses A2A protocols, it may or may not be an agent for Acme.

    "},{"location":"concepts/0004-agents/#human-delegates","title":"Human Delegates","text":"

    A human delegate who proves empowerment through keys might be thought of as an agent.

    "},{"location":"concepts/0004-agents/#paper","title":"Paper","text":"

    The keys for an agent can be stored on paper. This storage basically constitutes a wallet. It isn't an agent. However, it can be thought of as playing the role of an agent in some cases when designing backup and recovery solutions.

    "},{"location":"concepts/0004-agents/#prior-art","title":"Prior art","text":""},{"location":"concepts/0004-agents/#implementations","title":"Implementations","text":"

    The following lists the implementations (if any) of this RFC. Please do a pull request to add your implementation. If the implementation is open source, include a link to the repo or to the implementation within the repo. Please be consistent in the \"Name\" field so that a mechanical processing of the RFCs can generate a list of all RFCs supported by an Aries implementation.

    Name / Link Implementation Notes Indy Cloud Agent - Python Reference agent implementation contributed by Sovrin Foundation and Community Aries Framework - .NET .NET framework for building agents of all types Streetcred.id Commercial mobile and web app built using Aries Framework for .NET Aries Cloud Agent - Python Contributed by the government of British Columbia. Aries Static Agent - Python Useful for cron jobs and other simple, automated use cases. Aries Framework - Go For building agents, hubs and other DIDComm features in GoLang. Connect.Me Free mobile app from Evernym. Installed via app store on iOS and Android. Verity Commercially licensed enterprise agent, SaaS or on-prem. Aries Protocol Test Suite Pico Labs Pico Agents protocols: connections, trust_ping, basicmessage, routing Rust Agent Rust implementation of a framework for building agents of all types"},{"location":"concepts/0005-didcomm/","title":"Aries RFC 0005: DID Communication","text":""},{"location":"concepts/0005-didcomm/#summary","title":"Summary","text":"

    Explain the basics of DID communication (DIDComm) at a high level, and link to other RFCs to promote deeper exploration.

    NOTE: The version of DIDComm collectively defined in Aries RFCs is known by the label \"DIDComm V1.\" A newer version of DIDComm (\"DIDComm V2\") is now being incubated at DIF. Many concepts are the same between the two versions, but there are some differences in the details. For information about detecting V1 versus V2, see Detecting DIDComm Versions.

    "},{"location":"concepts/0005-didcomm/#motivation","title":"Motivation","text":"

    The DID communication between agents and agent-like things is a rich subject with a lot of tribal knowledge. Newcomers to the decentralized identity ecosystem tend to bring mental models that are subtly divergent from its paradigm. When they encounter dissonance, DIDComm becomes mysterious. We need a standard high-level reference.

    "},{"location":"concepts/0005-didcomm/#tutorial","title":"Tutorial","text":"

    This discussion assumes that you have a reasonable grasp on topics like self-sovereign identity, DIDs and DID docs, and agents. If you find yourself lost, please review that material for background and starting assumptions.

    Agent-like things have to interact with one another to get work done. How they talk in general is DIDComm, the subject of this RFC. The specific interactions enabled by DIDComm--connecting and maintaining relationships, issuing credentials, providing proof, etc.--are called protocols; they are described elsewhere.

    "},{"location":"concepts/0005-didcomm/#rough-overview","title":"Rough Overview","text":"

    A typical DIDComm interaction works like this:

    Imagine Alice wants to negotiate with Bob to sell something online, and that DIDComm, not direct human communication, is involved. This means Alice's agent and Bob's agent are going to exchange a series of messages. Alice may just press a button and be unaware of details, but underneath, her agent begins by preparing a plaintext JSON message about the proposed sale. (The particulars are irrelevant here, but would be described in the spec for a \"sell something\" protocol.) It then looks up Bob's DID Doc to access two key pieces of information: * An endpoint (web, email, etc) where messages can be delivered to Bob. * The public key that Bob's agent is using in the Alice:Bob relationship. Now Alice's agent uses Bob's public key to encrypt the plaintext so that only Bob's agent can read it, adding authentication with its own private key. The agent arranges delivery to Bob. This \"arranging\" can involve various hops and intermediaries. It can be complex. Bob's agent eventually receives and decrypts the message, authenticating its origin as Alice using her public key. It prepares its response and routes it back using a reciprocal process (plaintext -> lookup endpoint and public key for Alice -> encrypt with authentication -> arrange delivery).
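    The pack-deliver-unpack round trip above can be modeled structurally in a few lines. Real DIDComm uses libsodium-style authenticated encryption (authcrypt); the stand-ins below (a pre-shared key standing in for Diffie-Hellman key agreement, HMAC for sender authentication, and no real confidentiality) are deliberate simplifications so the sketch stays dependency-free while showing the shape of the flow.

```python
import base64
import hashlib
import hmac
import json

# Stand-in for the shared secret each pair of agents would derive from keypairs.
SHARED = {("alice", "bob"): b"secret-derived-from-their-keypairs"}

def pack(sender, recipient, plaintext: dict) -> bytes:
    """Encrypt-and-authenticate stand-in: body plus an HMAC tag proving the sender."""
    key = SHARED[(sender, recipient)]
    body = json.dumps(plaintext)
    tag = hmac.new(key, body.encode(), hashlib.sha256).hexdigest()
    envelope = {"from": sender, "body": body, "tag": tag}
    return base64.b64encode(json.dumps(envelope).encode())

def unpack(recipient, wire: bytes) -> dict:
    """Reciprocal step: authenticate the origin, then recover the plaintext."""
    env = json.loads(base64.b64decode(wire))
    key = SHARED[(env["from"], recipient)]
    expect = hmac.new(key, env["body"].encode(), hashlib.sha256).hexdigest()
    assert hmac.compare_digest(expect, env["tag"]), "authentication failed"
    return json.loads(env["body"])

wire = pack("alice", "bob", {"@type": "sell/1.0/offer", "price": 10})
msg = unpack("bob", wire)
```

    The "arranging delivery" hops between pack() and unpack() are where routing agents and mediators would sit.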

    That's it.

    Well, mostly. The description is pretty good, if you squint, but it does not fit all DIDComm interactions:

    Before we provide more details, let's explore what drives the design of DIDComm.

    "},{"location":"concepts/0005-didcomm/#goals-and-ramifications","title":"Goals and Ramifications","text":"

    The DIDComm design attempts to be:

    1. Secure
    2. Private
    3. Interoperable
    4. Transport-agnostic
    5. Extensible

    As a list of buzz words, this may elicit nods rather than surprise. However, several items have deep ramifications.

    Taken together, Secure and Private require that the protocol be decentralized and maximally opaque to the surveillance economy.

    Interoperable means that DIDComm should work across programming languages, blockchains, vendors, OS/platforms, networks, legal jurisdictions, geos, cryptographies, and hardware--as well as across time. That's quite a list. It means that DIDComm intends something more than just compatibility within Aries; it aims to be a future-proof lingua franca of all self-sovereign interactions.

    Transport-agnostic means that it should be possible to use DIDComm over HTTP(S) 1.x and 2.0, WebSockets, IRC, Bluetooth, AMQP, NFC, Signal, email, push notifications to mobile devices, Ham radio, multicast, snail mail, carrier pigeon, and more.

    All software design involves tradeoffs. These goals, prioritized as shown, lead down an interesting path.

    "},{"location":"concepts/0005-didcomm/#message-based-asynchronous-and-simplex","title":"Message-Based, Asynchronous, and Simplex","text":"

    The dominant paradigm in mobile and web development today is duplex request-response. You call an API with certain inputs, and you get back a response with certain outputs over the same channel, shortly thereafter. This is the world of OpenAPI (Swagger), and it has many virtues.

    Unfortunately, many agents are not good analogs to web servers. They may be mobile devices that turn off at unpredictable intervals and that lack a stable connection to the network. They may need to work peer-to-peer, when the internet is not available. They may need to interact in time frames of hours or days, not with 30-second timeouts. They may not listen over the same channel that they use to talk.

    Because of this, the fundamental paradigm for DIDComm is message-based, asynchronous, and simplex. Agent X sends a message over channel A. Sometime later, it may receive a response from Agent Y over channel B. This is much closer to an email paradigm than a web paradigm.

    On top of this foundation, it is possible to build elegant, synchronous request-response interactions. All of us have interacted with a friend who's emailing or texting us in near-realtime. However, interoperability begins with a least-common-denominator assumption that's simpler.

    "},{"location":"concepts/0005-didcomm/#message-level-security-reciprocal-authentication","title":"Message-Level Security, Reciprocal Authentication","text":"

    The security and privacy goals, and the asynchronous+simplex design decision, break familiar web assumptions in another way. Servers are commonly run by institutions, and we authenticate them with certificates. People and things are usually authenticated to servers by some sort of login process quite different from certificates, and this authentication is cached in a session object that expires. Furthermore, web security is provided at the transport level (TLS); it is not an independent attribute of the messages themselves.

    In a partially disconnected world where a comm channel is not assumed to support duplex request-response, and where security can't be treated as purely a transport concern, traditional TLS, login, and expiring sessions are impractical. Furthermore, centralized servers and certificate authorities perpetuate a power and UX imbalance between servers and clients that doesn't fit with the peer-oriented DIDComm.

    DIDComm uses public key cryptography, not certificates from some parties and passwords from others. Its security guarantees are independent of the transport over which it flows. It is sessionless (though sessions can easily be built atop it). When authentication is required, all parties do it the same way.

    "},{"location":"concepts/0005-didcomm/#reference","title":"Reference","text":"

    The following RFCs provide additional information: * 0021: DIDComm Message Anatomy * 0020: Message Types * 0011: Decorators * 0008: Message ID and Threading * 0019: Encryption Envelope * 0025: Agent Transports

    "},{"location":"concepts/0005-didcomm/#implementations","title":"Implementations","text":"

    The following lists the implementations (if any) of this RFC. Please do a pull request to add your implementation. If the implementation is open source, include a link to the repo or to the implementation within the repo. Please be consistent in the \"Name\" field so that a mechanical processing of the RFCs can generate a list of all RFCs supported by an Aries implementation.

    Name / Link Implementation Notes Indy Cloud Agent - Python Reference agent implementation contributed by Sovrin Foundation and Community Aries Framework - .NET .NET framework for building agents of all types Streetcred.id Commercial mobile and web app built using Aries Framework - .NET Aries Cloud Agent - Python Contributed by the government of British Columbia. Aries Static Agent - Python Useful for cron jobs and other simple, automated use cases. Aries Framework - Go For building agents, hubs and other DIDComm features in GoLang. Connect.Me Free mobile app from Evernym. Installed via app store on iOS and Android. Verity Commercially licensed enterprise agent, SaaS or on-prem. Aries Protocol Test Suite Pico Labs Pico Agents protocols: connections, trust_ping, basicmessage, routing"},{"location":"concepts/0006-ssi-notation/","title":"Aries RFC 0006: SSI Notation","text":""},{"location":"concepts/0006-ssi-notation/#summary","title":"Summary","text":"

    This RFC describes a simple, standard notation for various concepts related to decentralized and self-sovereign identity (SSI).

    The notation could be used in design docs, other RFCs, source code comments, chat channels, scripts, debug logs, and miscellaneous technical materials throughout the Aries ecosystem. We hope it is also used in the larger SSI community.

    This RFC is complementary to official terms like the ones curated in the TOIP Concepts and Terminology Working group, the Sovrin Glossary, and so forth.

    "},{"location":"concepts/0006-ssi-notation/#motivation","title":"Motivation","text":"

    All technical materials in our ecosystem hinge on fundamental concepts of self-sovereign identity such as controllers, keys, DIDs, and agents. We need a standard, documented notation to refer to such things, so we can use it consistently, and so we can link to the notation's spec for definitive usage.

    "},{"location":"concepts/0006-ssi-notation/#tutorial","title":"Tutorial","text":"

    The following explanation is meant to be read sequentially and should provide a friendly overview for most who encounter the RFC. See the Reference section for quick lookup.

    "},{"location":"concepts/0006-ssi-notation/#requirements","title":"Requirements","text":"

    This notation aims to be:

    The final requirement deserves special comment. Cryptologists are a major stakeholder in SSI theory. They already have many notational conventions, some more standardized than others. Generally, their notation derives from advanced math and uses specialized symbols and fonts. These experts also tend to intersect strongly with academic circles, where LaTeX and similar rendering technologies are common.

    Despite the intersection between SSI, cryptology, and academia, SSI has to serve a broader audience. Its practitioners are not just mathematicians; they may include support and IT staff, lawyers specializing in intellectual property, business people, auditors, regulators, and individuals sometimes called \"end users.\" In particular, SSI ecosystems are built and maintained by coders. Coders regularly write docs in markdown and html. They interact with one another on chat. They write emails and source code where comments might need to be embedded. They create UML diagrams. They type in shells. They paste code into slide decks and word processors. All of these behaviors militate against a notation that requires complex markup.

    Instead, we want something simple, clean, and universally supported. Hence the 7-bit ASCII requirement. A future version of this RFC, or an addendum to it, might explain how to map this 7-bit ASCII notation to various schemes that use mathematical symbols and are familiar to experts from other fields.

    "},{"location":"concepts/0006-ssi-notation/#solution","title":"Solution","text":""},{"location":"concepts/0006-ssi-notation/#controllers-and-subjects","title":"Controllers and Subjects","text":"

    An identified thing (the referent of an identifier) is called an identity subject. Identity subjects can include:

    The latter category may also act as an identity controller -- something that projects its intent with respect to identity onto the digital landscape.

    When an identity controller controls its own identity, we say that it has self-sovereignty -- and we call it a self. (The term identity owner was originally used for an identity controller managing itself, but this hid some of the nuance and introduced legal concepts of ownership that are problematic, so we'll avoid it here.)

    In our notation, selves (or identity controllers) are denoted with a single upper-case ASCII alpha, often corresponding to a first initial of their human-friendly name. For example, Alice might be represented as A. By preference, the first half of the alphabet is used (because \"x\", \"y\", and \"z\" tend to have other ad-hoc meanings). When reading aloud, the spoken form of a symbol like this is the name of the letter. The relevant ABNF fragment is:

    ucase-alpha = %x41-5A ; A-Z\nlcase-alpha = %x61-7A ; a-z\ndigit = %x30-39 ; 0-9\n\nself = ucase-alpha\n

    Identity subjects that are not self-controlled are referenced in our notation using a single lower-case ASCII alpha. For example, a movie might be m. For clarity in scenarios where multiple subjects are referenced, it is best to choose letters that differ in something other than case.

      controlled = lcase-alpha\n\n  subject = self / controlled\n

    The set of devices, keys, endpoints, data, and other resources controlled by or for a given subject is called the subject's identity domain (or just domain for short). When the controller is a self, the domain is self-sovereign; otherwise, the domain is controlled. Either way, the domain of an identity subject is like its private universe, so the name or symbol of a subject is often used to denote its domain as well; context eliminates ambiguity. You will see examples of this below.

    "},{"location":"concepts/0006-ssi-notation/#association","title":"Association","text":"

    Elements associated with a domain are named in a way that makes their association clear, using a name@context pattern familiar from email addresses: 1@A (\u201cone at A\u201d) is agent 1 in A\u2019s sovereign domain. (Note how we use an erstwhile identity owner symbol, A, to reference a domain here, but there is no ambiguity.) This fully qualified form of a subject reference is useful for clarification but is often not necessary.

    In addition to domains, this same associating notation may be used where a relationship is the context, because sometimes the association is to the relationship rather than to a participant. See the DID example in the next section.

    "},{"location":"concepts/0006-ssi-notation/#agents","title":"Agents","text":"

    Agents are not subjects. They neither control nor own a domain; rather, they live and act within it. They take instructions from the domain's controller. Agents (and hubs, and other things like them) are the first example of elements associated with an identity subject. Despite this, agent-ish things are the primary focus of interactions within SSI ecosystems.

    Additionally, agents are distinct from devices, even though we often (and inaccurately) use the terms interchangeably. We may say things like \"Alice's iPhone sends a message\" when we more precisely mean \"the agent on Alice's iPhone sends a message.\" In reality, there may be zero, one, or more than one agent running on a particular device.

    Agents are numbered and are represented by up to three digits and then with an association. In most discussions, one digit is plenty, but three digits are allowed so agents can be conveniently grouped by prefix (e.g., all edge agents in Alice's domain might begin with 1, and all cloud agents might begin with 2).

    agent = 1*3digit \"@\" subject\n
    "},{"location":"concepts/0006-ssi-notation/#devices","title":"Devices","text":"

    Devices are another element inside a subject's domain. They are represented with two or more lower-case ASCII alphanumerics or underscore characters, where the first char cannot be a digit. They end with an association: bobs_car@B, drone4@F, alices_iphone9@A.

    name-start-char = lcase-alpha / \"_\"            ; a-z or underscore\nname-other-char = digit / lcase-alpha / \"_\"    ; 0-9 or a-z or underscore\ndevice = name-start-char 1*name-other-char \"@\" subject\n
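    As an illustrative (non-normative) sketch, the agent and device grammars above translate directly into regular expressions. The pattern and function names below are hypothetical:

```python
import re

# Regex equivalents of the ABNF above (a sketch, not normative):
#   agent  = 1*3digit "@" subject
#   device = name-start-char 1*name-other-char "@" subject
SUBJECT = r'[A-Za-z]'  # single upper-case self or lower-case controlled subject
AGENT_RE = re.compile(r'^\d{1,3}@' + SUBJECT + r'$')
DEVICE_RE = re.compile(r'^[a-z_][a-z0-9_]+@' + SUBJECT + r'$')

def is_agent(token: str) -> bool:
    return AGENT_RE.match(token) is not None

def is_device(token: str) -> bool:
    return DEVICE_RE.match(token) is not None
```

    For example, is_device('bobs_car@B') accepts, while a device name starting with a digit is rejected, matching the rule that the first character cannot be a digit.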
    "},{"location":"concepts/0006-ssi-notation/#cross-domain-relationships","title":"Cross-Domain Relationships","text":""},{"location":"concepts/0006-ssi-notation/#short-form-more-common","title":"Short Form (more common)","text":"

    Alice\u2019s pairwise relationship with Bob is represented with colon notation: A:B. This is read aloud as \u201cA to B\u201d (preferred because it\u2019s short; alternatives such as \u201cthe A B relationship\u201d or \u201cA colon B\u201d or \u201cA with respect to B\u201d are also valid). When written in the other order, it represents the same relationship as seen from Bob\u2019s point of view. Note that passive subjects may also participate in relationships: A:bobs_car. (Contrast Intra-Domain Relationships below.)

    N-wise relationships (e.g., doctor, hospital, patient) are written with the perspective-governing subject's identifier, a single colon, then all other identifiers for members of the relationship, in alphabetical order, separated by +: A:B+C, B:A+C. This is read aloud as \"A to B plus C.\"

    next-subject = \"+\" subject\nshort-relationship = subject \":\" subject *next-subject\n
    "},{"location":"concepts/0006-ssi-notation/#long-form","title":"Long Form","text":"

    Short form is convenient and brief, but it is inconsistent because each party to the relationship describes it differently. Sometimes this may be undesirable, so a long and consistent form is also supported. The long form of both pairwise and N-way relationships lists all participants to the right of the colon, in alphabetical order. Thus the long forms of the Alice to Bob relationship might be A:A+B (for Alice's view of this relationship) and B:A+B (for Bob's view). For a doctor, hospital, patient relationship, we might have D:D+H+P, H:D+H+P, and P:D+H+P. Note how the enumeration of parties to the right of the colon is consistent.

    Long form and short form are allowed to vary freely; any tool that parses this notation should treat them as synonyms and stylistic choices only.

    The ABNF for long form is identical to short form, except that we are guaranteed that after the colon, we will see at least two parties and one + character:

    long-relationship = subject \":\" subject 1*next-subject\n
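    To make the symmetry concrete, here is a small sketch (hypothetical helper name) that derives the long form of a relationship from any participant's perspective:

```python
def long_form(perspective, others):
    # All participants, including the perspective holder, are listed to the
    # right of the colon in alphabetical order, so every party's long form
    # enumerates the same set (illustrative sketch, not normative).
    members = sorted(set(others) | {perspective})
    return perspective + ':' + '+'.join(members)
```

    Thus long_form('A', ['B']) yields A:A+B and long_form('B', ['A']) yields B:A+B, matching the examples above.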
    "},{"location":"concepts/0006-ssi-notation/#generalized-relationships","title":"Generalized Relationships","text":""},{"location":"concepts/0006-ssi-notation/#contexts","title":"Contexts","text":"

    Some models for SSI emphasize the concept of personas or contexts. These are essentially \"masks\" that an identity controller enables, exposing a limited subset of the subject's identity to an audience that shares that context. For example, Alice might assume one persona in her employment relationships, another for government interactions, another for friends, and another when she's a customer.

    Contexts or personas can be modeled as a relationship with a generalized audience: A:Work, A:Friends.

    general-audience = ucase-alpha 1*name-other-char\ngeneral-relationship = subject \":\" general-audience\nrelationship = short-relationship / long-relationship / general-relationship\n
    "},{"location":"concepts/0006-ssi-notation/#any","title":"Any","text":"

    The concept of public DIDs suggests that someone may think about a relationship as unbounded, or as not varying no matter who the other subject is. For example, a company may create a public DID and advertise it to the world, intending for this connection point to begin relationships with customers, partners, and vendors alike. While best practice suggests that such relationships be used with care, and that they primarily serve to bootstrap pairwise relationships, the notation still needs to represent the possibility.

    The token Any is reserved for these semantics. If Acme Corp is represented as A, then Acme's public persona could be denoted with A:Any. When Any is used, it is never the subject whose perspective is captured; it is always a faceless \"other\". This means that Any appears only on the right side of a colon in a relationship, and it probably doesn't make sense to combine it with other participants since it would subsume them all.

    "},{"location":"concepts/0006-ssi-notation/#self","title":"Self","text":"

    It is sometimes useful to model a relationship with oneself. This is done with the reserved token Self.

    "},{"location":"concepts/0006-ssi-notation/#intra-domain-relationships","title":"Intra-Domain Relationships","text":"

    Within a domain, relationships among agents or devices are sometimes interesting. Such relationships use the ~ (tilde) character. Thus, the intra-domain relationship between Alice's agent 1 and agent 2 is written 1~2 and read as \"one tilde two\".

    "},{"location":"concepts/0006-ssi-notation/#constituents","title":"Constituents","text":"

    Items that belong to a domain rather than having independent identity of their own (for example, data, money, keys) use dot notation for containment or ownership: A.ls (A\u2019s link secret), A.policy, etc.

    Names for constituents use the same rules as names for agents and devices.

    Alice\u2019s DID for her relationship with Bob is an inert constituent datum, but it is properly associated with the relationship rather than just Alice. It is thus represented with A.did@A:B. (The token did is reserved for DIDs). This is read as \u201cA\u2019s DID at A to B\u201d. Bob\u2019s complementary DID would be B.did@B:A.

    inert = name-start-char 1*name-other-char\nnested = \".\" inert\nowned-inert = subject 1*nested\n\nassociated-to = identity-owner / relationship\nassociated = subject 0*nested \"@\" associated-to\n

    If A has a cloud agent 2, then the public key (verification key or verkey) and private, secret key (signing key or sigkey) used by 2 in A:B would be: 2.pk@A:B and 2.sk@A:B. This is read as \u201c2 dot P K at A to B\u201d and \u201c2 dot S K at A to B\u201d. Here, 2 is known to belong to A because it takes A\u2019s perspective on A:B--it would be equivalent but unnecessary to write A.2.pk@A:B.

    "},{"location":"concepts/0006-ssi-notation/#did-docs-and-did-references","title":"DID Docs and DID References","text":"

    The mention of keys belonging to agents naturally raises the question of DID docs and the things they contain. How do they relate to our notation?

    DIDs are designed to be URIs, and items that carry an id property within a DID Doc can be referenced with standard URI fragment notation. This allows someone, for example, to refer to the first public key used by one of the agents owned by Alice with a notation like: did:sov:VUrvFeWW2cPv9hkNZ2ms2a;#key1.

    This notation is important and useful, but it is somewhat orthogonal to the concerns of this RFC. In the context of SSI notation, we are not DID-centric; we are subject-centric, and subjects are identified by a single capital alpha instead of by their DID. This helps with brevity. It lets us ignore the specific DID value and instead focus on the higher level semantics; compare:

    {A.did@A:B}/B --> B

    ...to:

    did:sov:PXqKt8sVsDu9T7BpeNqBfe sends its DID for did:sov:6tb15mkMRagD7YA3SBZg3p to did:sov:6tb15mkMRagD7YA3SBZg3p, using the agent possessing did:sov:PXqKt8sVsDu9T7BpeNqBfe;#key1 to encrypt with the corresponding signing key.

    We expect DID reference notation (the verbose text above) to be relevant for concrete communication between computers, and SSI notation (the terse equivalent shown first) to be more convenient for symbolic, higher level discussions between human beings. Occasionally, we may get very specific and map SSI notation into DID notation (e.g., A.1.vk = did:sov:PXqKt8sVsDu9T7BpeNqBfe;#key1).

    "},{"location":"concepts/0006-ssi-notation/#counting-and-iteration","title":"Counting and Iteration","text":"

    Sometimes, a concept or value evolves over time. For example, a given discussion might need to describe a DID Doc or an endpoint or a key across multiple state changes. In mathematical notation, this would typically be modeled with subscripts. In our notation, we use square brackets, and we number beginning from zero. A.pk[0]@A:B would be the first pubkey used by A in the A:B relationship; A.pk[1]@A:B would be the second pubkey, and so on. Likewise, a sequence of messages could be represented with msg[0], msg[1], and msg[2].

    "},{"location":"concepts/0006-ssi-notation/#messages","title":"Messages","text":"

    Messages are represented as quoted string literals, or with the reserved token msg, or with kebab-case names that explain their semantics, as in cred-offer:

    string-literal = %x22 c-literal %x22\nkebab-char = lcase-alpha / digit\nkebab-suffix = \"-\" 1*kebab-char\nkebab-msg = 1*kebab-char *kebab-suffix\nmessage = \"msg\" / string-literal / kebab-msg\n
    "},{"location":"concepts/0006-ssi-notation/#payments","title":"Payments","text":"

    Economic activity is part of rich SSI ecosystems, and requires notation. A payment address is denoted with the pay reserved token; A.pay[4] would be A's fifth payment address. The public key and secret key for a payment address use the ppk and psk reserved tokens, respectively. Thus, one way to reference the payment keys for that payment address would be A.pay[4].ppk and A.pay[4].psk. (Keys are normally held by agents, not by people--and every agent has its own keys. Thus, another notation for the public key pertaining to this address might be A.1.pay[4].ppk. This is an area of clumsiness that needs further study.)

    "},{"location":"concepts/0006-ssi-notation/#encryption","title":"Encryption","text":"

    Encryption deserves special consideration in the SSI world. It often figures prominently in discussions about security and privacy, and our notation needs to be able to represent it carefully.

    The following crypto operations are recognized by the notation, without making a strong claim about how the operations are implemented. (For example, inline Diffie-Hellman and an ephemeral symmetric key might be used for the *_crypt algorithms. What is interesting to the notation isn't the low-level details, but the general semantics achieved.)

    The notation for these crypto primitives uses curly braces around the message, with suffixes to clarify semantics. Generally, it identifies a recipient as an identity owner or thing, without clarifying the key that's used--the pairwise key for their DID is assumed.

    asymmetric   = \"/\"                                   ; suffix\nsymmetric    = \"*\"                                   ; suffix\nsign         = \"#\"                                   ; suffix\nmultiplex    = \"%\"                                   ; suffix\nverify       = \"?\"                                   ; suffix\n\nanon-crypt   = \"{\" message \"}\" asymmetric subject          ; e.g., {\"hi\"}/B\n\n                ; sender is first subject in relationship, receiver is second\nauth-crypt   = \"{\" message \"}\" asymmetric short-relationship ; e.g., {\"hi\"}/A:B\n\nsym-crypt    = \"{\" message \"}\" symmetric subject           ; e.g., {\"hi\"}*B\n\nsig-verify   = \"{\" message \"}\" verify subject              ; e.g., {\"hi\"}?B\n

    The relative order of suffixes reflects whether encryption or signing takes place first: {\"hello\"}*B# says that symmetric encryption happens first, and then a signature is computed over the ciphertext; {\"hello\"#}*B says that plaintext is signed, and then both the plaintext and the signature are encrypted. (The {\"hello\"}#*B variant is nonsensical because it splits the encryption notation in half).

    All suffixes can be further decorated with a parenthesized algorithm name, if precision is required: {\"hello\"}*(aes256)B or {\"hello\"}/(rsa1024)A:B or {\"hello\"#(ecdsa)}/B.
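    The order-of-operations rule can be checked mechanically. The sketch below (hypothetical function, ignoring parenthesized algorithm names for simplicity) classifies a notation string by where the # suffix falls relative to the closing brace:

```python
def signing_order(expr):
    # '#' inside the braces: plaintext is signed, then everything encrypted.
    # '#' after the closing brace: signature computed over the ciphertext.
    close = expr.rindex('}')
    if '#' in expr[:close]:
        return 'sign-then-encrypt'   # e.g., {"hello"#}*B
    if '#' in expr[close:]:
        return 'encrypt-then-sign'   # e.g., {"hello"}*B#
    return 'unsigned'
```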

    With signing, usually the signer and sender are assumed to be identical, and the notation omits any clarification about the signer. However, this can be added after # to be explicit. Thus, {msg#B}/C would be a message with plaintext signed by B, anon-encrypted for C. Similarly, {msg#(ring-rabin)BGJM}/A:C would be a message with plaintext signed according to a Rabin ring signature algorithm, by B, G, J, and M, and then auth-encrypted by A for C.

    Signature verification is written over the corresponding message, identifying which entities perform the action. {msg#A}?B would be a message with plaintext signed by A and verified by B. {msg#(threshold-sig)ABC}?DE would be a plaintext message signed according to a threshold signature algorithm by A, B, and C, and then verified by D and E.

    Multiplexed asymmetric encryption is noted above, but has not yet been described. This is a technique whereby a message body is encrypted with an ephemeral symmetric key, and then the ephemeral key is encrypted asymmetrically for multiple potential recipients (each of which has a unique but tiny payload [the key] to decrypt, which in turn unlocks the main payload). The notation for this looks like {msg}%BCDE for multiplexed anon_crypt (sender is anonymous), and like {msg}%A:BCDE for multiplexed auth_crypt (sender is authenticated by their private key).
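    The multiplexing structure can be sketched as follows. The cipher below is a toy XOR keystream used only so the example is self-contained and runnable; a real agent would use a vetted AEAD cipher, and all function names here are hypothetical:

```python
import hashlib
import secrets

def _toy_cipher(key, data):
    # NOT real encryption -- a stand-in keystream so the multiplexing
    # structure is visible; encrypt and decrypt are the same XOR operation.
    stream = b''
    counter = 0
    while len(stream) < len(data):
        stream += hashlib.sha256(key + counter.to_bytes(4, 'big')).digest()
        counter += 1
    return bytes(a ^ b for a, b in zip(data, stream))

def multiplex_encrypt(msg, recipient_keys):
    # {msg}%BCDE: body encrypted once under an ephemeral symmetric key,
    # then that tiny key wrapped separately for each recipient.
    ephemeral = secrets.token_bytes(32)
    return {
        'ciphertext': _toy_cipher(ephemeral, msg),
        'recipients': {name: _toy_cipher(key, ephemeral)
                       for name, key in recipient_keys.items()},
    }

def multiplex_decrypt(envelope, name, key):
    # Each recipient unwraps only its own copy of the ephemeral key,
    # which in turn unlocks the main payload.
    ephemeral = _toy_cipher(key, envelope['recipients'][name])
    return _toy_cipher(ephemeral, envelope['ciphertext'])
```

    Note that each recipient decrypts only a tiny per-recipient payload (the wrapped key) plus the shared body, rather than the sender encrypting the whole body once per recipient.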

    "},{"location":"concepts/0006-ssi-notation/#other-punctuation","title":"Other punctuation","text":"

    Message sending is represented with arrows: -> is most common, though <- is also reasonable in some cases. Message content and notes about sending can be embedded in the hyphens of the sending arrow, as in this example, where the notation says an unknown party uses http to transmit \"hello\", anon-encrypted for Alice:

    <unknown> -- http: {\"hello\"}/A --> 1

    Parentheses have traditional meaning (casual usage in written language, plus grouping and precedence).

    Angle braces < and > are for placeholders; any reasonable explanatory text may appear inside the angle braces, so to represent Alice's relationship with a not-yet-known subject, the notation might show something like A:<TBD>.

    "},{"location":"concepts/0006-ssi-notation/#reference","title":"Reference","text":""},{"location":"concepts/0006-ssi-notation/#examples","title":"Examples","text":""},{"location":"concepts/0006-ssi-notation/#reserved-tokens","title":"Reserved Tokens","text":""},{"location":"concepts/0006-ssi-notation/#abnf","title":"ABNF","text":"
    ucase-alpha    = %x41-5A                        ; A-Z\nlcase-alpha    = %x61-7A                        ; a-z\ndigit          = %x30-39                        ; 0-9\nname-start-char = lcase-alpha / \"_\"             ; a-z or underscore\nname-other-char = digit / lcase-alpha / \"_\"     ; 0-9 or a-z or underscore\n\nidentity-owner = ucase-alpha\nthing = lcase-alpha\nsubject = identity-owner / thing\n\nagent = 1*3digit \"@\" subject\ndevice = name-start-char 1*name-other-char \"@\" subject\n\nnext-subject = \"+\" subject\nshort-relationship = subject \":\" subject *next-subject\nlong-relationship = subject \":\" subject 1*next-subject\ngeneral-audience = ucase-alpha 1*name-other-char\ngeneral-relationship = subject \":\" general-audience\nrelationship = short-relationship / long-relationship / general-relationship\n\ninert = name-start-char 1*name-other-char\nnested = \".\" inert\nowned-inert = subject 1*nested\n\nassociated-to = identity-owner / relationship\nassociated = subject 0*nested \"@\" associated-to\n\nstring-literal = %x22 c-literal %x22\nkebab-char = lcase-alpha / digit\nkebab-suffix = \"-\" 1*kebab-char\nkebab-msg = 1*kebab-char *kebab-suffix\nmessage = \"msg\" / string-literal / kebab-msg\n\nasymmetric   = \"/\"                                   ; suffix\nsymmetric    = \"*\"                                   ; suffix\nsign         = \"#\"                                   ; suffix\nmultiplex    = \"%\"                                   ; suffix\n\nanon-crypt   = \"{\" message \"}\" asymmetric subject          ; e.g., {\"hi\"}/B\n\n                ; sender is first subject in relationship, receiver is second\nauth-crypt   = \"{\" message \"}\" asymmetric short-relationship ; e.g., {\"hi\"}/A:B\n\nsym-crypt    = \"{\" message \"}\" symmetric subject           ; e.g., {\"hi\"}*B\n
    "},{"location":"concepts/0006-ssi-notation/#drawbacks","title":"Drawbacks","text":""},{"location":"concepts/0006-ssi-notation/#rationale-and-alternatives","title":"Rationale and alternatives","text":""},{"location":"concepts/0006-ssi-notation/#prior-art","title":"Prior art","text":"

    Also, experiments with superscripts and subscripts in this format led to semantic dead ends or undesirable nesting when patterns were applied consistently. For example, one thought had us representing Alice's verkey, signing key, and DID for her Bob relationship with ABVK, ABSK, and ABDID. This was fine until we asked how to represent the verkey for Alice's agent in the Alice to Bob relationship; is that ABDIDVK? And what about Alice's link secret, which isn't relationship-specific? And how would we handle N-way relationships?

    "},{"location":"concepts/0006-ssi-notation/#unresolved-questions","title":"Unresolved questions","text":""},{"location":"concepts/0006-ssi-notation/#implementations","title":"Implementations","text":"

    The following lists the implementations (if any) of this RFC. Please do a pull request to add your implementation. If the implementation is open source, include a link to the repo or to the implementation within the repo. Please be consistent in the \"Name\" field so that a mechanical processing of the RFCs can generate a list of all RFCs supported by an Aries implementation.

    Name / Link Implementation Notes Peer DID Method Spec uses notation in diagrams"},{"location":"concepts/0008-message-id-and-threading/","title":"Aries RFC 0008: Message ID and Threading","text":""},{"location":"concepts/0008-message-id-and-threading/#summary","title":"Summary","text":"

    Definition of the message @id field and the ~thread decorator.

    "},{"location":"concepts/0008-message-id-and-threading/#motivation","title":"Motivation","text":"

    Referring to messages is useful in many interactions. A standard method of adding a message ID promotes good patterns in message families. When multiple messages are coordinated in a message flow, the threading pattern helps avoid having to re-roll the same spec for each message family that needs it.

    "},{"location":"concepts/0008-message-id-and-threading/#tutorial","title":"Tutorial","text":""},{"location":"concepts/0008-message-id-and-threading/#message-ids","title":"Message IDs","text":"

    Message IDs are specified with the @id attribute, which comes from JSON-LD. The sender of the message is responsible for creating the message ID, and any message can be identified by the combination of the sender and the message ID. Message IDs should be considered to be opaque identifiers by any recipients.

    "},{"location":"concepts/0008-message-id-and-threading/#message-id-requirements","title":"Message ID Requirements","text":""},{"location":"concepts/0008-message-id-and-threading/#example","title":"Example","text":"
    {\n    \"@type\": \"did:example:12345...;spec/example_family/1.0/example_type\",\n    \"@id\": \"98fd8d72-80f6-4419-abc2-c65ea39d0f38\",\n    \"example_attribute\": \"stuff\"\n}\n

    The following was pulled from this document written by Daniel Hardman and stored in the Sovrin Foundation's protocol repository.

    "},{"location":"concepts/0008-message-id-and-threading/#threaded-messages","title":"Threaded Messages","text":"

    Message threading will be implemented as a decorator to messages, for example:

    {\n    \"@type\": \"did:example:12345...;spec/example_family/1.0/example_type\",\n    \"@id\": \"98fd8d72-80f6-4419-abc2-c65ea39d0f38\",\n    \"~thread\": {\n        \"thid\": \"98fd8d72-80f6-4419-abc2-c65ea39d0f38\",\n        \"pthid\": \"1e513ad4-48c9-444e-9e7e-5b8b45c5e325\",\n        \"sender_order\": 3,\n        \"received_orders\": {\"did:sov:abcxyz\":1},\n        \"goal_code\": \"aries.vc.issue\"\n    },\n    \"example_attribute\": \"example_value\"\n}\n

    The ~thread decorator is generally required on any type of response, since this is what connects it with the original request.

    While not recommended, the initial message of a new protocol instance MAY have an empty ({}) ~thread item. Aries agents receiving a message with an empty ~thread item MUST gracefully handle such a message.

    "},{"location":"concepts/0008-message-id-and-threading/#thread-object","title":"Thread object","text":"

    A thread object has the following fields discussed below:

    "},{"location":"concepts/0008-message-id-and-threading/#thread-id-thid","title":"Thread ID (thid)","text":"

    Because multiple interactions can happen simultaneously, it's important to differentiate between them. This is done with a Thread ID or thid.

    If the Thread object is defined and a thid is given, the Thread ID is the value given there. But if the Thread object is not defined in a message, the Thread ID is implicitly defined as the Message ID (@id) of the given message and that message is the first message of a new thread.
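    This resolution rule is small enough to state as code; a minimal sketch (hypothetical function name):

```python
def thread_id(message):
    # Use ~thread.thid when present; otherwise the message's own @id
    # implicitly starts a new thread (sketch of the rule above).
    return message.get('~thread', {}).get('thid', message['@id'])
```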

    "},{"location":"concepts/0008-message-id-and-threading/#sender-order-sender_order","title":"Sender Order (sender_order)","text":"

    It is desirable to know how messages within a thread should be ordered. However, it is very difficult to know with confidence the absolute ordering of events scattered across a distributed system. Alice and Bob may each send a message before receiving the other's response, but be unsure whether their message was composed before the other's. Timestamping cannot resolve this impasse. Therefore, there is no unified absolute ordering of all messages within a thread--but there is an ordering of all messages emitted by each participant.

    In a given thread, the first message from each party has a sender_order value of 0, the second message sent from each party has a sender_order value of 1, and so forth. Note that both Alice and Bob use 0 and 1, without regard to whether the other party may be known to have used them. This gives a strong ordering with respect to each party's messages, and it means that any message can be uniquely identified in an interaction by its thid, the sender DID and/or key, and the sender_order.
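    A sketch of the per-party numbering (hypothetical class name), showing that each sender keeps an independent zero-based counter:

```python
from collections import defaultdict

class ThreadCounters:
    # Each party numbers only its own messages, starting at 0, so ordering
    # is strong per sender without requiring a global clock (sketch).
    def __init__(self):
        self._next = defaultdict(int)

    def next_sender_order(self, sender):
        n = self._next[sender]
        self._next[sender] += 1
        return n
```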

    "},{"location":"concepts/0008-message-id-and-threading/#received-orders-received_orders","title":"Received Orders (received_orders)","text":"

    In an interaction, it may be useful for the recipient of a message to know if their last message was received. A received_orders value addresses this need, and could be included as a best practice to help detect missing messages.

    In the example above, if Alice is the sender, and Bob is identified by did:sov:abcxyz, then Alice is saying, \"Here's my message with index 3 (sender_order=3), and I'm sending it in response to your message 1 (received_orders: {<bob's DID>: 1}).\" Apparently Alice has been more chatty than Bob in this exchange.

    The received_orders field is plural to acknowledge the possibility of multiple parties. In pairwise interactions, this may seem odd. However, n-wise interactions are possible (e.g., in a doctor ~ hospital ~ patient n-wise relationship). Even in pairwise, multiple agents on either side may introduce other actors. This may happen even if an interaction is designed to be 2-party (e.g., an intermediate party emits an error unexpectedly).

    In an interaction with more parties, the received_orders object has a key/value pair for each actor/sender_order, where actor is a DID or a key for an agent:

    \"received_orders\": {\"did:sov:abcxyz\":1, \"did:sov:defghi\":14}\n

    Here, the received_orders fragment makes a claim about the last sender_order that the sender observed from did:sov:abcxyz and did:sov:defghi. The sender of this fragment is presumably some other DID, implying that 3 parties are participating. Any parties unnamed in received_orders have an undefined value for received_orders. This is NOT the same as saying that they have made no observable contribution to the thread. To make that claim, use the special value -1, as in:

    \"received_orders\": {\"did:sov:abcxyz\":1, \"did:sov:defghi\":14, \"did:sov:jklmno\":-1}\n
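    A sketch of how a sender might assemble this claim (hypothetical helper), mapping each known party to the highest sender_order observed, with -1 for parties that have contributed nothing:

```python
def received_orders_claim(observed, all_parties):
    # observed: party -> highest sender_order seen from that party.
    # Parties with no observed contribution are claimed explicitly as -1;
    # simply omitting a party would leave its value undefined instead.
    return {party: observed.get(party, -1) for party in all_parties}
```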
    "},{"location":"concepts/0008-message-id-and-threading/#example_1","title":"Example","text":"

    As an example, Alice is an issuer and she offers a credential to Bob.

    "},{"location":"concepts/0008-message-id-and-threading/#nested-interactions-parent-thread-id-or-pthid","title":"Nested interactions (Parent Thread ID or pthid)","text":"

    Sometimes there are interactions that need to occur with the same party while an existing interaction is in flight.

    When an interaction is nested within another, the initiator of a new interaction can include a Parent Thread ID (pthid). This signals to the other party that this is a thread that is branching off of an existing interaction.

    "},{"location":"concepts/0008-message-id-and-threading/#nested-example","title":"Nested Example","text":"

    As before, Alice is an issuer and she offers a credential to Bob. This time, she wants a bit more information before she is comfortable providing a credential.

    All of the steps are the same, except the two bolded steps that are part of a nested interaction.

    "},{"location":"concepts/0008-message-id-and-threading/#implicit-threads","title":"Implicit Threads","text":"

    Threads reference a Message ID as the origin of the thread. This allows any message to be the start of a thread, even if not originally intended. Any message without an explicit ~thread attribute can be considered to have the following ~thread attribute implicitly present.

    \"~thread\": {\n    \"thid\": <same as @id of the outer message>,\n    \"sender_order\": 0\n}\n
    "},{"location":"concepts/0008-message-id-and-threading/#implicit-replies","title":"Implicit Replies","text":"

    A message that contains a ~thread block with a thid different from the outer message @id, but no sender_order, is considered an implicit reply. Implicit replies have a sender_order of 0 and a received_orders of {other:0}. Implicit replies should only be used when a further message thread is not anticipated. When further messages in the thread are expected, a full regular ~thread block should be used.

    Example Message with an Implicit Reply:

    {\n    \"@id\": \"<@id of outer message>\",\n    \"~thread\": {\n        \"thid\": \"<different than @id of outer message>\"\n    }\n}\n
    Effective Message with defaults in place:
    {\n    \"@id\": \"<@id of outer message>\",\n    \"~thread\": {\n        \"thid\": \"<different than @id of outer message>\",\n        \"sender_order\": 0,\n        \"received_orders\": { \"DID of sender\":0 }\n    }\n}\n
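    A receiver can apply these defaults mechanically; a minimal sketch (hypothetical function, with other_did standing for the other party's DID):

```python
def expand_implicit_reply(message, other_did):
    # Fill in the defaults an implicit reply omits: sender_order 0 and a
    # received_orders of 0 for the other party (sketch of the rule above).
    thread = dict(message.get('~thread', {}))
    thread.setdefault('sender_order', 0)
    thread.setdefault('received_orders', {other_did: 0})
    return {**message, '~thread': thread}
```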

    "},{"location":"concepts/0008-message-id-and-threading/#reference","title":"Reference","text":""},{"location":"concepts/0008-message-id-and-threading/#drawbacks","title":"Drawbacks","text":"

    Why should we not do this?

    "},{"location":"concepts/0008-message-id-and-threading/#rationale-and-alternatives","title":"Rationale and alternatives","text":""},{"location":"concepts/0008-message-id-and-threading/#prior-art","title":"Prior art","text":"

    If you're aware of relevant prior-art, please add it here.

    "},{"location":"concepts/0008-message-id-and-threading/#unresolved-questions","title":"Unresolved questions","text":""},{"location":"concepts/0008-message-id-and-threading/#implementations","title":"Implementations","text":"

    The following lists the implementations (if any) of this RFC. Please do a pull request to add your implementation. If the implementation is open source, include a link to the repo or to the implementation within the repo. Please be consistent in the \"Name\" field so that a mechanical processing of the RFCs can generate a list of all RFCs supported by an Aries implementation.

    Name / Link Implementation Notes Aries Cloud Agent - Python Contributed by the government of British Columbia. Aries Static Agent - Python Useful for cron jobs and other simple, automated use cases. Connect.Me Free mobile app from Evernym. Installed via app store on iOS and Android. Verity Commercially licensed enterprise agent, SaaS or on-prem. Aries Protocol Test Suite"},{"location":"concepts/0011-decorators/","title":"Aries RFC 0011: Decorators","text":""},{"location":"concepts/0011-decorators/#summary","title":"Summary","text":"

    Explain how decorators work in DID communication.

    "},{"location":"concepts/0011-decorators/#motivation","title":"Motivation","text":"

    Certain semantic patterns manifest over and over again in communication. For example, all communication needs the pattern of testing the type of message received. The pattern of identifying a message and referencing it later is likely to be useful in a high percentage of all protocols that are ever written. A pattern that associates messages with debugging/tracing/timing metadata is equally relevant. And so forth.

    We need a way to convey metadata that embodies these patterns, without complicating schemas, bloating core definitions, managing complicated inheritance hierarchies, or confusing one another. It needs to be elegant, powerful, and adaptable.

    "},{"location":"concepts/0011-decorators/#tutorial","title":"Tutorial","text":"

    A decorator is an optional chunk of JSON that conveys metadata. Decorators are not declared in a core schema but rather supplementary to it. Decorators add semantic content broadly relevant to messaging in general, and not so much tied to the problem domain of a specific type of interaction.

    You can think of decorators as a sort of mixin for agent-to-agent messaging. This is not a perfect analogy, but it is a good one. Decorators in DIDComm also have some overlap (but not a direct congruence) with annotations in Java, attributes in C#, and both decorators and annotations in python.

    "},{"location":"concepts/0011-decorators/#simple-example","title":"Simple Example","text":"

    Imagine we are designing a protocol and associated messages to arrange meetings between two people. We might come up with a meeting_proposal message that looks like this:

    {\n  \"@id\": \"e2987006-a18a-4544-9596-5ad0d9390c8b\",\n  \"@type\": \"did:sov:8700e296a1458aad0d93;spec/meetings/1.0/proposal\",\n  \"proposed_time\": \"2019-12-23 17:00\",\n  \"proposed_place\": \"at the cathedral, Barf\u00fcsserplatz, Basel\",\n  \"comment\": \"Let's walk through the Christmas market.\"\n}\n

    Now we tackle the meeting_proposal_response messages. Maybe we start with something exceedingly simple, like:

    {\n  \"@id\": \"d9390ce2-8ba1-4544-9596-9870065ad08a\",\n  \"@type\": \"did:sov:8700e296a1458aad0d93;spec/meetings/1.0/response\",\n  \"agree\": true,\n  \"comment\": \"See you there!\"\n}\n

    But we quickly realize that the asynchronous nature of messaging will expose a gap in our message design: if Alice receives two meeting proposals from Bob at the same time, there is nothing to bind a response back to the specific proposal it addresses.

    We could extend the schema of our response so it contains a thread field that references the @id of the original proposal. This would work. However, it does not satisfy the DRY principle of software design, because when we tackle the protocol for negotiating a purchase between buyer and seller next week, we will need the same solution all over again. The result would be a proliferation of schemas that all address the same basic need for associating request and response. Worse, they might do it in different ways, cluttering the mental model for everyone and making the underlying patterns less obvious.

    What we want instead is a way to inject into any message the idea of a thread, such that we can easily associate responses with requests, errors with the messages that triggered them, and child interactions that branch off of the main one. This is the subject of the message threading RFC, and the solution is the ~thread decorator, which can be added to any response:

    {\n  \"@id\": \"d9390ce2-8ba1-4544-9596-9870065ad08a\",\n  \"@type\": \"did:sov:8700e296a1458aad0d93;spec/meetings/1.0/response\",\n  \"~thread\": {\"thid\": \"e2987006-a18a-4544-9596-5ad0d9390c8b\"},\n  \"agree\": true,\n  \"comment\": \"See you there!\"\n}\n
    This chunk of JSON is defined independent of any particular message schema, but is understood to be available in all DIDComm schemas.
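    As a sketch of how a receiving agent might use this (the pending-proposal store and correlate function are invented for illustration; the IDs come from the examples above), correlating a response to its proposal is a simple lookup on ~thread.thid:

```python
# Sketch: correlate a response to the pending proposal it answers,
# using the ~thread decorator's thid field.
pending = {
    "e2987006-a18a-4544-9596-5ad0d9390c8b": {"proposed_place": "Basel"},
}

def correlate(response):
    """Return the pending request this response answers, or None."""
    thid = response.get("~thread", {}).get("thid")
    return pending.get(thid)

response = {
    "@id": "d9390ce2-8ba1-4544-9596-9870065ad08a",
    "~thread": {"thid": "e2987006-a18a-4544-9596-5ad0d9390c8b"},
    "agree": True,
}
assert correlate(response)["proposed_place"] == "Basel"
```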

    "},{"location":"concepts/0011-decorators/#basic-conventions","title":"Basic Conventions","text":"

    Decorators are defined in RFCs that document a general pattern such as message threading RFC or message localization. The documentation for a decorator explains its semantics and offers examples.

    Decorators are recognized by name. The name must begin with the ~ character (which is reserved in DIDComm messages for decorator use), and be a short, single-line string suitable for use as a JSON attribute name.

    Decorators may be simple key:value pairs \"~foo\": \"bar\". Or they may associate a key with a more complex structure:

    \"~thread\": {\n  \"thid\": \"e2987006-a18a-4544-9596-5ad0d9390c8b\",\n  \"pthid\": \"0c8be298-45a1-48a4-5996-d0d95a397006\",\n  \"sender_order\": 0\n}\n

    Decorators should be thought of as supplementary to the problem-domain-specific fields of a message, in that they describe general communication issues relevant to a broad array of message types. Entities that handle messages should treat all unrecognized fields as valid but meaningless, and decorators are no exception. Thus, software that doesn't recognize a decorator should ignore it.

    However, this does not mean that decorators are necessarily optional. Some messages may intend something tied so tightly to a decorator's semantics that the decorator effectively becomes required. An example of this is the relationship between a general error reporting mechanism and the ~thread decorator: it's not very helpful to report errors without the context that a thread provides.
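    The "ignore unrecognized decorators" convention can be sketched as follows (the RECOGNIZED set and sample message are invented for illustration):

```python
# Sketch: split a received message into domain fields and decorators,
# silently ignoring decorators this handler does not recognize.
RECOGNIZED = {"~thread", "~l10n"}

def partition(msg):
    # Domain fields: everything that is not a decorator.
    domain = {k: v for k, v in msg.items() if not k.startswith("~")}
    # Keep only decorators this handler understands; ignore the rest.
    decorators = {k: v for k, v in msg.items()
                  if k.startswith("~") and k in RECOGNIZED}
    return domain, decorators

msg = {"@type": "example", "comment": "hi",
       "~thread": {"thid": "abc"}, "~tracing": {"level": 9}}
domain, decs = partition(msg)
assert "~tracing" not in decs  # unrecognized decorator ignored, not rejected
```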

    Because decorators are general by design and intent, we don't expect namespacing to be a major concern. The community agrees on decorators that everybody will recognize, and they acquire global scope upon acceptance. Their globalness is part of their utility. Effectively, decorator names are like reserved words in a shared public language of messages.

    Namespacing is also supported, as we may discover legitimate uses. When namespaces are desired, dotted name notation is used, as in ~mynamespace.mydecoratorname. We may elaborate this topic more in the future.

    Decorators are orthogonal to JSON-LD constructs in DIDComm messages.

    "},{"location":"concepts/0011-decorators/#versioning","title":"Versioning","text":"

    We hope that community-defined decorators are very stable. However, new fields (a non-breaking change) might need to be added to complex decorators; occasionally, more significant changes might be necessary as well. Therefore, decorators do support semver-style versioning, but in a form that allows details to be ignored unless or until they become important. The rules are:

    1. As with all other aspects of DIDComm messages, unrecognized fields in decorators must be ignored.
    2. Version information can be appended to the name of a decorator, as in ~mydecorator/1. Only a major version (never minor or patch) is used, since:
      • Minor version variations should not break decorator handling code.
      • The dot character . is reserved for namespacing within field names.
      • The extra complexity is not worth the small amount of value it might add.
    3. A decorator without a version is considered to be synonymous with version 1.0, and the version-less form is preferred. This allows version numbers to be added only in the uncommon cases where they are necessary.
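    The versioning rules above reduce to a small parsing step (a hedged sketch; the function name is invented):

```python
# Sketch: parse a decorator name with an optional major version suffix.
# Per the rules above, a version-less decorator means major version 1.
def parse_decorator(name):
    base, _, ver = name.partition("/")
    return base, int(ver) if ver else 1

assert parse_decorator("~thread") == ("~thread", 1)
assert parse_decorator("~mydecorator/2") == ("~mydecorator", 2)
```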
    "},{"location":"concepts/0011-decorators/#decorator-scope","title":"Decorator Scope","text":"

    A decorator may be understood to decorate (add semantics) at several different scopes. The discussion thus far has focused on message decorators, and this is by far the most important scope to understand. But there are more possibilities.

    Suppose we wanted to decorate an individual field. This can be done with a field decorator, which is a sibling field to the field it decorates. The name of the decorated field is combined with a decorator suffix, as follows:

    {\n  \"note\": \"Let's have a picnic.\",\n  \"note~l10n\": { ... }\n}\n
    In this example, taken from the localization pattern, note~l10n decorates note.
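    A minimal sketch of how an agent might pair field decorators with the fields they decorate (the function is hypothetical; the message comes from the example above):

```python
# Sketch: find field decorators, whose names take the form "<field>~<suffix>"
# and which sit beside the field they decorate.
def field_decorators(msg):
    pairs = {}
    for key in msg:
        if "~" in key and not key.startswith("~"):
            field = key.split("~", 1)[0]
            if field in msg:  # only count it if the decorated field exists
                pairs[field] = key
    return pairs

msg = {"note": "Let's have a picnic.", "note~l10n": {"locale": "en"}}
assert field_decorators(msg) == {"note": "note~l10n"}
```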

    Besides a single message or a single field, consider the following scopes as decorator targets:

    "},{"location":"concepts/0011-decorators/#reference","title":"Reference","text":"

    This section of this RFC will be kept up-to-date with a list of globally accepted decorators, and links to the RFCs that define them.

    "},{"location":"concepts/0011-decorators/#drawbacks","title":"Drawbacks","text":"

    By having fields that are meaningful yet not declared in core schemas, we run the risk that parsing and validation routines will fail to enforce details that are significant but invisible. We also accept the possibility that interop may look good on paper, but fail due to different understandings of important metadata.

    We believe this risk will take care of itself, for the most part, as real-life usage accumulates and decorators become a familiar and central part of the thinking for developers who work with agent-to-agent communication.

    "},{"location":"concepts/0011-decorators/#rationale-and-alternatives","title":"Rationale and alternatives","text":"

    There is ongoing work in the #indy-semantics channel on Rocket.Chat to explore the concept of overlays. These are layers of additional meaning that accumulate above a schema base. Decorators as described here are quite similar in intent. There are some subtle differences, though. The most interesting is that decorators as described here may be applied to things that are not schema-like (e.g., to a message family as a whole, or to a connection, not just to an individual message).

    We may be able to resolve these two worldviews, such that decorators are viewed as overlays and inherit some overlay goodness as a result. However, it is unlikely that decorators will change significantly in form or substance as a result. We thus believe the current mental model is already RFC-worthy, and represents a reasonable foundation for immediate use.

    "},{"location":"concepts/0011-decorators/#prior-art","title":"Prior art","text":"

    See references to similar features in programming languages like Java, C#, and Python, mentioned above.

    See also this series of blog posts about semantic gaps and the need to manage intent in a declarative style: [ Lacunas Everywhere, Bridging the Lacuna Humana, Introducing Marks, Mountains, Molehills, and Markedness ]

    "},{"location":"concepts/0011-decorators/#unresolved-questions","title":"Unresolved questions","text":""},{"location":"concepts/0011-decorators/#implementations","title":"Implementations","text":"

    The following lists the implementations (if any) of this RFC. Please do a pull request to add your implementation. If the implementation is open source, include a link to the repo or to the implementation within the repo. Please be consistent in the \"Name\" field so that a mechanical processing of the RFCs can generate a list of all RFCs supported by an Aries implementation.

    Name / Link Implementation Notes Aries RFCs: RFC 0008, RFC 0017, RFC 0015, RFC 0023, RFC 0043, RFC 0056, RFC 0075 many implemented RFCs depend on decorators... Indy Cloud Agent - Python message threading Aries Framework - .NET message threading Streetcred.id message threading Aries Cloud Agent - Python message threading, attachments Aries Static Agent - Python message threading Aries Framework - Go message threading Connect.Me message threading Verity message threading Aries Protocol Test Suite message threading"},{"location":"concepts/0013-overlays/","title":"Aries RFC 0013: Overlays","text":""},{"location":"concepts/0017-attachments/","title":"Aries RFC 0017: Attachments","text":""},{"location":"concepts/0017-attachments/#summary","title":"Summary","text":"

    Explains the three canonical ways to attach data to an agent message.

    "},{"location":"concepts/0017-attachments/#motivation","title":"Motivation","text":"

    DIDComm messages use a structured format with a defined schema and a small inventory of scalar data types (string, number, date, etc). However, it will be quite common for messages to supplement formalized exchange with arbitrary data--images, documents, or types of media not yet invented.

    We need a way to \"attach\" such content to DIDComm messages. This method must be flexible, powerful, and usable without requiring new schema updates for every dynamic variation.

    "},{"location":"concepts/0017-attachments/#tutorial","title":"Tutorial","text":""},{"location":"concepts/0017-attachments/#messages-versus-data","title":"Messages versus Data","text":"

    Before explaining how to associate data with a message, it is worth pondering exactly how these two categories of information differ. It is common for newcomers to DIDComm to argue that messages are just data, and vice versa. After all, any data can be transmitted over DIDComm; doesn't that turn it into a message? And any message can be saved; doesn't that make it data?

    While it is true that messages and data are highly related, some semantic differences matter:

    Some examples:

    The line between these two concepts may not be perfectly crisp in all cases, and that is okay. It is clear enough, most of the time, to provide context for the central question of this RFC, which is:

    How do we send data along with messages?

    "},{"location":"concepts/0017-attachments/#3-ways","title":"3 Ways","text":"

    Data can be \"attached\" to DIDComm messages in 3 ways:

    1. Inlining
    2. Embedding
    3. Appending
    "},{"location":"concepts/0017-attachments/#inlining","title":"Inlining","text":"

    In inlining, data is directly assigned as the value paired with a JSON key in a DIDComm message. For example, a message about arranging a rendezvous may inline data about a location:

    This inlined data is in Google Maps pinning format. It has a meaning at rest, outside the message that conveys it, and the versioning of its structure may evolve independently of the versioning of the rendezvous protocol.

    Only JSON data can be inlined, since any other data format would break JSON format rules.

    "},{"location":"concepts/0017-attachments/#embedding","title":"Embedding","text":"

    In embedding, a JSON data structure called an attachment descriptor is assigned as the value paired with a JSON key in a DIDComm message. (Or, an array of attachment descriptors could be assigned.) By convention, the key name for such attachment fields ends with ~attach, making it a field-level decorator that can share common handling logic in agent code. The attachment descriptor structure describes the MIME type and other properties of the data, in much the same way that MIME headers and body describe and contain an attachment in an email message. Given an imaginary protocol that photographers could use to share their favorite photo with friends, the embedded data might manifest like this:

    Embedding is a less direct mechanism than inlining, because the data is no longer readable by a human inspecting the message; it is base64url-encoded instead. A benefit of this approach is that the data can be any MIME type instead of just JSON, and that the data comes with useful metadata that can facilitate saving it as a separate file.
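    As a hedged sketch of building such an attachment descriptor (the helper function and photo bytes are invented; the descriptor field names follow the examples in this RFC):

```python
import base64

# Sketch: build an embedded attachment descriptor for arbitrary binary content.
def make_descriptor(content, mime_type, filename):
    return {
        "mime-type": mime_type,
        "filename": filename,
        "data": {
            # base64url without padding, per the Content Formats rules below
            "base64": base64.urlsafe_b64encode(content).rstrip(b"=").decode()
        },
    }

desc = make_descriptor(b"\x89PNG fake photo bytes", "image/png", "sunset.png")
assert "=" not in desc["data"]["base64"]  # padding omitted when writing
```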

    "},{"location":"concepts/0017-attachments/#appending","title":"Appending","text":"

    Appending is accomplished using the ~attach decorator, which can be added to any message to include arbitrary data. The decorator is an array of attachment descriptor structures (the same structure used for embedding). For example, a message that conveys evidence found at a crime scene might include the following decorator:

    "},{"location":"concepts/0017-attachments/#choosing-the-right-approach","title":"Choosing the right approach","text":"

    These methods for attaching sit along a continuum that is somewhat like the continuum between strong, statically typed languages versus dynamic, duck-typed languages in programming. The more strongly typed the attachments are, the more strongly bound the attachments are to the protocol that conveys them. Each choice has advantages and disadvantages.

    Inlined data is strongly typed; the schema for its associated message must specify the name of the data field, plus what type of data it contains. Its format is always some kind of JSON--often JSON-LD with a @type and/or @context field to provide greater clarity and some independence of versioning. Simple and small data is the best fit for inlining. As mentioned earlier, the Connection Protocol inlines a DID Doc in its connection_request and connection_response messages.

    Embedded data is still associated with a known field in the message schema, but it can have a broader set of possible formats. A credential exchange protocol might embed a credential in the final message that does credential issuance.

    Appended attachments are the most flexible but also the hardest to run through semantically sophisticated processing. They do not require any specific declaration in the schema of a message, although they can be referenced in fields defined by the schema via their nickname (see below). A protocol that needs to pass an arbitrary collection of artifacts without strong knowledge of their semantics might find this helpful, as in the example mentioned above, where scheduling a venue causes various human-usable payloads to be delivered.

    "},{"location":"concepts/0017-attachments/#ids-for-attachments","title":"IDs for attachments","text":"

    The @id field within an attachment descriptor is used to refer unambiguously to an appended (or less ideally, embedded) attachment, and works like an HTML anchor. It is resolved relative to the root @id of the message and only has to be unique within a message. For example, imagine a fictional message type that's used to apply for an art scholarship, that requires photos of art demonstrating techniques A, B, and C. We could have 3 different attachment descriptors--but what if the same work of art demonstrates both technique A and technique B? We don't want to attach the same photo twice...

    What we can do is stipulate that the datatype of A_pic, B_pic, and C_pic is an attachment reference, and that the references will point to appended attachments. A fragment of the result might look like this:

    Another example of nickname use appeared in the first example of appended attachments above, where the notes field referred to the @ids of the various attachments.

    This indirection offers several benefits:

    We could use this same technique with embedded attachments (that is, assign a nickname to an embedded attachment, and refer to that nickname in another field where attached data could be embedded), but this is not considered best practice. The reason is that it requires a field in the schema to have two possible data types--one a string that's a nickname reference, and one an attachment descriptor. Generally, we like fields to have a single datatype in a schema.
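    Resolving an attachment reference is then a lookup into the ~attach array (a sketch; the resolve function is invented, and the A_pic/B_pic fields come from the art-scholarship example above):

```python
# Sketch: resolve an attachment-reference field to the appended attachment
# whose @id it names. Two fields may share one attachment.
def resolve(msg, ref_field):
    ref = msg[ref_field]
    for att in msg.get("~attach", []):
        if att.get("@id") == ref:
            return att
    return None

msg = {
    "A_pic": "att1",
    "B_pic": "att1",  # same photo demonstrates techniques A and B
    "~attach": [{"@id": "att1", "mime-type": "image/jpeg"}],
}
assert resolve(msg, "A_pic") is resolve(msg, "B_pic")
```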

    "},{"location":"concepts/0017-attachments/#content-formats","title":"Content Formats","text":"

    There are multiple ways to include content in an attachment. Only one method should be used per attachment.

    "},{"location":"concepts/0017-attachments/#base64url","title":"base64url","text":"

    This content encoding is an obvious choice for any content other than JSON. You can embed content of any type using this method. Examples are plentiful throughout the document. Note that this encoding is always base64url encoding, not plain base64, and that padding is not required. Code that reads this encoding SHOULD tolerate the presence or absence of padding and base64 versus base64url encodings equally well, but code that writes this encoding SHOULD omit the padding to guarantee alignment with encoding rules in the JOSE (JW*) family of specs.
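    A reader that tolerates all four combinations (padded/unpadded, base64/base64url) might look like this sketch (the function name is invented):

```python
import base64

# Sketch: decode attachment data tolerating padded or unpadded input and
# both the base64 and base64url alphabets, as recommended for readers.
def decode_attachment(s):
    s = s.replace("+", "-").replace("/", "_")  # normalize to base64url alphabet
    s += "=" * (-len(s) % 4)                   # restore any stripped padding
    return base64.urlsafe_b64decode(s)

assert decode_attachment("aGVsbG8") == b"hello"   # unpadded
assert decode_attachment("aGVsbG8=") == b"hello"  # padded
```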

    "},{"location":"concepts/0017-attachments/#json","title":"json","text":"

    If you are embedding an attachment that is JSON, you can embed it directly in JSON format to make access easier, by replacing data.base64 with data.json, where the value assigned to data.json is the attached content:

    This is an overly trivial example of GeoJSON, but hopefully it illustrates the technique. In cases where there is no mime type to declare, it may be helpful to use JSON-LD's @type construct to clarify the specific flavor of JSON in the embedded attachment.

    "},{"location":"concepts/0017-attachments/#links","title":"links","text":"

    All examples discussed so far include an attachment by value--that is, the attachment's bytes are directly inlined in the message in some way. This is a useful mode of data delivery, but it is not the only mode.

    Another way that attachment data can be incorporated is by reference. For example, you can link to the content on a web server by replacing data.base64 or data.json with data.links in an attachment descriptor:

    When you provide such a link, you are creating a logical association between the message and an attachment that can be fetched separately. This makes it possible to send brief descriptors of attachments and to make the downloading of the heavy content optional (or parallelizable) for the recipient.

    The links field is plural (an array) to allow multiple locations to be offered for the same content. This allows an agent to fetch attachments using whichever mechanism(s) are best suited to its individual needs and capabilities.

    "},{"location":"concepts/0017-attachments/#supported-uri-types","title":"Supported URI Types","text":"

    The set of supported URI types in an attachment link is limited to:

    Additional URI types may be added via updates to this RFC.

    If an attachment link with an unsupported URI is received, the agent SHOULD respond with a Problem Report indicating the problem.

    An ecosystem (coordinating set of agents working in a specific business area) may agree to support other URI types within that ecosystem. As such, implementing a mechanism to easily add support for other attachment link URI types might be useful, but is not required.

    "},{"location":"concepts/0017-attachments/#signing-attachments","title":"Signing Attachments","text":"

    In some cases it may be desirable to sign an attachment in addition to or instead of signing the message as a whole. Consider a home-buying protocol; the home inspection needs to be signed even when it is removed from a messaging flow. Attachments may also be signed by a party separate from the sender of the message, or using a different signing key when the sender is performing key rotation.

    Embedded and appended attachments support signatures by the addition of a data.jws field containing a signature in JWS (RFC 7515) format with Detached Content. The payload of the JWS is the raw bytes of the attachment, appropriately base64url-encoded per JWS rules. If these raw bytes are incorporated by value in the DIDComm message, they are already base64url-encoded in data.base64 and are thus directly substitutable for the suppressed data.jws.payload field; if they are externally referenced, then the bytes must be fetched via the URI in data.links and base64url-encoded before the JWS can be fully reconstituted. Signatures over inlined JSON attachments are not currently defined as this depends upon a canonical serialization for the data.

    Sample JWS-signed attachment:

    {\n  \"@type\": \"https://didcomm.org/xhomebuy/1.0/home_insp\",\n  \"inspection_date\": \"2020-03-25\",\n  \"inspection_address\": \"123 Villa de Las Fuentes, Toledo, Spain\",\n  \"comment\": \"Here's that report you asked for.\",\n  \"report~attach\": {\n    \"mime-type\": \"application/pdf\",\n    \"filename\": \"Garcia-inspection-March-25.pdf\",\n    \"data\": {\n      \"base64\": \"eyJ0eXAiOiJKV1QiLA0KICJhbGciOiJIUzI1NiJ... (bytes omitted to shorten)\",\n      \"jws\": {\n        // payload: ...,  <-- omitted: refer to base64 content when validating\n        \"header\": {\n          \"kid\": \"did:key:z6MkmjY8GnV5i9YTDtPETC2uUAW6ejw3nk5mXF5yci5ab7th\"\n        },\n        \"protected\": \"eyJhbGciOiJFZERTQSIsImlhdCI6MTU4Mzg4... (bytes omitted)\",\n        \"signature\": \"3dZWsuru7QAVFUCtTd0s7uc1peYEijx4eyt5... (bytes omitted)\"\n      }\n    }\n  }\n}\n

    Here, the JWS structure inlines a public key value in did:key format within the unprotected header's kid field. It may also use a DID URL to reference a key within a resolvable DIDDoc. Supported DID URLs should specify a timestamp and/or version for the containing document.

    The JWS protected header consists of at least the following parameter indicating an Edwards curve digital signature:

    {\n  \"alg\": \"EdDSA\"\n}\n

    Additional protected and unprotected header parameters may be included in the JWS and must be ignored by implementations if not specifically supported. Any registered header parameters defined by the JWS RFC must be used according to the specification if present.

    Multiple signatures may be included using the JWS General Serialization syntax. When a single signature is present, the Flattened Serialization syntax should be preferred. Because each JWS contains an unprotected header with the signing key information, the JWS Compact Serialization cannot be supported.
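    Reconstituting the detached payload for verification can be sketched as follows (the function is invented, the sample values are made-up base64url strings, and actual EdDSA verification is out of scope here):

```python
# Sketch: rebuild the JWS signing input for a detached-payload attachment
# signature. The payload is the base64url content already in data.base64,
# so it substitutes directly for the suppressed jws.payload field.
def jws_signing_input(data):
    jws = data["jws"]
    return (jws["protected"] + "." + data["base64"]).encode("ascii")

data = {
    "base64": "eyJkb2MiOiAidGVzdCJ9",              # {"doc": "test"}
    "jws": {"protected": "eyJhbGciOiJFZERTQSJ9",   # {"alg":"EdDSA"}
            "signature": "..."},
}
assert jws_signing_input(data) == b"eyJhbGciOiJFZERTQSJ9.eyJkb2MiOiAidGVzdCJ9"
```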

    "},{"location":"concepts/0017-attachments/#size-considerations","title":"Size Considerations","text":"

    DIDComm messages should be small, as a general rule. Just as it's a bad idea to send email messages with multi-GB attachments, it would be bad to send DIDComm messages with huge amounts of data inside them. Remember, a message is about advancing a protocol; usually that can be done without gigabytes or even megabytes of JSON fields. Remember as well that DIDComm messages may be sent over channels having size constraints tied to the transport--an HTTP POST or Bluetooth or NFC or AMQP payload of more than a few MB may be problematic.

    Size pressures in messaging are likely to come from attached data. A good rule of thumb might be to not make DIDComm messages bigger than email or MMS messages--whenever more data needs to be attached, use the inclusion-by-reference technique to allow the data to be fetched separately.

    "},{"location":"concepts/0017-attachments/#security-implications","title":"Security Implications","text":"

    Attachments are a notorious vector for malware and mischief with email. For this reason, agents that support attachments MUST perform input validation on attachments, and MUST NOT invoke risky actions on attachments until such validation has been performed. The status of input validation with respect to attachment data MUST be reflected in the Message Trust Context associated with the data's message.

    "},{"location":"concepts/0017-attachments/#privacy-implications","title":"Privacy Implications","text":"

    When attachments are inlined, they enjoy the same security and transmission guarantees as all agent communication. However, given the right context, a large inlined attachment may be recognizable by its size, even if it is carefully encrypted.

    If attachment content is fetched from an external source, then new complications arise. The security guarantees may change. Data streamed from a CDN may be observable in flight. URIs may be correlating. Content may not be immutable or tamper-resistant.

    However, these issues are not necessarily a problem. If a DIDComm message wants to attach a 4 GB ISO file of a linux distribution, it may be perfectly fine to do so in the clear. Downloading it is unlikely to introduce strong correlation, encryption is unnecessary, and the torrent itself prevents malicious modification.

    Code that handles attachments will need to use wise policy to decide whether attachments are presented in a form that meets its needs.

    "},{"location":"concepts/0017-attachments/#reference","title":"Reference","text":""},{"location":"concepts/0017-attachments/#attachment-descriptor-structure","title":"Attachment Descriptor structure","text":""},{"location":"concepts/0017-attachments/#drawbacks","title":"Drawbacks","text":"

    By providing 3 different choices, we impose additional complexity on agents that will receive messages. They have to handle attachments in 3 different modes.

    "},{"location":"concepts/0017-attachments/#rationale-and-alternatives","title":"Rationale and alternatives","text":"

    Originally, we only proposed the most flexible method of attaching--appending. However, feedback from the community suggested that stronger binding to schema was desirable. Inlining was independently invented, and is suggested by JSON-LD anyway. Embedding without appending eliminates some valuable features such as unnamed and undeclared ad-hoc attachments. So we ended up wanting to support all 3 modes.

    "},{"location":"concepts/0017-attachments/#prior-art","title":"Prior art","text":"

    Multipart MIME (see RFCs 822, 1341, and 2045) defines a mechanism somewhat like this. Since we are using JSON instead of email messages as the core model, we can't use these mechanisms directly. However, they are an inspiration for what we are showing here.

    "},{"location":"concepts/0017-attachments/#unresolved-questions","title":"Unresolved questions","text":""},{"location":"concepts/0017-attachments/#implementations","title":"Implementations","text":"

    The following lists the implementations (if any) of this RFC. Please do a pull request to add your implementation. If the implementation is open source, include a link to the repo or to the implementation within the repo. Please be consistent in the \"Name\" field so that a mechanical processing of the RFCs can generate a list of all RFCs supported by an Aries implementation.

    Name / Link Implementation Notes Aries Cloud Agent - Python in credential exchange Streetcred.id Commercial mobile and web app built using Aries Framework - .NET"},{"location":"concepts/0020-message-types/","title":"Aries RFC 0020: Message Types","text":""},{"location":"concepts/0020-message-types/#summary","title":"Summary","text":"

    Define structure of message type strings used in agent to agent communication, describe their resolution to documentation URIs, and offer guidelines for protocol specifications.

    "},{"location":"concepts/0020-message-types/#motivation","title":"Motivation","text":"

    A clear convention to follow for agent developers is necessary for interoperability and continued progress as a community.

    "},{"location":"concepts/0020-message-types/#tutorial","title":"Tutorial","text":"

    A \"Message Type\" is a required attribute of all communications sent between parties. The message type instructs the receiving agent how to interpret the content and what content to expect as part of a given message.

    Types are specified within a message using the @type attribute:

    {\n    \"@type\": \"<message type string>\",\n    // other attributes\n}\n

    Message types are URIs that may resolve to developer documentation for the message type, as described in Protocol URIs. We recommend that message type URIs be HTTP URLs.
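    As an illustrative sketch of dispatching on @type (the registry, handler, and meetings type URI are hypothetical, though the URI follows the didcomm.org convention described below):

```python
# Sketch: route an incoming message to a handler keyed by its @type URI.
HANDLERS = {}

def handles(msg_type):
    # Register a handler function for a given message type URI.
    def register(fn):
        HANDLERS[msg_type] = fn
        return fn
    return register

@handles("https://didcomm.org/meetings/1.0/proposal")
def on_proposal(msg):
    return "proposal: " + msg.get("comment", "")

def dispatch(msg):
    handler = HANDLERS.get(msg["@type"])
    if handler is None:
        raise ValueError("unrecognized message type: " + msg["@type"])
    return handler(msg)
```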

    "},{"location":"concepts/0020-message-types/#aries-core-message-namespace","title":"Aries Core Message Namespace","text":"

    https://didcomm.org/ is used to namespace protocols defined by the community as \"core protocols\" or protocols that agents should minimally support.

    The didcomm.org DNS entry is currently controlled by the Decentralized Identity Foundation (DIF) based on their role in standardizing the DIDComm Messaging specification.

    "},{"location":"concepts/0020-message-types/#protocols","title":"Protocols","text":"

    Protocols provide a logical grouping for message types. These protocols, along with each type belonging to that protocol, are to be defined in future RFCs or through means appropriate to subprojects.

    "},{"location":"concepts/0020-message-types/#protocol-versioning","title":"Protocol Versioning","text":"

    Version numbering should essentially follow Semantic Versioning 2.0.0, excluding patch version number. To summarize, a change in the major protocol version number indicates a breaking change while the minor protocol version number indicates non-breaking additions.
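    Under these rules, a compatibility check only needs the major version, since minor differences are non-breaking additions (a sketch; the function is invented):

```python
# Sketch: decide whether a handler built for one protocol version can
# process a message of another. A matching major version is required;
# minor differences are additive, so unknown fields can simply be ignored.
def compatible(supported, received):
    return supported.split(".")[0] == received.split(".")[0]

assert compatible("1.0", "1.1")      # minor addition: still compatible
assert not compatible("1.0", "2.0")  # major change: breaking
```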

    "},{"location":"concepts/0020-message-types/#message-type-design-guidelines","title":"Message Type Design Guidelines","text":"

    These guidelines are guidelines on purpose. There will be situations where a good design will have to choose between conflicting points, or ignore all of them. The goal should always be clear and good design.

    "},{"location":"concepts/0020-message-types/#respect-reserved-attribute-names","title":"Respect Reserved Attribute Names","text":"

    Reserved attributes are prefixed with an @ sign, such as @type. Don't use this prefix for an attribute, even if use of that specific attribute is undefined.

    "},{"location":"concepts/0020-message-types/#avoid-ambiguous-attribute-names","title":"Avoid ambiguous attribute names","text":"

    Data, id, and package are often terrible names. Adjust the name to enhance meaning. For example, use message_id instead of id.

    "},{"location":"concepts/0020-message-types/#avoid-names-with-special-characters","title":"Avoid names with special characters","text":"

    Technically, attribute names can be any valid json key (except prefixed with @, as mentioned above). Practically, you should avoid using special characters, including those that need to be escaped. Underscores and dashes [_,-] are totally acceptable, but you should avoid quotation marks, punctuation, and other symbols.
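The guidance above can be captured in a conservative name check. The regex here is illustrative, not normative; it simply rejects the reserved @ prefix and any character outside letters, digits, underscores, and dashes:

```python
import re

# Sketch: conservative attribute-name check reflecting the guidance
# above. Letters, digits, underscores, and dashes only; no leading @
# (reserved). Illustrative, not a normative rule.
NAME_OK = re.compile(r"^[A-Za-z][A-Za-z0-9_-]*$")

def acceptable_attribute_name(name: str) -> bool:
    return bool(NAME_OK.match(name))

assert acceptable_attribute_name("message_id")
assert not acceptable_attribute_name("prepaid?")  # punctuation
assert not acceptable_attribute_name("@type")     # reserved prefix
```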

    "},{"location":"concepts/0020-message-types/#use-attributes-consistently-within-a-protocol","title":"Use attributes consistently within a protocol","text":"

    Be consistent with attribute names between the different types within a protocol. Only use the same attribute name for the same data. If the attribute values are similar, but not exactly the same, adjust the name to indicate the difference.

    "},{"location":"concepts/0020-message-types/#nest-attributes-only-when-useful","title":"Nest Attributes only when useful","text":"

    Attributes do not need to be nested under a top level attribute, but can be to organize related attributes. Nesting all message attributes under one top level attribute is usually not a good idea.

    "},{"location":"concepts/0020-message-types/#design-examples","title":"Design Examples","text":""},{"location":"concepts/0020-message-types/#example-1","title":"Example 1","text":"
    {\n    \"@type\": \"did:example:00000;spec/pizzaplace/1.0/pizzaorder\",\n    \"content\": {\n        \"id\": 15,\n        \"name\": \"combo\",\n        \"prepaid?\": true,\n        \"ingredients\": [\"pepperoni\", \"bell peppers\", \"anchovies\"]\n    }\n}\n

    Suggestions: Ambiguous names, unnecessary nesting, symbols in names.

    "},{"location":"concepts/0020-message-types/#example-1-fixed","title":"Example 1 Fixed","text":"
    {\n    \"@type\": \"did:example:00000;spec/pizzaplace/1.0/pizzaorder\",\n    \"table_id\": 15,\n    \"pizza_name\": \"combo\",\n    \"prepaid\": true,\n    \"ingredients\": [\"pepperoni\", \"bell peppers\", \"anchovies\"]\n}\n
    "},{"location":"concepts/0020-message-types/#reference","title":"Reference","text":""},{"location":"concepts/0020-message-types/#implementations","title":"Implementations","text":"

    The following lists the implementations (if any) of this RFC. Please do a pull request to add your implementation. If the implementation is open source, include a link to the repo or to the implementation within the repo. Please be consistent in the \"Name\" field so that a mechanical processing of the RFCs can generate a list of all RFCs supported by an Aries implementation.

    Name / Link Implementation Notes Indy Cloud Agent - Python Reference agent implementation contributed by Sovrin Foundation and Community Aries Framework - .NET .NET framework for building agents of all types Streetcred.id Commercial mobile and web app built using Aries Framework - .NET Aries Cloud Agent - Python Contributed by the government of British Columbia. Aries Static Agent - Python Useful for cron jobs and other simple, automated use cases. Aries Framework - Go For building agents, hubs and other DIDComm features in GoLang. Connect.Me Free mobile app from Evernym. Installed via app store on iOS and Android. Verity Commercially licensed enterprise agent, SaaS or on-prem."},{"location":"concepts/0021-didcomm-message-anatomy/","title":"Aries RFC 0021: DIDComm Message Anatomy","text":""},{"location":"concepts/0021-didcomm-message-anatomy/#summary","title":"Summary","text":"

    Explain the basics of DID communication messages at a high level, and link to other RFCs to promote deeper exploration.

    "},{"location":"concepts/0021-didcomm-message-anatomy/#motivation","title":"Motivation","text":"

    Promote a deeper understanding of the DIDComm message anatomy through an overarching view of the two distinct levels of messages in a single place.

    "},{"location":"concepts/0021-didcomm-message-anatomy/#tutorial","title":"Tutorial","text":"

    DIDComm messages are composed of the following two main layers, which are not dissimilar to how postal messages work in the real world.

    "},{"location":"concepts/0021-didcomm-message-anatomy/#envelope-level","title":"Envelope Level","text":"

    As the name suggests, the envelope borrows from the analogy of how physical messages are handled in the postal system: this message format level acts as the digital envelope for DIDComm messages.

    There are two main variations of the envelope level format which are defined to cater for the different audiences and use cases DIDComm messages serve.

    "},{"location":"concepts/0021-didcomm-message-anatomy/#1-encrypted","title":"1. Encrypted","text":"

    This format is for when the audience of the message is a DID or DIDs known to the sender; in this case the message can be prepared and encrypted with the key information present in the audience's DID Docs.

    Within this encrypted format, there are multiple sub-formats which give rise to different properties.

    1. Anonymous Encrypted format This format is when a message is encrypted to a recipient in an anonymous fashion, and it does not include any sender information.
    2. Authenticated Encrypted format This format is when a message is encrypted to a recipient and sender information is included through the use of authenticated encryption. With this format only the true recipient(s) can both decrypt the message and authenticate its content is truly from the sender.
    3. Signed Encrypted format This format is when a message is encrypted to the recipient and sender information is included along with a non-repudiable signature. In this case the recipient(s) is still the only party that can decrypt the message. However, because the underlying message includes non-repudiability, authentication of the decrypted message content can be done by any party who knows the sender.
    "},{"location":"concepts/0021-didcomm-message-anatomy/#2-signed-unencrypted","title":"2. Signed Unencrypted","text":"

    This format is for when the audience of the message is unknown (for example some form of public challenge). This format is signed, so that when a member of the audience receives the message they can authenticate the message with its non-repudiable signature.

    "},{"location":"concepts/0021-didcomm-message-anatomy/#serialization-format","title":"Serialization Format","text":"

    All of the envelope level formats are achieved through JOSE-based structures. The encrypted formats use a JWE structure, whereas the signed unencrypted format uses a JWS structure.
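To make the layering concrete, a JWE-style encrypted envelope has roughly this shape on the wire. The field values below are placeholders; the exact header contents, recipient structure, and algorithms are defined by the encrypted-envelope RFC, not by this illustration:

```python
import json

# Sketch: the general shape of an encrypted envelope (a JWE-style
# structure). All values here are placeholders, not real ciphertext.
envelope = {
    "protected": "<base64url-encoded header: alg, enc, recipient keys>",
    "iv": "<base64url nonce>",
    "ciphertext": "<base64url-encrypted content-level message>",
    "tag": "<base64url authentication tag>",
}

# What actually travels between agents is the serialized JSON:
wire_message = json.dumps(envelope)
```

The content-level message described below is what sits inside `ciphertext` once decrypted.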

    Details on the encrypted forms are found here

    Details on the signed unencrypted format are TBC

    "},{"location":"concepts/0021-didcomm-message-anatomy/#content-level","title":"Content Level","text":"

    This level, to continue the postal metaphor, is the content inside the envelope and contains the actual message.

    At this level, several conventions are defined around how messages are structured, which facilitate message identification and processing.

    The most important concepts to introduce about these conventions are the following.

    "},{"location":"concepts/0021-didcomm-message-anatomy/#message-type","title":"Message Type","text":"

    Every message contains a message type, which allows the context of the message to be established and the content to be processed; see here for more information. It is also important to note that in DIDComm the message type does not just identify the message: it also identifies the associated protocol. These protocols are essentially groups of related messages that together achieve some form of multi-step flow; see here for more information.

    "},{"location":"concepts/0021-didcomm-message-anatomy/#message-id","title":"Message Id","text":"

    Every message contains a message id, uniquely generated by the sender, which allows unique identification of the message. See here for more information.

    "},{"location":"concepts/0021-didcomm-message-anatomy/#decorators","title":"Decorators","text":"

    DIDComm messages at a content level allow for the support of re-usable conventions that are present across multiple messages in order to handle the same functionality in a consistent manner.

    A relevant analogy for decorators is that they are like HTTP headers in an HTTP request. The same HTTP header is often reused as a convention across multiple requests to achieve cross-cutting functionality.

    See here for more details.

    "},{"location":"concepts/0021-didcomm-message-anatomy/#serialization-format_1","title":"Serialization Format","text":"

    At present all content level messages are represented as JSON. Furthermore, these messages are JSON-LD sympathetic; however, they do not have full and direct support for JSON-LD.

    "},{"location":"concepts/0021-didcomm-message-anatomy/#reference","title":"Reference","text":"

    All references are defined inline where required.

    "},{"location":"concepts/0021-didcomm-message-anatomy/#drawbacks","title":"Drawbacks","text":"

    Why should we not do this?

    "},{"location":"concepts/0021-didcomm-message-anatomy/#rationale-and-alternatives","title":"Rationale and alternatives","text":""},{"location":"concepts/0021-didcomm-message-anatomy/#prior-art","title":"Prior art","text":""},{"location":"concepts/0021-didcomm-message-anatomy/#unresolved-questions","title":"Unresolved questions","text":""},{"location":"concepts/0021-didcomm-message-anatomy/#implementations","title":"Implementations","text":"

    The following lists the implementations (if any) of this RFC. Please do a pull request to add your implementation. If the implementation is open source, include a link to the repo or to the implementation within the repo. Please be consistent in the \"Name\" field so that a mechanical processing of the RFCs can generate a list of all RFCs supported by an Aries implementation.

    Name / Link Implementation Notes"},{"location":"concepts/0029-message-trust-contexts/","title":"Aries RFC 0029: Message Trust Contexts","text":""},{"location":"concepts/0029-message-trust-contexts/#summary","title":"Summary","text":"

    Introduces the concept of Message Trust Contexts and describes how they are populated and used.

    "},{"location":"concepts/0029-message-trust-contexts/#motivation","title":"Motivation","text":"

    An important aim of DID Communication is to let parties achieve high trust. Such trust is vital in cases where money changes hands and identity is at stake. However, sometimes lower trust is fine; playing tic-tac-toe ought to be safe through agents, even with a malicious stranger.

    We may intuitively understand the differences in these situations, but intuition isn't the best guide when designing a secure ecosystem. Trust is a complex, multidimensional phenomenon. We need a formal way to analyze it, and to test its suitability in particular circumstances.

    "},{"location":"concepts/0029-message-trust-contexts/#tutorial","title":"Tutorial","text":"

    When Alice sends a message to Bob, how much should Bob trust it?

    This is not a binary question, with possible answers of \"completely\" or \"not at all\". Rather, it is a nuanced question that should consider many factors. Some clarifying questions might include:

    "},{"location":"concepts/0029-message-trust-contexts/#message-trust-contexts","title":"Message Trust Contexts","text":"

    The DID Communication ecosystem formalizes the idea of a Message Trust Context (MTC) to expose such questions, make their answers explicit, and encourage thoughtful choices based on the answers.

    An MTC is an object that holds trust context for a message. This context follows a message throughout its processing journey inside the agent that receives it, and it should be analyzed and updated for decision-making purposes throughout.

    Protocols should be designed with standard MTCs in mind. Thus, it is desirable that all implementations share common names for certain concepts, so we can discuss them conveniently in design docs, error messages, logs, and so forth. The standard dimensions of trust tracked in an MTC break down into two groups:

    "},{"location":"concepts/0029-message-trust-contexts/#crypto-related","title":"Crypto-related","text":""},{"location":"concepts/0029-message-trust-contexts/#input-validations","title":"Input validations","text":"

    In code, these types of trust are written using whatever naming convention matches the implementer's programming language, so authenticated_origin and authenticatedOrigin are synonyms of each other and of Authenticated Origin.

    "},{"location":"concepts/0029-message-trust-contexts/#notation","title":"Notation","text":"

    In protocol designs, the requirements of a message trust context should be declared when message types are defined. For example, the credential_offer message in the credential_issuance protocol should not be accepted unless it has Integrity and Authenticated Origin in its MTC (because otherwise a MITM could interfere). The definition of the message type should say this. Its RFC does this by notating:

    mtc: +integrity +authenticated_origin\n

    When a loan is digitally signed, we probably need:

    mtc: +integrity +authenticated_origin +nonrepudiation\n

    The labels for these trust types are long, but they can be shortened if they remain unambiguous. Notice, too, that all of the official MTC fields have unique initial letters. We can therefore abbreviate unambiguously:

    mtc: +i +a +n\n

    Any type of trust that does not appear in MTC notation is assumed to be undefined (meaning no claim is made about it either way, perhaps because it hasn't been evaluated or because it doesn't matter). However, sometimes we need to make a lack of trust explicit. We might claim in a protocol definition that a particular type of trust is definitely not required. Or we might want to show that we evaluated a particular trust at runtime, and had a negative outcome. In such cases, we can do this:

    mtc: +i +a -n\n

    Here, we are explicitly denying that nonrepudiation is part of the trust context.

    For further terseness in our notation, spaces can be omitted:

    mtc: +i+a-n\n

    Finally, an mtc that makes no explicit positive or negative claims (undefined) is written as:

    mtc: ?\n

    This MTC notation is a supplement to SSI Notation and should be treated as equally normative. Such notation might be useful in log files and error messages, among other things. See Using a Message Trust Context at Runtime below.
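Because the notation is so regular, it can be parsed mechanically, for example to turn an `mtc:` string from a log line back into explicit claims. A minimal sketch, assuming single-letter or full-word labels; the parsing approach is illustrative, not normative:

```python
import re

# Sketch: parse terse MTC notation into explicit positive/negative
# claims. Handles "+"/"-" flags with or without separating spaces,
# and "?" for a fully undefined context.
def parse_mtc(notation: str) -> dict:
    body = notation.split("mtc:", 1)[-1].strip()
    if body == "?":
        return {}  # no claims made either way
    claims = {}
    for sign, label in re.findall(r"([+-])([a-z_]+)", body):
        claims[label] = (sign == "+")
    return claims

assert parse_mtc("mtc: +i +a -n") == {"i": True, "a": True, "n": False}
assert parse_mtc("mtc: +i+a-n") == parse_mtc("mtc: +i +a -n")
assert parse_mtc("mtc: ?") == {}
```

Matching abbreviated labels back to full standard names (e.g. `i` to `integrity`) would be a prefix lookup against the known label set, as the Custom Trust section below describes.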

    "},{"location":"concepts/0029-message-trust-contexts/#custom-trust","title":"Custom Trust","text":"

    Specific agents may make trust distinctions that are helpful in their own problem domains. For example, some agents may evaluate trust on the physical location or IP address of a sender, or on the time of day that a message arrives. Others may use DIDComm for internal processes that have unique trust requirements over and above those that matter in interoperable scenarios, such as whether a message emanates from a machine running endpoint compliance software, or whether it has passed through intrusion detection or data loss prevention filters.

    Agent implementations are encouraged to add their own trust dimensions to their own implementations of a Message Trust Context, as long as they do not redefine the standard labels. In cases where custom trust types introduce ambiguity with trust labels, MTC notation requires enough letters to disambiguate labels. So if a complex custom MTC has fields named intrusion_detect_ok, ipaddr_ok (which both start like the standard integrity), and endpoint_compliance (which has no ambiguity with a standard token) it might be notated as:

    mtc: +c+a+inte+intr+ip-n-p-e\n

    Here, inte matches the standard label integrity, whereas intr and ip are known to be custom because they don't match a standard label; e is custom but only a single letter because it is unambiguous.

    "},{"location":"concepts/0029-message-trust-contexts/#populating-a-message-trust-context-at-runtime","title":"Populating a Message Trust Context at Runtime","text":"

    A Message Trust Context comes into being when a message arrives on the wire at the receiving agent and begins its processing flow.

    The first step may be an input validation to confirm that the message doesn't exceed a max size. If the check passes, the empty MTC is updated with +s.

    Another early step is decryption. This should allow population of the confidentiality and authenticated_origin dimensions, at least.

    Subsequent layers of code that do additional analysis should update the MTC as appropriate. For example, if a signature is not analyzed and validated until after the decryption step, the signature's presence or absence should cause nonrepudiation and maybe integrity to be updated. Similarly, once the plaintext of a message is known to be valid enough to deserialize into an object, the MTC acquires +deserialize_ok. Later, when the fields of the message's native object representation have been analyzed to make sure they conform to a particular structure, it should be updated again with +key_ok. And so forth.
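The accumulation described above might look like the following sketch, loosely modeled on the `mtc.py` reference implementation attached to this RFC. Method and label names here are illustrative:

```python
# Sketch: an MTC object that accumulates trust claims as a message
# moves through the processing pipeline. Loosely modeled on the
# mtc.py reference implementation; names are illustrative.
class MessageTrustContext:
    def __init__(self):
        self._claims = {}  # label -> True (affirmed) / False (denied)

    def affirm(self, label: str):
        self._claims[label] = True

    def deny(self, label: str):
        self._claims[label] = False

    def get(self, label: str):
        return self._claims.get(label)  # None means undefined

# A processing flow updates the context step by step:
mtc = MessageTrustContext()
mtc.affirm("size_ok")               # input validation passed
mtc.affirm("confidentiality")       # decryption succeeded
mtc.affirm("authenticated_origin")  # authcrypt: sender is known
mtc.deny("nonrepudiation")          # no signature was present

assert mtc.get("authenticated_origin") is True
assert mtc.get("nonrepudiation") is False
assert mtc.get("integrity") is None  # never evaluated: undefined
```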

    "},{"location":"concepts/0029-message-trust-contexts/#using-a-message-trust-context-at-runtime","title":"Using a Message Trust Context at Runtime","text":"

    As message processing happens, the MTC isn't just updated. It should constantly be queried, and decisions should be made on the basis of what the MTC says. These decisions can vary according to the preferences of agent developers and the policies of agent owners. Some agents may choose not to accept any messages that are -a, for example, while others may be content to talk with anonymous senders. The recommendations of protocol designers should never be ignored, however; it is probably wrong to accept a -n message that signs a loan, even if agent policy is lax about other things. Formally declared MTCs in a protocol design may be linked to security proofs...

    Part of the intention with the terse MTC notation is that conversations about agent trust should be easy and interoperable. When agents send one another problem-report messages, they can turn MTCs into human-friendly text, but also use this notation: \"Unable to accept a payment from message that lacks Integrity guarantees (-i).\" This notation can help diagnose trust problems in logs. It may also be helpful with message tracing, feature discovery, and agent testing.
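Enforcing a protocol's declared MTC requirement, such as `mtc: +i +a +n` for a signed loan, can then be a simple gate before the handler runs. A sketch under the conventions above; the function name and error text are illustrative, not from the RFC:

```python
# Sketch: refuse a message whose trust context lacks a required
# dimension. claims maps label -> True (affirmed) / False (denied);
# an absent label means undefined, which also fails the requirement.
def check_required(claims: dict, required: list) -> None:
    missing = [label for label in required if claims.get(label) is not True]
    if missing:
        raise PermissionError(
            "Unable to accept message; required trust missing: "
            + " ".join("+" + label for label in missing))

claims = {"integrity": True, "authenticated_origin": True,
          "nonrepudiation": False}

# A credential_offer needs integrity and authenticated origin:
check_required(claims, ["integrity", "authenticated_origin"])  # accepted

# A digitally signed loan also needs nonrepudiation, so this refuses:
try:
    check_required(claims, ["integrity", "authenticated_origin",
                            "nonrepudiation"])
except PermissionError:
    pass  # would be reported back to the sender as a problem-report
```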

    "},{"location":"concepts/0029-message-trust-contexts/#attachments","title":"Attachments","text":"

    MTCs apply to the entirety of the associated message's attributes. However, embedded and appended message attachments present the unique situation of nested content with the potential for a trust context that differs from the parent message.

    The attachment descriptor, used for both embedded and appended attachments, shares the same MTC as the parent message. Unpacked attachment data have their own Trust Contexts populated as appropriate depending on how the data was retrieved, whether the attachment is signed, whether an integrity checksum was provided and verified, etc.

    Attachments delivered by the parent message, i.e. as base64url-encoded data, inherit relevant trust contexts from the parent, such as confidentiality and authenticated_origin, when the message was delivered as an authenticated encrypted message.

    Attachments retrieved from a remote resource populate their trust context as relevant to the retrieval mechanism.

    "},{"location":"concepts/0029-message-trust-contexts/#reference","title":"Reference","text":"

    A complete reference implementation of MTCs in python is attached to this RFC (see mtc.py). It could easily be extended with custom trust dimensions, and it would be simple to port to other programming languages. Note that the implementation includes unit tests written in pytest style, and has only been tested on python 3.x.

    "},{"location":"concepts/0029-message-trust-contexts/#implementations","title":"Implementations","text":"

    The following lists the implementations (if any) of this RFC. Please do a pull request to add your implementation. If the implementation is open source, include a link to the repo or to the implementation within the repo. Please be consistent in the \"Name\" field so that a mechanical processing of the RFCs can generate a list of all RFCs supported by an Aries implementation.

    Name / Link Implementation Notes MTC reference impl Reference impl in python, checked in with RFC. Includes unit tests. Aries Protocol Test Suite Aries Static Agent - Python Largely inspired by reference implementation; MTC populated and made available to handlers."},{"location":"concepts/0046-mediators-and-relays/","title":"Aries RFC 0046: Mediators and Relays","text":""},{"location":"concepts/0046-mediators-and-relays/#summary","title":"Summary","text":"

    The mental model for agent-to-agent (A2A) messaging includes two important communication primitives that have a meaning unique to our ecosystem: mediator and relay.

    A mediator is a participant in agent-to-agent message delivery that must be modeled by the sender. It has its own keys and will deliver messages only after decrypting an outer envelope to reveal a forward request. Many types of mediators may exist, but two important ones should be widely understood, as they commonly manifest in DID Docs:

    1. A service that hosts many cloud agents at a single endpoint to provide herd privacy (an \"agency\") is a mediator.
    2. A cloud-based agent that routes between/among the edges of a sovereign domain is a mediator.

    A relay is an entity that passes along agent-to-agent messages, but that can be ignored when the sender considers encryption choices. It does not decrypt anything. Relays can be used to change the transport for a message (e.g., accept an HTTP POST, then turn around and emit an email; accept a Bluetooth transmission, then turn around and emit something in a message queue). Mix networks like TOR are an important type of relay.

    Read on to explore how agent-to-agent communication can model complex topologies and flows using these two primitives.

    "},{"location":"concepts/0046-mediators-and-relays/#motivation","title":"Motivation","text":"

    When we describe agent-to-agent communication, it is convenient to think of an interaction only in terms of Alice and Bob and their agents. We say things like: \"Alice's agent sends a message to Bob's agent\" -- or perhaps \"Alice's edge agent sends a message to Bob's cloud agent, which forwards it to Bob's edge agent\".

    Such statements adopt a useful level of abstraction--one that's highly recommended for most discussions. However, they make a number of simplifications. By modeling the roles of mediators and relays in routing, we can support routes that use multiple transports, routes that are not fully known (or knowable) to the sender, routes that pass through mix networks, and other advanced and powerful concepts.

    "},{"location":"concepts/0046-mediators-and-relays/#tutorial","title":"Tutorial","text":""},{"location":"concepts/0046-mediators-and-relays/#key-concepts","title":"Key Concepts","text":"

    Let's define mediators and relays by exploring how they manifest in a series of communication scenarios between Alice and Bob.

    "},{"location":"concepts/0046-mediators-and-relays/#scenario-1-base","title":"Scenario 1 (base)","text":"

    Alice and Bob are both employees of a large corporation. They work in the same office, but have never met. The office has a rule that all messages between employees must be encrypted. They use paper messages and physical delivery as the transport. Alice writes a note, encrypts it so only Bob can read it, puts it in an envelope addressed to Bob, and drops the envelope on a desk that she has been told belongs to Bob. This desk is in fact Bob's, and he later picks up the message, decrypts it, and reads it.

    In this scenario, there is no mediator, and no relay.

    "},{"location":"concepts/0046-mediators-and-relays/#scenario-2-a-gatekeeper","title":"Scenario 2: a gatekeeper","text":"

    Imagine that Bob hires an executive assistant, Carl, to filter his mail. Bob won't open any mail unless Carl looks at it and decides that it's worthy of Bob's attention.

    Alice has to change her behavior. She continues to package a message for Bob, but now she must account for Carl as well. She takes the envelope for Bob and places it inside a new envelope addressed to Carl. Inside the outer envelope, and next to the envelope destined for Bob, Alice writes Carl an encrypted note: \"This inner envelope is for Bob. Please forward.\"

    Here, Carl is acting as a mediator. He is mostly just passing messages along. But because he is processing a message himself, and because Carl is interposed between Alice and Bob, he affects the behavior of the sender. He is a known entity in the route.

    "},{"location":"concepts/0046-mediators-and-relays/#scenario-3-transparent-indirection","title":"Scenario 3: transparent indirection","text":"

    All is the same as the base scenario (Carl has been fired), except that Bob is working from home when Alice's message lands on his desk. Bob has previously arranged with his friend Darla, who lives near him, to pick up any mail that's on his desk and drop it off at his house at the end of the work day. Darla sees Alice's note and takes it home to Bob.

    In this scenario, Darla is acting as a relay. Note that Bob arranges for Darla to do this without notifying Alice, and that Alice does not need to adjust her behavior in any way for the relay to work.

    "},{"location":"concepts/0046-mediators-and-relays/#scenario-4-more-indirection","title":"Scenario 4: more indirection","text":"

    Like scenario 3, Darla brings Bob his mail at home. However, Bob isn't at home when his mail arrives. He's had to rush out on an errand, but he's left instructions with his son, Emil, to open any work mail, take a photo of the letter, and text him the photo. Emil intends to do this, but the camera on his phone misfires, so he convinces his sister, Francis, to take the picture on her phone and email it to him. Then he texts the photo to Bob, as arranged.

    Here, Emil and Francis are also acting as relays. Note that nobody knows about the full route. Alice thinks she's delivering directly to Bob. So does Darla. Bob knows about Darla and Emil, but not about Francis.

    Note, too, how the transport is changing from physical mail to email to text.

    To the party immediately upstream (closer to the sender), a relay is indistinguishable from the next party downstream (closer to the recipient). A party anywhere in the chain can insert one or more relays upstream from themselves, as long as those relays are not upstream of another named party (sender or mediator).

    "},{"location":"concepts/0046-mediators-and-relays/#more-scenarios","title":"More Scenarios","text":"

    Mediators and relays can be combined in any order and in any number in variations on our fictional scenario. Bob could employ Carl as a mediator, and Carl could work from home and arrange delivery via George, then have his daughter Hannah run messages back to Bob's desk at work. Carl could hire his own mediator. Darla could arrange for Ivan to substitute for her when she goes on vacation. And so forth.

    "},{"location":"concepts/0046-mediators-and-relays/#more-traditional-usage","title":"More Traditional Usage","text":"

    The scenarios used above are somewhat artificial. Our most familiar agent-to-agent scenarios involve edge agents running on mobile devices and accessible through Bluetooth or push notification, and cloud agents that use electronic protocols as their transport. Let's see how relays and mediators apply there.

    "},{"location":"concepts/0046-mediators-and-relays/#scenario-5-traditional-base","title":"Scenario 5 (traditional base)","text":"

    Alice's cloud agent wants to talk to Bob's cloud agent. Bob's cloud agent is listening at http://bob.com/agent. Alice encrypts a message for Bob and posts it to that URL.

    In this scenario, we are using a direct transport with neither a mediator nor a relay.

    If you are familiar with common routing patterns and you are steeped in HTTP, you are likely objecting at this point, pointing out ways that this description diverges from best practice, including what's prescribed in other RFCs. You may be eager to explain why this is a privacy problem, for example.

    You are not wrong, exactly. But please suspend those concerns and hang with me. This is about what's theoretically possible in the mental model. Besides, I would note that virtually the same diagram could be used for a Bluetooth agent conversation:

    "},{"location":"concepts/0046-mediators-and-relays/#scenario-6-herd-hosting","title":"Scenario 6: herd hosting","text":"

    Let's tweak Scenario 5 slightly by saying that Bob's agent is one of thousands that are hosted at the same URL. Maybe the URL is now http://agents-r-us.com/inbox. Now if Alice wants to talk to Bob's cloud agent, she has to cope with a mediator. She wraps the encrypted message for Bob's cloud agent inside a forward message that's addressed to and encrypted for the agent of agents-r-us that functions as a gatekeeper.
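At the content level, Alice's wrapping might look like the sketch below: the encrypted payload for Bob's cloud agent rides inside a forward message addressed to the mediator, and the whole thing is then encrypted for agents-r-us. The type URI and field names follow the routing-protocol convention, but treat them as illustrative here rather than normative:

```python
import json

# Sketch: wrapping an encrypted message for Bob inside a "forward"
# message for the mediator (agents-r-us). Values are placeholders.
inner_for_bob = "<encrypted envelope destined for Bob's cloud agent>"

forward = {
    "@type": "https://didcomm.org/routing/1.0/forward",  # illustrative type URI
    "to": "<key or DID of Bob's cloud agent>",
    "msg": inner_for_bob,  # the mediator passes this along unopened
}

# This content-level message is then itself encrypted for the mediator:
to_mediator = json.dumps(forward)
```

The mediator decrypts only the outer layer, reads the forward request, and passes `msg` along without being able to read it.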

    This scenario is one that highlights an external mediator--so-called because the mediator lives outside the sovereign domain of the final recipient.

    "},{"location":"concepts/0046-mediators-and-relays/#scenario-7-intra-domain-dispatch","title":"Scenario 7: intra-domain dispatch","text":"

    Now let's subtract agents-r-us. We're back to Bob's cloud agent listening directly at http://bob.com/agent. However, let's say that Alice has a different goal--now she wants to talk to the edge agent running on Bob's mobile device. This agent doesn't have a permanent IP address, so Bob uses his own cloud agent as a mediator. He tells Alice that his mobile device agent can only be reached via his cloud agent.

    Once again, this causes Alice to modify her behavior. Again, she wraps her encrypted message. The inner message is enclosed in an outer envelope, and the outer envelope is passed to the mediator.

    This scenario highlights an internal mediator. Internal and external mediators introduce similar features and similar constraints; the relevant difference is that internal mediators live within the sovereign domain of the recipient, and may thus be worthy of greater trust.

    "},{"location":"concepts/0046-mediators-and-relays/#scenario-8-double-mediation","title":"Scenario 8: double mediation","text":"

    Now let's combine. Bob's cloud agent is hosted at agents-r-us, AND Alice wants to reach Bob's mobile:

    This is a common pattern with HTTP-based cloud agents plus mobile edge agents, which is the most common deployment pattern we expect for many users of self-sovereign identity. Note that the properties of the agency and the routing agent are not particularly special--they are just an external and an internal mediator, respectively.

    "},{"location":"concepts/0046-mediators-and-relays/#related-concepts","title":"Related Concepts","text":""},{"location":"concepts/0046-mediators-and-relays/#routes-are-one-way-not-duplex","title":"Routes are One-Way (not duplex)","text":"

In all of this discussion, note that we are analyzing only a flow from Alice to Bob. How Bob gets a message back to Alice is a completely separate question. Just because Carl, Darla, Emil, Francis, and Agents-R-Us may be involved in how messages flow from Alice to Bob, does not mean they are involved in the flow in the opposite direction.

Note how this breaks the simple assumptions of pure request-response technologies like HTTP, which assume the channel in (request) is also the channel out (response). Duplex request-response can be modeled with A2A, but doing so requires support that may not always be available, plus cooperative behavior governed by the ~thread decorator.

    "},{"location":"concepts/0046-mediators-and-relays/#conventions-on-direction","title":"Conventions on Direction","text":"

    For any given one-way route, the direction of flow is always from sender to receiver. We could use many different metaphors to talk about the \"closer to sender\" and \"closer to receiver\" directions -- upstream and downstream, left and right, before and after, in and out. We've chosen to standardize on two:

    "},{"location":"concepts/0046-mediators-and-relays/#implementations","title":"Implementations","text":"

    The following lists the implementations (if any) of this RFC. Please do a pull request to add your implementation. If the implementation is open source, include a link to the repo or to the implementation within the repo. Please be consistent in the \"Name\" field so that a mechanical processing of the RFCs can generate a list of all RFCs supported by an Aries implementation.

    Name / Link Implementation Notes Connect.Me Free mobile app from Evernym. Installed via app store on iOS and Android. Verity Commercially licensed enterprise agent, SaaS or on-prem. DIDComm mediator Open source cloud-based mediator with Firebase support."},{"location":"concepts/0047-json-ld-compatibility/","title":"Aries RFC 0047: JSON-LD Compatibility","text":""},{"location":"concepts/0047-json-ld-compatibility/#summary","title":"Summary","text":"

    Explains the goals of DID Communication with respect to JSON-LD, and how Aries proposes to accomplish them.

    "},{"location":"concepts/0047-json-ld-compatibility/#motivation","title":"Motivation","text":"

    JSON-LD is a familiar body of conventions that enriches the expressive power of plain JSON. It is natural for people who arrive in the DID Communication (DIDComm) ecosystem to wonder whether we are using JSON-LD--and if so, how. We need a coherent answer that clarifies our intentions and that keeps us true to those intentions as the ecosystem evolves.

    "},{"location":"concepts/0047-json-ld-compatibility/#tutorial","title":"Tutorial","text":"

The JSON-LD spec is a recommendation work product of the W3C RDF Working Group. Since it was formally recommended as version 1.0 in 2014, the JSON for Linking Data Community Group has taken up not-yet-standards-track work on a 1.1 update.

    JSON-LD has significant gravitas in identity circles. It gives to JSON some capabilities that are sorely needed to model the semantic web, including linking, namespacing, datatyping, signing, and a strong story for schema (partly through the use of JSON-LD on schema.org).

    However, JSON-LD also comes with some conceptual and technical baggage. It can be hard for developers to master its subtleties; it requires very flexible parsing behavior after built-in JSON support is used to deserialize; it references a family of related specs that have their own learning curve; the formality of its test suite and libraries may get in the way of a developer who just wants to read and write JSON and \"get stuff done.\"

    In addition, the problem domain of DIDComm is somewhat different from the places where JSON-LD has the most traction. The sweet spot for DIDComm is small, relatively simple JSON documents where code behavior is strongly bound to the needs of a specific interaction. DIDComm needs to work with extremely simple agents on embedded platforms. Such agents may experience full JSON-LD support as an undue burden when they don't even have a familiar desktop OS. They don't need arbitrary semantic complexity.

    If we wanted to use email technology to send a verifiable credential, we would model the credential as an attachment, not enrich the schema of raw email message bodies. DIDComm invites a similar approach.

    "},{"location":"concepts/0047-json-ld-compatibility/#goal","title":"Goal","text":"

    The DIDComm messaging effort that began in the Indy community wants to benefit from the accessibility of ordinary JSON, but leave an easy path for more sophisticated JSON-LD-driven patterns when the need arises. We therefore set for ourselves this goal:

    Be compatible with JSON-LD, such that advanced use cases can take advantage of it where it makes sense, but impose no dependencies on the mental model or the tooling of JSON-LD for the casual developer.

    "},{"location":"concepts/0047-json-ld-compatibility/#what-the-casual-developer-needs-to-know","title":"What the Casual Developer Needs to Know","text":"

    That's it.

    "},{"location":"concepts/0047-json-ld-compatibility/#details","title":"Details","text":"

    Compatibility with JSON-LD was evaluated against version 1.1 of the JSON-LD spec, current in early 2019. If material changes in the spec are forthcoming, a new analysis may be worthwhile. Our current understanding follows.

    "},{"location":"concepts/0047-json-ld-compatibility/#type","title":"@type","text":"

    The type of a DIDComm message, and its associated route or handler in dispatching code, is given by the JSON-LD @type property at the root of a message. JSON-LD requires this value to be an IRI. DIDComm DID references are fully compliant. Instances of @type on any node other than a message root have JSON-LD meaning, but no predefined relevance in DIDComm.

    "},{"location":"concepts/0047-json-ld-compatibility/#id","title":"@id","text":"

    The identifier for a DIDComm message is given by the JSON-LD @id property at the root of a message. JSON-LD requires this value to be an IRI. DIDComm message IDs are relative IRIs, and can be converted to absolute form as described in RFC 0217: Linkable Message Paths. Instances of @id on any node other than a message root have JSON-LD meaning, but no predefined relevance in DIDComm.

    "},{"location":"concepts/0047-json-ld-compatibility/#context","title":"@context","text":"

    This is JSON-LD\u2019s namespacing mechanism. It is active in DIDComm messages, but can be ignored for simple processing, in the same way namespaces in XML are often ignored for simple tasks.

Every DIDComm message has an associated @context, but we have chosen to follow the procedure described in section 6 of the JSON-LD spec, which focuses on how ordinary JSON can be interpreted as JSON-LD by communicating @context out of band.

    DIDComm messages communicate the context out of band by specifying it in the protocol definition (e.g., RFC) for the associated message type; thus, the value of @type indirectly gives the relevant @context. In advanced use cases, @context may appear in a DIDComm message, supplementing this behavior.
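For example, the trust ping message from Aries RFC 0048 carries only the two special fields plus type-specific content; its @context is implied by the @type rather than stated inline:

```json
{
  "@type": "https://didcomm.org/trust_ping/1.0/ping",
  "@id": "518be002-de8e-456e-b3d5-8fe472477a86",
  "response_requested": true
}
```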

    "},{"location":"concepts/0047-json-ld-compatibility/#ordering","title":"Ordering","text":"

    JSON-LD specifies that the order of items in arrays is NOT significant, and notes (correctly) that this is the opposite of the standard assumption for plain JSON. This makes sense when viewed through the lens of JSON-LD\u2019s role as a transformation of RDF.

    Since we want to violate as few assumptions as possible for a developer with general knowledge of JSON, DIDComm messages reverse this default, making arrays an ordered construct, as if all DIDComm message @contexts contained something like:

    \"each field\": { \"@container\": \"@list\"}\n
    To contravene the default, use a JSON-LD construction like this in @context:

    \"myfield\": { \"@container\": \"@set\"}\n
    "},{"location":"concepts/0047-json-ld-compatibility/#decorators","title":"Decorators","text":"

    Decorators are JSON fragments that can be included in any DIDComm message. They enter the formally defined JSON-LD namespace via a JSON-LD fragment that is automatically imputed to every DIDComm message:

    \"@context\": {\n  \"@vocab\": \"https://github.com/hyperledger/aries-rfcs/\"\n}\n

    All decorators use the reserved prefix char ~ (tilde). For more on decorators, see the Decorator RFC.
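An illustrative example: a basic message (per Aries RFC 0095) carrying the ~thread decorator to tie it to an ongoing conversation. The field values here are made up for illustration:

```json
{
  "@type": "https://didcomm.org/basicmessage/1.0/message",
  "@id": "123456780",
  "~thread": { "thid": "98fd8d72-80f6-4419-abc2-c65ea39d0f38" },
  "content": "Your hovercraft is full of eels."
}
```

Because decorators resolve through the imputed @vocab, a message handler that doesn't understand ~thread can simply ignore it.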

    "},{"location":"concepts/0047-json-ld-compatibility/#signing","title":"Signing","text":"

    JSON-LD is associated but not strictly bound to a signing mechanism, LD-Signatures. It\u2019s a good mechanism, but it comes with some baggage: you must canonicalize, which means you must resolve every \u201cterm\u201d (key name) to its fully qualified form by expanding contexts before signing. This raises the bar for JSON-LD sophistication and library dependencies.

    The DIDComm community is not opposed to using LD Signatures for problems that need them, but has decided not to adopt the mechanism across the board. There is another signing mechanism that is far simpler, and adequate for many scenarios. We\u2019ll use whichever scheme is best suited to circumstances.

    "},{"location":"concepts/0047-json-ld-compatibility/#type-coercion","title":"Type Coercion","text":"

    DIDComm messages generally do not need this feature of JSON-LD, because there are well understood conventions around date-time datatypes, and individual RFCs that define each message type can further clarify such subtleties. However, it is available on a message-type-definition basis (not ad hoc).

    "},{"location":"concepts/0047-json-ld-compatibility/#node-references","title":"Node References","text":"

    JSON-LD lets one field reference another. See example 93 (note that the ref could have just been \u201c#me\u201d instead of the fully qualified IRI). We may need this construct at some point in DIDComm, but it is not in active use yet.

    "},{"location":"concepts/0047-json-ld-compatibility/#internationalization-and-localization","title":"Internationalization and Localization","text":"

JSON-LD describes a mechanism for this. It has approximately the same features as the one described in Aries RFC 0043, with a few exceptions:

    Because of these misalignments, the DIDComm ecosystem plans to use its own solution to this problem.

    "},{"location":"concepts/0047-json-ld-compatibility/#additional-json-ld-constructs","title":"Additional JSON-LD Constructs","text":"

    The following JSON-LD keywords may be useful in DIDComm at some point in the future: @base, @index, @container (cf @list and @set), @nest, @value, @graph, @prefix, @reverse, @version.

    "},{"location":"concepts/0047-json-ld-compatibility/#drawbacks","title":"Drawbacks","text":"

    By attempting compatibility but only lightweight usage of JSON-LD, we are neither all-in on JSON-LD, nor all-out. This could cause confusion. We are making the bet that most developers won't need to know or care about the details; they'll simply learn that @type and @id are special, required fields on messages. Designers of protocols will need to know a bit more.

    "},{"location":"concepts/0047-json-ld-compatibility/#rationale-and-alternatives","title":"Rationale and alternatives","text":""},{"location":"concepts/0047-json-ld-compatibility/#unresolved-questions","title":"Unresolved questions","text":""},{"location":"concepts/0047-json-ld-compatibility/#implementations","title":"Implementations","text":"

    The following lists the implementations (if any) of this RFC. Please do a pull request to add your implementation. If the implementation is open source, include a link to the repo or to the implementation within the repo. Please be consistent in the \"Name\" field so that a mechanical processing of the RFCs can generate a list of all RFCs supported by an Aries implementation.

    Name / Link Implementation Notes"},{"location":"concepts/0049-repudiation/","title":"Aries RFC 0049: Repudiation","text":""},{"location":"concepts/0049-repudiation/#summary","title":"Summary","text":"

    Explain DID Communication's perspective on repudiation, and how this influences the DIDComm approach to digital signatures.

    "},{"location":"concepts/0049-repudiation/#motivation","title":"Motivation","text":"

    A very common mistake among newcomers to cryptography is to assume that digital signatures are the best way to prove the origin of data. While it is true that digital signatures can be used in this way, over-signing creates a digital exhaust that can lead to serious long-term privacy problems. We do use digital signatures, but we want to be very deliberate about when and why--and by default, we want to use a more limited technique called authenticated encryption. This doc explains the distinction and its implications.

    "},{"location":"concepts/0049-repudiation/#tutorial","title":"Tutorial","text":"

    If Carol receives a message that purports to come from Alice, she may naturally ask:

    Do I know that this really came from Alice?

    This is a fair question, and an important one. There are two ways to answer it:

    Both of these approaches can answer Carol's question, but they differ in who can trust the answer. If Carol knows Alice is the sender, but can't prove it to anybody else, then we say the message is publicly repudiable; if Carol can prove the origin to others, then we say the message is non-repudiable.

    The repudiable variant is accomplished with a technique called authenticated encryption.

    The non-repudiable variant is accomplished with digital signatures.

    "},{"location":"concepts/0049-repudiation/#how-authenticated-encryption-works","title":"How Authenticated Encryption Works","text":"

    Repudiable sending may sound mysterious, but it's actually quite simple. Alice and Carol can negotiate a shared secret and trust one another not to leak it. Thereafter, if Alice sends Carol a message that uses the shared secret (e.g., it's encrypted by a negotiated symmetric encryption key), then Carol knows the sender must be Alice. However, she can't prove it to anyone, because Alice's immediate counter-response could be, \"Carol could have encrypted this herself. She knows the key, too.\" Notice that this only works in a pairwise channel.
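The shared-secret argument can be made concrete with a small sketch. DIDComm's actual authenticated encryption is libsodium's crypto_box; the stdlib-only example below substitutes a bare HMAC over a negotiated secret, purely to show why shared-key authentication is repudiable:

```python
import hashlib
import hmac

def tag_message(shared_key: bytes, message: bytes) -> bytes:
    """Authentication tag computed from the pairwise shared secret."""
    return hmac.new(shared_key, message, hashlib.sha256).digest()

def verify(shared_key: bytes, message: bytes, tag: bytes) -> bool:
    """Anyone holding shared_key can verify the tag -- or forge one."""
    return hmac.compare_digest(tag_message(shared_key, message), tag)

# Alice and Carol negotiated this secret over their pairwise channel.
key = b"negotiated-pairwise-secret"
msg = b"meet me at noon"
tag = tag_message(key, msg)

# Carol knows only a key-holder could have produced the tag...
assert verify(key, msg, tag)
# ...but she holds the same key, so she could have produced an identical
# tag herself; she cannot prove Alice's authorship to a third party.
assert tag_message(key, msg) == tag
```

This is exactly the "Carol could have encrypted this herself" rebuttal from the paragraph above, expressed in code.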

    "},{"location":"concepts/0049-repudiation/#signatures","title":"Signatures","text":"

    Non-repudiable messages are typically accomplished with digital signatures. With signatures, everyone can examine a signature to verify its provenance.

    Fancy signature schemes such as ring signatures may represent intermediate positions, where the fact that a signature was provided by a member of a group is known--but not which specific member did the signing.

    "},{"location":"concepts/0049-repudiation/#why-and-when-to-use-each-strategy","title":"Why and When To Use Each Strategy","text":"

    A common mistake is to assume that digital signatures should be used everywhere because they give the most guarantees. This is a misunderstanding of who needs which guarantees under which conditions.

    If Alice tells a secret to Carol, who should decide whether the secret is reshared--Alice, or Carol?

    In an SSI paradigm, the proper, desirable default is that a sender of secrets should retain the ability to decide if their secrets are shareable, not give that guarantee away.

    If Alice sends a repudiable message, she gets a guarantee that Carol can't reshare it in a way that damages Alice. On the other hand, if she sends a message that's digitally signed, she has no control over where Carol shares the secret and proves its provenance. Hopefully Carol has Alice's best interests at heart, and has good judgment and solid cybersecurity...

    There are certainly cases where non-repudiation is appropriate. If Alice is entering into a borrower:lender relationship with Carol, Carol needs to prove to third parties that Alice, and only Alice, incurred the legal obligation.

    DIDComm supports both modes of communication. However, properly modeled interactions tend to favor repudiable messages; non-repudiation must be a deliberate choice. For this reason, we assume repudiable until an explicit signature is required (in which case the sign() crypto primitive is invoked). This matches the physical world, where most communication is casual and does not carry the weight of legal accountability--and should not.

    "},{"location":"concepts/0049-repudiation/#unknown-recipients","title":"Unknown Recipients","text":"

    Imagine that Alice wants to broadcast a message. She doesn't know who will receive it, so she can't use authenticated encryption. Yet she wants anyone who receives it to know that it truly comes from her.

    In this situation, digital signatures are required. Note, however, that Alice is trading some privacy for her ability to publicly prove message origin.

    "},{"location":"concepts/0049-repudiation/#reference","title":"Reference","text":"

Authenticated encryption is not something we invented. It is well described in the documentation for libsodium. It is implemented there, and also in the pure JavaScript port, TweetNaCl.

    "},{"location":"concepts/0049-repudiation/#drawbacks","title":"Drawbacks","text":"

    The main reason not to emphasize authenticated encryption over digital signatures is that we seem to encounter a steady impedance from people who are signature-oriented. It is hard and time-consuming to reset expectations. However, we have concluded that the gains in privacy are worth the effort.

    "},{"location":"concepts/0049-repudiation/#implementations","title":"Implementations","text":"

    The following lists the implementations (if any) of this RFC. Please do a pull request to add your implementation. If the implementation is open source, include a link to the repo or to the implementation within the repo. Please be consistent in the \"Name\" field so that a mechanical processing of the RFCs can generate a list of all RFCs supported by an Aries implementation.

    Name / Link Implementation Notes"},{"location":"concepts/0050-wallets/","title":"Aries RFC 0050: Wallets","text":""},{"location":"concepts/0050-wallets/#summary","title":"Summary","text":"

Specify the external interfaces of identity wallets in the Indy ecosystem, as well as some background concepts, theory, tradeoffs, and internal implementation guidelines.

    "},{"location":"concepts/0050-wallets/#motivation","title":"Motivation","text":"

Wallets are a familiar component metaphor that SSI has adopted from the world of cryptocurrencies. The translation isn't perfect, though; crypto wallets have only a subset of the features that an identity wallet needs. This causes problems, as coders may approach wallets in Indy with assumptions that are narrower than our actual design target.

    Since wallets are a major vector for hacking and cybersecurity issues, casual or fuzzy wallet requirements are a recipe for frustration or disaster. Divergent and substandard implementations could undermine security more broadly. This argues for as much design guidance and implementation help as possible.

    Wallets are also a unit of identity portability--if an identity owner doesn't like how her software is working, she should be able to exercise her self- sovereignty by taking the contents of her wallet to a new service. This implies that wallets need certain types of interoperability in the ecosystem, if they are to avoid vendor lock-in.

    All of these reasons--to clarify design scope, to provide uniform high security, and to guarantee interop--suggest that we need a formal RFC to document wallet architecture.

    "},{"location":"concepts/0050-wallets/#tutorial","title":"Tutorial","text":"

    (For a slide deck that gives a simplified overview of all the content in this RFC, please see http://bit.ly/2JUcIiT. The deck also includes a link to a recorded presentation, if you prefer something verbal and interactive.)

    "},{"location":"concepts/0050-wallets/#what-is-an-identity-wallet","title":"What Is an Identity Wallet?","text":"

    Informally, an identity wallet (preferably not just \"wallet\") is a digital container for data that's needed to control a self-sovereign identity. We borrow this metaphor from physical wallets:

    Notice that we do not carry around in a physical wallet every document, key, card, photo, piece of currency, or credential that we possess. A wallet is a mechanism of convenient control, not an exhaustive repository. A wallet is portable. A wallet is worth safeguarding. Good wallets are organized so we can find things easily. A wallet has a physical location.

What does this suggest about identity wallets?

    "},{"location":"concepts/0050-wallets/#types-of-sovereign-data","title":"Types of Sovereign Data","text":"

    Before we give a definitive answer to that question, let's take a detour for a moment to consider digital data. Actors in a self-sovereign identity ecosystem may own or control many different types of data:

    ...and much more. Different subsets of data may be worthy of different protection efforts:

    The data can also show huge variety in its size and in its richness:

    Because of the sensitivity difference, the size and richness difference, joint ownership, and different needs for access in different circumstances, we may store digital data in many different locations, with different backup regimes, different levels of security, and different cost profiles.

    "},{"location":"concepts/0050-wallets/#whats-out-of-scope","title":"What's Out of Scope","text":""},{"location":"concepts/0050-wallets/#not-a-vault","title":"Not a Vault","text":"

    This variety suggests that an identity wallet as a loose grab-bag of all our digital \"stuff\" will give us a poor design. We won't be able to make good tradeoffs that satisfy everybody; some will want rigorous, optimized search; others will want to minimize storage footprint; others will be concerned about maximizing security.

    We reserve the term vault to refer to the complex collection of all an identity owner's data:

    Note that a vault can contain an identity wallet. A vault is an important construct, and we may want to formalize its interface. But that is not the subject of this spec.

    "},{"location":"concepts/0050-wallets/#not-a-cryptocurrency-wallet","title":"Not A Cryptocurrency Wallet","text":"

The cryptocurrency community has popularized the term \"wallet\"--and because identity wallets share with crypto wallets both high-tech crypto and a need to store secrets, it is tempting to equate these two concepts. However, an identity wallet can hold more than just cryptocurrency keys, just as a physical wallet can hold more than paper currency. Also, identity wallets may need to manage hundreds of millions of relationships (in the case of large organizations), whereas most crypto wallets manage a small number of keys:

    "},{"location":"concepts/0050-wallets/#not-a-gui","title":"Not a GUI","text":"

As used in this spec, an identity wallet is not a visible application, but rather a data store. Although user interfaces (superb ones!) can and should be layered on top of wallets, from Indy's perspective the wallet itself consists of a container and its data; its friendly face is a separate construct. We may casually refer to an application as a \"wallet\", but what we really mean is that the application provides an interface to the underlying wallet.

    This is important because if a user changes which app manages his identity, he should be able to retain the wallet data itself. We are aiming for a better portability story than browsers offer (where if you change browsers, you may be able to export+import your bookmarks, but you have to rebuild all sessions and logins from scratch).

    "},{"location":"concepts/0050-wallets/#personas","title":"Personas","text":"

    Wallets have many stakeholders. However, three categories of wallet users are especially impactful on design decisions, so we define a persona for each.

    "},{"location":"concepts/0050-wallets/#alice-individual-identity-owner","title":"Alice (individual identity owner)","text":"

    Alice owns several devices, and she has an agent in the cloud. She has a thousand relationships--some with institutions, some with other people. She has a couple hundred credentials. She owns three different types of cryptocurrency. She doesn\u2019t issue or revoke credentials--she just uses them. She receives proofs from other entities (people and orgs). Her main tool for exercising a self-sovereign identity is an app on a mobile device.

"},{"location":"concepts/0050-wallets/#faber-intitutional-identity-owner","title":"Faber (institutional identity owner)","text":"

    Faber College has an on-prem data center as well as many resources and processes in public and private clouds. It has relationships with a million students, alumni, staff, former staff, applicants, business partners, suppliers, and so forth. Faber issues credentials and must manage their revocation. Faber may use crypto tokens to sell and buy credentials and proofs.

    "},{"location":"concepts/0050-wallets/#the-org-book-trust-hub","title":"The Org Book (trust hub)","text":"

    The Org Book holds credentials (business licenses, articles of incorporation, health permits, etc) issued by various government agencies, about millions of other business entities. It needs to index and search credentials quickly. Its data is public. It serves as a reference for many relying parties--thus its trust hub role.

    "},{"location":"concepts/0050-wallets/#use-cases","title":"Use Cases","text":"

The specific use cases for an identity wallet are too numerous to fully list, but we can summarize them as follows:

    As an identity owner (any of the personas above), I want to manage identity and its relationships in a way that guarantees security and privacy:

    "},{"location":"concepts/0050-wallets/#managing-secrets","title":"Managing Secrets","text":"

Certain sensitive things require special handling. We would never expect to casually lay an Ebola Zaire sample on the counter in our bio lab; rather, it must never leave a special controlled isolation chamber.

    Cybersecurity in wallets can be greatly enhanced if we take a similar tack with high-value secrets. We prefer to generate such secrets in their final resting place, possibly using a seed if we need determinism. We only use such secrets in their safe place, instead of passing them out to untrusted parties.

    TPMs, HSMs, and so forth follow these rules. Indy\u2019s current wallet interface does, too. You can\u2019t get private keys out.

    "},{"location":"concepts/0050-wallets/#composition","title":"Composition","text":"

    The foregoing discussions about cybersecurity, the desirability of design guidance and careful implementation, and wallet data that includes but is not limited to secrets motivates the following logical organization of identity wallets in Indy:

The world outside a wallet interfaces with the wallet through a public interface provided by indy-sdk, and implemented only once. This is the block labeled encryption, query (wallet core) in the diagram. The implementation in this layer guarantees proper encryption and secret-handling. It also provides some query features. Records (items) to be stored in a wallet are referenced by a public handle if they are secrets. This public handle might be a public key in a key pair, for example. Records that are not secrets can be returned directly across the API boundary.

Underneath, this common wallet code in libindy is supplemented with pluggable storage--a technology that provides persistence and query features. This pluggable storage could be a file system, an object store, an RDBMS, a NoSQL DB, a Graph DB, a key~value store, or almost anything similar. The pluggable storage is registered with the wallet layer by providing a series of C-callable functions (callbacks). The storage layer doesn't have to worry about encryption at all; by the time data reaches it, it is encrypted robustly, and the layer above the storage takes care of translating queries to and from encrypted form for external consumers of the wallet.

    "},{"location":"concepts/0050-wallets/#tags-and-queries","title":"Tags and Queries","text":"

    Searchability in wallets is facilitated with a tagging mechanism. Each item in a wallet can be associated with zero or more tags, where a tag is a key=value pair. Items can be searched based on the tags associated with them, and tag values can be strings or numbers. With a good inventory of tags in a wallet, searching can be robust and efficient--but there is no support for joins, subqueries, and other RDBMS-like constructs, as this would constrain the type of storage plugin that could be written.

    An example of the tags on a wallet item that is a credential might be:

      item-name = \"My Driver's License\"\n  date-issued = \"2018-05-23\"\n  issuer-did = \"ABC\"\n  schema = \"DEF\"\n

    Tag names and tag values are both case-sensitive.

    Because tag values are normally encrypted, most tag values can only be tested using the $eq, $neq or $in operators (see Wallet Query Language, next). However, it is possible to force a tag to be stored in the wallet as plain text by naming it with a special prefix, ~ (tilde). This enables operators like $gt, $lt, and $like. Such tags lose their security guarantees but provide for richer queries; it is up to applications and their users to decide whether the tradeoff is appropriate.

    "},{"location":"concepts/0050-wallets/#wallet-query-language","title":"Wallet Query Language","text":"

    Wallets can be searched and filtered using a simple, JSON-based query language. We call this Wallet Query Language (WQL). WQL is designed to require no fancy parsing by storage plugins, and to be easy enough for developers to learn in just a few minutes. It is inspired by MongoDB's query syntax, and can be mapped to SQL, GraphQL, and other query languages supported by storage backends, with minimal effort.

    Formal definition of WQL language is the following:

    query = {subquery}\nsubquery = {subquery, ..., subquery} // means subquery AND ... AND subquery\nsubquery = $or: [{subquery},..., {subquery}] // means subquery OR ... OR subquery\nsubquery = $not: {subquery} // means NOT (subquery)\nsubquery = \"tagName\": tagValue // means tagName == tagValue\nsubquery = \"tagName\": {$neq: tagValue} // means tagName != tagValue\nsubquery = \"tagName\": {$gt: tagValue} // means tagName > tagValue\nsubquery = \"tagName\": {$gte: tagValue} // means tagName >= tagValue\nsubquery = \"tagName\": {$lt: tagValue} // means tagName < tagValue\nsubquery = \"tagName\": {$lte: tagValue} // means tagName <= tagValue\nsubquery = \"tagName\": {$like: tagValue} // means tagName LIKE tagValue\nsubquery = \"tagName\": {$in: [tagValue, ..., tagValue]} // means tagName IN (tagValue, ..., tagValue)\n
    "},{"location":"concepts/0050-wallets/#sample-wql-query-1","title":"Sample WQL Query 1","text":"

    Get all credentials where subject like \u2018Acme%\u2019 and issue_date > last week. (Note here that the name of the issue date tag begins with a tilde, telling the wallet to store its value unencrypted, which makes the $gt operator possible.)

{\n  \"~subject\": {\"$like\": \"Acme%\"},\n  \"~issue_date\": {\"$gt\": \"2018-06-01\"}\n}\n
    "},{"location":"concepts/0050-wallets/#sample-wql-query-2","title":"Sample WQL Query 2","text":"

    Get all credentials about me where schema in (a, b, c) and issuer in (d, e, f).

    {\n  \"schema_id\": {\"$in\": [\"a\", \"b\", \"c\"]},\n  \"issuer_id\": {\"$in\": [\"d\", \"e\", \"f\"]},\n  \"holder_role\": \"self\"\n}\n
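To make the semantics of the grammar above concrete, here is a minimal, hypothetical WQL matcher that evaluates a query against a record's tag dictionary. It is not libindy's implementation: the `$like` handling approximates SQL LIKE via glob patterns, and each operator object is assumed to hold a single operator.

```python
import fnmatch

def wql_match(query: dict, tags: dict) -> bool:
    """Return True if the record's tags satisfy the WQL query (toy evaluator)."""
    for key, cond in query.items():
        if key == "$or":
            if not any(wql_match(sub, tags) for sub in cond):
                return False
        elif key == "$not":
            if wql_match(cond, tags):
                return False
        elif isinstance(cond, dict):
            # Assumes one operator per condition object, e.g. {"$gt": "2018-06-01"}
            op, val = next(iter(cond.items()))
            tag = tags.get(key)
            if tag is None:
                return False
            if op == "$neq":
                ok = tag != val
            elif op == "$gt":
                ok = tag > val
            elif op == "$gte":
                ok = tag >= val
            elif op == "$lt":
                ok = tag < val
            elif op == "$lte":
                ok = tag <= val
            elif op == "$like":
                # SQL LIKE approximated with glob: % -> *, _ -> ?
                # (caveat: fnmatch also interprets [...] character classes)
                ok = fnmatch.fnmatchcase(tag, val.replace("%", "*").replace("_", "?"))
            elif op == "$in":
                ok = tag in val
            else:
                raise ValueError(f"unsupported operator: {op}")
            if not ok:
                return False
        else:
            if tags.get(key) != cond:   # plain "tagName": tagValue means equality
                return False
    return True
```

With the tags `{"~subject": "Acme Corp", "~issue_date": "2018-06-15"}`, Sample Query 1 above matches (string comparison works for ISO dates), while a `$not` wrapper around it does not.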
    "},{"location":"concepts/0050-wallets/#encryption","title":"Encryption","text":"

    Wallets need very robust encryption. However, they must also be searchable, and the encryption must be equally strong regardless of which storage technology is used. We want to be able to hide data patterns in the encrypted data, such that an attacker cannot see common prefixes on keys, or common fragments of data in encrypted values. And we want to rotate the key that protects a wallet without having to re-encrypt all its content. This suggests that a trivial encryption scheme, where we pick a symmetric key and encrypt everything with it, is not adequate.

    Instead, wallet encryption takes the following approach:

    The 7 \"column\" keys are concatenated and encrypted with a wallet master key, then saved into the metadata of the wallet. This allows the master key to be rotated without re-encrypting all the items in the wallet.

    Today, all encryption is done using ChaCha20-Poly1305, with HMAC-SHA256. This is a solid, secure encryption algorithm, well tested and widely supported. However, we anticipate the desire to use different cipher suites, so in the future we will make the cipher suite pluggable.
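The pairing of an AEAD cipher with HMAC-SHA256 enables a useful property: tag values can be indexed as deterministic MACs, so equality queries work without decrypting anything. The standard-library sketch below illustrates that idea only; the per-wallet key and the exact layout are assumptions, not libindy's actual scheme.

```python
import hmac, hashlib, os

def tag_index(hmac_key: bytes, value: bytes) -> bytes:
    """Deterministic HMAC-SHA256 digest of a tag value.

    Equal plaintexts map to equal digests, so $eq/$in queries can be
    answered by comparing digests, while the storage backend never sees
    the plaintext. (Illustrative; not libindy's exact construction.)
    """
    return hmac.new(hmac_key, value, hashlib.sha256).digest()

key = os.urandom(32)   # per-wallet tag-HMAC key (assumed)
assert tag_index(key, b"alice") == tag_index(key, b"alice")   # searchable
assert tag_index(key, b"alice") != tag_index(key, b"bob")     # still opaque
```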

    The way the individual fields are encrypted is shown in the following diagram. Here, data is shown as if stored in a relational database with tables. Wallet storage may or may not use tables, but regardless of how the storage distributes and divides the data, the logical relationships and the encryption shown in the diagram apply.

    "},{"location":"concepts/0050-wallets/#pluggable-storage","title":"Pluggable Storage","text":"

    Although Indy infrastructure will provide only one wallet implementation, it will allow different storage backends to be plugged in to cover different use cases. The default storage shipped with libindy will be SQLite-based and well suited to agents running on edge devices. The register_wallet_storage API endpoint will allow Indy developers to register a custom storage implementation as a set of handlers.

    A storage implementation does not need any special security features. It stores data that was already encrypted by libindy (or data that needs no encryption/protection, in the case of unencrypted tag values). It searches data in whatever form it is persisted, without any translation. It returns data as persisted, and lets the common wallet infrastructure in libindy decrypt it before returning it to the user.
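A pluggable backend therefore reduces to a small set of handlers that store and retrieve opaque records. The sketch below is a hypothetical in-memory backend illustrating that contract; the method names are illustrative and do not reflect the exact register_wallet_storage handler signatures.

```python
class DictWalletStorage:
    """Toy in-memory storage backend: stores records exactly as handed
    to it (already encrypted by the wallet layer) and returns them
    unchanged. Hypothetical interface, not libindy's actual handler set."""

    def __init__(self):
        self._records = {}                     # (type, id) -> (value, tags)

    def add_record(self, type_, id_, value, tags):
        self._records[(type_, id_)] = (value, dict(tags))

    def get_record(self, type_, id_):
        return self._records[(type_, id_)]

    def delete_record(self, type_, id_):
        del self._records[(type_, id_)]

    def search_records(self, type_, match):
        # `match` is a predicate supplied by the wallet layer; the backend
        # itself never interprets (or decrypts) the stored values.
        return [(i, v, t) for (ty, i), (v, t) in self._records.items()
                if ty == type_ and match(t)]
```

Note that the backend only ever compares and returns what it was given; all encryption, decryption, and query translation stays in the common wallet layer.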

    "},{"location":"concepts/0050-wallets/#secure-enclaves","title":"Secure Enclaves","text":"

    Secure Enclaves are purposely designed to manage, generate, and securely store cryptographic material. Enclaves can be either specially designed hardware (e.g. HSM, TPM) or trusted execution environments (TEE) that isolate code and data from operating systems (e.g. Intel SGX, AMD SEV, ARM TrustZone). Enclaves can replace common cryptographic operations that wallets perform (e.g. encryption, signing). Some secrets cannot be stored in wallets, such as the key that encrypts the wallet itself or keys that must be backed up; nor can they be stored directly in enclaves, because keys stored in enclaves cannot be extracted. Enclaves can still protect these secrets via a mechanism called wrapping.

    "},{"location":"concepts/0050-wallets/#enclave-wrapping","title":"Enclave Wrapping","text":"

    Suppose I have a secret, X, that needs maximum protection. However, I can\u2019t store X in my secure enclave because I need to use it for operations that the enclave can\u2019t do for me; I need direct access. So how do I extend enclave protections to encompass my secret?

    I ask the secure enclave to generate a key, Y, that will be used to protect X. Y is called a wrapping key. I give X to the secure enclave and ask that it be encrypted with wrapping key Y. The enclave returns X\u2019 (ciphertext of X, now called a wrapped secret), which I can leave on disk with confidence; it cannot be decrypted to X without involving the secure enclave. Later, when I want to decrypt, I give wrapped secret X\u2019 to the secure enclave and ask it to give me back X by decrypting with wrapping key Y.

    You could ask whether this really increases security. If you can get into the enclave, you can wrap or unwrap at will.

    The answer is that an unwrapped secret is protected by only one thing--whatever ACLs exist on the filesystem or storage where it resides. A wrapped secret is protected by two things--the ACLs and the enclave. OS access may breach either one, but pulling a hard drive out of a device will not breach the enclave.
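The wrap/unwrap workflow above can be sketched as follows. This toy model keeps the wrapping key Y inside an object that never exports it; the cipher (an HMAC-SHA256 keystream plus MAC) is purely a demonstration of the workflow and emphatically not production cryptography, where a hardware-backed AEAD would be used instead.

```python
import hmac, hashlib, os

class ToyEnclave:
    """Toy model of a secure enclave: wrapping key Y never leaves this object.
    The cipher below demonstrates the wrap/unwrap *workflow* only."""

    def __init__(self):
        self._y = os.urandom(32)       # wrapping key Y, non-exportable

    def _keystream(self, nonce: bytes, n: int) -> bytes:
        out, ctr = b"", 0
        while len(out) < n:
            out += hmac.new(self._y, nonce + ctr.to_bytes(4, "big"),
                            hashlib.sha256).digest()
            ctr += 1
        return out[:n]

    def wrap(self, secret: bytes) -> bytes:
        """Encrypt X under Y, returning X' (the wrapped secret)."""
        nonce = os.urandom(16)
        ct = bytes(a ^ b for a, b in zip(secret,
                                         self._keystream(nonce, len(secret))))
        tag = hmac.new(self._y, b"mac" + nonce + ct, hashlib.sha256).digest()
        return nonce + ct + tag        # X': safe to leave on disk

    def unwrap(self, wrapped: bytes) -> bytes:
        """Verify and decrypt X' back into X; only this enclave can do so."""
        nonce, ct, tag = wrapped[:16], wrapped[16:-32], wrapped[-32:]
        expect = hmac.new(self._y, b"mac" + nonce + ct, hashlib.sha256).digest()
        if not hmac.compare_digest(tag, expect):
            raise ValueError("wrapped secret was tampered with")
        return bytes(a ^ b for a, b in zip(ct, self._keystream(nonce, len(ct))))
```

Pulling the disk out of a device yields only X'; without the enclave object holding Y, the wrapped secret cannot be decrypted.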

    "},{"location":"concepts/0050-wallets/#paper-wallets","title":"Paper Wallets","text":"

    It is possible to persist wallet data to physical paper (or, for that matter, to etched metal or other physical media) instead of a digital container. Such data has attractive storage properties (e.g., may survive natural disasters, power outages, and other challenges that would destroy digital data). Of course, by leaving the digital realm, the data loses its accessibility over standard APIs.

    We anticipate that paper wallets will play a role in backup and recovery, and possibly in enabling SSI usage by populations that lack easy access to smartphones or the internet. Our wallet design should be friendly to such usage, but physical persistence of data is beyond the scope of Indy's plugin storage model and thus not explored further in this RFC.

    "},{"location":"concepts/0050-wallets/#backup-and-recovery","title":"Backup and Recovery","text":"

    Wallets need a backup and recovery feature, and also a way to export data and import it. Indy's wallet API includes an export function and an import function that may be helpful in such use cases. Today, the export is unfiltered--all data is exported. The import is also all-or-nothing and must be to an empty wallet; it is not possible to import selectively or to update existing records during import.

    A future version of import and export may add filtering, overwrite, and progress callbacks. It may also allow supporting or auxiliary data (other than what the wallet directly persists) to be associated with the export/import payload.

    For technical details on how export and import work, please see the internal design docs.

    "},{"location":"concepts/0050-wallets/#reference","title":"Reference","text":""},{"location":"concepts/0050-wallets/#rationale-and-alternatives","title":"Rationale and alternatives","text":"

    We could implement wallets exactly as they are already built in the cryptocurrency world. This would give us great security (except for crypto wallets that are cloud based), and perhaps moderately good usability.

    However, it would also mean we could not store credentials in wallets. Indy would then need an alternate mechanism to scan some sort of container when trying to satisfy a proof request. And it would mean that a person's identity would not be portable via a single container; rather, if you wanted to take your identity to a new place, you'd have to copy all crypto keys in your crypto wallet, plus copy all your credentials using some other mechanism. It would also fragment the places where you could maintain an audit trail of your SSI activities.

    "},{"location":"concepts/0050-wallets/#prior-art","title":"Prior art","text":"

    See comment about crypto wallets, above.

    "},{"location":"concepts/0050-wallets/#unresolved-questions","title":"Unresolved questions","text":""},{"location":"concepts/0050-wallets/#implementations","title":"Implementations","text":"

    The following lists the implementations (if any) of this RFC. Please do a pull request to add your implementation. If the implementation is open source, include a link to the repo or to the implementation within the repo. Please be consistent in the \"Name\" field so that a mechanical processing of the RFCs can generate a list of all RFCs supported by an Aries implementation.

    Name / Link Implementation Notes Indy SDK Most agents that implement wallets get their wallet support from Indy SDK. These are not listed separately."},{"location":"concepts/0051-dkms/","title":"Aries RFC 0051: Decentralized Key Management","text":""},{"location":"concepts/0051-dkms/#summary","title":"Summary","text":"

    Describes a general approach to key management in a decentralized, self-sovereign world. We expect Aries to embody the principles described here; this doc is likely to color numerous protocols and ecosystem features.

    "},{"location":"concepts/0051-dkms/#motivation","title":"Motivation","text":"

    A decentralized key management system (DKMS) is an approach to cryptographic key management where there is no central authority. DKMS leverages the security, immutability, availability, and resiliency properties of distributed ledgers to provide highly scalable key distribution, verification, and recovery.

    Key management is vital to exercising sovereignty in a digital ecosystem, and decentralization is a vital principle as well. Therefore, we need a coherent and comprehensive statement of philosophy and architecture on this vital nexus of topics.

    "},{"location":"concepts/0051-dkms/#tutorial","title":"Tutorial","text":"

    The bulk of the content for this RFC is located in the official architecture documentation -- dkms-v4.md; readers are encouraged to go there to learn more. Here we present only the highest-level background context, for those who may be unaware of some basics.

    "},{"location":"concepts/0051-dkms/#background-concepts","title":"Background Concepts","text":""},{"location":"concepts/0051-dkms/#key-types","title":"Key Types","text":"

    DKMS uses the following key types:
    1. Master keys: Keys that are not cryptographically protected. They are distributed manually or initially installed and protected by procedural controls and physical or electronic isolation.
    2. Key encrypting keys: Symmetric or public keys used for key transport or storage of other keys.
    3. Data keys: Used to provide cryptographic operations on user data (e.g., encryption, authentication).

    The keys at one level are used to protect items at a lower level. Consequently, special measures are used to protect master keys, including severely limiting access and use, hardware protection, and providing access to the key only under shared control.

    "},{"location":"concepts/0051-dkms/#key-loss","title":"Key Loss","text":"

    Key loss means the owner no longer controls the key and can assume there is no further risk of compromise. Examples include devices rendered unusable by water damage, electrical failure, breakage, fire, hardware failure, acts of God, etc.

    "},{"location":"concepts/0051-dkms/#compromise","title":"Compromise","text":"

    Key compromise means that private keys and/or master keys have become or can become known either passively or actively.

    "},{"location":"concepts/0051-dkms/#recovery","title":"Recovery","text":"

    In decentralized identity management, recovery is important since identity owners have no \u201chigher authority\u201d to turn to for recovery.
    1. Offline recovery uses physical media or removable digital media to store recovery keys.
    2. Social recovery employs entities trusted by the identity owner, called \"trustees\", who store recovery data on an identity owner's behalf, typically in the trustees' own agent(s).

    These methods are not exclusive and should be combined with key rotation and revocation for proper security.
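Social recovery is commonly implemented with Shamir secret sharing (see the Shamir Secret reference in this RFC): a recovery key is split among n trustees so that any k of them can reconstruct it, while fewer than k learn nothing. The sketch below is a toy implementation over GF(p); the prime and encoding are illustrative, and real deployments should use a vetted library.

```python
import random

P = 2**127 - 1   # Mersenne prime; large enough for a 16-byte secret (assumed)

def split(secret: int, k: int, n: int):
    """Split `secret` into n shares; any k reconstruct it (toy Shamir)."""
    coeffs = [secret] + [random.randrange(P) for _ in range(k - 1)]
    def f(x):
        return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, f(x)) for x in range(1, n + 1)]

def recover(shares):
    """Lagrange interpolation at x = 0 over k (or more) shares."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, -1, P)) % P
    return secret
```

Combined with key rotation, a compromised or lost trustee share can be retired by simply re-splitting a fresh key among the remaining trustees.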

    "},{"location":"concepts/0051-dkms/#reference","title":"Reference","text":"
    1. Design and architecture
    2. Public Registry for Agent Authorization Policy. An identity owner creates a policy on the ledger that defines its agents and their authorizations. Agents, while acting on behalf of the identity owner, need to prove that they are authorised. More details
    3. Shamir Secret
    4. Trustee Protocols
    "},{"location":"concepts/0051-dkms/#drawbacks-rationale-and-alternatives-prior-art-unresolved-questions","title":"Drawbacks, Rationale and alternatives, Prior art, Unresolved Questions","text":"

    The material that's normally in these sections of a RFC appears in the official architecture documentation -- dkms-v4.md.

    "},{"location":"concepts/0051-dkms/#implementations","title":"Implementations","text":"

    The following lists the implementations (if any) of this RFC. Please do a pull request to add your implementation. If the implementation is open source, include a link to the repo or to the implementation within the repo. Please be consistent in the \"Name\" field so that a mechanical processing of the RFCs can generate a list of all RFCs supported by an Aries implementation.

    Name / Link Implementation Notes Indy SDK partial: backup Connect.Me partial: backup, sync to cloud"},{"location":"concepts/0051-dkms/agent-authz-policy-ledger-interactions/","title":"Agent Authz policy (changes for ledger)","text":"

    Objective: prove that agents are authorized to provide proof of claims, and to authorize and de-authorize other agents.

    "},{"location":"concepts/0051-dkms/agent-authz-policy-ledger-interactions/#assumptions","title":"Assumptions","text":"
    1. The ledger maintains a global accumulator that holds commitments sent by the agents.
    2. The global accumulator is maintained by each node, so every node knows the accumulator private key.
    3. Agent auth policy txns are stored at the identity ledger.
    4. Each auth policy is uniquely identified by a policy address I.
    5. One agent can belong to several authz policies, thus several different I's.
    6. An agent can have several authorizations. The following is the list of authorizations:
    7. PROVE
    8. PROVE_GRANT
    9. PROVE_REVOKE
    10. ADMIN
    "},{"location":"concepts/0051-dkms/agent-authz-policy-ledger-interactions/#transactions","title":"Transactions","text":""},{"location":"concepts/0051-dkms/agent-authz-policy-ledger-interactions/#agent_authz","title":"AGENT_AUTHZ","text":"

    An authz policy is created/updated by an AGENT_AUTHZ transaction. A transaction creating a new authz policy:

    {\n    identifier: <transaction sender's verification key>\n    signature: <signature created by the sender's public key>,\n    req_id: <a nonce>,\n    operation: {\n        type: AGENT_AUTHZ,\n        address: <policy address, I>,\n        verkey: <optional, verification key of the agent>,\n        authorization: <optional, a bitset>,\n        commitment: <optional>\n    }\n} \n
    address: The policy address. This is the unique identifier of an authz policy; it is a large number (size/range TBD). If the ledger has never seen the provided policy address, it considers the transaction the creation of a new authz policy; otherwise it is considered an update of the existing policy identified by the address.

    verkey: An ed25519 verkey of the agent to which the authorization corresponds. This is optional when a new policy is being created, as the identifier is sufficient. This verkey should be kept different from any DID verkey to avoid correlation.

    authorization: A bitset indicating which authorizations are being given to the agent; it is ignored when creating a new policy (the ledger does not know I). The various bits indicate different authorizations:

    0 None (revoked)\n1 ADMIN (all)\n2 PROVE\n3 PROVE_GRANT\n4 PROVE_REVOKE\n5 \"Reserved for future\"\n6 \"Reserved for future\"\n7  ... \n   ... \n

    While creating a new policy, this field's value is ignored and the creator agent has all authorizations. For any subsequent policy transaction, the ledger checks whether the sender (author, to be precise, since anyone can send a transaction once it has been signed) has the authorization to make that transaction, e.g. the author of a txn must have PROVE_GRANT if it is giving a PROVE authorization to another agent. Future work: when we support m-of-n authorization, verkey would be a map stating the policy and the verkeys.
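One plausible reading of the bitset table above (an assumption: treating the listed numbers as bit positions, with ADMIN implying every other authorization) yields a check like the following sketch of the ledger-side authorization test:

```python
# Bit positions per the table above (assumed): 1 ADMIN, 2 PROVE,
# 3 PROVE_GRANT, 4 PROVE_REVOKE; a bitset of 0 means revoked.
ADMIN, PROVE, PROVE_GRANT, PROVE_REVOKE = 1 << 1, 1 << 2, 1 << 3, 1 << 4

def is_authorized(bitset: int, auth: int) -> bool:
    """ADMIN ('all') implies every authorization; otherwise the
    specific bit must be set in the agent's bitset."""
    return bool(bitset & ADMIN) or bool(bitset & auth)

# e.g. the check before accepting a txn that grants PROVE to another agent:
sender_auths = PROVE | PROVE_GRANT
assert is_authorized(sender_auths, PROVE_GRANT)
assert not is_authorized(sender_auths, PROVE_REVOKE)
```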

    commitment: This is a number (size/range TBD) given by the agent when it is being given a PROVE authorization. Thus this field is only needed when a policy is being created or an agent is being given the PROVE authorization. Upon receiving this commitment, the ledger checks whether it is prime; if it is, it updates the global accumulator with the commitment. Efficient primality-testing algorithms like BPSW or ECPP can be used, but the exact algorithm is yet to be decided. If the commitment is not prime (in the case of creation or update of a policy address), the transaction is rejected. The ledger also rejects the transaction if it has already seen the commitment as part of another transaction. In the case of the creation of a new policy, or of an agent being given the PROVE authorization, the ledger responds with the accumulator value after the update with this commitment.
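Since the exact primality test is still undecided (BPSW and ECPP are the named candidates), a probabilistic Miller-Rabin test can serve as a stand-in to show the shape of the check a node would run on an incoming commitment:

```python
import random

def is_probable_prime(n: int, rounds: int = 40) -> bool:
    """Miller-Rabin primality test -- a stand-in for the BPSW/ECPP
    check the ledger would run; not the algorithm it will necessarily adopt."""
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37):
        if n % p == 0:
            return n == p            # trial division over small primes
    d, s = n - 1, 0
    while d % 2 == 0:                # write n - 1 = d * 2**s with d odd
        d //= 2
        s += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False             # `a` witnesses that n is composite
    return True
```

A node would run such a check before touching the accumulator, rejecting the transaction outright on a composite commitment.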

    "},{"location":"concepts/0051-dkms/agent-authz-policy-ledger-interactions/#get_agent_authz","title":"GET_AGENT_AUTHZ","text":"

    This query is sent by any client to check what the authz policy of an address I is.

    {\n    ...,\n    operation: {\n        type: GET_AGENT_AUTHZ,\n        address: <policy address, I>,\n    }\n} \n

    The ledger replies with all the agents, their associated authorizations and the commitments of the address I.

    "},{"location":"concepts/0051-dkms/agent-authz-policy-ledger-interactions/#get_agent_authz_accum","title":"GET_AGENT_AUTHZ_ACCUM","text":"

    This query is sent by anyone to get the value of the accumulator.

    {\n    ...,\n    operation: {\n        type: GET_AGENT_AUTHZ_ACCUM,\n    accum_id: <id of either the provisioned agents accumulator or the revoked agent accumulator>\n    }\n} \n
    The ledger returns the global accumulator with the given id. Both accumulators are add-only; the client checks that the commitment is present in one accumulator AND not present in the other.

    "},{"location":"concepts/0051-dkms/agent-authz-policy-ledger-interactions/#data-structures","title":"Data structures","text":""},{"location":"concepts/0051-dkms/agent-authz-policy-ledger-interactions/#ledger","title":"Ledger","text":"

    Each authz transaction goes in the identity ledger.

    "},{"location":"concepts/0051-dkms/agent-authz-policy-ledger-interactions/#state-trie","title":"State trie.","text":"

    The state stores:
    1. Accumulator: The accumulator is stored in the trie at name <special byte denoting an authz prove accumulator>, with the accumulator's value as the stored value.
    2. Policies: The state stores one name for each policy; the name is <special byte denoting an authz policy>:<policy address>, and the value at this name is a hash. The hash is computed by deterministically serializing (RLP encoding from Ethereum, which we already use) this data structure:

    [\n  [<agent verkey1>, <authorization bitset>, [<commitment>]],\n  [<agent verkey2>, <authorization bitset>, [<commitment>]],\n  [<agent verkey3>, <authorization bitset>, [<commitment>]],\n]\n

    The hash above could then be used to look up (it is not; more on this later) the exact authorization policy in a separate name-value store. This is done to keep the database backing the state (trie) smaller.

    "},{"location":"concepts/0051-dkms/agent-authz-policy-ledger-interactions/#caches","title":"Caches","text":"

    There is an agent_authz cache used for optimisations. The cache is a name-value store (leveldb) and offers constant lookup time by name. 1. Policy values: the authorization of each agent per policy. The value for each key is the RLP encoding of a list of at most 2 items: an authorization bitset, with each bit representing a different auth, and an optional commitment that is relevant only when the agent has the PROVE authorization.

    {\n  <policy address 1><delimiter><agent verkey 1>: <authorization bitset>:<commitment>,\n  <policy address 1><delimiter><agent verkey 2>: <authorization bitset>:<commitment>,\n  <policy address 1><delimiter><agent verkey 3>: <authorization bitset>:<commitment>,\n  <policy address 2><delimiter><agent verkey 1>: <authorization bitset>:<commitment>,\n  <policy address 2><delimiter><agent verkey 2>: <authorization bitset>:<commitment>,\n  ....\n}\n
    These names are used by the nodes while processing any transaction.

    1. Accumulator value: the value of each accumulator is stored under the special byte indicating the global accumulator.
      {\n  <special_byte>: <accumulator value>,\n}\n

    While processing any write transaction, the node updates the ledger, state, and caches after the txn succeeds; but for querying (whether for clients or for its own needs, like validation), it uses only the caches, since they are more efficient than the state trie. The state trie is used only for state proofs.

    1. TODO: Maintaining a set of commitments: each node maintains a set of seen commitments and does not allow duplicate commitments. It is kept in a key-value store with constant lookup time for commitments.
    "},{"location":"concepts/0051-dkms/agent-authz-policy-ledger-interactions/#code-organisation","title":"Code organisation.","text":"

    These changes would be implemented as a separate plugin. The plugin will not introduce new ledger or state but will introduce the cache described above. The plugin will introduce a new request handler which will subclass the DomainRequestHandler. The plugin's new request handler will introduce 1 write_type and 2 query_types and methods to handle those.

    "},{"location":"concepts/0051-dkms/dkms-v4/","title":"DKMS (Decentralized Key Management System) Design and Architecture V4","text":"

    2019-03-29

    Authors: Drummond Reed, Jason Law, Daniel Hardman, Mike Lodder

    Contributors: Christopher Allen, Devin Fisher, Nathan George, Lovesh Harchandani, Dmitry Khovratovich, Corin Kochenower, Brent Zundel, Nathan George

    Advisors: Stephen Wilson

    STATUS: This design and architecture for a decentralized key management system (DKMS) has been developed by Evernym Inc. under a contract with the U.S. Department of Homeland Security Science & Technology Directorate. This fourth draft is being released on 29 Mar 2019 to begin an open public review and comment process in preparation for DKMS to be submitted to a standards development organization such as OASIS for formal standardization.

    Acknowledgements:

    Table of Contents

    1. Introduction
    2. Design Goals and Requirements
    3. High Level Architecture
    4. Ledger Architecture
    5. Key Management Architecture
    6. Recovery Methods
    7. Recovery From Key Loss
    8. Recovery From Key Compromise
    9. DKMS Protocol
    10. Protocol Flows
    11. Open Issues and Future Work
    12. Future Standardization
    "},{"location":"concepts/0051-dkms/dkms-v4/#1-introduction","title":"1. Introduction","text":""},{"location":"concepts/0051-dkms/dkms-v4/#11-overview","title":"1.1. Overview","text":"

    DKMS (Decentralized Key Management System) is a new approach to cryptographic key management intended for use with blockchain and distributed ledger technologies (DLTs) where there are no centralized authorities. DKMS inverts a core assumption of conventional PKI (public key infrastructure) architecture, namely that public key certificates will be issued by centralized or federated certificate authorities (CAs). With DKMS, the initial \"root of trust\" for all participants is any distributed ledger or decentralized protocol that supports a new form of root identity record called a DID (decentralized identifier).

    A DID is a globally unique identifier that is generated cryptographically and self-registered with the identity owner\u2019s choice of a DID-compatible distributed ledger or decentralized protocol so no central registration authority is required. Each DID points to a DID document\u2014a JSON or JSON-LD object containing the associated public verification key(s) and addresses of services such as off-ledger agent(s) supporting secure peer-to-peer interactions with the identity owner. For more on DIDs, see the DID Primer. For more on peer-to-peer interactions, see the DID Communication explainer.

    Since no third party is involved in the initial registration of a DID and DID document, it begins as \"trustless\". From this starting point, trust between DID-identified peers can be built up through the exchange of verifiable credentials\u2014credentials about identity attributes that include cryptographic proof of authenticity of authorship. These proofs can be verified by reference to the issuer\u2019s DID and DID document. For more about verifiable credentials, see the Verifiable Credentials Primer.

    This decentralized web of trust model leverages the security, immutability, availability, and resiliency properties of distributed ledgers to provide highly scalable key distribution, verification, and recovery. This inversion of conventional public key infrastructure (PKI) into decentralized PKI (DPKI) removes centralized gatekeepers, making the benefits of PKI accessible to everyone. However this lack of centralized authorities for DKMS shifts the majority of responsibility for key management directly to participating identity owners. This demands the decentralized equivalent of the centralized cryptographic key management systems (CKMS) that are the current best practice in most enterprises. The purpose of this document is to specify a design and architecture that fulfills this market need.

    "},{"location":"concepts/0051-dkms/dkms-v4/#12-market-need","title":"1.2. Market Need","text":"

    X.509 public key certificates, as used in the TLS/SSL protocol for HTTPS secure Web browsing, have become the most widely adopted PKI in the world. However this system requires that all certificates be obtained from a relatively small list of trusted authorities\u2014and that any changes to these certificates also be approved by someone in this chain of trust.

    This creates political and structural barriers to establishing and updating authoritative data. This friction is great enough that only a small fraction of Internet users are currently in position to use public/private key cryptography for their own identity, security, privacy, and trust management. This inability for people and organizations to interact privately as independent, verifiable peers on their own terms has many consequences:

    1. It forces individuals and smaller organizations to rely on large federated identity providers and certificate authorities who are in a position to dictate security, privacy and business policies.

    2. It restricts the number of ways in which peers can discover each other and build new trust relationships\u2014which in turn limits the health and resiliency of the digital economy.

    3. It discourages the use of modern cryptography for increased security and privacy, weakening our cybersecurity infrastructure.

    Decentralized technologies such as distributed ledgers and edge protocols can remove these barriers and make it much easier to share and verify public keys. This enables each entity to manage its own authoritative key material without requiring approval from other parties. Furthermore, those changes can be seen immediately by the entity\u2019s peers without requiring them to change their software or \"certificate store\".

    Maturing DLTs and protocols will bring DPKI into the mainstream\u2014a combination of DIDs for decentralized identification and DKMS for decentralized key management. DPKI will provide a simple, secure, way to generate strong public/private key pairs, register them for easy discovery and verification, and rotate and retire them as needed to maintain strong security and privacy.

    "},{"location":"concepts/0051-dkms/dkms-v4/#13-benefits","title":"1.3. Benefits","text":"

    DKMS architecture and DPKI provides the following major benefits:

    1. No single point of failure. With DKMS, there is no central CA or other registration authority whose failure can jeopardize large swaths of users.

    2. Interoperability. DKMS will enable any two identity owners and their applications to perform key exchange and create encrypted P2P connections without reliance on proprietary software, service providers, or federations.

    3. Portability. DKMS will enable identity owners to avoid being locked into any specific implementation of a DKMS-compatible wallet, agent, or agency. Identity owners should\u2014with the appropriate security safeguards\u2014be able to use the DKMS protocol itself to move the contents of their wallet (though not necessarily the actual cryptographic keys) between compliant DKMS implementations.

    4. Resilient trust infrastructure. DKMS incorporates all the advantages of distributed ledger technology for decentralized access to cryptographically verifiable data. It then adds on top of it a distributed web of trust where any peer can exchange keys, form connections, and issue/accept verifiable credentials from any other peer.

    5. Key recovery. Rather than app-specific or domain-specific key recovery solutions, DKMS can build robust key recovery directly into the infrastructure, including agent-automated encrypted backup, DKMS key escrow services, and social recovery of keys, for example by backing up or sharding keys across trusted DKMS connections and agents.

    "},{"location":"concepts/0051-dkms/dkms-v4/#2-design-goals-and-requirements","title":"2. Design Goals and Requirements","text":""},{"location":"concepts/0051-dkms/dkms-v4/#21-conventional-ckms-requirements-nist-800-130-analysis","title":"2.1. Conventional CKMS Requirements: NIST 800-130 Analysis","text":"

    As a general rule, DKMS requirements are a derivation of CKMS requirements, adjusted for the lack of centralized authorities or systems for key management operations. Evernym\u2019s DKMS team and subcontractors performed an extensive analysis of the applicability of conventional CKMS requirements to DKMS using NIST Special Publication 800-130: A Framework for Designing Cryptographic Key Management Systems. For a summary of the results, see:

    The most relevant special requirements are highlighted in the following sections.

    "},{"location":"concepts/0051-dkms/dkms-v4/#22-decentralization","title":"2.2. Decentralization","text":"

    The DKMS design MUST NOT assume any reliance on a centralized authority for the system as a whole. The DKMS design MUST assume all participants are independent actors identified with DIDs conformant with the Decentralized Identifiers (DID) specification but otherwise acting in their own decentralized security and privacy domains. The DKMS design MUST support options for decentralized key recovery.

    What distinguishes DKMS from conventional CKMS is the fact that the entire design assumes decentralization: outside of the \"meta-policies\" established by the DKMS specification itself, there is no central authority to dictate policies that apply to all users. So global DKMS infrastructure must achieve interoperability organically based on a shared set of specifications, just like the Internet.

    Note that the need to maintain decentralization is most acute when it comes to key recovery: the advantages of decentralization are nullified if key recovery mechanisms reintroduce centralization.

    "},{"location":"concepts/0051-dkms/dkms-v4/#23-privacy-and-pseudonymity","title":"2.3. Privacy and Pseudonymity","text":"

    The DKMS design MUST NOT introduce new means of correlating participants by virtue of using the DKMS standards. The DKMS design SHOULD increase privacy and security by enabling the use of pseudonyms, selective disclosure, and encrypted private channels of communication.

    Conventional PKI and CKMS rarely have anti-correlation as a primary requirement. DKMS should ensure that participants will have more, not less, control over their privacy as well as their security. This facet of DKMS requires a vigilant application of all the principles of Privacy by Design.

    "},{"location":"concepts/0051-dkms/dkms-v4/#24-usability","title":"2.4. Usability","text":"

    DIDs and DKMS components intended to be used by individual identity owners MUST be safely usable without any special training or knowledge of cryptography or key management.

    In many ways this follows from decentralization: in a DKMS, there is no central authority to teach everyone how to use it or require specific user training. It must be automated and intuitive to a very high degree, similar to the usability achieved by modern encrypted OTT messaging products like Whatsapp, iMessage, and Signal.

According to the BYU Internet Security Research Lab, this level of usability is a necessary property of any successfully deployed system. \"We spent the 1990s building and deploying security that wasn\u2019t really needed, and now that it\u2019s actually desirable, we\u2019re finding that nobody can use it\" [Gutmann and Grigg, IEEE Security and Privacy, 2005]. The DKMS needs to be able to support a broad spectrum of applications, with both manual and automatic key management, in order to satisfy the numerous security and usability requirements of those applications.

    Again, this requirement is particularly acute when it comes to key recovery. Because there is no central authority to fall back on, the key recovery options must not only be anticipated and implemented in advance, but they must be easy enough for a non-technical user to employ while still preventing exploitation by an attacker.

    "},{"location":"concepts/0051-dkms/dkms-v4/#25-automation","title":"2.5. Automation","text":"

    To maximize usability, the DKMS design SHOULD automate as many key management functions as possible while still meeting security and privacy requirements.

    This design principle follows directly from the usability requirement, and also from the inherent complexity of maintaining the security, privacy, and integrity of cryptographic primitives combined with the general lack of knowledge of most Internet users about any of these subjects.

    "},{"location":"concepts/0051-dkms/dkms-v4/#26-key-derivation","title":"2.6. Key Derivation","text":"

    In DKMS design it is NOT RECOMMENDED to copy private keys directly between wallets, even over encrypted connections. It is RECOMMENDED to use derived keys whenever possible to enable agent-specific and device-specific revocation.

    This design principle is based on security best practices, and also the growing industry experience with the BIP32 standard for management of the large numbers of private keys required by Bitcoin and other cryptocurrencies. However DKMS architecture can also accomplish this goal in other ways, such as using key signing keys (\"key endorsement\").

    "},{"location":"concepts/0051-dkms/dkms-v4/#27-delegation-and-guardianship","title":"2.7. Delegation and Guardianship","text":"

    The DKMS design MUST enable key management to be delegated by one identity owner to another, including the DID concept of delegation.

Although DKMS infrastructure enables \"self-sovereign identity\"\u2014digital identifiers and identity wallets that are completely under the control of an identity owner and cannot be taken away by a third party\u2014not all individuals have the ability to be self-sovereign. They may be operating at a physical, economic, or network disadvantage that requires another identity owner (individual or organization) to act as an agent on their behalf.

    Other identity owners may simply prefer to have others manage their keys for purposes of convenience, efficiency, or safety. In either case, this means DKMS architecture needs to incorporate the concept of delegation as defined in the Decentralized Identifiers (DID) specification and in the Sovrin Glossary.

    "},{"location":"concepts/0051-dkms/dkms-v4/#28-portability","title":"2.8. Portability","text":"

    The DKMS design MUST enable an identity owner\u2019s DKMS-compliant key management capabilities to be portable across multiple DKMS-compliant devices, applications, and service providers.

    While the NIST 800-130 specifications have an entire section on interoperability, those requirements are focused primarily on interoperability of CKMS components with each other and with external CKMS systems. They do not encompass the need for a decentralized identity owner to be able to port their key management capabilities from one CKMS device, application, or service provider to another.

    This is the DID and DKMS equivalent of telephone number portability, and it is critical not only for the general acceptance of DKMS infrastructure, but to support the ability of DID owners to act with full autonomy and independence. As with telephone number portability, it also helps ensure a robust and competitive marketplace for DKMS-compliant products and services. (NOTE: Note that \"portability\" here refers to the ability of a DID owner to use the same DID across multiple devices, software applications, service providers, etc. It does not mean that a particular DID that uses a particular DID method is portable across different distributed ledgers. DID methods are ledger-specific.)

    "},{"location":"concepts/0051-dkms/dkms-v4/#29-extensibility","title":"2.9. Extensibility","text":"

    The DKMS design SHOULD be capable of being extended to support new cryptographic algorithms, keys, data structures, and modules, as well as new distributed ledger technologies and other security and privacy innovations.

    Section 7 of NIST 800-130 includes several requirements for conventional CKMS to be able to transition to newer and stronger cryptographic algorithms, but it does not go as far as is required for DKMS infrastructure, which must be capable of adapting to evolving Internet security and privacy infrastructure as well as rapid advances in distributed ledger technologies.

It is worth noting that the DKMS specifications will not themselves include a trust framework (also called a governance framework); rather, one or more trust frameworks can be layered over them to formalize certain types of extensions. This provides a flexible and adaptable method of extending DKMS to meet the needs of specific communities.

    "},{"location":"concepts/0051-dkms/dkms-v4/#210-simplicity","title":"2.10. Simplicity","text":"

    Given the inherent complexity of key management, the DKMS design SHOULD aim to be as simple and interoperable as possible by pushing complexity to the edges and to extensions.

    Simplicity and elegance of design are common traits of most successful decentralized systems, starting with the packet-based design of the Internet itself. The less complex a system is, the easier it is to debug, evaluate, and adapt to future changes. Especially in light of the highly comprehensive scope of NIST 800-130, this requirement highlights a core difference with conventional CKMS design: the DKMS specification should NOT try to do everything, e.g., enumerate every possible type of key or role of user or application, but let those be defined locally in a way that is interoperable with the rest of the system.

    "},{"location":"concepts/0051-dkms/dkms-v4/#211-open-system-and-open-standard","title":"2.11. Open System and Open Standard","text":"

    The DKMS design MUST be an open system based on open, royalty-free standards.

    While many CKMS systems are deployed using proprietary technology, the baseline DKMS infrastructure must, like the Internet itself, be an open, royalty-free system. It may, of course, have many proprietary extensions and solutions built on top of it.

    "},{"location":"concepts/0051-dkms/dkms-v4/#3-high-level-architecture","title":"3. High-Level Architecture","text":"

    At a high level, DKMS architecture consists of three logical layers:

    1. The DID layer is the foundational layer consisting of DIDs registered and resolved via distributed ledgers and/or decentralized protocols.

    2. The cloud layer consists of server-side agents and wallets that provide a means of communicating and mediating between the DID layer and the edge layer. This layer enables encrypted peer-to-peer communications for exchange and verification of DIDs, public keys, and verifiable credentials.

    3. The edge layer consists of the local devices, agents, and wallets used directly by identity owners to generate and store most private keys and perform most key management operations.

    Figure 1 is an overview of this three-layer architecture:

    Figure 1: The high-level three-layer DKMS architecture

    Figure 2 is a more detailed picture of the relationship between the different types of agents and wallets in DKMS architecture.

    Figure 2: Diagram of the types of agents and connections in DKMS architecture.

    "},{"location":"concepts/0051-dkms/dkms-v4/#31-the-did-decentralized-identifier-layer","title":"3.1. The DID (Decentralized Identifier) Layer","text":"

    The foundation for DKMS is laid by the DID specification. DIDs can work with any decentralized source of truth such as a distributed ledger or edge protocol for which a DID method\u2014a way of creating, reading, updating, and revoking a DID\u2014has been specified. As globally unique identifiers, DIDs are patterned after URNs (Uniform Resource Names): colon-delimited strings consisting of a scheme name followed by a DID method name followed by a method-specific identifier. Here is an example DID that uses the Sovrin DID method:

    did:sov:21tDAKCERh95uGgKbJNHYp

    Each DID method specification defines:

    1. The specific source of truth against which the DID method operates;

    2. The format of the method-specific identifier;

    3. The CRUD operations (create, read, update, delete) for DIDs and DID documents on that ledger.

DID resolver code can then be written to perform these CRUD operations on the target system with respect to any DID conforming to that DID method specification. Note that some distributed ledger technologies (DLTs) and distributed networks are better suited to DIDs than others. The DID specification itself is neutral with regard to DLTs; it is anticipated that there will be a Darwinian selection of the DLTs best fit for the purpose of DIDs, and that those will see the highest adoption rates.
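As a minimal sketch of the syntax described above, a DID can be split into its scheme, method name, and method-specific identifier (the function name and error handling here are illustrative, not part of any DID library):

```python
def parse_did(did):
    # A DID is a colon-delimited string: a scheme ('did'), a method name,
    # and a method-specific identifier (which may itself contain colons).
    parts = did.split(':', 2)
    if len(parts) != 3 or parts[0] != 'did' or not parts[1] or not parts[2]:
        raise ValueError('not a valid DID: ' + did)
    scheme, method, identifier = parts
    return {'scheme': scheme, 'method': method, 'id': identifier}

# The Sovrin example from the text above:
parse_did('did:sov:21tDAKCERh95uGgKbJNHYp')
```

A resolver would then dispatch on the parsed method name to the CRUD implementation registered for that DID method.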

    From a digital identity perspective, the primary problem that DIDs and DID documents solve is the need for a universally available, decentralized root of trust that any application or system can rely upon to discover and verify credentials about the DID subject. Such a solution enables us to move \"beyond federation\" into a world where any peer can enter into trusted interactions with any other peer, just as the Internet enabled any two peers to connect and communicate.

    "},{"location":"concepts/0051-dkms/dkms-v4/#32-the-cloud-layer-cloud-agents-and-cloud-wallets","title":"3.2. The Cloud Layer: Cloud Agents and Cloud Wallets","text":"

    While the DID specification covers the bottom layer of a decentralized public key infrastructure, the DKMS spec will concentrate on the two layers above it. The first of these, the cloud layer, is the server-side infrastructure that mediates between the ultimate peers\u2014the edge devices used directly by identity owners\u2014and the DID layer.

    While not strictly necessary from a pure logical point-of-view, in practice this server-side DKMS layer plays a similar role in DID infrastructure as email servers play in SMTP email infrastructure or Web servers play in Web infrastructure. Like email or Web servers, cloud agents and cloud wallets are designed to be available 24 x 7 to send and receive communications on behalf of their identity owners. They are also designed to perform communications, encryption, key management, data management, and data storage and backup processes that are not typically feasible for edge devices given their typical computational power, bandwidth, storage capacity, reliability and/or availability.

Cloud agents and wallets will typically be hosted by a service provider called an agency. Agencies could be operated by any type of service provider\u2014ISPs, telcos, search engines, social networks, banks, utility companies, governments, etc. A third-party agency is not a requirement of DKMS architecture\u2014any identity owner can also host their own cloud agents.

    From an architectural standpoint, it is critical that the cloud layer be designed so that it does not \"recentralize\" any aspect of DKMS. In other words, even if an identity owner chooses to use a specific DKMS service provider for a specific set of cloud agent functions, the identity owner should be able to substitute another DKMS service provider at a later date and retain complete portability of her DKMS keys, data and metadata.

    Another feature of the cloud layer is that cloud agents can use DIDs and DID documents to automatically negotiate mutually authenticated secure connections with each other using DID Communication, a protocol being designed for this purpose.

    "},{"location":"concepts/0051-dkms/dkms-v4/#33-the-edge-layer-edge-agents-and-edge-wallets","title":"3.3. The Edge Layer: Edge Agents and Edge Wallets","text":"

    The edge layer is vital to DKMS because it is where identity owners interact directly with computing devices, operating systems, and applications. This layer consists of DKMS edge agents and edge wallets that are under the direct control of identity owners. When designed and implemented correctly, edge devices, agents, and wallets can also be the safest place to store private keys and other cryptographic material. They are the least accessible for network intrusion, and even a successful attack on any single client device would yield the private data for only a single user or at most a small family of users.

    Therefore, the edge layer is where most DKMS private keys and link secrets are generated and where most key operations and storage are performed. To meet the security and privacy requirements, DKMS architecture makes the following two assumptions:

    1. A DKMS agent is always installed in an environment that includes a secure element or Trusted Platform Module (for simplicity, this document will use the term \"secure element\" or \u201cSE\u201d for this module).

    2. Private keys used by the agent never leave the secure element.

By default, edge agents are always paired with a corresponding cloud agent due to the many DKMS operations that a cloud agent enables, including communications via the DKMS protocol to other edge and cloud agents. However, this is not strictly necessary. As shown in Figure 1, edge agents could also communicate directly, peer-to-peer, via a protocol such as Bluetooth, NFC, or another mesh network protocol. Edge agents may also establish secure connections with cloud agents or with others using DID Communication.

    "},{"location":"concepts/0051-dkms/dkms-v4/#34-verifiable-credentials","title":"3.4. Verifiable Credentials","text":"

    By themselves, DIDs are \"trustless\", i.e., they carry no more inherent trust than an IP address. The primary difference is that they provide a mechanism for resolving the DID to a DID document containing the necessary cryptographic keys and endpoints to bootstrap secure communications with the associated agent.

    To achieve a higher level of trust, DKMS agents may exchange digitally signed credentials called verifiable credentials. Verifiable credentials are being standardized by the W3C Working Group of the same name. The purpose is summarized in the charter:

    It is currently difficult to express banking account information, education qualifications, healthcare data, and other sorts of machine-readable personal information that has been verified by a 3rd party on the Web. These sorts of data are often referred to as verifiable credentials. The mission of the Verifiable Credentials Working Group is to make expressing, exchanging, and verifying credentials easier and more secure on the Web.

    The following diagram from the W3C Verifiable Claims Working Group illustrates the primary roles in the verifiable credential ecosystem and the close relationship between DIDs and verifiable credentials.

    Figure 3: The W3C Verifiable Credentials ecosystem

    Note that what is being verified in a verifiable credential is the signature of the credential issuer. The strength of the actual credential depends on the degree of trust the verifier has in the issuer. For example, if a bank issues a credential saying that the subject of the credential has a certain credit card number, a merchant can rely on the credential if the merchant has a high degree of trust in the bank.

    The Verifiable Claims Working Group is standardizing both the format of credentials and of digital signatures on the credentials. Different digital signature formats require different cryptographic key material. For example, credentials that use a zero-knowledge signature format such as Camenisch-Lysyanskaya (CL) signatures require a \"master secret\" or \u201clink secret\u201d that enables the prover (the identity owner) to make proofs about the credential without revealing the underlying data or signatures in the credential (or the prover's DID with respect to the credential issuer). This allows for \"credential presentations\" that are unlinkable to each other. Link secrets are another type of cryptographic key material that must be stored in DKMS wallets.

    "},{"location":"concepts/0051-dkms/dkms-v4/#4-ledger-architecture","title":"4. Ledger Architecture","text":"

    A fundamental feature of DIDs and DKMS is that they will work with any modern blockchain, distributed ledger, distributed database, or distributed file system capable of supporting a DID method (which has a relatively simple set of requirements\u2014see the DID specification). For simplicity, this document will refer to all of these systems as \"ledgers\".

    There are a variety of ledger designs and governance models as illustrated in Figure 4.

    Figure 4: Blockchain and distributed ledger governance models

    Public ledgers are available for anyone to access, while private ledgers have restricted access. Permissionless ledgers allow anyone to run a validator node of the ledger (a node that participates in the consensus protocol), and thus require proof-of-work, proof-of-stake, or other protections against Sybil attacks. Permissioned ledgers restrict who can run a validator node, and thus can typically operate at a higher transaction rate.

    For decentralized identity management, a core requirement of DIDs and DKMS is that they can interoperate with any of these ledgers. However for privacy and scalability reasons, certain types of ledgers play specific roles in DKMS architecture.

    "},{"location":"concepts/0051-dkms/dkms-v4/#41-public-ledgers","title":"4.1. Public Ledgers","text":"

    Public ledgers, whether permissionless or permissioned, are crucial to DKMS infrastructure because they provide an open global root of trust. To the extent that a particular public ledger has earned the public\u2019s trust that it is strong enough to withstand attacks, tampering, or censorship, it is in a position to serve as a strong, universally-available root of trust for DIDs and the DID documents necessary for decentralized key management.

    Such a publicly available root of trust is particularly important for:

    1. Public DIDs (also called \"anywise DIDs\") that need to be recognized as trust anchors by a large number of verifiers.

    2. Schema and credential definitions needed for broad semantic interoperability of verifiable credentials.

    3. Revocation registries needed for revocation of verifiable credentials that use proofs.

    4. Policy registries needed for authorization and revocation of DKMS agents (see section 9.2).

    5. Anchoring transactions posted for verification or coordination purposes by smart contracts or other ledgers, including microledgers (below).

    "},{"location":"concepts/0051-dkms/dkms-v4/#42-private-ledgers","title":"4.2. Private Ledgers","text":"

    Although public ledgers may also be used for private DIDs\u2014DIDs that are intended for use only by a restricted audience\u2014this requires that their DID documents be carefully provisioned and managed to avoid any information that can be used for attack or correlation. This threat is lessened if private DIDs are registered and managed on a private ledger that has restricted access. However the larger the ledger, the more it will require the same precautions as a public ledger.

    "},{"location":"concepts/0051-dkms/dkms-v4/#43-microledgers","title":"4.3. Microledgers","text":"

    From a privacy perspective\u2014and particularly for compliance with privacy regulations such as the EU General Data Protection Regulation (GDPR)\u2014the ideal identifier is a pairwise pseudonymous DID. This DID (and its corresponding DID document) is only known to the two parties in a relationship.

    Because pairwise pseudonymous DID documents contain the public keys and service endpoints necessary for the respective DKMS agents to connect and send encrypted, signed messages to each other, there is no need for pairwise pseudonymous DIDs to be registered on a public ledger or even a conventional private ledger. Rather they can use microledgers.

    A microledger is essentially identical to a conventional private ledger except it has only as many nodes as it has parties to the relationship. The same cryptographic steps are used:

    1. Transactions are digitally signed by authorized private key(s).

    2. Transactions are cryptographically ordered and tamper evident.

    3. Transactions are replicated efficiently across agents using simple consensus protocols. These protocols, and the microledgers that provide their persistent state, constitute a root of trust for the relationship.
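The three properties above can be sketched as a signed hash chain. This is an illustrative stand-in only: HMAC-SHA256 takes the place of the Ed25519 signatures real agents would use, and the entry layout is hypothetical:

```python
import hashlib
import hmac

def append_txn(chain, key, payload):
    # Each transaction commits to the hash of its predecessor, making
    # the log cryptographically ordered and tamper evident.
    prev = chain[-1]['hash'] if chain else '0' * 64
    body = prev + '|' + payload
    # HMAC stands in for the digital signature of an authorized key.
    sig = hmac.new(key, body.encode(), hashlib.sha256).hexdigest()
    entry = {'payload': payload, 'prev': prev, 'sig': sig,
             'hash': hashlib.sha256((body + sig).encode()).hexdigest()}
    chain.append(entry)
    return chain

def verify(chain, key):
    # Recompute the chain from genesis; any edit breaks a link.
    prev = '0' * 64
    for e in chain:
        body = prev + '|' + e['payload']
        sig = hmac.new(key, body.encode(), hashlib.sha256).hexdigest()
        if e['prev'] != prev or e['sig'] != sig:
            return False
        prev = hashlib.sha256((body + sig).encode()).hexdigest()
    return True

log = []
append_txn(log, b'relationship-key', 'did-doc-v1')
append_txn(log, b'relationship-key', 'rotate-key')
assert verify(log, b'relationship-key')
```

Replicating such a log between the two parties' agents, and checking it on receipt, is what lets each side detect a peer that has fallen out of sync or been tampered with.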

    Microledgers are effectively permissionless because anyone can operate one in cooperation with anyone else\u2014only the parties to the microledger relationship need to agree. If there is a danger of the parties to the microledger getting \"out of sync\" (e.g., if an attacker has compromised one party's agents such that the party's state is deadlocked, or one party's agents have all been lost so that the party is unable to receive a change-of-state from the other), the party\u2019s agents can register a dead drop point. This is a pre-established endpoint and keys both parties can use to re-sync their microledgers and restore their connection.

    Microledgers play a special role in DKMS architecture because they are used to maintain pairwise pseudonymous connections between DKMS agents. The use of microledgers also helps enormously with the problems of scale\u2014they can significantly reduce the load on public ledgers by moving management of pairwise pseudonymous DIDs and DID documents directly to DKMS agents.

    The protocols associated with microledgers include:

    Today, the only known example of this approach is the did:peer method. It is possible that alternative implementations will emerge.

    "},{"location":"concepts/0051-dkms/dkms-v4/#5-key-management-architecture","title":"5. Key Management Architecture","text":"

    DKMS adheres to the principle of key separation where keys for different purposes should be cryptographically separated. This avoids use of the same key for multiple purposes. Keys are classified based on usage and the nature of information being protected. Any change to a key requires that the relevant DID method ensure that the change comes from the identity owner or her authorized delegate. All requests by unauthorized entities must be ignored or flagged by the DKMS agent. If anyone else can change any key material, the security of the system is compromised.

    DKMS architecture addresses what keys are needed, how they are used, where they should be stored and protected, how long they should live, and how they are revoked and/or recovered when lost or compromised.

    "},{"location":"concepts/0051-dkms/dkms-v4/#51-key-types-and-key-descriptions","title":"5.1. Key Types and Key Descriptions","text":"

    NIST 800-130 framework requirement 6.1 requires a CKMS to specify and define each key type used. The following key layering and policies can be applied.

    1. Master keys:

      1. Keys at the highest level, in that they themselves are not cryptographically protected. They are distributed manually or initially installed and protected by procedural controls and physical or electronic isolation.

      2. MAY be used for deriving other keys;

      3. MUST NOT ever be stored in cleartext.

      4. SHOULD never be stored in a single encrypted form, but only:

        1. Saved in secure offline storage;

        2. Secured in highly secure encrypted vaults, such as a secure element, TPM, or TEE.

        3. Distributed using a technique such as Shamir secret sharing;

        4. Derived from secure multiparty computation.

        5. Saved somewhere that requires secure interactions to access (which could mean slower retrieval times).

      5. SHOULD be used only for creating signatures as proof of delegation for other keys.

      6. MUST be forgotten immediately after use\u2013securely erased from memory, disk, and every location that accessed the key in plain text.

    2. Key encrypting keys

      1. Symmetric or public keys used for key transport or storage of other keys.

      2. MAY themselves be secured under other keys.

      3. If they are not ephemeral, they SHOULD be stored in secure access-controlled devices, used in those devices and never exposed.

    3. Data keys

      1. Used to provide cryptographic operations on user data (e.g., encryption, authentication). These are generally short-term symmetric keys; however, asymmetric signature private keys may also be considered data keys, and these are usually longer-term keys.

      2. SHOULD be dedicated to specific roles, such as authentication, securing communications, protecting storage, proving authorized delegation, constructing credentials, or generating proofs.

    The keys at one layer are used to protect items at a lower level. This constraint is intended to make attacks more difficult, and to limit exposure resulting from compromise of a specific key. For example, compromise of a key-encrypting-key (of which a master key is a special case) affects all keys protected thereunder. Consequently, special measures are used to protect master keys, including severely limiting access and use, hardware protection, and providing access to the key only under shared control.
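The layering described above can be illustrated with labeled one-way derivation, so that a key at one layer can produce keys for the layer below it while compromise of a child key never exposes its parent. This is a sketch in the style of an HKDF-expand step, not a prescribed DKMS algorithm; the labels are hypothetical:

```python
import hashlib
import hmac
import secrets

def derive(parent_key, label):
    # Labeled HMAC-SHA256 as a one-way child-key derivation: the child
    # is deterministic given (parent, label), but the parent cannot be
    # recovered from the child.
    return hmac.new(parent_key, label.encode(), hashlib.sha256).digest()

master = secrets.token_bytes(32)              # top layer: never stored in cleartext
kek = derive(master, 'key-encrypting-key')    # middle layer: protects other keys
session = derive(kek, 'data-key/session-1')   # bottom layer: short-term data key
```

Because each layer only ever sees keys derived downward, revoking or rotating a data key touches nothing above it, while the master key can stay offline under the special protections the text describes.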

    In addition to key layering hierarchy, keys may be classified based on temporal considerations:

    1. Long-term keys. These include master keys, often key-encrypting keys, and keys used to facilitate key agreement.

    2. Short-term keys. These include keys established by key transport or key agreement, often used as data keys or session keys for a single communications session.

    In general, communications applications involve short-term keys, while data storage applications require longer-term keys. Long-term keys typically protect short-term keys.

    The following policies apply to key descriptions:

    1. Any DKMS-compliant key SHOULD use a DID-compliant key description.

    2. This key description MUST be published at least in the governing DID method specification.

    3. This key description SHOULD be aggregated in the Key Description Registry maintained by the W3C Credentials Community Group.

    DKMS key management must encompass the keys needed by different DID methods as well as different verifiable credentials exchange protocols and signature formats. The following list includes the initial key types required by the Sovrin DID Method Spec and the Sovrin protocol for verifiable credentials exchange:

    1. Link secret: (one per entity) A high-entropy 256-bit integer included in every credential in blinded form. Used for proving credentials were issued to the same logical identity. A logical identity only has one link secret. The first DKMS agent provisioned by an identity owner creates this value and stores it in an encrypted wallet or in a secure element if available. Agents that receive credentials and present proofs must know this value. It can be transferred over secure channels between agents as necessary. If the link secret is changed, credentials issued with the new link secret value cannot be correlated with credentials using the old link secret value.

    2. DID keys: (one per relationship per agent) Ed25519 keys used for non-repudiation signing and verification for DIDs. Each agent manages their own set of DID keys.

    3. Agent policy keys: (one per agent) Ed25519 key pairs used with the agent policy registry. See section 9.2. The public key is stored with the agent policy registry. Transactions made to the policy registry are signed by the private key. The keys are used in zero-knowledge during proof presentation to show the agent is authorized by the identity owner to present the proof. Unauthorized agents MUST NOT be trusted by verifiers.

    4. Agent recovery keys: (a fraction per trustee) Ed25519 keys. A public key is stored by the agent and used for encrypting backups. The private key is saved to an offline medium or split into shares and given to trustees. To encrypt a backup, an ephemeral X25519 key pair is created where the ephemeral private key is used to perform a Diffie-Hellman agreement with the public recovery key to create a wallet encryption key. The private ephemeral key is forgotten and the ephemeral public key is stored with the encrypted wallet backup. To decrypt a backup, the private recovery key performs a Diffie-Hellman agreement with the ephemeral public key to create the same wallet encryption key.
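The backup flow above can be sketched end to end. A toy finite-field Diffie-Hellman stands in for X25519 here, and the group parameters are illustrative only, not a safe production choice:

```python
import hashlib
import secrets

# Toy Diffie-Hellman group as a stand-in for X25519 (illustrative only).
P = 2**127 - 1   # a Mersenne prime; real agents would use Curve25519
G = 5

def keypair():
    priv = secrets.randbelow(P - 2) + 1
    return priv, pow(G, priv, P)

def shared_key(priv, other_pub):
    # Both sides hash the DH secret down to a wallet encryption key.
    secret = pow(other_pub, priv, P)
    return hashlib.sha256(str(secret).encode()).digest()

# Setup: the recovery key pair; the private half goes to offline media
# or is split into shares for trustees.
recovery_priv, recovery_pub = keypair()

# Encrypt a backup: make an ephemeral pair, derive the wallet key,
# forget the ephemeral private key, store the ephemeral public key
# alongside the encrypted backup.
eph_priv, eph_pub = keypair()
wallet_key = shared_key(eph_priv, recovery_pub)

# Recover: the offline recovery private key recomputes the same wallet
# key from the stored ephemeral public key.
recovered_key = shared_key(recovery_priv, eph_pub)
assert wallet_key == recovered_key
```

The point of the construction is that encryption needs only the public recovery key, so backups can be made continuously while the private recovery key never comes online until recovery is actually needed.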

    5. Wallet encryption keys: (one per wallet segment) 256 bit symmetric keys for encrypting wallets and backups. The key is generated by an agent then wrapped using secure enclaves (preferred) or derived from user inputs like strong passwords (see section 5.2). It MUST NOT be stored directly in secure enclaves when portability is a requirement.

    6. Wallet permission keys: (one per permission) Symmetric keys or Ed25519 keypairs that allow fine-grained permissions over various data stored in the wallet, e.g., wallet read-only access, credential group write access, or write-all access.

    "},{"location":"concepts/0051-dkms/dkms-v4/#52-key-generation","title":"5.2. Key Generation","text":"

    NIST 800-130 framework requirement 6.19 requires that a CKMS design shall specify the key-generation methods to be used in the CKMS for each type of key. The following policies can be applied.

    1. For any key represented in a DID document, the generation method MUST be included in the key description specification.

    2. Any parameters necessary to understand the generated key MUST be included in the key description.

    3. The key description SHOULD NOT include any metadata that enables correlation across key pairs.

    4. DKMS key types SHOULD use derivation functions that simplify and standardize key recovery.

A secure method for key creation is to use a seed value combined with a derivation algorithm. Key derivation functions (KDFs), pseudo-random number generators (PRNGs), and Bitcoin\u2019s BIP32 standard for hierarchical deterministic (HD) keys are all examples of key creation using a seed value with a derivation function or mapping.

Hardware-based key generation (e.g., in HSMs or TPMs) is usually more secure, as such devices typically incorporate additional entropy sources, such as white noise and temperature variation, that are harder to corrupt.

    If KDFs or PRNGs are used, a passphrase, biometric input, or social data from multiple users combined with random salt SHOULD be used as the input to create the seed. Alternately a QR code or words from a list such as the PGP word list can be used. In either case, the input MUST NOT be stored anywhere connected to the Internet.
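A minimal sketch of stretching a passphrase plus random salt into a seed, using scrypt from the Python standard library (the cost parameters here are illustrative, not mandated by DKMS):

```python
import hashlib
import secrets

def seed_from_passphrase(passphrase, salt):
    # Memory-hard KDF: stretches a user-supplied passphrase into a
    # 256-bit seed. The passphrase itself is never stored anywhere;
    # only the random salt needs to be kept with the derived material.
    return hashlib.scrypt(passphrase.encode(), salt=salt,
                          n=2**14, r=8, p=1, dklen=32)

salt = secrets.token_bytes(16)
seed = seed_from_passphrase('correct horse battery staple', salt)
```

The resulting seed can then feed a deterministic derivation scheme such as BIP32, so the same passphrase and salt regenerate the same key hierarchy during recovery.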

    "},{"location":"concepts/0051-dkms/dkms-v4/#53-multi-device-management","title":"5.3. Multi-Device Management","text":"

    Each device hosts an edge agent and edge wallet. All keys except for the link secret are unique per device. This allows for fine-grained (e.g., per relationship) control of authorized devices, as well as remote revocation. As part of the process for provisioning an edge agent, owners must choose what capabilities to grant. Capabilities must be flexible so owners can add or remove them depending on their needs.

    Wallet permissions SHOULD be controlled using keys that grant fixed permissions. One example of such a system is Cryptree.

    It is recommended that private keys never be reused across agents. If a secret is shared across agents, then there must be a way to remotely revoke the agent using a distributed ledger such that the secret is rendered useless on that agent. The DKMS architecture uses ledgers and diffused trust to enable fine-grained control over individual keys and entire devices. An agent policy registry located on a ledger allows an owner to define agent authorizations and control over those authorizations. (See 9.2 Policy Registries). Agents must notify each other when an agent is added to or removed from an authorized pool, with a cloud agent acting as the synchronization hub, in order to warn identity owners of unauthorized or malicious agents.

    Techniques like distributed hash tables or gossip protocols SHOULD be employed to keep device data synchronized.

    "},{"location":"concepts/0051-dkms/dkms-v4/#54-key-portability-and-migration","title":"5.4. Key Portability and Migration","text":"

    As mentioned in section 2.8, portability of DKMS wallets and keys is an important requirement\u2014if agencies or other service providers could \"lock-in\" identity owners, DIDs and DKMS would no longer be decentralized. Thus the DKMS protocol MUST support identity owners migrating their edge agents and cloud agents to the agency of their choice (including self-hosting). Agency-to-agency migration is not fully defined in this version of DKMS architecture, but it will be specified in a future version. See section 11.

    "},{"location":"concepts/0051-dkms/dkms-v4/#6-recovery-methods","title":"6. Recovery Methods","text":"

    In key management, key recovery specifies how keys are reconstituted in case of loss or compromise. In decentralized identity management, recovery is even more important since identity owners have no \"higher authority\" to turn to for recovery.

    In this version of DKMS architecture, two recovery methods are recommended:

    1. Offline recovery uses physical media or removable digital media to store recovery keys.

    2. Social recovery employs \"trustees\" who store encrypted recovery data on an identity owner's behalf\u2014typically in the trustees' own agent(s).

    These methods are not exclusive, i.e., both can be employed for additional safety.

    Both methods operate against encrypted backups of the identity owner\u2019s digital identity wallet. Backups are encrypted by the edge agent with a backup recovery key. See section 5.1. While such backups may be stored in many locations, for simplicity this version of DKMS architecture assumes that cloud agents will provide an automated backup service for their respective edge agents.

    Future versions of this specification MAY specify additional recovery methods, including remote biometric recovery and recovery cooperatives.

    "},{"location":"concepts/0051-dkms/dkms-v4/#61-offline-recovery","title":"6.1. Offline Recovery","text":"

    Offline recovery is the conventional form of backup. It can be performed using many different methods. In DKMS architecture, the standard strategy is to store an encrypted backup of the identity owner\u2019s wallet at the owner\u2019s cloud agent, and then store a private backup recovery key offline. The private backup recovery key can be printed to a paper wallet as one or more QR codes or text strings. It can also be saved to a file on a detachable media device such as a removable disk, hardware wallet or USB key.

    The primary downside to offline recovery is that the identity owner must not only safely store the offline copy, but also remember its location and be able to access it when it is needed for recovery.

    "},{"location":"concepts/0051-dkms/dkms-v4/#62-social-recovery","title":"6.2. Social Recovery","text":"

    Social recovery has two advantages over offline recovery:

    1. The identity owner does not have to create an offline backup\u2014the social recovery setup process can be accomplished entirely online.

    2. The identity owner does not have to safely store and remember the location of the offline backup.

    However, it is not a panacea:

    1. The identity owner still needs to remember her trustees.

    2. Social recovery opens the opportunity, however remote, for an identity owner\u2019s trustees to collude to take over the identity owner\u2019s digital identity wallet.

    A trustee is any person, institution, or service that agrees to assist an identity owner during recovery by (1) securely storing recovery material (called a \"share\") until a recovery is needed, and (2) positively identifying the identity owner and the authenticity of a recovery request before authorizing release of their shares.

    This second step is critical. Trustees MUST strongly authenticate an identity owner during recovery so as to detect if an attacker is trying to exploit them to steal a key or secret. Software should aid in ensuring the authentication is strong, for example by confirming the trustee actually conversed with Alice, as opposed to merely receiving an email from her.

    For social recovery, agents SHOULD split keys into shares and distribute them to trustees instead of sending each trustee a full copy. When recovery is needed, trustees can be contacted and the key will be recovered once enough shares have been received. An efficient and secure threshold secret sharing scheme, like Shamir's Secret Sharing, SHOULD be used to generate the shares and recombine them. The number of trustees to use is the decision of the identity owner, however it is RECOMMENDED to use at least three with a threshold of at least two.

    The shares may be encrypted by a key derived from a KDF or PRNG whose input is something only the identity owner knows, has, or is or any combination of these.

    Figure 5: Key sharing using Shamir Secret Sharing
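    The threshold scheme described above can be sketched as follows. This is a minimal illustration of Shamir's Secret Sharing over a prime field, not a production implementation; a vetted library should be used in practice, and real deployments must also handle secrets larger than the field prime.

```python
# Minimal sketch of Shamir's Secret Sharing: split a secret into n shares
# so that any k of them reconstruct it, but fewer reveal nothing.
# Illustrative only; use a vetted library for real recovery material.
import random

PRIME = 2**127 - 1  # field prime; must exceed the secret value

def split(secret: int, n: int, k: int):
    """Split `secret` into n shares with reconstruction threshold k."""
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(k - 1)]
    def f(x):
        return sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME
    return [(x, f(x)) for x in range(1, n + 1)]

def combine(shares):
    """Recombine k shares via Lagrange interpolation at x = 0."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = (num * -xj) % PRIME
                den = (den * (xi - xj)) % PRIME
        # pow(den, PRIME - 2, PRIME) is the modular inverse of den
        secret = (secret + yi * num * pow(den, PRIME - 2, PRIME)) % PRIME
    return secret

shares = split(123456789, n=3, k=2)   # three trustees, threshold of two
assert combine(shares[:2]) == 123456789
```

    With n=3 and k=2, any single trustee learns nothing about the key, while any two trustees together can restore it, matching the recommended minimums above.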

    As interest in decentralized identity has grown, social recovery has become a major focus of additional research and development in the industry. For example, at the Rebooting the Web of Trust #8 conference held in Barcelona 1-3 March 2019, six papers on the topic were submitted (note that several of these also have extensive bibliographies):

    1. A New Approach to Social Key Recovery by Christopher Allen and Mark Friedenbach

    2. Security Considerations of Shamir's Secret Sharing by Peg

    3. Implementing of Threshold Schemes by Daan Sprenkels

    4. Social Key Recovery Design and Implementation by Hank Chiu, Hankuan Yu, Justin Lin & Jon Tsai

    5. SLIP-0039: Shamir's Secret-Sharing for Mnemonic Codes by The TREZOR Team

    In addition, two new papers on the topic were started at the conference and are still in development at the time of publication:

    1. Shamir Secret Sharing Best Practices by Christopher Allen et al.

    2. Evaluating Social Schemes for Recovering Control of an Identifier by Sean Gilligan, Peg, Adin Schmahmann, and Andrew Hughes

    "},{"location":"concepts/0051-dkms/dkms-v4/#7-recovery-from-key-loss","title":"7. Recovery From Key Loss","text":"

    Key loss as defined in this document means the owner can assume there is no further risk of compromise. Such scenarios include devices rendered inoperable by water damage, electrical failure, breakage, fire, hardware failure, acts of God, etc.

    "},{"location":"concepts/0051-dkms/dkms-v4/#71-agent-policy-key-loss","title":"7.1. Agent Policy Key Loss","text":"

    Loss of an agent policy key means the agent no longer has proof authorization and cannot make updates to the agent policy registry on the ledger. Identity owners SHOULD have backup agent policy keys that can revoke the current active agent policy key from the agent policy registry and issue a new agent policy key to the replacement agent.

    "},{"location":"concepts/0051-dkms/dkms-v4/#72-did-key-loss","title":"7.2. DID Key Loss","text":"

    Loss of a DID key means the agent can no longer authenticate over the channel and cannot rotate the key. This key MUST be recoverable from the encrypted backup.

    "},{"location":"concepts/0051-dkms/dkms-v4/#73-link-secret-loss","title":"7.3. Link Secret Loss","text":"

    Loss of the link secret means the owner can no longer generate proofs for the verifiable credentials in her possession or be issued credentials under the same identity. The link secret MUST be recoverable from the encrypted backup.

    "},{"location":"concepts/0051-dkms/dkms-v4/#74-credential-loss","title":"7.4. Credential Loss","text":"

    If recovery from a backup is not possible, loss of credentials requires the owner to contact his credential issuers, reauthenticate, and request that the issuers revoke the existing credentials. Credentials SHOULD be recoverable from the encrypted backup.

    "},{"location":"concepts/0051-dkms/dkms-v4/#75-relationship-state-recovery","title":"7.5. Relationship State Recovery","text":"

    Recovery of relationship state due to any of the above key-loss scenarios is enabled via the dead drop mechanism.

    "},{"location":"concepts/0051-dkms/dkms-v4/#8-recovery-from-key-compromise","title":"8. Recovery From Key Compromise","text":"

    Key compromise means that private keys and/or master keys have become or can become known either passively or actively.

    1. \"Passively\" means the identity owner is not aware of the compromise. An attacker may be eavesdropping or have remote communications with the agent but has not provided direct evidence of intrusion or malicious activity, such as impersonating the identity owner or committing fraud.

    2. \"Actively\" means the identity owner knows her keys have been exposed. For example, the owner is locked out of her own devices and/or DKMS agents and wallets, or becomes aware of abuse or fraud.

    To protect against either, three techniques are available: rotation, revocation, and quick recovery. Rotation helps to limit a passive compromise, while revocation and quick recovery help to limit an active one.

    "},{"location":"concepts/0051-dkms/dkms-v4/#81-key-rotation","title":"8.1. Key Rotation","text":"

    Keys SHOULD be changed periodically to limit the window of exposure from an undetected compromise. When keys are rotated, the previous keys are revoked and new ones are added. It is also RECOMMENDED that keys expire.

    "},{"location":"concepts/0051-dkms/dkms-v4/#82-key-revocation","title":"8.2. Key Revocation","text":"

    DKMS keys MUST be revocable. Verifiers MUST be able to determine the revocation status of a DKMS key. It is not good enough to simply forget a key because that does not protect against key compromise. Control over who can update a revocation list MUST be enforced so attackers cannot maliciously revoke user keys. (Note that a key revoked by an attacker reveals that the attacker knows a secret.)

    "},{"location":"concepts/0051-dkms/dkms-v4/#83-agent-policy-key-compromise","title":"8.3. Agent Policy Key Compromise","text":"

    Compromise of an agent\u2019s policy key means an attacker can use the agent to impersonate the owner for proof presentation and make changes to the agent policy registry. Owners must be able to revoke any of their devices to prevent impersonation. For example, if the owner knows her device has been stolen, she will want to revoke all device permissions so even if the thief manages to break into the agent the DKMS data value is limited. Identity owners SHOULD have backup agent policy keys that are authorized to revoke the compromised key from the agent policy registry and issue a new agent policy key to the replacement agent.

    "},{"location":"concepts/0051-dkms/dkms-v4/#84-did-key-compromise","title":"8.4. DID Key Compromise","text":"

    Compromise of a DID key means an attacker can use the channel to impersonate the owner as well as potentially lock the owner out from further use if the attacker rotates the key before the owner realizes what has happened. This attack surface is minimized if keys are rotated on a regular basis. An identity owner MUST also be able to trigger a rotation manually upon discovery of a compromise. Owners SHOULD implement a diffuse trust model among multiple agents where a single compromised agent is not able to revoke a key because more than one agent is required to approve the action.

    "},{"location":"concepts/0051-dkms/dkms-v4/#85-link-secret-compromise","title":"8.5. Link Secret Compromise","text":"

    Compromise of the owner's link secret means an attacker may impersonate the owner when receiving verifiable credentials or use existing credentials for proof presentation. Note that unless the attacker is also able to use an agent that has \"PROVE\" authorization, the verifier will be able to detect an unauthorized agent. At this point the owner SHOULD revoke her credentials and request that they be reissued with a new link secret.

    "},{"location":"concepts/0051-dkms/dkms-v4/#86-credential-compromise","title":"8.6. Credential Compromise","text":"

    Compromise of a verifiable credential means an attacker has learned the attributes of the credential. Unless the attacker also manages to compromise the link secret and an authorized agent, he is not able to assert the credential, so the only loss is control of the underlying data.

    "},{"location":"concepts/0051-dkms/dkms-v4/#87-relationship-state-recovery","title":"8.7. Relationship State Recovery","text":"

    Recovery of relationship state due to any of the above key-compromise scenarios is enabled via the dead drop mechanism.

    "},{"location":"concepts/0051-dkms/dkms-v4/#9-dkms-protocol","title":"9. DKMS Protocol","text":""},{"location":"concepts/0051-dkms/dkms-v4/#91-microledger-transactions","title":"9.1. Microledger Transactions","text":"

    DKMS architecture uses microledgers to represent the state of the authorized keys in a relationship. Just as with conventional ledgers, the structure is such that the parties to a relationship can verify it at any moment in time, as can a third party for auditing purposes. Microledgers are used between two parties where each party signs transactions using their DID keys. This allows changes to DID keys to be propagated in a secure manner where each transaction is signed with an existing key authorized in earlier transactions.
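    The chaining idea can be sketched as follows. This is a conceptual illustration only: HMAC stands in for the Ed25519 signatures a real agent would use, and the event format is an assumption, not part of the DKMS protocol.

```python
# Conceptual sketch of a microledger: each key-change event links to the
# previous event by hash and is authorized with a key that earlier
# transactions already authorized. HMAC is a stand-in for real signatures.
import hashlib
import hmac
import json
import os

def key_event(prev_hash: bytes, new_key_id: str, signing_key: bytes) -> dict:
    body = {"prev": prev_hash.hex(), "authorized_key": new_key_id}
    payload = json.dumps(body, sort_keys=True).encode()
    sig = hmac.new(signing_key, payload, hashlib.sha256).hexdigest()
    return {"body": body, "sig": sig, "hash": hashlib.sha256(payload).digest()}

genesis_key = os.urandom(32)                         # first authorized key
e1 = key_event(b"\x00" * 32, "AB1-vk", genesis_key)  # authorize a DID key
e2 = key_event(e1["hash"], "AB2-vk", genesis_key)    # rotation, chained to e1

# Either party can verify the chain: e2 must reference e1's hash
assert e2["body"]["prev"] == e1["hash"].hex()
```

    Because each event both references the prior event's hash and carries an authorization from an already-authorized key, neither party can silently rewrite the key history of the relationship.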

    "},{"location":"concepts/0051-dkms/dkms-v4/#92-policy-registries","title":"9.2. Policy Registries","text":"

    Each Identity Owner creates an authorization policy on the ledger. The policy allows an agent to have some combination of authorizations. This is a public record, but no information needs to be shared with any other party. Its purpose is to enable flexible management of device authorizations by allowing agents to prove in zero knowledge that they are authorized by the identity owner.

    When an agent is granted PROVE authorization by adding a commitment to the agent's secret value to the PROVE section of the authorization policy, the ledger adds a second commitment to the global prover registry. When an agent loses its PROVE authorization, the ledger removes the associated commitment from the prover registry. The ledger can enforce sophisticated owner-defined rules, like requiring multiple signatures to authorize updates to the policy.

    An agent can now prove in zero knowledge that it is authorized because the ledger maintains a global registry for all agents with PROVE authorization for all identity owners. An agent can prove that its secret value and the policy address in which that value is given PROVE authorization are part of the global policy registry without revealing the secret value, or the policy address. By using a zero knowledge proof, the global policy registry does not enable correlation of any specific identity owner.
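    The commitment idea can be illustrated with a simple hash commitment. Note this is only to convey the hiding and binding intuition: the zero-knowledge registry proofs described above require an algebraic commitment scheme (such as Pedersen commitments), not a plain hash.

```python
# Sketch of committing to an agent's secret value without revealing it.
# A plain hash commitment is shown only for intuition; ZK registry proofs
# need an algebraic scheme such as Pedersen commitments.
import hashlib
import os

secret_value = os.urandom(32)   # the agent's PROVE secret value
blinding = os.urandom(32)       # fresh randomness makes the commitment hiding

commitment = hashlib.sha256(secret_value + blinding).digest()

# Opening the commitment (revealing both inputs) lets a verifier check it;
# a zero-knowledge proof would instead establish this without revealing them.
assert hashlib.sha256(secret_value + blinding).digest() == commitment
```

    The registry stores only the commitment, so publishing it reveals nothing about the secret value, yet the agent cannot later claim a different secret was committed.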

    "},{"location":"concepts/0051-dkms/dkms-v4/#93-authenticated-encryption","title":"9.3. Authenticated Encryption","text":"

    The use of DIDs and microledgers allows communication between agents to use authenticated encryption. Agents use their DID verification keys to authenticate each other whenever a communication channel is established. Microledgers allow DID keys to have rooted mutual authentication for any two parties with a DID. In the sequence diagrams in section 10, all agent-to-agent communication that uses authenticated encryption is indicated by bold blue arrows.

    "},{"location":"concepts/0051-dkms/dkms-v4/#94-recovery-connection","title":"9.4. Recovery connection","text":"

    Each Identity Owner begins a recovery operation by requesting their respective recovery information from trustees. After a trustee has confirmed the request originated with the identity owner and not a malicious party, a recovery connection is formed. This special type of connection is meant only for recovery purposes. Recovery connections are decommissioned when the minimum number of recovery shares have been received and the original encrypted wallet data has been restored. Identity owners can then resume normal connections because their keys have been recovered. Trustees SHOULD only send recovery shares to identity owners over a recovery connection.

    "},{"location":"concepts/0051-dkms/dkms-v4/#95-dead-drops","title":"9.5. Dead Drops","text":"

    In scenarios where two parties to a connection move agencies (and thus service endpoints) at the same time, or one party's agents have been compromised such that it can no longer send or receive relationship state changes, there is a need for recovery not just of keys and agents, but of the state of the relationship. These scenarios may include malicious compromise of agents by an attacker such that neither the party nor the attacker controls enough agents to meet the thresholds set in the DID Document or the Authorization Policy, or complete loss of all agents due to some catastrophic event.

    In some cases, relationship state may be recoverable via encrypted backup of the agent wallets. In the event that this is not possible, the parties can make use of a dead drop to recover their relationship state.

    A dead drop is established and maintained as part of a pairwise relationship. The dead drop consists of a service endpoint and the public keys needed to verify the package that may be retrieved from that endpoint. The keys needed for the dead drop are derived from a combination of a Master key and the pairwise DID of the relationship that is being recovered.
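    The key derivation described above might be sketched as an HKDF-style extract-and-expand of the master key with the pairwise DID as context; the salt label and DID value here are illustrative assumptions, not defined by this architecture.

```python
# Illustrative sketch: deriving a dead-drop key from a Master key and the
# pairwise DID of the relationship, using the HKDF extract-and-expand
# pattern (RFC 5869). Salt label and DID value are assumptions.
import hashlib
import hmac

def derive_dead_drop_key(master_key: bytes, pairwise_did: str) -> bytes:
    # Extract: fixed-label salt keys the master secret into a PRK
    prk = hmac.new(b"dkms-dead-drop", master_key, hashlib.sha256).digest()
    # Expand, first block: HMAC(PRK, info || 0x01) with the DID as info
    return hmac.new(prk, pairwise_did.encode() + b"\x01", hashlib.sha256).digest()

# Both parties re-derive the same key from the same inputs, so the dead
# drop remains reachable after total loss of agents, as long as the
# master key itself is recoverable.
k = derive_dead_drop_key(b"\x11" * 32, "ABDID")
assert derive_dead_drop_key(b"\x11" * 32, "ABDID") == k
```

    Binding the derivation to the pairwise DID ensures each relationship gets an independent dead-drop key, so compromise of one dead drop does not expose others.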

    "},{"location":"concepts/0051-dkms/dkms-v4/#10-protocol-flows","title":"10. Protocol Flows","text":"

    This section contains the UML sequence diagrams for all standard DKMS key management operations that use the DKMS protocol. Diagrams are listed in logical order of usage but may be reviewed in any order. Cross-references to reusable protocol sequences are represented as notes in blue. Other comments are in yellow.

    Table 1 is a glossary of the DKMS key names and types used in these diagrams.

    Apx-sv: Agent Policy Secret Value for agent x
    Apx-svc: Agent Policy Secret Value Commitment for agent x
    Apx-ac: Agent Policy Address Commitment for agent x
    AAx-ID: Alice's Agent to Agent Identifier for agent x
    AAx-vk: Alice's Agent to Agent Public Verification Key for agent x
    AAx-sk: Alice's Agent to Agent Private Signing Key for agent x
    ABDID: Alice\u2019s DID for connection with Bob
    ABx: Alice\u2019s key pair for connection with Bob for agent x
    ABx-vk: Alice\u2019s Public Verification Key for connection with Bob for agent x
    ABx-sk: Alice\u2019s Private Signing Key for connection with Bob for agent x
    AWx-k: Wallet Encryption Key for agent x
    ALS: Alice's Link Secret

    Table 1: DKMS key names used in this section

    "},{"location":"concepts/0051-dkms/dkms-v4/#101-edge-agent-start","title":"10.1. Edge Agent Start","text":"

    An identity owner\u2019s experience with DKMS begins with her first installation of a DKMS edge agent. This startup routine is reused by many other protocol sequences because it is needed each time an identity owner installs a new DKMS edge agent.

    The first step after successful installation is to prompt the identity owner whether he/she already has a DKMS identity wallet or is instantiating one for the first time. If the owner already has a wallet, the owner is prompted to determine if the new edge agent installation is for the purpose of adding a new edge agent, or recovering from a lost or compromised edge agent. Each of these options references another protocol pattern.

    "},{"location":"concepts/0051-dkms/dkms-v4/#102-provision-new-agent","title":"10.2. Provision New Agent","text":"

    Any time a new agent is provisioned\u2014regardless of whether it is an edge agent or a cloud agent\u2014the same sequence of steps are necessary to set up the associated wallet and secure communications with the new agent.

    As noted in section 3.3, DKMS architecture recommends that a DKMS agent be installed in an environment that includes a secure element. So the first step is for the edge agent to set up the credential the identity owner will use to unlock the secure element. On modern smartphones this will typically be a biometric, but it could be a PIN, passcode, or other factor, or a combination of factors.

    The edge agent then requests the secure element to create the key pairs necessary to establish the initial agent policies and to secure agent-to-agent communications. The edge agent also generates an ID to uniquely identify the agent across the identity owner\u2019s set of DKMS agents.

    Finally the edge agent requests the secure element to create a wallet encryption key and then uses it to encrypt the edge wallet.

    "},{"location":"concepts/0051-dkms/dkms-v4/#103-first-edge-agent","title":"10.3. First Edge Agent","text":"

    The first time a new identity owner installs an edge agent, it must also set up the DKMS components that enable the identity owner to manage multiple separate DIDs and verifiable credentials as if they were from one logically unified digital identity. It must also lay the groundwork for the identity owner to install additional DKMS agents on other devices, each of which will maintain its own DKMS identity wallet while still enabling the identity owner to act as if they were all part of one logically unified identity wallet.

    Link secrets are defined in section 5.1 and policy registries in section 9.2. The edge agent first needs to generate and store the link secret in the edge wallet. It then needs to generate the policy registry address and store it in the edge wallet. Now it is ready to update the agent policy registry.

    "},{"location":"concepts/0051-dkms/dkms-v4/#104-update-agent-policy-registry","title":"10.4. Update Agent Policy Registry","text":"

    As explained in section 9.2, an agent policy registry is the master control point that an identity owner uses to authorize and revoke DKMS agent proof authorization (edge or cloud).

    Each time the identity owner takes an action to add, revoke, or change the permissions for an agent, the policy registry is updated. For example, at the end of the protocol sequence in section 10.3, the action is to write the first policy registry entries that authorize the first edge agent.

    "},{"location":"concepts/0051-dkms/dkms-v4/#105-add-cloud-agent","title":"10.5. Add Cloud Agent","text":"

    The final step in first-time setup of an edge agent is creation of the corresponding cloud agent. As explained in section 3.3, the default in DKMS architecture is to always pair an edge agent with a corresponding cloud agent due to the many different key management functions this combination can automate.

    The process of registering a cloud agent begins with the edge agent contacting the agency agent. For purposes of this document, we will assume that the edge agent has a relationship with one or more agencies, and has a trusted method (such as a pre-installed DID) for establishing a secure connection using authenticated encryption.

    The target agency first returns a request for the consent required from the identity owner to register the cloud agent together with a request for the authorizations to be granted to the cloud agent. By default, cloud agents have no authorizations other than those granted by the identity owner. This enables identity owners to control what tasks a cloud agent may or may not perform on the identity owner\u2019s behalf.

    Once the identity owner has returned consent and the selected authorizations, the agency agent provisions the new cloud agent and registers the cloud agent\u2019s service endpoint using the agency\u2019s routing extension. Note that this service endpoint is used only in agent-to-agent communications that are internal to the identity owner\u2019s own agent domain. Outward-facing service endpoints are assigned as part of adding connections with their own DIDs.

    Once these tasks are performed, the results are returned to the edge agent and stored securely in the edge wallet.

    "},{"location":"concepts/0051-dkms/dkms-v4/#106-add-new-edge-agent","title":"10.6. Add New Edge Agent","text":"

    Each time an identity owner installs a new edge agent after their first edge agent, the process must initialize the new agent and grant it the necessary authorizations to begin acting on the identity owner\u2019s behalf.

    Provisioning of the new edge agent (Edge Agent 2) starts by the identity owner installing the edge agent software (section 10.2) and then receiving instructions about how to provision the new edge agent from an existing edge agent (Edge Agent 1). Note that Edge Agent 1 must have the authorization to add a new edge agent (not all edge agents have such authorization). The identity owner must also select the authorizations the new edge agent will have (DKMS agent developers will compete to make such policy choices easy and intuitive for identity owners).

    There are multiple options for how Edge Agent 2 may receive authorization from Edge Agent 1. One common method is for Edge Agent 1 to display a QR code or other machine-readable code scanned by Edge Agent 2. Another is for Edge Agent 1 to provide a passcode or passphrase that the identity owner types into Edge Agent 2. A third is sending an SMS or email with a helper URL. In all methods the ultimate result is that Edge Agent 2 must be able to connect via authenticated encryption with Edge Agent 1 in order to verify the connection and pass the new agent-to-agent encryption keys that will be used for secure communications between the two agents.

    Once this is confirmed by both agents, Edge Agent 1 will then use the Update Agent Policy Registry sequence (section 10.4) to add authorizations to the policy registry for Edge Agent 2.

    Once that is confirmed, provisioning of Edge Agent 2 is completed when Edge Agent 1 sends Edge Agent 2 the link secret and any verifiable credentials that the identity owner has authorized it to handle; Edge Agent 2 securely stores them in its wallet.

    "},{"location":"concepts/0051-dkms/dkms-v4/#107-add-connection-to-public-did","title":"10.7. Add Connection to Public DID","text":"

    The primary purpose of DIDs and DKMS is to enable trusted digital connections. One of the most common use cases is when an identity owner needs to create a connection to an entity that has a public DID, for example any website that wants to support trusted decentralized identity connections with its users (for registration, authentication, verifiable credentials exchange, secure communications, etc.)

    Note that this sequence is entirely about agent-to-agent communications between DKMS agents to create a shared microledger and populate it with the pairwise pseudonymous DIDs that Alice and Org assign to each other together with the public keys and service endpoints they need to enable their agents to use authenticated encryption.

    First Alice\u2019s edge agent creates the key pair and DID that it will assign to Org and uses those to initialize a new microledger. It then sends a request for Alice\u2019s cloud agent to add its own key pair that Alice authorizes to act on that DID. These are returned to Alice\u2019s edge agent who adds them to the microledger.

    Next Alice\u2019s edge agent creates and sends a connection invitation to Alice\u2019s cloud agent. Alice\u2019s cloud agent resolves Org\u2019s DID to its DID document to discover the endpoint for Org\u2019s cloud agent (this resolution step is not shown in the diagram above). It then forwards the invitation to Org\u2019s cloud agent who in turn forwards it to the system operating as Org\u2019s edge agent.

    Org\u2019s edge agent performs the mirror image of the same steps Alice\u2019s edge agent took to create its own DID and key pair for Alice, adding those to the microledger, and authorizing its cloud agent to act on its behalf in this new relationship.

    When that is complete, Org\u2019s edge agent returns its microledger updates via authenticated encryption to its cloud agent which forwards them to Alice\u2019s cloud agent and finally to Alice\u2019s edge agent. This completes the connection and Alice is notified of success.

    "},{"location":"concepts/0051-dkms/dkms-v4/#108-add-connection-to-private-did-provisioned","title":"10.8. Add Connection to Private DID (Provisioned)","text":"

    The other common use case for trusted connections is private peer-to-peer connections between two parties that do not initially connect via one or the other\u2019s public DIDs. These connections can be initiated any way that one party can share a unique invitation address, i.e., via a URL sent via text, email, or posted on a blog, website, LinkedIn profile, etc.

    The flow in this sequence diagram is very similar to the flow in section 10.7, where Alice is connecting to a public organization. The only difference is that rather than beginning with Alice\u2019s edge agent knowing a public DID for the Org, Alice\u2019s edge agent knows Bob\u2019s invitation address. This is a service, typically provided by an agency, that enables Bob\u2019s cloud agent to accept connection invitations (typically with appropriate spam protections and other forms of connection invitation filtering).

    The end result is the same as in section 10.7: Alice and Bob have established a shared microledger with the pairwise pseudonymous DIDs and the public keys and endpoints they need to maintain their relationship. Note that with DIDs and DKMS, this is the first connection that Alice and Bob can maintain for life (and beyond) that is not dependent on any centralized service provider or registry. And this connection is available for Alice and Bob to use with any application they wish to authorize.

    "},{"location":"concepts/0051-dkms/dkms-v4/#109-add-connection-to-private-did-unprovisioned","title":"10.9. Add Connection to Private DID (Unprovisioned)","text":"

    This sequence is identical to section 10.8 except that Bob does not yet have a DKMS agent or wallet. So it addresses what is necessary for Alice to invite Bob to both start using a DKMS agent and to form a connection with Alice at the same time.

    The only difference between this sequence diagram and section 10.8 is the invitation delivery process. In 10.8, Bob already has a cloud agent, so the invitation can be delivered to an invitation address established at the hosting agency. In this sequence, Bob does not yet have a cloud agent, so the invitation must be: a) anchored at a helper URL (typically provided by an agency), and b) delivered to Bob via some out-of-band means (typically an SMS, email, or other medium that can communicate a helper URL).

    When Bob receives the invitation, Bob clicks on the URL to go to the helper page and receive instructions about the invitation and how he can download a DKMS edge agent. He follows the instructions, installs the edge agent, which in turn provisions Bob\u2019s cloud agent. When provisioning is complete, Bob\u2019s edge agent retrieves Alice\u2019s connection invitation from the helper URL. Since Bob is now fully provisioned, the rest of the sequence proceeds identically to section 10.8.

    "},{"location":"concepts/0051-dkms/dkms-v4/#1010-rotate-did-keys","title":"10.10. Rotate DID Keys","text":"

    As described in section 8.1, key rotation is a core security feature of DKMS. This diagram illustrates the protocol for key rotation.

    Key rotation may be triggered by expiration of a key or by another event such as agent recovery. The process begins with the identity owner\u2019s edge agent generating its own new keys. If keys also need to be rotated in the cloud agent, the edge agent sends a key change request.

    The identity owner\u2019s agent policy may require that key rotation requires authorization from two or more edge agents. If so, the first edge agent generates a one time passcode or QR code that the identity owner can use to authorize the key rotation at the second edge agent. Once the passcode is verified, the second edge agent signs the key rotation request and sends it to the first edge agent.

    Once the necessary authorizations have been received, the first edge agent writes the changes to the microledger for that DID. It then sends the updates to the microledger to the cloud agent for the other party to the DID relationship (Bob), who forwards it to Bob\u2019s edge agent. Bob\u2019s edge agent verifies the updates and adds the changes to its copy of the microledger.

    Bob\u2019s edge agent then needs to broadcast the changes to Bob\u2019s cloud agent and any other edge agent that Bob has authorized to interact with Alice. Once this is done, Alice and Bob are \"in sync\" with the rotated keys, and their connection is at full strength.
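    The rotation flow above hinges on both parties holding the same event history. As an illustrative data-flow sketch only (the event names, fields, and hash chaining here are hypothetical, not the DKMS wire format), a microledger update can be modeled as an append-only list of hash-chained events that the receiving party verifies before accepting:

    ```python
    import hashlib
    import json

    def event_hash(event):
        # Hash a canonical serialisation so both parties compute the same digest.
        return hashlib.sha256(json.dumps(event, sort_keys=True).encode()).hexdigest()

    def append_rotation(microledger, new_key):
        """Append a hypothetical KEY_ROTATE event, chained to the previous event."""
        prev = event_hash(microledger[-1]) if microledger else None
        event = {"type": "KEY_ROTATE", "new_key": new_key, "prev": prev}
        microledger.append(event)
        return event

    def verify_chain(microledger):
        """What Bob's edge agent does before adding the update to its copy."""
        for i in range(1, len(microledger)):
            if microledger[i]["prev"] != event_hash(microledger[i - 1]):
                return False
        return True

    alice_ledger = [{"type": "GENESIS", "key": "key-1", "prev": None}]
    append_rotation(alice_ledger, "key-2")
    bob_copy = [dict(e) for e in alice_ledger]  # forwarded via Bob's cloud agent
    assert verify_chain(bob_copy)
    ```

    Once the chain verifies, Bob's agents broadcast the update among themselves, which is the "in sync" state the text describes.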

    "},{"location":"concepts/0051-dkms/dkms-v4/#1011-delete-connection","title":"10.11. Delete Connection","text":"

    In decentralized identity, identity owners are always in control of their relationships. This means either party to a connection can terminate the relationship by deleting it. This diagram illustrates Alice deleting the connection she had with Bob.

    All that is required to delete a connection is for the edge agent to add a DISABLE event to the microledger she established with Bob. As always, this change is propagated to Alice\u2019s cloud agent and any other edge agents authorized to interact with the DID she assigned to Bob.

    Note that, just like in the real world, it is optional for Alice to notify Bob of this change in the state of their relationship. If she chooses to do so, her edge agent will propagate the DISABLE event to Bob\u2019s copy of the microledger. If, when, and how Bob is notified by his edge agent(s) depends on Bob\u2019s notification policies.

    "},{"location":"concepts/0051-dkms/dkms-v4/#1012-revoke-edge-agent","title":"10.12. Revoke Edge Agent","text":"

    Key revocation is also a required feature of DKMS architecture as discussed in section 8.2. Revocation of keys for a specific DID is accomplished either through rotation of those keys (section 10.10) or deletion of the connection (section 10.11). However in certain cases, an identity owner may need to revoke an entire edge agent, effectively disabling all keys managed by that agent. This is appropriate if a device is lost, stolen, or suspected of compromise.

    Revoking an edge agent is done from another edge agent that is authorized to revoke agents. If a single edge agent is authorized, the process is straightforward. The revoking edge agent sends a signed request to the policy registry address (section 9.2) on the ledger holding the policy registry. The ledger performs the update. The revoking edge agent then \"removes\" the keys for the revoked edge agent by disabling them.

    As a best practice, this event also should trigger key rotation by the edge agent.

    Note that an identity owner may have a stronger revocation policy, such as requiring two edge agents to authorize revocation of another edge agent. This sequence is very similar to requiring two edge agents to authorize a key rotation as described in section 10.10. However it could also cause Alice to be locked out of her edge agents if an attacker can gain control of enough devices. In this case Alice could use one of her recovery options (sections 10.16 and 10.17).

    "},{"location":"concepts/0051-dkms/dkms-v4/#1013-recovery-setup","title":"10.13. Recovery Setup","text":"

    As discussed in section 6, recovery is a paramount feature of DKMS\u2014in decentralized key management, there is no \"forgot password\" button (and if there were, it would be a major security vulnerability). So it is particularly important that it be easy and natural for an identity owner to select and configure recovery options.

    The process begins with Alice\u2019s edge agent prompting Alice to select among the two recovery options described in section 6: offline recovery and social recovery. Her edge agent then creates a key pair for backup encryption, encrypts a backup of her edge wallet, and stores it with her cloud agent.

    If Alice chooses social recovery, the next step is for Alice to add trustees as described in section 10.14. Once the trustee has accepted Alice\u2019s invitation, Alice\u2019s edge agent creates and shares a recovery data share for each trustee. This is a shard of a file containing a copy of her backup encryption key, her link secret, and the special recovery endpoint that was set up by her cloud agent when the recovery invitation was created (see section 10.14).

    Alice\u2019s edge agent sends this recovery data share to her cloud agent who forwards it to the cloud agent for each of her trustees. Each cloud agent securely stores the share so its identity owner is ready to help Alice recover should the need arise. (See sections 10.17 and 10.18 for the actual social recovery process.)

    If Alice chooses offline recovery, her edge agent first creates a \"paper wallet\", which typically consists of a QR code or string of text that encodes the same data as in a recovery data share. Her edge agent then displays that paper wallet data to Alice for printing and storing in a safe place. Note that one of the primary usability challenges with offline recovery methods is Alice:

    1. Following through with storage of the paper wallet.

    2. Properly securing storage of the paper wallet over long periods of time.

    3. Remembering the location of the paper wallet over long periods of time.

    To some extent these can be addressed if the edge agent periodically reminds the identity owner to verify that his/her paper wallet is securely stored in a known location.

    "},{"location":"concepts/0051-dkms/dkms-v4/#1014-add-trustee","title":"10.14. Add Trustee","text":"

    The secret to implementing social recovery in DKMS is using DKMS agents to automate the process of securely storing, sharing, and recovering encrypted backups of DKMS wallets with several of the identity owner\u2019s connections. In DKMS architecture, these connections are currently called trustees. (Note: this is a placeholder term pending further usability research on the best name for this new role.)

    Trustees are selected by the identity owner based on the owner\u2019s trust. For each trustee, the edge agent requests the cloud agent to create a trustee invitation. The cloud agent generates and registers with the agency a unique URL that will be used only for this purpose. The edge agent then creates a recovery data share (defined in 10.13) and shards it as defined by the identity owner\u2019s recovery policy.

    At this point there are two options for delivering the trustee invitation depending on whether the identity owner already has a connection with the trustee or not. If a connection exists, the edge agent sends the invitation to the cloud agent who forwards it to the trustee\u2019s cloud agent who forwards it to an edge agent who notifies the trustee of the invitation.

    If a connection does not exist, the recovery invitation is delivered out of band in a process very similar to adding a connection to a private DID (sections 10.8 and 10.9).

    Once the trustee accepts the invitation, the response is returned to the identity owner\u2019s edge agent to complete the recovery setup process (section 10.13).

    "},{"location":"concepts/0051-dkms/dkms-v4/#1015-update-recovery-setup","title":"10.15. Update Recovery Setup","text":"

    With DKMS infrastructure, key recovery is a lifelong process. A DKMS wallet filled with keys, DIDs, and verifiable credentials is an asset constantly increasing in value. Thus it is critical that identity owners be able to update their recovery methods as their circumstances, devices, and connections change.

    For social recovery, an identity owner may wish to add new trustees or delete existing ones. Whenever this happens, the owner\u2019s edge agent must recalculate new recovery data shares to shard among the new set of trustees. This is a two-step process: the new share must first be sent to all trustees in the new set and an acknowledgement must be received from all of them. Once that is done, the edge agent can send a commitment message to all trustees in the new set to complete the process.
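    This distribute-then-commit pattern resembles a two-phase commit. The sketch below is purely illustrative (the StubTrustee class, its send_share/commit interface, and the function names are hypothetical stand-ins, not a DKMS API): the commitment is only sent if every trustee in the new set acknowledged its new share.

    ```python
    class StubTrustee:
        """Stand-in for a trustee's cloud agent (hypothetical interface)."""
        def __init__(self, online=True):
            self.online, self.share, self.committed = online, None, False

        def send_share(self, share):
            if self.online:
                self.share = share
            return self.online  # acknowledgement

        def commit(self):
            self.committed = True

    def update_trustees(trustees, new_shares):
        # Phase 1: distribute the recalculated shares and collect acks.
        acks = {name: agent.send_share(new_shares[name])
                for name, agent in trustees.items()}
        if not all(acks.values()):
            return False  # an ack is missing: do not commit; old set stays valid
        # Phase 2: all acked, so send the commitment that activates the new set.
        for agent in trustees.values():
            agent.commit()
        return True

    ok = update_trustees({"mike": StubTrustee(), "corin": StubTrustee()},
                         {"mike": "share-1", "corin": "share-2"})
    assert ok
    ```

    Keeping the old share set valid until the commit message means a partial failure never leaves the owner unrecoverable.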

    Updating offline recovery data is simply a matter of repeating the process of creating and printing out a paper wallet. An edge agent can automatically inform its identity owner of the need to do this when circumstances require it as well as automatically remind its owner to keep such offline information safe and accessible.

    "},{"location":"concepts/0051-dkms/dkms-v4/#1016-offline-recovery","title":"10.16. Offline Recovery","text":"

    One advantage of the offline recovery process is that it can be performed very quickly by the identity owner because it has no dependencies on outside parties.

    The identity owner simply initiates recovery on a newly installed edge agent. The edge agent prompts to scan the paper wallet (or input the text). From this data, it extracts the special recovery endpoint registered in the recovery setup process (section 10.13) and the backup decryption key. It then requests the encrypted backup from the recovery endpoint (which routes to the identity owner\u2019s cloud agent), decrypts it, restores the edge wallet, and replaces the agent keys with new keys. The final steps are to update the agent policy registry and, as a best practice, rotate all DID keys.

    "},{"location":"concepts/0051-dkms/dkms-v4/#1017-social-recovery","title":"10.17. Social Recovery","text":"

    Social recovery, while more complex than offline recovery, is also more automated, flexible, and resilient. The secret to making it easy and intuitive for identity owners is using DKMS agents to automate every aspect of the process except for the most social step: verification of the actual identity of the identity owner by trustees.

    Social recovery, like offline recovery, begins with the installation of a fresh edge agent. The identity owner selects the social recovery option and is prompted for the contact data her edge agent and cloud agent will need to send special new connection requests to her trustees. These special connection requests are then issued as described in section 10.8.

    These special connection requests are able to leverage the same secure DKMS infrastructure as the original connections while at the same time carrying the metadata needed for the trustee\u2019s edge agent to recognize it is a recovery request. At that point, the single most important step in social recovery happens: the trustee verifying that it is really Alice making the recovery request, and not an impersonator using social engineering.

    Once the trustee is satisfied with the verification, the edge agent prompts the trustee to perform the next most important step: select the existing connection with Alice so that the trustee edge agent knows which connection is trying to recover. Only the trustee\u2014a human being\u2014can be trusted to make this association.

    At this point, the edge agent can correlate the old connection to Alice with the new connection to Alice, so it knows which recovery data share to select (see section 10.13). It can then decrypt the recovery data share with the identity owner\u2019s private key, extract the recovery endpoint, and re-encrypt the recovery data share with the public key of Alice\u2019s new edge agent.

    Now the trustee\u2019s edge agent is ready to return the recovery data share to Alice\u2019s new cloud agent via the recovery endpoint. The cloud agent forwards it to Alice\u2019s new edge agent. Once Alice\u2019s new edge agent has the required set of recovery data shares, it decrypts and assembles them. It then uses that recovery data to complete the same final steps as offline recovery described in section 10.16.

    "},{"location":"concepts/0051-dkms/dkms-v4/#11-open-issues-and-future-work","title":"11. Open Issues and Future Work","text":"
    1. DID specification. The DKMS specification has major dependencies on the DID specification which is still in progress at the W3C Credentials Community Group. Although we do not expect the resulting specification to fall short of DKMS requirements, we cannot be specific about certain details of how DKMS will interact with DIDs until that specification is finalized. However the strong market interest in DIDs led the Credentials Community Group to author an extensive DID Use Cases document and submit a Decentralized Identifier Working Group charter to the W3C for consideration as a full Working Group.

    2. DID methods. The number of DID methods has grown substantially as shown by the unofficial DID Method Registry maintained by the W3C Credentials Community Group. Because different DID methods may support different levels of assurance about DKMS keys, more work may be required to assess the role of different ledgers as a decentralized source of truth and the requirements of each ledger for the hosting of DIDs and DID documents.

    3. Verifiable credentials interoperability. The W3C Verifiable Claims Working Group is currently preparing its 1.0 Candidate Recommendation. As verifiable credentials mature, we need to say more about how different DKMS wallets and agents from different vendors can support interoperable verifiable credentials, including those with zero-knowledge credentials and proofs. Again, this may need to extend to an adjacent protocol.

    4. DKMS wallet and agent portability. As mentioned in section 5.4, this aspect of the DKMS protocol is not fully specified and needs to be addressed in a subsequent version. This area of work is particularly active in the Hyperledger Indy Agent development community. A recent \"connectathon\" hosted by the Sovrin Foundation had 32 developers testing agent-to-agent protocol interoperability among 9 different code bases.

    5. Secure elements, TPMs, and TEEs. Since DKMS is highly dependent on secure elements, more work is needed to specify how a device can communicate or verify its own security capabilities or its ability to attest to authentication factors for the identity owner.

    6. Biometrics. While they can play a special role in the DKMS architecture because of their ability to intrinsically identify a unique individual, this same quality means a privacy breach of biometric attributes could be disastrous because they may be unrecoverable. So determining the role of biometrics and biometric service providers is a major area of future work.

    7. Spam and DDOS attacks. There are several areas where this must be considered, particularly in relation to connection requests (section 10.7).

    8. DID phishing. DKMS can only enable security; it cannot by itself prevent a malicious actor or agency from sending invitations that appear legitimate but actually form malicious connections (section 10.9).

    9. Usability testing. Although early research on the usability of DKMS wallets and agents was carried out by BYU Internet Security Research Lab, much more work remains to be done to develop the highly repeatable \"user ceremonies\" necessary for DKMS to succeed in the mass market.

    "},{"location":"concepts/0051-dkms/dkms-v4/#12-future-standardization","title":"12. Future Standardization","text":"

    It is the recommendation of the authors that the work described in this document be carried forward to full Internet standardization. We believe OASIS is a strong candidate for this work due to its hosting of the Key Management Interoperability Protocol (KMIP) at the KMIP Technical Committee since 2010. Please contact the authors if you are interested in contributing to organizing an open standard effort for DKMS.

    "},{"location":"concepts/0051-dkms/shamir_secret/","title":"Shamir secret API (indy-crypto and indy-sdk)","text":"

    Objective: indy-crypto exposes the low level API for generating and reconstructing secrets. indy-sdk uses the underlying indy-crypto and exposes an API to shard a JSON message, store the shards and reconstitute the secret.

    "},{"location":"concepts/0051-dkms/shamir_secret/#indy-crypto","title":"Indy-crypto","text":"
    1. shard_secret(secret: bytes, m: u8, n: u8, sign_shares: Option<bool>) -> Result<Vec<Share>, IndyCryptoError>. Splits the bytes of the secret secret into n different shares; m-of-n shares are required to reconstitute the secret. If sign_shares is provided, all shares are signed.
    2. recover_secret(shards: Vec<Share>, verify_signatures: Option<bool>) -> Result<Vec<u8>, IndyCryptoError>. Recovers the secret from the given shards. If verify_signatures is given, the signatures are verified.
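    The shard_secret/recover_secret pair is a standard m-of-n Shamir split. As a minimal illustrative sketch only (indy-crypto's actual implementation operates on byte strings, uses a different field, and adds share signing), here is Shamir secret sharing over a prime field:

    ```python
    import random

    # Toy parameters: a Mersenne prime large enough for a 16-byte secret.
    PRIME = 2**127 - 1

    def shard_secret(secret, m, n):
        """Split `secret` into n shares such that any m reconstruct it."""
        coeffs = [secret] + [random.randrange(PRIME) for _ in range(m - 1)]
        def poly(x):
            acc = 0
            for c in reversed(coeffs):
                acc = (acc * x + c) % PRIME
            return acc
        return [(x, poly(x)) for x in range(1, n + 1)]

    def recover_secret(shares):
        """Lagrange interpolation of the polynomial at x = 0."""
        secret = 0
        for i, (xi, yi) in enumerate(shares):
            num, den = 1, 1
            for j, (xj, _) in enumerate(shares):
                if i != j:
                    num = (num * -xj) % PRIME
                    den = (den * (xi - xj)) % PRIME
            secret = (secret + yi * num * pow(den, -1, PRIME)) % PRIME
        return secret

    shares = shard_secret(123456789, m=3, n=5)
    assert recover_secret(shares[:3]) == 123456789   # any 3 of 5 suffice
    assert recover_secret(shares[1:4]) == 123456789
    ```

    Fewer than m shares reveal nothing about the secret, which is what makes distributing them to trustees safe.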
    "},{"location":"concepts/0051-dkms/shamir_secret/#indy-sdk","title":"Indy-sdk","text":"
    1. shard_JSON(msg: String, m: u8, n: u8, sign_shares: Option<bool>) -> Result<Vec<String>, IndyError> Takes the message as a JSON string and serialises it to bytes and passes it to shard_secret of indy-crypto. The serialisation has to be deterministic, i.e. the same JSON should always serialise to the same bytes every time. The resulting Share given by indy-crypto is converted to JSON before returning.
    2. shard_JSON_with_wallet_data(wallet_handle: i32, msg: String, wallet_keys:Vec<&str>, m: u8, n: u8, sign_shares: Option<bool>) -> Result<Vec<String>, IndyError> Takes the message as a JSON string, updates the JSON with key-values from wallet given by handle wallet_handle, keys present in the vector wallet_keys and passes the resulting JSON to shard_JSON.
    3. recover_secret(shards: Vec<String>, verify_signatures: Option<bool>) -> Result<String, IndyError> Takes a collection of shards each encoded as JSON, deserialises them into Shares and passes them to recover_secret from indy-crypto. It converts the resulting secret back to JSON before returning it.
    4. shard_JSON_and_store_shards(wallet_handle: i32, msg: String, m: u8, n: u8, sign_shares: Option<bool>) -> Result<String, IndyError> Shards the given JSON using shard_JSON and store shards as a JSON array (each shard is an object in itself) in the wallet given by wallet_handle. Returns the wallet key used to store the shards.
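    The deterministic-serialisation requirement for shard_JSON matters because a single differing byte would yield incompatible shares. A common way to get determinism, shown here as a sketch (the helper name is hypothetical and the real indy-sdk canonicalisation may differ), is to sort keys and fix separators:

    ```python
    import json

    def canonical_bytes(msg):
        """Deterministic serialisation: the same JSON document must always map
        to the same bytes, or shares of it would not recombine correctly."""
        obj = json.loads(msg)
        return json.dumps(obj, sort_keys=True, separators=(",", ":")).encode("utf-8")

    # Key order and whitespace no longer affect the byte output:
    assert canonical_bytes('{"b": 1, "a": 2}') == canonical_bytes('{ "a":2, "b":1 }')
    ```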
    "},{"location":"concepts/0051-dkms/trustee_protocols/","title":"Trustee Setup Protocol","text":"

    Objective: Provide the messages and data formats so an identity owner can choose, update, remove trustees and their delegated capabilities.

    "},{"location":"concepts/0051-dkms/trustee_protocols/#assumptions","title":"Assumptions","text":"
    1. An identity owner selects a connection to become a trustee
    2. Trustees can be granted various capabilities by identity owners
      1. Safeguarding a recovery share. This will be the most common
      2. Revoke an authorized agent on behalf of an identity owner
      3. Provision a new agent on behalf of an identity owner
      4. Be an administrator for managing identity owner agents
    3. Trustees agree to any new specified capabilities before any action is taken
    4. Trustees will safeguard recovery shares. Their app will encrypt the share and not expose it to anyone else
    5. Trustees authenticate out-of-band an identity owner when a recovery event occurs
    6. The Trustees' app should only send a recovery share to an identity owner after they have been authenticated
    7. All messages will use a standard DIDComm Envelope.
    "},{"location":"concepts/0051-dkms/trustee_protocols/#messages-and-structures","title":"Messages and Structures","text":"

    Messages are formatted as JSON. All binary encodings use base64url. All messages include the following fields:

    1. version \\<string>: The semantic version of the message data format.
    2. type \\<string>: The message type.
    "},{"location":"concepts/0051-dkms/trustee_protocols/#capabilty_offer","title":"CAPABILTY_OFFER","text":"

    Informs a connection that the identity owner wishes to make them a trustee. The message includes information about what capabilities the identity owner has chosen to grant a trustee and how long the offer is valid. This message adds the following fields

    expires \\<string>: 64-bit unsigned big-endian integer. The number of seconds elapsed between January 1, 1970 UTC and the time the offer will expire if no request message is received. This value is purely informative.\\ capabilities \\<list[string]>: A list of capabilities that the trustee will be granted. They can include

    1. RECOVERY_SHARE: The trustee will be given a recovery share
    2. REVOKE_AUTHZ: The trustee can revoke agents
    3. PROVISION_AUTHZ: The trustee can provision new agents
    4. ADMIN_AUTHZ: The trustee is an administrator of agents
    {\n  \"version\": \"0.1\",\n  \"type\": \"CAPABILITY_OFFER\",\n  \"capabilities\": [\"RECOVERY_SHARE\", \"REVOKE_AUTHZ\", \"PROVISION_AUTHZ\"]\n  \"expires\": 1517428815\n}\n
    "},{"location":"concepts/0051-dkms/trustee_protocols/#capabilty_request","title":"CAPABILTY_REQUEST","text":"

    Sent to an identity owner in response to a TRUSTEE_OFFER message. The message includes information about which capabilities the trustee has agreed to. This message adds the following fields

    for_id \\<string>: The nonce sent in the TRUSTEE_OFFER message.\\ capabilities \\<object[string,string]>: A name value object that contains the trustee's response for each privilege.\\ authorizationKeys \\<list[string]>: The public keys that the trustee will use to verify her actions with the authz policy registry on behalf of the identity owner.

    {\n  \"version\": \"0.1\",\n  \"type\": \"CAPABILITY_REQUEST\",\n  \"authorizationKeys\": [\"Rtna123KPuQWEcxzbNMjkb\"]\n  \"capabilities\": [\"RECOVERY_SHARE\", \"REVOKE_AUTHZ\"]\n}\n
    "},{"location":"concepts/0051-dkms/trustee_protocols/#capability_response","title":"CAPABILITY_RESPONSE","text":"

    Sends the identity owner policy address and/or recovery data and metadata to a recovery trustee. A trustee should send a confirmation message that this message was received.

    address \\<string>: The identity owner's policy address. Only required if the trustee has a key in the authz policy registry.\\ share \\<object>: The actual recovery share data in the format given in the next section. Only required if the trustee has the RECOVERY_SHARE privilege.

    {\n  \"version\": \"0.1\",\n  \"type\": \"CAPABILITY_RESPONSE\",\n  \"address\": \"b3AFkei98bf3R2s\"\n  \"share\": {\n    ...\n  }\n}\n
    "},{"location":"concepts/0051-dkms/trustee_protocols/#trust_ping","title":"TRUST_PING","text":"

    Authenticates a party to the identity owner for out of band communication.

    challenge \\<object>: A message that a party should respond to so the identity owner can be authenticated. Contains a question field for the other party to answer and a list of valid_responses.

    {\n  \"version\": \"0.1\",\n  \"type\": \"TRUST_PING\",\n  \"challenge\": {\n    ...\n  }\n}\n

    challenge will look like the example below but allows for future changes as needed.\\ question \\<string>: The question for the other party to answer.\\ valid_responses \\<list[string]>: A list of valid responses that the party can give in return.

    {\n    \"question\": \"Are you on a call with CULedger?\",\n    \"valid_responses\": [\"Yes\", \"No\"]\n}\n
    "},{"location":"concepts/0051-dkms/trustee_protocols/#trust_pong","title":"TRUST_PONG","text":"

    The response message for the TRUST_PING message.

      \"version\": \"0.1\",\n  \"type\": \"TRUST_PONG\",\n  \"answer\": {\n    \"answerValue\": \"Yes\"\n  }\n
    "},{"location":"concepts/0051-dkms/trustee_protocols/#key_heartbeat_request","title":"KEY_HEARTBEAT_REQUEST","text":"

    Future_Work: Verifies a trustee/agent has and is using the public keys that were given to the identity owner. These keys

    authorizationKeys \\<list[string]>: Public keys the identity owner knows that belong to the trustee/agent.

    {\n  \"version\": \"0.1\",\n  \"type\": \"KEY_HEARTBEAT_REQUEST\",\n  \"authorizationKeys\": [\"Rtna123KPuQWEcxzbNMjkb\"]\n}\n
    "},{"location":"concepts/0051-dkms/trustee_protocols/#key_heartbeat_response","title":"KEY_HEARTBEAT_RESPONSE","text":"

    Future_Work: The updated keys sent back from the trustee/agent

    "},{"location":"concepts/0051-dkms/trustee_protocols/#recovery_share_response","title":"RECOVERY_SHARE_RESPONSE","text":"

    Future_Work: After an identity owner receives a challenge from a trustee, an application prompts her to complete the challenge. This message contains her response.

    for_id \\<string>: The nonce sent in the RECOVERY_SHARE_CHALLENGE message.\\ response \\<object>: The response from the identity owner.

    {\n  \"version\": \"0.1\",\n  \"type\": \"RECOVERY_SHARE_RESPONSE\",\n  \"response\": {\n    ...\n  }\n}\n

    response will look like the example below but allows for future changes as needed.

    {\n  \"pin\": \"3qA5h7\"\n}\n
    "},{"location":"concepts/0051-dkms/trustee_protocols/#recovery-share-data-structure","title":"Recovery Share Data Structure","text":"

    Recovery shares are formatted in JSON with the following fields:

    1. version \\<string>: The semantic version of the recovery share data format.
    2. source_did \<string>: The identity owner DID that sent this share to the trustee
    3. tag \\<string>: A value used to verify that all the shares are for the same secret. The identity owner compares this to every share to make sure they are the same.
    4. shareValue \\<string>: The share binary value.
    5. hint \\<object>: Hint data that contains the following fields:
      1. trustees \\<list[string]>: A list of all the recovery trustee names associated with this share. These names are only significant to the identity owner. Helps to aid in recovery by providing some metadata for the identity owner and the application.
      2. threshold \\<integer>: The minimum number of shares needed to recover the key. Helps to aid in recovery by providing some metadata for the identity owner and the application.
    {\n  \"version\": \"0.1\",\n  \"source_did\": \"did:sov:asbdfa32135\"\n  \"tag\": \"ze4152Bsxo90\",\n  \"shareValue\": \"abcdefghijkmnopqrstuvwxyzABCDEFGHJKLMNPQRSTUVWXYZ123456789\"\n  \"hint\": {\n    \"theshold\": 3,\n    \"trustees\": [\"Mike L\", \"Lovesh\", \"Corin\", \"Devin\", \"Drummond\"]\n  }\n}\n
    "},{"location":"concepts/0074-didcomm-best-practices/","title":"Aries RFC 0074: DIDComm Best Practices","text":""},{"location":"concepts/0074-didcomm-best-practices/#summary","title":"Summary","text":"

    Identifies some conventions that are generally accepted as best practice by developers of DIDComm software. Explains their rationale. This document is a recommendation, not normative.

    "},{"location":"concepts/0074-didcomm-best-practices/#motivation","title":"Motivation","text":"

    By design, DIDComm architecture is extremely flexible. Besides adapting well to many platforms, programming languages, and idioms, this lets us leave matters of implementation style in the hands of developers. We don't want framework police trying to enforce rigid paradigms.

    However, some best practices are worth documenting. There is tribal knowledge in the community that represents battle scars. Collaboration is fostered if learning curves don't have to proliferate. Therefore, we offer the following guidelines.

    "},{"location":"concepts/0074-didcomm-best-practices/#tutorial","title":"Tutorial","text":""},{"location":"concepts/0074-didcomm-best-practices/#normative-language","title":"Normative language","text":"

    RFCs about protocols and DIDComm behaviors follow commonly understood conventions about normative language, including words like \"MUST\", \"SHOULD\", and \"MAY\". These conventions are documented in IETF's RFC 2119. Existing documents that were written before we clarified our intention to follow these conventions are grandfathered but should be updated to conform.

    "},{"location":"concepts/0074-didcomm-best-practices/#names","title":"Names","text":"

    Names show up in lots of places in our work. We name RFCs, concepts defined in those RFCs, protocols, message types, keys in JSON, and much more.

    The two most important best practices with names are:

    These are so common-sense that we won't argue them. But a few other points are worthy of comment.

    "},{"location":"concepts/0074-didcomm-best-practices/#snake_case-and-variants","title":"snake_case and variants","text":"

    Nearly all code uses multi-word tokens as names. Different programming ecosystems have different conventions for managing them: camelCase, TitleCase, snake_case, kabob-case, SHOUT_CASE, etc. We want to avoid a religious debate about these conventions, and we want to leave developers the freedom to choose their own styles. However, we also want to avoid random variation that makes it hard to predict the correct form. Therefore, we try to stay idiomatic in the language we're using, and many of our tokens are defined to compare case-insensitive with punctuation omitted, so the differences melt away. This is the case with protocol names and message type names, for example; it means that you should interpret \"TicTacToe\" and \"tic-tac-toe\" and \"ticTacToe\" as being the same protocol. If you are writing a java function for it, by all means use \"ticTacToe\"; if you are writing CSS, by all means use \"tic-tac-toe\".
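    The "compare case-insensitive with punctuation omitted" rule above can be implemented with a tiny normalizer. This sketch (the function name is our own, not a defined Aries API) shows why "TicTacToe", "tic-tac-toe", and "ticTacToe" all name the same protocol:

    ```python
    import re

    def normalize_name(token):
        """Normalize a protocol or message-type name for comparison:
        drop punctuation and whitespace, then lowercase."""
        return re.sub(r"[-_\s]", "", token).lower()

    assert normalize_name("TicTacToe") == "tictactoe"
    assert normalize_name("tic-tac-toe") == normalize_name("ticTacToe")
    ```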

    The community tries to use snake_case in JSON key names, even though camelCase is slightly more common. This is not a hard-and-fast rule; in particular, a few constructs from DID Docs leak into DIDComm, and these use the camelCase style that those specs expect. However, it was felt that snake_case was mildly preferable because it didn't raise the questions about acronyms that camelCase does (is it \"zeroOutRAMAlgorithm\", \"zeroOutRamAlgorithm\", or \"zeroOutRAMalgorithm\"?).

    The main rule to follow with respect to case is: Use the same convention as the rest of the code around you, and in JSON that's intended to be interoperable, use snake_case unless you have a good reason not to. Definitely use the same case conventions as the other keys in the same JSON schema.

    "},{"location":"concepts/0074-didcomm-best-practices/#pluralization","title":"Pluralization","text":"

    The names of JSON items that represent arrays should be pluralized whenever possible, while singleton items should not.

    "},{"location":"concepts/0074-didcomm-best-practices/#terminology-and-notation","title":"Terminology and Notation","text":"

    Use terms correctly and consistently.

    The Sovrin Glossary V2 is considered a definitive source of terms. We will probably move it over to Aries at some point as an officially sponsored artifact of this group. RFC 0006: SSI Notation is also a definitive reference.

    RFCs in general should make every effort to define new terms only when needed, to be clear about the concepts they are labeling, and use prior work consistently. If you find a misalignment in the terminology or notation used by RFCs, please open a github issue.

    "},{"location":"concepts/0074-didcomm-best-practices/#terseness-and-abbreviations","title":"Terseness and abbreviations","text":"

    We like obvious abbreviations like \"ipaddr\" and \"inet\" and \"doc\" and \"conn\". We also formally define abbreviations or acronyms for terms and then use the short forms as appropriate.

    However, we don't value terseness so much that we are willing to give up clarity. Abbreviating \"wallet\" as \"wal\" or \"agent\" as \"ag\" is quirky and discouraged.

    "},{"location":"concepts/0074-didcomm-best-practices/#rfc-naming","title":"RFC naming","text":"

    RFCs that define a protocol should be named in the form <do-something>-protocol, where <do-something> is a verb phrase like issue-credential, or possibly a noun phrase like did-exchange -- something that makes the theme of the protocol obvious. The intent is to be clear; a protocol name like \"connection\" is too vague because you can do lots of things with connections.

    Protocol RFCs need to be versioned thoughtfully. However, we do not put version numbers in a protocol RFC's folder name. Rather, the RFC folder contains all versions of the protocol, with the latest version documented in README.md, and earlier versions documented in subdocs named according to version, as in version-0.9.md or similar. The main README.md should contain a section of links to previous versions. This allows the most natural permalink for a protocol to be a link to the current version, but it also allows us to link to previous versions explicitly if we need to.

    RFCs that define a decorator should be named in the form <decorator name>-decorator, as in timing-decorator or trace-decorator.

    "},{"location":"concepts/0074-didcomm-best-practices/#json","title":"JSON","text":"

    JSON is a very flexible data format. This can be nice, but it can also lead to data modeled in ways that cause a lot of bother for some programming languages. Therefore, we recommend the following choices.

    "},{"location":"concepts/0074-didcomm-best-practices/#no-variable-type-arrays","title":"No Variable Type Arrays","text":"

    Every element in an array should be the same data type. This is helpful for statically and strongly typed programming languages that want arrays of something more specific than a base Object class. A violating example:

    [\n   {\n    \"id\":\"324234\",\n    \"data\":\"1/3/2232\"\n   },\n   {\n    \"x_pos\":3251,\n    \"y_pos\":11,\n    \"z_pos\":55\n   }\n]\n
    Notice that the first object and the second object in the array have no structure in common.

    Although the benefit of this convention is especially obvious for some programming languages, it is helpful in all languages because it keeps parsing logic predictable and reduces branching code paths.
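One way to repair the violating example above is to separate the two shapes into their own homogeneous arrays (the key names here are illustrative, not normative):

```python
# Each array now contains exactly one element structure, so typed
# languages can map each array to a single concrete class.
payload = {
    'records': [
        {'id': '324234', 'data': '1/3/2232'},
    ],
    'positions': [
        {'x_pos': 3251, 'y_pos': 11, 'z_pos': 55},
    ],
}
# Every element within each array shares one structure.
assert all(set(r) == {'id', 'data'} for r in payload['records'])
assert all(set(p) == {'x_pos', 'y_pos', 'z_pos'} for p in payload['positions'])
```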

    "},{"location":"concepts/0074-didcomm-best-practices/#dont-treat-objects-as-associative-arrays","title":"Don't Treat Objects as Associative Arrays","text":"

    Many loosely typed programming languages conflate the concept of an associative array (dict, map) with the concept of object. In python, for example, an object is just a dict with some syntactic sugar, and python's JSON serialization handles the two interchangeably when serializing.

    This makes it tempting to do the same thing in JSON. An unhappy example:

    {\n    \"usage\": {\n        \"194.52.101.254\": 34,\n        \"73.183.146.222\": 55,\n        \"149.233.52.170\": 349\n    }\n}\n

    Notice that the keys of the usage object are unbounded; as the set of IP addresses grows, the set of keys in usage grows as well. JSON is an \"object notation\", and {...} is a JSON object -- NOT a JSON associative array--but this type of modeling ignores that. If we model data this way, we'll end up with an \"object\" that could have dozens, hundreds, thousands, or millions of keys with identical semantics but different names. That's not how objects are supposed to work.

    Note as well that the keys here, such as \"194.52.101.254\", are not appropriate identifiers in most programming languages. This means that unless deserialization code maps the keys to keys in an associative array (dict, map), it will not be able to handle the data at all. Also, this way to model the data assumes that we know how lookups will be done (in this case, ipaddr\u2192number); it doesn't leave any flexibility for other access patterns.

    A better way to model this type of data is as a JSON array, where each item in the array is a tuple of known field types with known field names. This is only slightly more verbose. It allows deserialization to map to one or more lookup data structures per preference, and is handled equally well in strongly, statically typed programming languages and in loosely typed languages:

    {\n    \"usage\": [\n        { \"ip\": \"194.52.101.254\", \"num\": 34 },\n        { \"ip\": \"73.183.146.222\", \"num\": 55 },\n        { \"ip\": \"149.233.52.170\", \"num\": 349 }\n    ]\n}\n
    "},{"location":"concepts/0074-didcomm-best-practices/#numeric-field-properties","title":"Numeric Field Properties","text":"

    JSON numeric fields are very flexible. As Wikipedia notes in its discussion about JSON numeric primitives:

    Number: a signed decimal number that may contain a fractional part and may use exponential\nE notation, but cannot include non-numbers such as NaN. The format makes no distinction\nbetween integer and floating-point. JavaScript uses a double-precision floating-point format\nfor all its numeric values, but other languages implementing JSON may encode numbers\ndifferently.\n

    Knowing that something is a number may be enough in javascript, but in many other programming languages, more clarity is helpful or even required. If the intent is for the number to be a non-negative or positive-only integer, say so when your field is defined in a protocol. If you know the valid range, give it. Specify whether the field is nullable.

    Per the first guideline above about names, name your numeric fields in a way that makes it clear they are numbers: \"references\" is a bad name in this respect (could be a hyperlink, an array, a string, etc), whereas \"reference_count\" or \"num_of_refs\" is much better.

    "},{"location":"concepts/0074-didcomm-best-practices/#date-time-conventions","title":"Date Time Conventions","text":"

    Representing date- and time-related data in JSON is a source of huge variation, since the datatype for the data isn't obvious even before it's serialized. A quick survey of source code across industries and geos shows that dates, times, and timestamps are handled with great inconsistency outside JSON as well. Some common storage types include:

    Of course, many of these datatypes have special rules about their relationship to timezones, which further complicates matters. And timezone handling is notoriously inconsistent, all on its own.

    Some common names for the fields that store these times include:

    The intent of this RFC is NOT to eliminate all diversity. There are good reasons why these different datatypes exist. However, we would like DIDComm messages to use broadly understood naming conventions that clearly communicate date- and time-related semantics, so that where there is diversity, it's because of different use cases, not just chaos.

    By convention, DIDComm field suffixes communicate datatype and semantics for date- and time-related ideas, as described below. As we've stressed before, conventions are recommendations only. However:

    1. It is strongly preferred that developers not ignore these perfectly usable conventions unless they have a good reason (e.g., a need to measure the age of the universe in seconds in scientific notation, or a need for ancient dates in a genealogy or archeology use case).

    2. Developers should never contradict the conventions. That is, if a developer sees a date- or time-related field that appears to match what's documented here, the assumption of alignment ought to be safe. Divergence should use new conventions, not redefine these.

    Field names like \"expires\" or \"lastmod\" are deprecated, because they don't say enough about what to expect from the values. (Is \"expires\" a boolean? Or is it a date/time? If the latter, what is its granularity and format?)

    "},{"location":"concepts/0074-didcomm-best-practices/#_date","title":"_date","text":"

    Used for fields that have only date precision, no time component. For example, birth_date or expiration_date. Such fields should be represented as strings in ISO 8601 format (yyyy-mm-dd). They should contain a timezone indicator if and only if it's meaningful (see Timezone Offset Notation).

    "},{"location":"concepts/0074-didcomm-best-practices/#_time","title":"_time","text":"

    Used for fields that identify a moment with both date and time precision. For example, arrival_time might communicate when a train reaches the station. The datatype of such fields is a string in ISO 8601 format (yyyy-mm-ddTHH:MM:SS.xxx...) using the Gregorian calendar, and the timezone defaults to UTC. However: * Precision can vary from minute to microsecond or greater. * It is strongly recommended to use the \"Z\" suffix to make UTC explicit: \"2018-05-27 18:22Z\" * The capital 'T' that separates date from time in ISO 8601 may be replaced with a space. (Many datetime formatters support this variation, for greater readability.) * If local time is needed, Timezone Offset Notation is used.
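A minimal sketch of parsing a _time field with Python's standard library (the field name arrival_time follows the convention above):

```python
from datetime import datetime, timezone

# fromisoformat handles the explicit-offset form on all supported
# Python versions; a trailing 'Z' only parses on Python 3.11+.
arrival_time = '2018-05-27T18:22:00+00:00'
dt = datetime.fromisoformat(arrival_time)

assert dt.tzinfo is not None                      # timezone-aware
assert dt.astimezone(timezone.utc).hour == 18     # already UTC
```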

    "},{"location":"concepts/0074-didcomm-best-practices/#_sched","title":"_sched","text":"

    Holds a string that expresses appointment-style schedules such as \"the first Thursday of each month, at 7 pm\". The format of these strings is recommended to follow ISO 8601's Repeating Intervals notation where possible. Otherwise, the format of such strings may vary; the suffix doesn't stipulate a single format, but just the semantic commonality of scheduling.

    "},{"location":"concepts/0074-didcomm-best-practices/#_clock","title":"_clock","text":"

    Describes wall time without reference to a date, as in 13:57. Uses ISO 8601 formatted strings and a 24-hour cycle, not AM/PM.

    "},{"location":"concepts/0074-didcomm-best-practices/#_t","title":"_t","text":"

    Used just like _time, but for unsigned integer seconds since Jan 1, 1970 (with no opinion about whether it's a 32-bit or 64-bit value). Thus, a field that captures a last modified timestamp for a file, as number of seconds since Jan 1, 1970 would be lastmod_t. This suffix was chosen for resonance with Posix's time_t datatype, which has similar semantics.
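The lastmod_t example above can be produced directly from the standard library:

```python
import time

# A last-modified timestamp using the _t suffix convention:
# unsigned integer seconds since Jan 1, 1970 (the Unix epoch).
lastmod_t = int(time.time())

assert lastmod_t > 1_500_000_000  # sanity check: after mid-2017
```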

    "},{"location":"concepts/0074-didcomm-best-practices/#_tt","title":"_tt","text":"

    Used just like _time and _t, but for 100-nanosecond intervals since Jan 1, 1601. This matches the semantics of the Windows FILETIME datatype.
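Converting between _t and _tt values is straightforward arithmetic; 11644473600 is the number of seconds between the 1601 and 1970 epochs, and FILETIME counts 100-nanosecond intervals (10,000,000 per second):

```python
# Seconds between Jan 1, 1601 and Jan 1, 1970.
EPOCH_DELTA_SECONDS = 11644473600

def t_to_tt(unix_seconds):
    # Unix seconds -> Windows FILETIME (100ns intervals since 1601).
    return (unix_seconds + EPOCH_DELTA_SECONDS) * 10_000_000

def tt_to_t(filetime):
    # Windows FILETIME -> Unix seconds (truncating sub-second detail).
    return filetime // 10_000_000 - EPOCH_DELTA_SECONDS

assert tt_to_t(t_to_tt(0)) == 0           # the Unix epoch round-trips
assert t_to_tt(0) == 116444736000000000   # FILETIME of Jan 1, 1970
```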

    "},{"location":"concepts/0074-didcomm-best-practices/#_sec-or-subunits-of-seconds-_milli-_micro-_nano","title":"_sec or subunits of seconds (_milli, _micro, _nano)","text":"

    Used for fields that tell how long something took. For example, a field describing how long a system waited before retry might be named retry_milli. Normally, this field would be represented as an unsigned positive integer.

    "},{"location":"concepts/0074-didcomm-best-practices/#_dur","title":"_dur","text":"

    Tells duration (elapsed time) in friendly, calendar based units as a string, using the conventions of ISO 8601's Duration concept. Y = year, M = month, W = week, D = day, H = hour, M = minute, S = second: \"P3Y2M5DT11H\" = 3 years, 2 months, 5 days, 11 hours (ISO 8601 requires the 'T' before time components). 'M' can be preceded by 'T' to resolve ambiguity between months and minutes: \"PT1M3S\" = 1 minute, 3 seconds, whereas \"P1M3S\" = 1 month, 3 seconds.
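A minimal parser for well-formed ISO 8601 durations, showing how the 'T' disambiguates months from minutes (a sketch only; production code should use a full ISO 8601 library):

```python
import re

def parse_dur(dur):
    # Match P[nY][nM][nW][nD][T[nH][nM][nS]]; 'M' before 'T' is months,
    # 'M' after 'T' is minutes.
    m = re.fullmatch(
        r'P(?:(\d+)Y)?(?:(\d+)M)?(?:(\d+)W)?(?:(\d+)D)?'
        r'(?:T(?:(\d+)H)?(?:(\d+)M)?(?:(\d+)S)?)?', dur)
    y, mo, w, d, h, mi, s = (int(g) if g else 0 for g in m.groups())
    return {'Y': y, 'M': mo, 'W': w, 'D': d, 'H': h, 'Min': mi, 'S': s}

# PT1M3S: the 'T' makes the 'M' mean minutes.
assert parse_dur('PT1M3S') == {'Y': 0, 'M': 0, 'W': 0, 'D': 0, 'H': 0, 'Min': 1, 'S': 3}
assert parse_dur('P3Y2M5D')['D'] == 5
```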

    "},{"location":"concepts/0074-didcomm-best-practices/#_when","title":"_when","text":"

    For vague or imprecise dates and date ranges. Fragments of ISO 8601 are preferred, as in \"1939-12\" for \"December 1939\". The token \"to\" is reserved for inclusive ranges, and the token \"circa\" is reserved to make fuzziness explicit, with \"CE\" and \"BCE\" also reserved. Thus, Cleopatra's birth_when might be \"circa 30 BCE\", and the timing of the Industrial Revolution might have a happened_when of \"circa 1760 to 1840\".

    "},{"location":"concepts/0074-didcomm-best-practices/#timezone-offset-notation","title":"Timezone Offset Notation","text":"

    Most timestamping can and should be done in UTC, and should use the \"Z\" suffix to make the Zero/Zulu/UTC timezone explicit.

    However, sometimes the local time and the UTC time for an event are both of interest. This is common with news events that are tied to a geo, as with the time that an earthquake is felt at its epicenter. When this is the case, rather than use two fields, it is recommended to use timezone offset notation (the \"+08:00\" in \"2018-05-27T18:22+08:00\"). Except for the \"Z\" suffix of UTC, timezone name notation is deprecated, because timezones can change their definitions according to the whim of local lawmakers, and because resolving the names requires expensive dictionary lookup. Note that this convention is exactly how ISO 8601 handles the timezone issue.

    "},{"location":"concepts/0074-didcomm-best-practices/#blobs","title":"Blobs","text":"

    In general, blobs are encoded as base64url strings in DIDComm.
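For example, with Python's standard library (whether trailing padding is retained is up to the specific message definition):

```python
import base64

# Encode an arbitrary binary blob as a base64url string; the
# url-safe alphabet uses '-' and '_' instead of '+' and '/'.
blob = bytes([0xfb, 0xef, 0xff])
encoded = base64.urlsafe_b64encode(blob).decode('ascii')

assert encoded == '--__'
assert base64.urlsafe_b64decode(encoded) == blob  # round-trips
```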

    "},{"location":"concepts/0074-didcomm-best-practices/#unicode","title":"Unicode","text":"

    UTF-8 is our standard way to represent unicode strings in JSON and all other contexts. For casual definition, this is sufficient detail.

    For advanced use cases, it may be necessary to understand subtleties like Unicode normalization forms and canonical equivalence. We generally assume that we can compare strings for equality and sort order using a simple binary algorithm. This is approximately but (in some corner cases) not exactly the same as assuming that text is in NFC normalization form with no case folding expectations and no extraneous surrogate pairs. Where more precision is required, the definition of DIDComm message fields should provide it.
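A concrete illustration of the corner case above: a composed and a decomposed form of the same visible character compare unequal byte-for-byte, but equal after NFC normalization:

```python
import unicodedata

# U+00E9 (e-acute, composed) vs 'e' + U+0301 (combining acute, decomposed).
composed = '\u00e9'
decomposed = 'e\u0301'

assert composed != decomposed  # simple binary comparison differs
assert unicodedata.normalize('NFC', decomposed) == composed
```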

    "},{"location":"concepts/0074-didcomm-best-practices/#hyperlinks","title":"Hyperlinks","text":"

    This repo is designed to be browsed as HTML. Browsing can be done directly through github, but we may publish the content using Github Pages and/or ReadTheDocs. As a result, some hyperlink hygiene is observed to make the content as useful as possible:

    These rules are enforced by a unit test that runs code/check_links.py. To run it, go to the root of the repo and run pytest code -- or simply invoke the check_links script directly. Normally, check_links does not test external hyperlinks on the web, because it is too time-consuming; if you want that check, add --full as a command-line argument.

    "},{"location":"concepts/0074-didcomm-best-practices/#security-considerations","title":"Security Considerations","text":""},{"location":"concepts/0074-didcomm-best-practices/#replay-attacks","title":"Replay attacks","text":"

    It should be noted that when defining a protocol that has domain specific requirements around preventing replay attacks, an @id property SHOULD be required. Because the @id field is most commonly set to a UUID, it usually provides the same randomness that a nonce would in preventing replay attacks. Processing of the @id field therefore needs sufficient care to ensure the value hasn't been used before. In some cases, nonces must also be unpredictable; in those cases, greater review should be given to how the @id field is used in the domain specific protocol. Where the @id field is not adequate, it's recommended that the domain specific protocol specification require an additional nonce field.
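A sketch of the @id deduplication described above (the function name and in-memory set are illustrative; real deployments need durable storage and an eviction policy):

```python
import uuid

seen_ids = set()

def accept_message(msg):
    # Reject any message whose @id has been processed before.
    msg_id = msg['@id']
    if msg_id in seen_ids:
        return False  # replay: this @id was already seen
    seen_ids.add(msg_id)
    return True

msg = {'@id': str(uuid.uuid4()), '@type': 'https://didcomm.org/example/1.0/ping'}
assert accept_message(msg) is True    # first delivery accepted
assert accept_message(msg) is False   # second delivery rejected
```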

    "},{"location":"concepts/0074-didcomm-best-practices/#reference","title":"Reference","text":""},{"location":"concepts/0074-didcomm-best-practices/#drawbacks","title":"Drawbacks","text":"

    The main concern with this type of RFC is that it will produce more heat than light -- that is, that developers will debate minutiae instead of getting stuff done. We hope that the conventions here feel reasonable and lightweight enough to avoid that.

    "},{"location":"concepts/0074-didcomm-best-practices/#rationale-and-alternatives","title":"Rationale and alternatives","text":""},{"location":"concepts/0074-didcomm-best-practices/#unresolved-questions","title":"Unresolved questions","text":""},{"location":"concepts/0074-didcomm-best-practices/#implementations","title":"Implementations","text":"

    The following lists the implementations (if any) of this RFC. Please do a pull request to add your implementation. If the implementation is open source, include a link to the repo or to the implementation within the repo. Please be consistent in the \"Name\" field so that a mechanical processing of the RFCs can generate a list of all RFCs supported by an Aries implementation.

    Name / Link Implementation Notes"},{"location":"concepts/0094-cross-domain-messaging/","title":"Aries RFC 0094: Cross-Domain Messaging","text":""},{"location":"concepts/0094-cross-domain-messaging/#summary","title":"Summary","text":"

    There are two layers of messages that combine to enable interoperable self-sovereign identity DIDcomm (formerly called Agent-to-Agent) communication. At the highest level are Agent Messages - messages sent between Identities to accomplish some shared goal. For example, establishing a connection between identities, issuing a Verifiable Credential from an Issuer to a Holder or even the simple delivery of a text Instant Message from one person to another. Agent Messages are delivered via the second, lower layer of messaging - encryption envelopes. An encryption envelope is a wrapper (envelope) around an Agent Message to enable the secure delivery of a message from one Agent directly to another Agent. An Agent Message going from its Sender to its Receiver may be passed through a number of Agents, and an encryption envelope is used for each hop of the journey.

    This RFC addresses Cross Domain messaging to enable interoperability. This is one of a series of related RFCs that address interoperability, including DIDDoc Conventions, Agent Messages and Encryption Envelope. Those RFCs should be considered together in understanding DIDcomm messaging.

    In order to send a message from one Identity to another, the sending Identity must know something about the Receiver's domain - the Receiver's configuration of Agents. This RFC outlines how a domain MUST present itself to enable the Sender to know enough to be able to send a message to an Agent in the domain. In support of that, a DIDcomm protocol (currently consisting of just one Message Type) is introduced to route messages through a network of Agents in both the Sender and Receiver's domain. This RFC provides the specification of the \"Forward\" Agent Message Type - an envelope that indicates the destination of a message without revealing anything about the message.

    The goal of this RFC is to define the rules that domains MUST follow to enable the delivery of Agent messages from a Sending Agent to a Receiver Agent in a secure and privacy-preserving manner.

    "},{"location":"concepts/0094-cross-domain-messaging/#motivation","title":"Motivation","text":"

    The purpose of this RFC and its related RFCs is to define a layered messaging protocol such that we can ignore the delivery of messages as we discuss the much richer Agent Messaging types and interactions. That is, we can assume that there is no need to include in an Agent message anything about how to route the message to the Receiver - it just magically happens. Alice (via her App Agent) sends a message to Bob, and (because of implementations based on this series of RFCs) we can ignore how the actual message got to Bob's App Agent.

    Put another way - these RFCs are about envelopes. They define a way to put a message - any message - into an envelope, put it into an outbound mailbox and have it magically appear in the Receiver's inbound mailbox in a secure and privacy-preserving manner. Once we have that, we can focus on letters and not how letters are sent.

    Most importantly for Agent to Agent interoperability, this RFC clearly defines the assumptions necessary to deliver a message from one domain to another - e.g. what exactly does Alice have to know about Bob's domain to send Bob a message?

    "},{"location":"concepts/0094-cross-domain-messaging/#tutorial","title":"Tutorial","text":""},{"location":"concepts/0094-cross-domain-messaging/#core-messaging-goals","title":"Core Messaging Goals","text":"

    These are vital design goals for this RFC:

    1. Sender Encapsulation: We SHOULD minimize what the Receiver has to know about the domain (routing tree or agent infrastructure) of the Sender in order for them to communicate.
    2. Receiver Encapsulation: We SHOULD minimize what the Sender has to know about the domain (routing tree or agent infrastructure) of the Receiver in order for them to communicate.
    3. Independent Keys: Private signing keys SHOULD NOT be shared between agents; each agent SHOULD be separately identifiable for accounting and authorization/revocation purposes.
    4. Need To Know Information Sharing: Information made available to intermediary agents between the Sender and Receiver SHOULD be minimized to what is needed to perform the agent's role in the process.
    "},{"location":"concepts/0094-cross-domain-messaging/#assumptions","title":"Assumptions","text":"

    The following are assumptions upon which this RFC is predicated.

    "},{"location":"concepts/0094-cross-domain-messaging/#terminology","title":"Terminology","text":"

    The following terms are used in this RFC with the following meanings:

    "},{"location":"concepts/0094-cross-domain-messaging/#diddoc","title":"DIDDoc","text":"

    The term \"DIDDoc\" is used in this RFC as it is defined in the DID Specification:

    A DID can be resolved to get its corresponding DIDDoc by any Agent that needs access to the DIDDoc. This is true whether talking about a DID on a Public Ledger, or a pairwise DID (using the did:peer method) persisted only to the parties of the relationship. In the case of pairwise DIDs, it's the (implementation specific) domain's responsibility to ensure such resolution is available to all Agents requiring it within the domain.

    "},{"location":"concepts/0094-cross-domain-messaging/#messages-are-private","title":"Messages are Private","text":"

    Agent Messages sent from a Sender to a Receiver SHOULD be private. That is, the Sender SHOULD encrypt the message with a public key for the Receiver. Any agent in between the Sender and Receiver will know only to whom the message is intended (by DID and possibly keyname within the DID), not anything about the message.

    "},{"location":"concepts/0094-cross-domain-messaging/#the-sender-knows-the-receiver","title":"The Sender Knows The Receiver","text":"

    This RFC assumes that the Sender knows the Receiver's DID and, within the DIDDoc for that DID, the keyname to use for the Receiver's Agent. How the Sender knows the DID and keyname to send the message is not defined within this RFC - that is a higher level concern.

    The Receiver's DID MAY be a public or pairwise DID, and MAY be on a Public Ledger or only shared between the parties of the relationship.

    "},{"location":"concepts/0094-cross-domain-messaging/#example-domain-and-diddoc","title":"Example: Domain and DIDDoc","text":"

    The following is an example of an arbitrary pair of domains that will be helpful in defining the requirements in this RFC.

    In the diagram above:

    "},{"location":"concepts/0094-cross-domain-messaging/#bobs-did-for-his-relationship-with-alice","title":"Bob's DID for his Relationship with Alice","text":"

    Bob\u2019s domain has 3 devices he uses for processing messages - two phones (4 and 5) and a cloud-based agent (6). However, in Bob's relationship with Alice, he ONLY uses one phone (4) and the cloud-based agent (6). Thus the key for device 5 is left out of the DIDDoc (see below).

    Note that the keyname for the Routing Agent (3) is called \"routing\". This is an example of the kind of convention needed to allow the Sender's agents to know the keys for Agents with a designated role in the receiving domain - as defined in the DIDDoc Conventions RFC.

    {\n  \"@context\": \"https://w3id.org/did/v1\",\n  \"id\": \"did:sov:1234abcd\",\n  \"publicKey\": [\n    {\"id\": \"routing\", \"type\": \"RsaVerificationKey2018\",  \"owner\": \"did:sov:1234abcd\",\"publicKeyPem\": \"-----BEGIN PUBLIC X\u2026\"},\n    {\"id\": \"4\", \"type\": \"RsaVerificationKey2018\",  \"owner\": \"did:sov:1234abcd\",\"publicKeyPem\": \"-----BEGIN PUBLIC 9\u2026\"},\n    {\"id\": \"6\", \"type\": \"RsaVerificationKey2018\",  \"owner\": \"did:sov:1234abcd\",\"publicKeyPem\": \"-----BEGIN PUBLIC A\u2026\"}\n  ],\n  \"authentication\": [\n    {\"type\": \"RsaSignatureAuthentication2018\", \"publicKey\": \"did:sov:1234abcd#4\"}\n  ],\n  \"service\": [\n    {\n      \"id\": \"did:example:123456789abcdefghi;did-communication\",\n      \"type\": \"did-communication\",\n      \"priority\" : 0,\n      \"recipientKeys\" : [ \"did:example:1234abcd#4\" ],\n      \"routingKeys\" : [ \"did:example:1234abcd#3\" ],\n      \"serviceEndpoint\" : \"did:example:xd45fr567794lrzti67;did-communication\"\n    }\n  ]\n}\n

    For the purposes of this discussion we are defining the message flow to be:

    1 \u2192 2 \u2192 8 \u2192 9 \u2192 3 \u2192 4

    However, that flow is arbitrary and only one hop is actually required:

    "},{"location":"concepts/0094-cross-domain-messaging/#encryption-envelopes","title":"Encryption Envelopes","text":"

    An encryption envelope is used to transport any Agent Message from one Agent directly to another. In our example message flow above, there are five encryption envelopes sent, one for each hop in the flow. The separate Encryption Envelope RFC covers those details.

    "},{"location":"concepts/0094-cross-domain-messaging/#agent-message-format","title":"Agent Message Format","text":"

    An Agent Message defines the format of messages processed by Agents. Details about the general form of Agent Messages can be found in the Agent Messages RFC.

    This RFC specifies (below) the \"Forward\" message type, a part of the \"Routing\" family of Agent Messages.

    "},{"location":"concepts/0094-cross-domain-messaging/#did-diddoc-and-routing","title":"DID, DIDDoc and Routing","text":"

    A DID owned by the Receiver is resolvable by the Sender as a DIDDoc using either a Public Ledger or using pairwise DIDs based on the did:peer method. The related DIDcomm DIDDoc Conventions RFC defines the required contents of a DIDDoc created by the receiving entity. Notably, the DIDDoc given to the Sender by the Receiver specifies the required routing of the message through an optional set of mediators.

    "},{"location":"concepts/0094-cross-domain-messaging/#cross-domain-interoperability","title":"Cross Domain Interoperability","text":"

    A key goal for interoperability is that we want other domains to know just enough about the configuration of a domain to which they are delivering a message, but no more. The following walks through those minimum requirements.

    "},{"location":"concepts/0094-cross-domain-messaging/#required-the-did-and-diddoc","title":"Required: The DID and DIDDoc","text":"

    As noted above, the Sender of an Agent to Agent Message has the DID of the Receiver, and knows the key(s) from the DIDDoc to use for the Receiver's Agent(s).

    Example: Alice wants to send a message from her phone (1) to Bob's phone (4). She has Bob's B:did@A:B, the DID/DIDDoc Bob created and gave to Alice to use for their relationship. Alice created A:did@A:B and gave that to Bob, but we don't need to use that in this example. The content of the DIDDoc for B:did@A:B is presented above.

    "},{"location":"concepts/0094-cross-domain-messaging/#required-end-to-end-encryption-of-the-agent-message","title":"Required: End-to-End encryption of the Agent Message","text":"

    The Agent Message from the Sender SHOULD be hidden from all Agents other than the Receiver. Thus, it SHOULD be encrypted with the public key of the Receiver. Based on our assumptions, the Sender can get the public key of the Receiver agent because they know the DID#keyname string, can resolve the DID to the DIDDoc and find the public key associated with DID#keyname in the DIDDoc. In our example above, that is the key associated with \"did:sov:1234abcd#4\".

    Most Sender-to-Receiver messages will be sent between parties that have shared pairwise DIDs (using the did:peer method). When that is true, the Sender will (usually) AuthCrypt the message. If that is not the case, or for some other reason the Sender does not want to AuthCrypt the message, AnonCrypt will be used. In either case, the Indy-SDK pack() function handles the encryption.

    If there are mediators specified in the DID service endpoint for the Receiver agent, the Sender must wrap the message for the Receiver in a 'Forward' message for each mediator. It is assumed that the Receiver can determine the from did based on the to DID (or the sender's verkey) using their pairwise relationship.

    {\n  \"@type\" : \"https://didcomm.org/routing/1.0/forward\",\n  \"@id\": \"54ad1a63-29bd-4a59-abed-1c5b1026e6fd\",\n  \"to\"   : \"did:sov:1234abcd#4\",\n  \"msg\"  : { json object from <pack(AgentMessage,valueOf(did:sov:1234abcd#4), privKey(A.did@A:B#1))> }\n}\n

    Notes

    The bullet above about the unpack() function returning the signer's public key deserves some additional attention. The Receiver of the message knows from the \"to\" field the DID to which the message was sent. From that, the Receiver is expected to be able to determine the DID of the Sender, and from that, access the Sender's DIDDoc. However, knowing the DIDDoc is not enough to know from whom the message was sent - which key was used to send the message, and hence, which Agent controls the Sending private key. This information MUST be made known to the Receiver (from unpack()) when AuthCrypt is used so that the Receiver knows which key was used to send the message and can, for example, use that key in responding to the arriving Message.

    The Sender can now send the Forward Agent Message on its way via the first of the encryption envelopes. In our example, the Sender sends the Agent Message to 2 (in the Sender's domain), who in turn sends it to 8. That, of course, is arbitrary - the Sender's Domain could have any configuration of Agents for outbound messages. The Agent Message above is passed unchanged, with each Agent able to see the @type, to and msg fields as described above. This continues until the outer forward message gets to the Receiver's first mediator or the Receiver's agent (if there are no mediators). Each agent decrypts the received encrypted envelope and either forwards it (if a mediator) or processes it (if the Receiver Agent). Per the Encryption Envelope RFC, between Agents the Agent Message is pack()'d and unpack()'d as appropriate or required.

    The diagram below shows an example use of the forward messages to encrypt the message all the way to the Receiver with two mediators in between - a shared domain endpoint (aka https://agents-r-us.com) and a routing agent owned by the receiving entity.

    "},{"location":"concepts/0094-cross-domain-messaging/#required-cross-domain-encryption","title":"Required: Cross Domain Encryption","text":"

While within a domain the Agents MAY choose whether or not to use encryption when sending messages from Agent to Agent, encryption MUST be used when sending a message into the Receiver's domain. The endpoint agent unpack()'s the encryption envelope and processes the message - usually a forward. Note that within a domain, the agents may use arbitrary relays for messages, unknown to the sender. How the agents within the domain know where to send the message is implementation specific - likely some sort of dynamic DID-to-Agent routing table. If the path to the receiving agent includes mediators, the message must go through those mediators in order (through Agent 3 in our example), as the message being forwarded has been encrypted for those mediators.

    "},{"location":"concepts/0094-cross-domain-messaging/#required-mediators-process-forward-messages","title":"Required: Mediators Process Forward Messages","text":"

When a mediator (eventually) receives the message, it determines that it is the target of the (current) outer forward Agent Message, and so decrypts the message's msg value to reveal the inner \"Forward\" message. Mediators use their (implementation specific) knowledge to map from the to field to the physical endpoint of the next agent that will process the message on its way to the Receiver.
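A mediator's handling of a forward - mapping the to field to a next-hop endpoint without touching the inner payload - might be sketched like this. The routing table and endpoint URL are hypothetical; real mediators maintain this mapping in an implementation-specific way.

```python
# Hypothetical routing table mapping a "to" key to the next hop's endpoint.
ROUTING_TABLE = {
    "did:sov:1234abcd#4": "https://agents-r-us.com/inbound",
}

def handle_forward(forward_msg):
    """Mediator logic: confirm the message is a forward, look up the next
    hop for the 'to' field, and return (endpoint, still-encrypted payload).
    The mediator never decrypts the inner msg; it only routes it onward."""
    assert forward_msg["@type"].endswith("/routing/1.0/forward")
    endpoint = ROUTING_TABLE[forward_msg["to"]]
    return endpoint, forward_msg["msg"]

sample = {
    "@type": "https://didcomm.org/routing/1.0/forward",
    "@id": "54ad1a63-29bd-4a59-abed-1c5b1026e6fd",
    "to": "did:sov:1234abcd#4",
    "msg": {"protected": "...opaque encrypted envelope..."},
}
endpoint, payload = handle_forward(sample)
```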

    "},{"location":"concepts/0094-cross-domain-messaging/#required-the-receiver-app-agent-decryptsprocesses-the-agent-message","title":"Required: The Receiver App Agent Decrypts/Processes the Agent Message","text":"

    When the Receiver Agent receives the message, it determines it is the target of the forward message, decrypts the payload and processes the message.

    "},{"location":"concepts/0094-cross-domain-messaging/#exposed-data","title":"Exposed Data","text":"

    The following summarizes the information needed by the Sender's agents:

    The DIDDoc will have a public key entry for each additional Agent message Receiver and each mediator.

    In many cases, the entry for the endpoint agent should be a public DID, as it will likely be operated by an agency (for example, https://agents-r-us.com) rather than by the Receiver entity (for example, a person). By making that a public DID in that case, the agency can rotate its public key(s) for receiving messages in a single operation, rather than having to notify each identity owner and in turn having them update the public key in every pairwise DID that uses that endpoint.

    "},{"location":"concepts/0094-cross-domain-messaging/#data-not-exposed","title":"Data Not Exposed","text":"

    Given the sequence specified above, the following data is NOT exposed to the Sender's agents:

    "},{"location":"concepts/0094-cross-domain-messaging/#message-types","title":"Message Types","text":"

    The following Message Types are defined in this RFC.

    "},{"location":"concepts/0094-cross-domain-messaging/#corerouting10forward","title":"Core:Routing:1.0:Forward","text":"

    The core message type \"forward\", version 1.0 of the \"routing\" family is defined in this RFC. An example of the message is the following:

    {\n  \"@type\" : \"https://didcomm.org/routing/1.0/forward\",\n  \"@id\": \"54ad1a63-29bd-4a59-abed-1c5b1026e6fd\",\n  \"to\"   : \"did:sov:1234abcd#4\",\n  \"msg\"  : { json object from <pack(AgentMessage,valueOf(did:sov:1234abcd#4), privKey(A.did@A:B#1))> }\n}\n

    The to field is required and takes one of two forms:

    The first form is used when sending forward messages across one or more agents that do not need to know the details of a domain. The Receiver of the message is the designated Routing Agent in the Receiver Domain, as it controls the key used to decrypt messages sent to the domain, but not to a specific Agent.

    The second form is used when the precise key (and hence, the Agent controlling that key) is used to encrypt the Agent Message placed in the msg field.

The msg field holds the output of the Indy-SDK pack() function, used to encrypt the Agent Message being forwarded. The Sender calls pack() with the arguments suitable for AnonCrypt or AuthCrypt. The pack() and unpack() functions are described in more detail in the Encryption Envelope RFC.

    "},{"location":"concepts/0094-cross-domain-messaging/#reference","title":"Reference","text":"

    See the other RFCs referenced in this document:

    "},{"location":"concepts/0094-cross-domain-messaging/#drawbacks","title":"Drawbacks","text":"

    N/A

    "},{"location":"concepts/0094-cross-domain-messaging/#rationale-and-alternatives","title":"Rationale and alternatives","text":"

    A number of discussions were held about this RFC. In those discussions, the rationale for the RFC evolved into the text, and the alternatives were eliminated. See prior versions of the superseded HIPE (in status section, above) for details.

    A suggestion was made that the following optional parameters could be defined in the \"routing/1.0/forward\" message type:

    The optional parameters have been left off for now, but could be added in this RFC or to a later version of the message type.

    "},{"location":"concepts/0094-cross-domain-messaging/#prior-art","title":"Prior art","text":"

    N/A

    "},{"location":"concepts/0094-cross-domain-messaging/#unresolved-questions","title":"Unresolved questions","text":"

    N/A

    "},{"location":"concepts/0094-cross-domain-messaging/#implementations","title":"Implementations","text":"

    The following lists the implementations (if any) of this RFC. Please do a pull request to add your implementation. If the implementation is open source, include a link to the repo or to the implementation within the repo. Please be consistent in the \"Name\" field so that a mechanical processing of the RFCs can generate a list of all RFCs supported by an Aries implementation.

    Name / Link Implementation Notes"},{"location":"concepts/0103-indirect-identity-control/","title":"Aries RFC 0103: Indirect Identity Control","text":""},{"location":"concepts/0103-indirect-identity-control/#summary","title":"Summary","text":"

    Compares and contrasts three forms of indirect identity control that have much in common and that should be explored together: delegation, guardianship, and controllership. Recommends mechanisms that allow identity technology to model each with flexibility, precision, and safety. These recommendations can be applied to many decentralized identity and credentialing ecosystems--not just to the ones best known in Hyperledger circles.

    "},{"location":"concepts/0103-indirect-identity-control/#motivation","title":"Motivation","text":"

    In most situations, we expect identity owners to directly control their own identities. This is the ideal that gives \"self-sovereign identity\" its name. However, control is not so simple in many situations:

    We need to understand how such situations color the interactions we have in an identity ecosystem.

    "},{"location":"concepts/0103-indirect-identity-control/#tutorial","title":"Tutorial","text":"

    Although the Sovrin Foundation advocates a specific approach to verifiable credentials, its glossary offers a useful analysis of indirect identity control that applies to any approach. Appendix C of the Sovrin Glossary V2 defines three forms of indirect identity control relationship--delegation, guardianship, controllership--matching the three bulleted examples above. Reviewing that document is highly recommended. It is the product of careful collaboration by experts in many fields, includes useful examples, and is clear and thorough.

    Here, we will simply reproduce two diagrams as a summary:

    Note: The type of delegation described in Appendix C, and the type we focus on in this doc, is one that crosses identity boundaries. There is another type that happens within an identity, as Alice delegates work to her various agents. For the time being, ignore this intra-identity delegation; it is explored more carefully near the end of the Delegation Details doc.

    "},{"location":"concepts/0103-indirect-identity-control/#commonalities","title":"Commonalities","text":"

    All of these forms of identity control share the issue of indirectness. All of them introduce risks beyond the ones that dominate in direct identity management. All of them complicate information flows and behavior. And they are inter-related; guardians and controllers often need to delegate, delegates may become controllers, and so forth.

    The solutions for each ought to have much in common, too--and that is the case. These forms of indirect identity control use similarly structured credentials in similar ways, in the context of similarly structured trust frameworks. Understanding and implementing support for one of them should give developers and organizations a massive headstart in implementing the others.

    Before we provide details about solutions, let's explore what's common and unique about each of the three forms of indirect identity control.

    "},{"location":"concepts/0103-indirect-identity-control/#compare-and-contrast","title":"Compare and Contrast","text":""},{"location":"concepts/0103-indirect-identity-control/#delegation","title":"Delegation","text":"

    Delegation can be either transparent or opaque, depending on whether it's obvious to an external party that a delegate is involved. A lawyer that files a court motion in their own name, but on behalf of a client, is a transparent delegate. A nurse who transcribes a doctor's oral instructions may be performing record-keeping as an opaque delegate, if the nurse is unnamed in the record.

    Transparent delegation is safer and provides a better audit trail than opaque delegation. It is closer to the ethos of self-sovereign identity. However, opaque delegation is a fact of life; sometimes a CEO wants her personal assistant to send a note or meeting invitation in a way that impersonates rather than explicitly representing her.

    Delegation needs constraints. These can take many forms, such as:

    "},{"location":"concepts/0103-indirect-identity-control/#constraints","title":"Constraints","text":"

    Delegation needs to be revokable.

    Delegates should not mix identity data for themselves with data that may belong to the delegator.

    The rules of how delegation work need to be spelled out in a trust framework.

    Sometimes, the indirect authority of a delegate should be recursively extensible (allow sub-delegation). Other times, this may be inappropriate.

    Use cases and other specifics of delegation are explored in greater depth in the Delegation Details doc.

    "},{"location":"concepts/0103-indirect-identity-control/#guardianship","title":"Guardianship","text":"

    Guardianship has all the bolded properties of delegation: transparent or opaque styles, constraints, revocation, the need to not mix identity data, the need for a trust framework, and the potential for recursive extensibility. It also adds some unique considerations.

    Since guardianship does not always derive from dependent consent (that is, the dependent is often unable to exercise sovereignty), the dependent in a guardianship relationship is particularly vulnerable to abuse from within.

    Because of this risk, guardianship is the most likely of the three forms of indirect control to require an audit trail and to involve legal formalities. Its trust frameworks are typically the most nuanced and complex.

    Guardianship is also the form of indirect identity control with the most complications related to privacy.

    Guardianship must have a rationale -- a justification that explains why the guardian has that status. Not all rationales are equally strong; a child lacking an obvious parent may receive a temporary guardian, but this guardian's status could change if a parent is found. Having a formal rationale allows conflicting guardianship claims to be adjudicated.

    Either the guardian role or specific guardianship duties may be delegated. An example of the former is when a parent leaves on a long, dangerous trip, and appoints a grandparent to be guardian in their absence. An example of the latter is when a parent asks a grandparent to drive a child to the school to sign up for the soccer team. When the guardian role is delegated, the result is a new guardian. When only guardianship duties are delegated, this is simple delegation and ceases to be guardianship.

    Use cases and other specifics of guardianship are explored in greater depth in the Guardianship Details doc.

    "},{"location":"concepts/0103-indirect-identity-control/#controllership","title":"Controllership","text":"

Controllership shares nearly all bolded features with delegation. It is usually transparent, because things are generally known not to be identity owners in their interactions, and things are assumed not to control themselves.

    Like guardianship, controllership has a rationale. Usually, it is rooted in property ownership, but occasionally it might derive from court appointment. Also like guardianship, either the role or specific duties of controllership may be delegated. When controllership involves animals instead of machines, it may have risks of abuse and complex protections and trust frameworks.

    Unlike guardianship, controlled things usually require minimal privacy. However, things that constantly identify their controller(s) in a correlatable fashion may undermine the privacy of controllers in ways that are unexpected.

    Use cases and other specifics of controllership are explored in greater depth in the Controllership Details doc.

    "},{"location":"concepts/0103-indirect-identity-control/#solution","title":"Solution","text":"

    We recommend that all three forms of indirect identity control be modeled with some common ingredients:

    Here, \"proxy\" is used as a generic cover term for all three forms of indirect identity control. Each ingredient has a variant for each form (e.g., delegate credential, guardian credential, controller credential), and they have minor differences. However, they work so similarly that they'll be described generically, with differences noted where necessary.

    "},{"location":"concepts/0103-indirect-identity-control/#proxy-trust-framework","title":"Proxy Trust Framework","text":"

A proxy trust framework is a published, versioned document (or collection of documents) that's accessible by URI. Writing one doesn't have to be a massive undertaking; see the sample guardianship trust framework for a simple example.

    It should answer at least the following questions:

    1. What is the trust framework's formal name, version, and URI? (The name cannot include a / character due to how it's paired with version in credential type fields. The version must follow semver rules.)

    2. In what geos and legal jurisdictions is it valid?

    3. On what rationales are proxies appointed? (For guardianship, these might include values like kinship and court_order. Each rationale needs to be formally defined, named, and published at a URI, because proxy credentials will reference them. This question is mostly irrelevant to delegation, where the rationale is always an action of the delegator.)

    4. What are the required and recommended behaviors of a proxy (holder), issuer, and verifier? How will this be enforced?

    5. What permissions vis-a-vis the proxied identity govern proxy actions? (For a delegate, these might include values like sign, pay, or arrange_travel. For a guardian, these might include values like financial, medical, do_not_resuscitate, foreign_travel, or new_relationships. Like rationales, permissions need to be formally defined and referencable by URI.)

    6. What are possible constraints on a proxy? (Constraints are bound to particular proxies, whereas a permission model is bound to the identity that the proxy is controlling; this distinction will make more sense in an example. Some constraints might include geo_radius, jurisdiction, biometric_consent_freshness, and so forth. These values also need to be formally defined and referencable by URI.)

    7. What auditing mechanisms are required, recommended, or allowed?

    8. What appeal mechanisms are required or supported?

    9. What proxy challenge procedures are best practice?

    10. What freshness rules are used for revocation testing and offline mode?

    "},{"location":"concepts/0103-indirect-identity-control/#proxy-credential","title":"Proxy Credential","text":"

    A proxy credential conforms to the Verifiable Credential Data Model 1.0. It can use any style of proof or data format (JSON-LD, JWT, Sovrin ZKP, etc). It is recognizable as a proxy credential by the following characteristics:

1. Its @context field, besides including the \"https://www.w3.org/2018/credentials/v1\" required of all VCs, also includes a reference to this spec: \"https://github.com/hyperledger/aries-rfcs/concepts/0103-indirect-identity-control\".

    2. Its type field contains, in addition to \"VerifiableCredential\", a string in the format:

      ...where form is one of the letters D (for Delegation), G (for Guardianship), or C (for controllership), trust framework is the name that a Proxy Trust Framework formally declares for itself, tfver is its version, and variant is a specific schema named in the trust framework. A regex that matches this pattern is: Proxy\\.([DGC])/([^/]+)/(\\d+[^/]*)/(.+), and an example of a matching string is: Proxy.G/UNICEF Vulnerable Populations Trust Framework/1.0/ChildGuardian.
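The pattern and the example string above can be checked mechanically; a quick Python demonstration using the regex exactly as given in this RFC:

```python
import re

# Pattern from this RFC for the proxy-credential "type" entry:
#   Proxy.<form>/<trust framework>/<tfver>/<variant>
PROXY_TYPE_RE = re.compile(r"Proxy\.([DGC])/([^/]+)/(\d+[^/]*)/(.+)")

m = PROXY_TYPE_RE.fullmatch(
    "Proxy.G/UNICEF Vulnerable Populations Trust Framework/1.0/ChildGuardian")
form, framework, tfver, variant = m.groups()
# form == "G" (Guardianship), tfver == "1.0", variant == "ChildGuardian"
```

Note that the framework name may contain spaces but never a / character, which is what lets the /-delimited pattern parse unambiguously.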

    3. The metadata fields for the credential include trustFrameworkURI (the value of which is a URI linking to the relevant trust framework), auditURI (the value of which is a URI linking to a third-party auditing service, and which may be constrained or empty as specified in the trust framework), and appealURI (the value of which is a URI linking to an arbitration or adjudication authority for the credential, and which may be constrained or empty as specified in the trust framework).

    4. The credentialSubject section of the credential describes a subject called holder and a subject called proxied. The holder is the delegate, guardian, or controller; the proxied is the delegator, dependent, or controlled thing.

    5. credentialSubject.holder.type must be a URI pointing to a schema for credentialSubject.holder as defined in the trust framework. The schema must include the following fields:

      • role: A string naming the role that the holder plays in the permissioning scheme of the dependent. These roles must be formally defined in the trust framework. For example, a guardian credential might identify the holder (guardian) as playing the next_of_kin role, and this next_of_kin role might be granted a subset of all permissions that are possible for the dependent's identity. A controllership credential for a drone might identify the holder (controller) as playing the pilot role, which has different permissions from the maintenance_crew role.

      • rationaleURI: Required for guardian credentials, optional for the other types. This links to a formal definition in the trust framework of a justification for holding identity control status. For guardians, the rationaleURI might point to a definition of the blood_relative or tribal_member rationale, for example. For controllers, the rationaleURI might point to a definition of legal_appointment or property_owner.

      The schema may also include zero or more credentialSubject.holder.constraint.* fields. These fields would be used to limit the time, place, or circumstances in which the proxy may operate.

    6. credentialSubject.proxied.type must be a URI pointing to a schema for credentialSubject.proxied as defined in the trust framework. The schema must include a permissions field. This field contains an array of SGL rules, each of which is a JSON object in the form:

      {\"grant\": privileges, \"when\": condition}\n

      A complete example for a guardianship use case is provided in the SGL tutorial.
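As an illustration of how such rules might be applied, here is a minimal Python evaluator that handles only the simplest SGL condition form, {\"roles\": name}. Real SGL also supports compound conditions (any, all, n), which this sketch ignores, and the role names and privileges below are hypothetical examples, not values defined by this RFC.

```python
def privileges_for(role, rules):
    """Collect the privileges granted to `role` by a list of SGL-style
    rules of the form {"grant": [...], "when": {"roles": <role>}}.
    Compound SGL conditions (any/all/n) are deliberately not handled."""
    granted = set()
    for rule in rules:
        if rule.get("when", {}).get("roles") == role:
            granted.update(rule["grant"])
    return granted

# Hypothetical permissions array, as might appear in
# credentialSubject.proxied.permissions of a guardian credential:
rules = [
    {"grant": ["medical", "financial"], "when": {"roles": "next_of_kin"}},
    {"grant": ["medical"], "when": {"roles": "caregiver"}},
]
```

With these rules, a holder whose credentialSubject.holder.role is next_of_kin would be granted both medical and financial privileges, while a caregiver gets medical only.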

7. Depending on the proof technology, the credential MAY or MUST contain additional fields under credentialSubject.holder that describe the holder (e.g., the holder's name, DID, biometric, etc.). If the credential is based on ZKP/link secret technologies, these fields may be unnecessary, because the holder can bind their proxy credential to other credentials that prove who they are. If not, then the credential MUST contain such fields.

    8. The credential MUST contain additional fields under credentialSubject.proxied that describe the proxied identity (e.g., a dependent's name or biometric; a pet's RFID tag; a drone's serial number).

    "},{"location":"concepts/0103-indirect-identity-control/#proxy-challenge","title":"Proxy Challenge","text":"

    A proxy challenge is an interaction in which the proxy must justify the control they are exerting over the proxied identity. The heart of the challenge is a request for a verifiable presentation based on a proxy credential, followed by an evaluation of the evidence. This evaluation includes traditional credential verification, but also a comparison of a proxy's role (credentialSubject.holder.role) to permissions (credentialSubject.proxied.permissions), and a comparison of circumstances to constraints (credentialSubject.holder.constraints.*). It may also involve the creation of an audit trail, depending on the value of the auditURI field.
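The constraint-comparison step of a challenge might look like the following Python sketch. The constraint names used here (geo_radius_km, valid_until) are hypothetical illustrations; a real verifier would use the constraint definitions published by the governing trust framework.

```python
from datetime import datetime, timezone
from math import hypot

def constraints_satisfied(constraints, circumstances):
    """Sketch of the constraint half of a proxy challenge: compare the
    circumstances of use against each constraint bound to the proxy.
    Constraint names here are invented for illustration."""
    if "geo_radius_km" in constraints:
        c = constraints["geo_radius_km"]
        # naive planar distance; adequate for a toy check over small areas
        dist = hypot(circumstances["x_km"] - c["x_km"],
                     circumstances["y_km"] - c["y_km"])
        if dist > c["radius"]:
            return False
    if "valid_until" in constraints:
        if circumstances["now"] > datetime.fromisoformat(constraints["valid_until"]):
            return False
    return True

constraints = {
    "geo_radius_km": {"x_km": 0.0, "y_km": 0.0, "radius": 5.0},
    "valid_until": "2030-01-01T00:00:00+00:00",
}
now = datetime(2024, 6, 1, tzinfo=timezone.utc)
ok = constraints_satisfied(constraints, {"x_km": 3.0, "y_km": 0.0, "now": now})
```

A failed constraint check would cause the verifier to reject the proxy's action even when the credential itself verifies cryptographically.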

    During the verifiable presentation, the holder MUST disclose all of the following fields:

    In addition, the holder MUST prove that the proxy is the intended holder of the credential, to whatever standard is required by the trust framework. This can be done by disclosing additional fields under credentialSubject.holder, or by proving things about the holder in zero knowledge, if the credential supports ZKPs. In the latter case, proofs about the holder could also come from other credentials in the holder's possession, linked to the proxy credential through the link secret.

    The holder MUST also prove that the proxied identity is correct, to whatever standard is required by the trust framework. This can be done by disclosing additional fields under credentialSubject.proxied, or by proving things about the subject in zero knowledge.

    [TODO: discuss moments when proxy challenges may be vital; see https://docs.google.com/presentation/d/1aq45aUHTOK_WhFEICboXQrp7dalpLm9-MGg77Nsn50s/edit#slide=id.g59fffee7a0_0_39 ]

    [TODO: discuss offline mode, freshness, and revocation]

    "},{"location":"concepts/0103-indirect-identity-control/#reference","title":"Reference","text":"

    A complete sample of a guardianship trust framework and credential schema are attached for reference. Please also see the details about each form of indirect identity control:

    "},{"location":"concepts/0103-indirect-identity-control/#implementations","title":"Implementations","text":"

    The following lists the implementations (if any) of this RFC. Please do a pull request to add your implementation. If the implementation is open source, include a link to the repo or to the implementation within the repo. Please be consistent in the \"Name\" field so that a mechanical processing of the RFCs can generate a list of all RFCs supported by an Aries implementation.

    Name / Link Implementation Notes"},{"location":"concepts/0103-indirect-identity-control/controllership-details/","title":"Controllership Details","text":""},{"location":"concepts/0103-indirect-identity-control/delegation-details/","title":"Delegation Details","text":"

    Three basic approaches to delegation are possible:

    1. Delegate by expressing intent in a DID Doc.
    2. Delegate with verifiable credentials.
    3. Delegate by sharing a wallet.

    The alternative of delegating via the authorization section of a DID Doc (option #1) is unnecessarily fragile, cumbersome, redundant, and expensive to implement. The theory of delegation with DIDs and credentials has been explored thoughtfully in many places (see Prior Art and References). The emergent consensus is:

    "},{"location":"concepts/0103-indirect-identity-control/delegation-details/#use-cases","title":"Use Cases","text":"

    The following use cases are good tests of whether we're implementing delegation properly.

    "},{"location":"concepts/0103-indirect-identity-control/delegation-details/#1-thrift-bank-employees","title":"1. Thrift Bank Employees","text":"

Thrift Bank wishes to issue employee credentials to its employees, giving them delegated authority to perform certain actions on behalf of the bank (e.g., open their till, unlock the front door, etc.). Thrift has a DID, but wishes to grant credential-issuing authority to its Human Resources Department (which has a separate DID). In turn, the HR department wishes to further delegate this authority to the Personnel Division. Inside the Personnel Division, three employees, Cathy, Stan, and Janet, will ultimately be responsible for issuing the employee credentials.

    "},{"location":"concepts/0103-indirect-identity-control/delegation-details/#2-u-rent-a-car","title":"2. U-Rent-a-Car","text":"

    U-Rent-a-Car is a multinational company that owns a large fleet of vehicles. Its national headquarters issues a credential, C1, to its regional office in Quebec, authorizing U-Rent-a-Car Quebec to delegate driving privileges to customers, for cars owned by the parent company. Alice rents a car from U-Rent-a-Car Quebec. U-Rent-a-Car Quebec issues a driving privileges credential, C2, to Alice. C2 gives Alice the privilege to drive the car from Monday through Friday of a particular week. Alice climbs in the car and uses her C2 credential to prove to the car (which acts as verifier) that she is an authorized driver. She gets pulled over for speeding on Wednesday and uses C2 to prove to the police that she is the authorized driver of the car. On Thursday night Alice goes to a fancy restaurant. She uses valet parking. She issues credential C3 to the valet, allowing him to drive the car within 100 meters of the restaurant, for the next 2 hours while she is at the restaurant. The valet uses this credential to drive the car to the parking garage. While Alice eats, law enforcement goes to U-Rent-a-Car Quebec with a search warrant for the car. The law enforcement agency has discovered that the previous driver of the car was a criminal. It asks U-Rent-a-Car Quebec to revoke C2, because they don\u2019t want the car to be driven any more, in case evidence is accidentally destroyed. At the end of dinner, Alice goes to the valet and asks for her car to be returned. The valet goes to the car and attempts to open the door using C3. The car tests the validity of the delegation chain of C3, and discovers that C2 has been revoked, making C3 invalid. The car refuses to open the door. Alice has to take Uber to get home. Law enforcement officials take possession of the car.
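The car's final check in this story - validating the delegation chain of C3 - can be sketched as follows. The credential identifiers and the parent mapping are illustrative; a real verifier would follow the chain embedded in, or referenced by, the credentials themselves, and would check revocation against a registry rather than a local set.

```python
def chain_valid(cred_id, parent, revoked):
    """Walk a delegation chain from child toward its root and reject if
    any link has been revoked. `parent` maps each credential to the one
    that authorized it; the root of the chain maps to None."""
    while cred_id is not None:
        if cred_id in revoked:
            return False
        cred_id = parent[cred_id]
    return True

# C1 (HQ -> Quebec office) authorizes C2 (Quebec -> Alice),
# which authorizes C3 (Alice -> valet).
parent = {"C1": None, "C2": "C1", "C3": "C2"}

# After U-Rent-a-Car Quebec revokes C2, C3 fails transitively
# even though C3 itself was never revoked.
revoked = {"C2"}
```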

    "},{"location":"concepts/0103-indirect-identity-control/delegation-details/#3-acme-departments","title":"3. Acme Departments","text":"

    Acme wants its HR department to issue Acme Employment Credentials, its Accounting department to issue Purchase Orders and Letters of Credit, its Marketing department to officially sign press releases, and so forth. All of these departments should be provably associated with Acme and acting under Acme\u2019s name in an official capacity.

    "},{"location":"concepts/0103-indirect-identity-control/delegation-details/#4-members-of-an-llc","title":"4. Members of an LLC","text":"

Like #3, but simpler. Three or four people each need signing authority for the LLC, so the LLC delegates that authority to them.

    "},{"location":"concepts/0103-indirect-identity-control/delegation-details/#approaches-to-recursive-delegation","title":"Approaches to recursive delegation","text":"

TODO: 1. Root authority delegates directly at every level. 2. Follow the chain. 3. Embed the chain.

    "},{"location":"concepts/0103-indirect-identity-control/delegation-details/#revocation","title":"Revocation","text":"

    [TODO]

"},{"location":"concepts/0103-indirect-identity-control/delegation-details/#infra-identity-delegation","title":"Intra-identity Delegation","text":"

    TODO

    "},{"location":"concepts/0103-indirect-identity-control/delegation-details/#prior-art-and-references","title":"Prior Art and References","text":"

    All of the following sources have contributed valuable thinking about delegation:

    "},{"location":"concepts/0103-indirect-identity-control/guardianship-details/","title":"Guardianship Details","text":"

    For a complete walkthrough or demo of how guardianship works, see this demo script.

    "},{"location":"concepts/0103-indirect-identity-control/guardianship-details/#use-cases","title":"Use Cases","text":"

    See https://docs.google.com/presentation/d/1qUYQa7U1jczEFun3a7sB3lKHIprlwd7brfOU9hEJ34U/edit?usp=sharing

    "},{"location":"concepts/0103-indirect-identity-control/guardianship-details/#who-appoints-a-guardian-rationales","title":"Who appoints a guardian (rationales)","text":"

    See https://docs.google.com/presentation/d/1aq45aUHTOK_WhFEICboXQrp7dalpLm9-MGg77Nsn50s/edit#slide=id.g59fffee7a0_0_0

    "},{"location":"concepts/0103-indirect-identity-control/guardianship-details/#transparent-vs-opaque","title":"Transparent vs. Opaque","text":"

    See https://docs.google.com/presentation/d/1aq45aUHTOK_WhFEICboXQrp7dalpLm9-MGg77Nsn50s/edit#slide=id.g59fffee7a0_0_46

    "},{"location":"concepts/0103-indirect-identity-control/guardianship-details/#modes-of-guardianship","title":"Modes of Guardianship","text":"

    Holding-Based, Impersonation, Doc-based

    See https://docs.google.com/presentation/d/1aq45aUHTOK_WhFEICboXQrp7dalpLm9-MGg77Nsn50s/edit#slide=id.g59fffee7a0_0_265

    See also https://docs.google.com/presentation/d/1aq45aUHTOK_WhFEICboXQrp7dalpLm9-MGg77Nsn50s/edit#slide=id.g59fffee7a0_0_280, https://docs.google.com/presentation/d/1aq45aUHTOK_WhFEICboXQrp7dalpLm9-MGg77Nsn50s/edit#slide=id.g59fffee7a0_0_295, https://docs.google.com/presentation/d/1aq45aUHTOK_WhFEICboXQrp7dalpLm9-MGg77Nsn50s/edit#slide=id.g59fffee7a0_0_307

    "},{"location":"concepts/0103-indirect-identity-control/guardianship-details/#guardians-and-wallets","title":"Guardians and Wallets","text":"

    Need to work on \"wallets\" term See https://docs.google.com/presentation/d/1aq45aUHTOK_WhFEICboXQrp7dalpLm9-MGg77Nsn50s/edit#slide=id.g59fffee7a0_0_365

    "},{"location":"concepts/0103-indirect-identity-control/guardianship-details/#guardians-and-delegation","title":"Guardians and Delegation","text":"

    TODO

    "},{"location":"concepts/0103-indirect-identity-control/guardianship-details/#privacy-considerations","title":"Privacy Considerations","text":""},{"location":"concepts/0103-indirect-identity-control/guardianship-details/#diffuse-trust","title":"Diffuse Trust","text":""},{"location":"concepts/0103-indirect-identity-control/guardianship-sample/schema/","title":"Sample Guardianship Schema","text":"

    This document presents a sample schema for a guardian credential appropriate to the IRC-as-guardian-of-Mya-in-a-refugee-camp use case. It is accompanied by a sample trust framework.

    The raw schema is here:

    For general background on guardianship and its associated credentials, see this slide presentation.

    "},{"location":"concepts/0103-indirect-identity-control/guardianship-sample/schema/#how-to-use","title":"How to Use","text":"

    The schema documented here could be passed as the attrs arg to the indy_issuer_create_schema() method in libindy. The \"1.0\" in this document's name refers to the fact that we are using Indy 1.0-style schemas; we aren't trying to use the rich schema constructs that will be available to us when the \"schema 2.0\" effort is mature.

    The actual JSON you would need to pass to the indy_issuer_create_schema() method is given in the attached schema.json file. In code, if you place that file's content in a string variable and pass the variable as the attrs arg, the schema will be registered on the ledger. You might use values like \"Red Cross Vulnerable Populations Guardianship Cred\" and \"1.0\" as the name and version args to that same function. You can see an example of how to make the call by looking at the \"Save Schema and Credential Definition\" How-To in Indy SDK.

    See the accompanying trust framework for an explanation of individual fields.

    "},{"location":"concepts/0103-indirect-identity-control/guardianship-sample/trust-framework/","title":"Sample Guardianship Trust Framework","text":"

    This document describes a sample trust framework for guardianship appropriate to the IRC-as-guardian-of-Mya-in-a-refugee-camp use case. It is accompanied by a sample schema for a guardian credential.

    For general background on guardianship and its associated credentials, see this slide presentation.

    The trust framework shown here is a reasonable starting point, and it demonstrates the breadth of issues well. However, it probably would need significantly more depth to provide enough guidance for developers writing production software, and to be legally robust in many different jurisdictions.

    "},{"location":"concepts/0103-indirect-identity-control/guardianship-sample/trust-framework/#name-version-author","title":"Name, Version, Author","text":"

    This is the \"Sovrin ID4All Vulnerable Populations Guardianship Trust Framework\", version \"1.0\". The trust framework is abbreviated in credential names and elsewhere as \"SIVPGTF\". It is maintained by the Sovrin ID4All Working Group. Credentials using the schema described here are known as gcreds.

    "},{"location":"concepts/0103-indirect-identity-control/guardianship-sample/trust-framework/#scope","title":"Scope","text":"

The trust framework applies to situations where NGOs like the International Red Cross/Red Crescent, UNICEF, or Doctors Without Borders are servicing large populations of vulnerable refugees, both children and adults, in formal camps. It assumes that the camps have at least modest, intermittent access to telecommunications, and that they operate with at least tacit approval from relevant legal authorities. It may not provide enough guidance or protections in situations involving active combat, or in legal jurisdictions where rule of law is very tenuous.

    "},{"location":"concepts/0103-indirect-identity-control/guardianship-sample/trust-framework/#rationales-for-guardianship","title":"Rationales for Guardianship","text":"

    In this framework, guardianship is based on one or more of the following formally defined rationales:

    "},{"location":"concepts/0103-indirect-identity-control/guardianship-sample/trust-framework/#identifying-a-guardian","title":"Identifying a guardian","text":"

    This framework assumes that credentials will use ZKP technology. Thus, no holder attributes are embedded in a gcred except for the holder's blinded link secret. During a guardian challenge, the holder should include appropriate identifying evidence based on ZKP credential linking.

    "},{"location":"concepts/0103-indirect-identity-control/guardianship-sample/trust-framework/#identifying-a-dependent","title":"Identifying a dependent","text":"

    This framework defines the following formal ways to identify a dependent in a gcred:

These fields should appear in all gcreds. First name should be the name that the dependent acknowledges and answers to, not necessarily the legal first name. Last name may be empty if it is unknown. Birth date may be approximate. Photo is required and must be a color photo of at least 800x800 pixel resolution, taken at the time the guardian credential is issued, showing the dependent only, in good light. At least one of iris and fingerprint is strongly recommended, but neither is required.

    "},{"location":"concepts/0103-indirect-identity-control/guardianship-sample/trust-framework/#permissions","title":"Permissions","text":"

    Guardians may be assigned some or all of the following formally defined permissions in this trust framework:

    "},{"location":"concepts/0103-indirect-identity-control/guardianship-sample/trust-framework/#constraints","title":"Constraints","text":"

    A guardian's ability to control the dependent may be constrained in the following formal ways by guardian credentials that use this trust framework:

    "},{"location":"concepts/0103-indirect-identity-control/guardianship-sample/trust-framework/#boundary","title":"Boundary","text":"

A guardian can only operate within named boundaries, such as the boundaries of a country, province, city, military command, river, etc. Boundaries are specified as a localized, comma-separated list of strings, where each locale section begins with a | (pipe) character followed by an ISO639 language code followed by a : (colon) character, followed by data. All localized values must describe the same constraints; if one locale's description is more permissive than another's, the most restrictive interpretation must be used. An example might be:

    \"constraints.boundaries\": \"|en: West side of Euphrates river, within Baghdad city limits\n    |es: lado oeste del r\u00edo Eufrates, dentro del centro de Bagdad\n    |fr: c\u00f4t\u00e9 ouest de l'Euphrate, dans les limites de la ville de Bagdad\n    |ar: \u0627\u0644\u062c\u0627\u0646\u0628 \u0627\u0644\u063a\u0631\u0628\u064a \u0645\u0646 \u0646\u0647\u0631 \u0627\u0644\u0641\u0631\u0627\u062a \u060c \u062f\u0627\u062e\u0644 \u062d\u062f\u0648\u062f \u0645\u062f\u064a\u0646\u0629 \u0628\u063a\u062f\u0627\u062f\"\n
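The localization rule above can be sketched with a short parser. This is an illustrative sketch only; the function name and the dict return shape are assumptions, not part of the trust framework:

```python
# Illustrative parser for the localized constraint format: each locale
# section starts with '|', an ISO 639 language code, then ':', then data.
# Names here are assumptions made for this sketch.
def parse_localized(value: str) -> dict:
    locales = {}
    for section in value.split("|"):
        section = section.strip()
        if not section:
            continue
        lang, _, data = section.partition(":")
        locales[lang.strip()] = data.strip()
    return locales

boundaries = parse_localized(
    "|en: West side of Euphrates river, within Baghdad city limits"
    "|fr: côté ouest de l'Euphrate, dans les limites de la ville de Bagdad"
)
```

A verifier holding several locales would then apply the most restrictive interpretation, per the rule above.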
    "},{"location":"concepts/0103-indirect-identity-control/guardianship-sample/trust-framework/#point-of-origin-and-radius","title":"Point of Origin and Radius","text":"

The constraints.point_of_origin and constraints.radius_km fields are an additional or alternative way to specify a geographical constraint. They must be used together. Point of origin is a string that may use latitude/longitude notation (e.g., \"@40.4043328,-111.7761829,15z\"), or a landmark. Landmarks must be localized as described previously. Radius is an integer measured in kilometers.

    \"constraints.point_of_origin\": \"|en: Red Crescent Sunrise Camp\"\n\"constraints.radius_km\": 10\n
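One way a verifier might enforce this pair of fields is a great-circle distance check. The following is a hedged sketch; the helper names and the handling of the \"@lat,long,zoom\" notation are assumptions, not normative:

```python
import math

# Illustrative sketch: haversine distance check for the point-of-origin and
# radius constraint. Parsing of the "@lat,long,zoom" notation and the
# function names are assumptions, not part of the trust framework.
def parse_point(s: str) -> tuple:
    lat, lon = s.lstrip("@").split(",")[:2]
    return float(lat), float(lon)

def within_radius_km(origin: str, here: str, radius_km: float) -> bool:
    (lat1, lon1), (lat2, lon2) = parse_point(origin), parse_point(here)
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(phi1) * math.cos(phi2) * math.sin(dlmb / 2) ** 2)
    distance_km = 2 * 6371 * math.asin(math.sqrt(a))  # Earth radius ~6371 km
    return distance_km <= radius_km

# Same point is always within any positive radius.
within_radius_km("@40.4043328,-111.7761829,15z",
                 "@40.4043328,-111.7761829,15z", 10)
```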
    "},{"location":"concepts/0103-indirect-identity-control/guardianship-sample/trust-framework/#jurisdictions","title":"Jurisdictions","text":"

    This is a comma-separated list of legal jurisdictions where the guardianship applies. It is also localized:

    \"constraints.jurisdictions\": \"|en: EU, India, Bangladesh\"\n
    "},{"location":"concepts/0103-indirect-identity-control/guardianship-sample/trust-framework/#trigger-and-circumstances","title":"Trigger and Circumstances","text":"

These are human-friendly descriptions of circumstances that must apply in order to make the guardian's status active. It may be used in conjunction with a trigger (see next). It is vital that the wording of these fields be carefully chosen to minimize ambiguity; carelessness could invite abuse. Note that each of these fields could be used separately. A trigger by itself would unconditionally confer guardianship status; circumstances without a trigger would require re-evaluation with every guardianship challenge and might be used as long as an adult is unconscious or diagnosed with dementia, or while traveling with a child, for example.

    \"constraints.trigger\": \"|en: Death of parent\"\n\"constraints.circumstances\": \"|en: While a parent or adult sibling is unavailable, and no\n    new guardian has been adjudicated.\n    |ar: \u0641\u064a \u062d\u064a\u0646 \u0623\u0646 \u0623\u062d\u062f \u0627\u0644\u0648\u0627\u0644\u062f\u064a\u0646 \u0623\u0648 \u0627\u0644\u0623\u0634\u0642\u0627\u0621 \u0627\u0644\u0628\u0627\u0644\u063a\u064a\u0646 \u063a\u064a\u0631 \u0645\u062a\u0648\u0641\u0631 \u060c \u0648\u0644\u064a\u0633\n         \u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 \u0627\u0644\u0648\u0635\u064a \u0627\u0644\u062c\u062f\u064a\u062f \u062a\u0645 \u0627\u0644\u0641\u0635\u0644 \u0641\u064a\u0647.\"\n
    "},{"location":"concepts/0103-indirect-identity-control/guardianship-sample/trust-framework/#timing","title":"Timing","text":"

    These allow calendar restrictions. Both start time and end time are expressed as ISO8601 timestamps in UTC timezone, but can be limited to day- instead of hour-and-minute-precision (in which case timezone is irrelevant). Start time is inclusive, whereas end time is exclusive (as soon as the date and time equals or exceeds end time, the guardianship becomes invalid). Either value can be used by itself, in addition to being used in combination.

    \"constraints.startTime\": \"2019-07-01T18:00\"\n\"constraints.endTime\": \"2019-08-01\"\n
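The inclusive-start/exclusive-end rule can be sketched as follows. The function names are illustrative, and treating day-precision values as midnight UTC is an assumption made for the sketch:

```python
from datetime import datetime, timezone

# Illustrative sketch of the timing rule: start is inclusive, end is
# exclusive, and day-precision values like "2019-08-01" are parsed as
# midnight UTC. Function names are assumptions, not part of the schema.
def parse_ts(value: str) -> datetime:
    fmt = "%Y-%m-%dT%H:%M" if "T" in value else "%Y-%m-%d"
    return datetime.strptime(value, fmt).replace(tzinfo=timezone.utc)

def guardianship_active(now: datetime, start: str = None, end: str = None) -> bool:
    if start is not None and now < parse_ts(start):
        return False            # not yet active (start is inclusive)
    if end is not None and now >= parse_ts(end):
        return False            # expired (end is exclusive)
    return True

now = datetime(2019, 7, 15, tzinfo=timezone.utc)
guardianship_active(now, "2019-07-01T18:00", "2019-08-01")  # mid-window
```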
    "},{"location":"concepts/0103-indirect-identity-control/guardianship-sample/trust-framework/#auditing","title":"Auditing","text":"

    It is strongly recommended that an audit trail be produced any time a guardian performs any action on behalf of the dependent, except for school and necessaries. Reports of auditable events are accomplished by generating a JSON document in the following format:

{\n    \"@type\": \"SIVPGTF audit/1.0\",\n    \"event_time\": \"2019-07-25T18:03:26\",\n    \"event_place\": \"@40.4043328,-111.7761829,15z\",\n    \"challenger\": \"amy.smith@redcross.org\",\n    \"witness\": \"fred.jones@redcross.org\",\n    \"guardian\": \"Farooq Abdul Sami\",\n    \"rationale\": \"natural parent\",\n    \"dependent\": \"Isabel Sami, DOB 2009-05-21\",\n    \"event\": \"enroll in class, receive books\",\n    \"justifying_permissions\": \"school, necessaries\",\n    \"evidence\": // base64-encoded photo of Farooq and Isabel\n}\n
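A minimal sketch of producing such an audit record follows. The required-field list is inferred from the example above and is not normative; helper names are assumptions:

```python
import json

# Illustrative sketch: assemble an SIVPGTF audit record and serialize it.
# The required-field list is inferred from the sample record above; it is
# not a normative part of the trust framework.
REQUIRED = ["@type", "event_time", "event_place", "guardian", "rationale",
            "dependent", "event", "justifying_permissions"]

def make_audit_record(**fields) -> str:
    record = {"@type": "SIVPGTF audit/1.0", **fields}
    missing = [f for f in REQUIRED if f not in record]
    if missing:
        raise ValueError(f"missing audit fields: {missing}")
    return json.dumps(record)

doc = make_audit_record(
    event_time="2019-07-25T18:03:26",
    event_place="@40.4043328,-111.7761829,15z",
    guardian="Farooq Abdul Sami",
    rationale="natural parent",
    dependent="Isabel Sami, DOB 2009-05-21",
    event="enroll in class, receive books",
    justifying_permissions="school, necessaries",
)
```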
    "},{"location":"concepts/0103-indirect-identity-control/guardianship-sample/trust-framework/#appeal","title":"Appeal","text":"

    NGO staff (who receive delegated authority from the NGO that acts as guardian), and a council of 5 grandmothers maintain a balance of powers. Decisions of either group may be appealed to the other. Conformant NGOs must identify a resource that can adjudicate an escalated appeal, and this resource must be independent in all respects--legal, financial, human, and otherwise--from the NGO. This resource must have contact information in the form of a phone number, web site, or email address, and the contact info must be provided in the guardian credential in the appeal_uri field.

    "},{"location":"concepts/0103-indirect-identity-control/guardianship-sample/trust-framework/#freshness-and-offline-operation","title":"Freshness and Offline Operation","text":"

    [TODO]

    "},{"location":"concepts/0103-indirect-identity-control/guardianship-sample/trust-framework/#revocation","title":"Revocation","text":"

    [TODO]

    "},{"location":"concepts/0103-indirect-identity-control/guardianship-sample/trust-framework/#best-practices","title":"Best Practices","text":""},{"location":"concepts/0104-chained-credentials/","title":"Aries RFC 0104: Chained Credentials","text":""},{"location":"concepts/0104-chained-credentials/#note-editable-images","title":"Note: editable images","text":"

    See here for original images used in this RFC.

    "},{"location":"concepts/0104-chained-credentials/#note-terminology-update","title":"Note: terminology update","text":"

    \"Chained credentials\" were previously called \"delegatable credentials.\" The new term is broader and more accurate. Delegation remains a use case for the mechanism, but is no longer its exclusive focus.

    "},{"location":"concepts/0104-chained-credentials/#summary","title":"Summary","text":"

    Describes a set of conventions, collectively called chained credentials, that allows data in a verifiable credential (VC) to be traced back to its origin while retaining its verifiable quality. This chaining alters trust dynamics. It means that issuers late in a chain can skip complex issuer setup, and do not need the same strong, globally recognizable reputation that's important for true roots of trust. It increases the usefulness of offline verification. It enables powerful delegation of privileges, which unlocks many new verifiable credential use cases.

    Chained credentials do not require any modification to the standard data model for verifiable credentials; rather, they leverage the data model in a simple, predictable way. Chaining conventions work (with some feature variations) for any W3C-conformant verifiable credential type, not just the ones developed inside Hyperledger.

    "},{"location":"concepts/0104-chained-credentials/#note-object-capabilities","title":"Note: object capabilities","text":"

When chained credentials are used to delegate, the result is an object capabilities (OCAP) solution similar to ZCAP-LD in scope, features, and intent. However, such chained capabilities accomplish their goals a bit differently. See here for an explanation of the divergence and redundancy.

    "},{"location":"concepts/0104-chained-credentials/#note-sister-rfc","title":"Note: sister RFC","text":"

This RFC complements Aries RFC 0103: Indirect Identity Control. That doc describes how delegation (and related control mechanisms like guardianship and controllership) can be represented in credentials and governed; this one describes an underlying infrastructure to enable such a model. The ZKP implementation of this RFC comes from Hyperledger Ursa and depends on cryptography described by Camenisch et al. in 2017.

    "},{"location":"concepts/0104-chained-credentials/#motivation","title":"Motivation","text":"

    There is a tension between the decentralization that we want in a VC ecosystem, and the way that trust tends to centralize because knowledge and reputation are unevenly distributed. We want anyone to be able to attest to anything they like--but we know that verifiers care very much about the reputation of the parties that make those attestations.

We can say that verifiers will choose which issuers they trust. However, this places a heavy burden on them--verifiers can't afford to vet every potential issuer of credentials they might encounter. The result will be a tendency to accept credentials only from a short list of issuers, which leads back to centralization.

    This tendency also creates problems with delegation. If all delegation has to be validated through a few authorities, a lot of the flexibility and power of delegation is frustrated.

    We'd like a VC landscape where a tiny startup can issue an employment credential with holder attributes taken as seriously as one from a massive global conglomerate--and with no special setup by verifiers to trust them equally. And we'd like parents to be able to delegate childcare decisions to a babysitter on the spur of the moment--and have the babysitter be able to prove it when she calls an ambulance.

    "},{"location":"concepts/0104-chained-credentials/#tutorial","title":"Tutorial","text":""},{"location":"concepts/0104-chained-credentials/#data-provenance","title":"Data provenance","text":"

    Our confidence in data depends on the data's origin and chain of custody--its provenance.

    Journalists and academics cite sources. The highest quality sources explain how primary data was derived, and what inferences are reasonable to draw from it. Better sources, and better links to those sources, create better trust.

    With credentials, the direct reporter of data is the issuer--but the issuer is not always the data's source. When Acme's HR department issues an employment credential that includes Bob the employee's name, the source of Bob's name is probably government-issued ID, not Acme's subjective opinion. Acme is reporting data that originated elsewhere.

    Acme should cite its sources. Even when citations are unstructured and unsigned, they may still be helpful to humans. But we may be able to do better. If the provenance of an employee's name is verifiable in the same way as other credential data, then Acme's reputation with respect to that assertion becomes almost unimportant; the data's ability to foster trust is derived from the reputation of its true source, plus the algorithm that verifies that source.

    This matters.

    One of the challenges with traditional trust on the web is the all-or-nothing model of trust for certificate authorities. A website in an obscure corner of the globe uses an odd CA; browser manufacturers must debate whether that CA deserves to be on the list of globally trusted attesters. If yes, then any cert the CA issues will be silently believed; if no, then none will. UX pressure has often decided the debate in favor of trust by default; the result has been very long lists of trusted CAs, and a corresponding parade of junk certificates and abuse.

    Provenanced data helps verifiable credentials avoid the same conundrum. The set of original sources for a person's legal name is far smaller than the set of secondary entities that might issue credentials containing that data, so verifiers need only a short list of trusted sources for that data, no matter how many issuers they see. When they evaluate an employment credential, they will be able to see that the employee's name comes from a passport issued by the government, while the hire date is directly attested by the company. This lets the verifier nuance trust in granular and useful ways.

    "},{"location":"concepts/0104-chained-credentials/#delegation-as-provenance-of-authority","title":"Delegation as provenance of authority","text":"

    Delegation can be modeled as a data provenance issue, where the data in question is an authorization. Suppose Alice, the CEO of Thrift Bank, has the authority to do many tasks, and that one of them is to negotiate contracts. As the company grows, she decides that the company needs a role called \"Corporate Counsel\", and she hires Carl for the job. She wants to give Carl a credential that says he has the authority to negotiate contracts. The provenance of Carl's authority is Alice's own authority.

    Notice how parallel this diagram is to the previous one.

    "},{"location":"concepts/0104-chained-credentials/#chaining","title":"Chaining","text":"

    Both of the examples given above imagine a single indirection between a data source and the issuer who references it. But of course many use cases will be far more complex. Perhaps the government attests Bob's name; this becomes the basis for Bob's employer's attestation, which in turn becomes the basis for an attestation by the contractor that processes payroll for Bob's employer. Or perhaps authorization from Alice to corporate counsel gets further delegated. In either case, the result will be a data provenance chain:

    This is the basis for the chained credential mechanism that gives this RFC its name. Chained credentials contain information about the provenance of some or all of the data they embody; this allows a verifier to trace the data backward, possibly through several links, to its origin, and to evaluate trust on that basis.

    "},{"location":"concepts/0104-chained-credentials/#use-cases","title":"Use cases","text":"

    Many use cases exist for conveying provenance for the data inside verifiable credentials:

    "},{"location":"concepts/0104-chained-credentials/#acid-test","title":"Acid Test","text":"

    Although these situations sound different, their underlying characteristics are surprisingly similar--and so are those of other use cases we've identified. We therefore chose a single situation as being prototypical. If we address it well, our solution will embody all the characteristics we want. The situation is this:

    "},{"location":"concepts/0104-chained-credentials/#chain-of-provenance-for-authority-delegation","title":"Chain of Provenance for Authority (Delegation)","text":"

    The national headquarters of Ur Wheelz (a car rental company) issues a verifiable credential, C1, to its regional office in Houston, authorizing Ur Wheelz Houston to rent, maintain, sell, drive, and delegate driving privileges to customers, for certain cars owned by the national company.

    Alice rents a car from Ur Wheelz Houston. Ur Wheelz Houston issues a driving privileges credential, C2, to Alice. C2 gives Alice the privilege to drive the car on a particular week, within the state of Texas, and to further delegate that privilege. Alice uses her C2 credential to prove to the car (which is a fancy future car that acts as verifier) that she is an authorized driver; this is what unlocks the door.

    Alice gets pulled over for speeding on Wednesday and uses C2 to prove to the police that she is the authorized driver of the car.

    On Thursday night Alice goes to a fancy restaurant. She uses valet parking. She issues credential C3 to the valet, allowing him to drive the car within 100 meters of the restaurant, for the next 2 hours while she is at the restaurant. Alice chooses to constrain C3 so the valet cannot further delegate. The valet uses C3 to unlock and drive the car to the parking garage.

    "},{"location":"concepts/0104-chained-credentials/#revocation","title":"Revocation","text":"

    While Alice eats, law enforcement officers go to Ur Wheelz Houston with a search warrant for the car. They have discovered that the previous driver of the car was a criminal. They ask Ur Wheelz to revoke C2, because they don\u2019t want the car to be driven any more, in case evidence is accidentally destroyed.

    At the end of dinner, Alice goes to the valet and asks for her car to be returned. The valet goes to the car and attempts to open the door using C3. The car tests the validity of the delegation chain of C3, and discovers that C2 has been revoked, making C3 invalid. The car refuses to open the door. Alice has to take Uber to get home. Law enforcement takes possession of the car.

    "},{"location":"concepts/0104-chained-credentials/#how-chained-credentials-address-this-use-case","title":"How chained credentials address this use case","text":"

    A chained credential is a verifiable credential that contains provenanced data, linking it back to its source. In this case, the provenanced data is about authority, and each credential in the chain functions like a capability token, granting its holder privileges that derive from an upstream issuer's own authority.

    "},{"location":"concepts/0104-chained-credentials/#note-delegate-credentials","title":"Note: delegate credentials","text":"

    We call this subtype of chained credential a delegate credential. We'll try to describe the provenance chain in generic terms as much as possible, but the delegation problem domain will occasionally color our verbiage... All delegate credentials are chained; not all chained credentials are delegate credentials.

The first entity in the provenance chain for authority (Ur Wheelz National, in our acid use case) is called the root attester, and is probably an institution configured for traditional credential issuance (e.g., with a public DID to which reputation attaches; in Indy, this entity also publishes a credential definition). All downstream entities in the provenance chain can participate without special setup. They need not have public DIDs or credential definitions. This is because the strength of the assertion does not depend on their reputation; rather, it depends on the robustness of the algorithm that walks the provenance chain back to its root. Only the root attester needs public reputation.
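A minimal sketch of such a chain-walking check, under a simplified non-ZKP data model (the dict keys and entity names here are illustrative; the real mechanism relies on embedded verifiable presentations and cryptographic verification rather than plain dicts):

```python
# Illustrative sketch: walk a provenance chain from root to leaf, checking
# that each link was issued by the previous link's holder and that
# permissions only ever attenuate (no privilege escalation).
def chain_is_valid(chain: list, trusted_roots: set) -> bool:
    if not chain or chain[0]["issuer"] not in trusted_roots:
        return False
    granted = set(chain[0]["permissions"])
    for prev, cur in zip(chain, chain[1:]):
        if cur["issuer"] != prev["holder"]:
            return False                       # broken custody link
        if not set(cur["permissions"]) <= granted:
            return False                       # attempted escalation
        granted = set(cur["permissions"])
    return True

chain = [
    {"issuer": "UrWheelzNational", "holder": "UrWheelzHouston",
     "permissions": ["rent", "maintain", "sell", "drive", "delegate"]},
    {"issuer": "UrWheelzHouston", "holder": "Alice",
     "permissions": ["drive", "delegate"]},
    {"issuer": "Alice", "holder": "Valet", "permissions": ["drive"]},
]
chain_is_valid(chain, {"UrWheelzNational"})   # valid for this chain
```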

    "},{"location":"concepts/0104-chained-credentials/#note-contrast-with-acls","title":"Note: contrast with ACLs","text":"

When chained credentials are used to convey authority (the delegate credential subtype), they are quite different from ACLs. ACLs map an identity to a list of permissions. Delegate credentials entitle their holder to whatever permissions the credential enumerates. Holding may or may not be transferable. If it is not transferable, then fraud prevention must be considered. If the credential isn't bound to a holder, then it's a bearer token and is an even more canonical OCAP.

    "},{"location":"concepts/0104-chained-credentials/#special-sauce","title":"Special Sauce","text":"

A chained credential delivers these features by obeying some special conventions over and above the core requirements of an ordinary VC:

    1. It contains a special field named schema that is a base64url-encoded representation of its own schema. This makes the credential self-contained in the sense that it doesn't depend on a schema or credential definition defined by an external authority (though it could optionally embody one). This field is always disclosed in presentations.

    2. It contains a special field named provenanceProofs. The field is an array, where each member of the array is a tuple (also a JSON array). The first member of each tuple is a list of field names; the second member of each tuple is an embedded W3C verifiable presentation that proves the provenance of the values in those fields. In the case of delegate credentials, provenanceProofs is proving the provenance of a field named authorization.

      Using credentials C1, C2, and C3 from our example use case, the authorization tuple in provenanceProofs of C1 includes a presentation that proves, on the basis of a car title that's a traditional, non-provenanced VC, that Ur Wheelz National had the authority to delegate a certain set of privileges X to Ur Wheelz Houston. The authorization tuple in provenanceProofs of C2 proves that Ur Wheelz Houston had authority to delegate Y (a subset of the authority in X) to Alice, and also that Ur Wheelz Houston derived its authority from Ur Wheelz National, who had the authority to delegate X to Ur Wheelz Houston. Similarly, the authorization tuple in C3's provenanceProofs is an extension of the authorization tuple in C2's provenanceProofs\u2014now proving that Alice had the authority to delegate Z to the valet, plus all the other delegations in the upstream credentials.

      When a presentation is created from a chained credential, provenanceProofs is either disclosed (for non-ZKP proofs), or is used as evidence to prove the same thing (for ZKPs).

3. It is associated (through a name in its type field array and through a URI in its trustFrameworkURI field) with a trust framework that describes provenancing rules. For general chained credentials, this is optional; for delegate credentials, it is required. The trust framework may partially describe the semantics of some schema variants for a family of chained credentials, as well as how provenance is attenuated or categorized. For example, a trust framework jointly published by Ur Wheelz and other car rental companies might describe delegate credential schemas for car owners, car rental offices, drivers, insurers, maintenance staff, and guest users of cars. It might specify that the permissions delegatable in these credentials include drive, maintain, rent, sell, retire, delegate-further, and so forth. The trust framework would do more than enumerate these values; it would define exactly what they mean, how they interact with one another, and what permissions are expected to be in force in various circumstances.

4. The reputation of non-root holders in a provenance chain becomes irrelevant as far as credential trust is concerned--trust is based on an unbroken chain back to a root public attester, not on published, permanent characteristics of secondary issuers. Only the root attester needs to have a public DID. Other issuer keys and DIDs can be private and pairwise.

    5. If it is a delegate credential, it also meets all the requirements to be a proxy credential as described in Aries RFC 0103: Indirect Identity Control. Specifically:

      • It uses credentialSubject.holder.* fields to bind it to a particular holder, if applicable.

      • It uses credentialSubject.proxied.* fields to describe the upstream delegator to whatever extent is required.

      • It uses credentialSubject.holder.role and credentialSubject.proxied.permissions to grant permissions to the holder. See Delegating Permissions for more details.

      • It may use credentialSubject.holder.constraints.* to impose restrictions on how/when/under what circumstances the delegation is appropriate.
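The self-contained schema field from point 1 above can be illustrated as a simple base64url round trip. The sample schema content is an assumption for demonstration only:

```python
import base64
import json

# Illustrative sketch of the self-contained schema convention: embed a
# credential's own schema as a base64url-encoded string, then recover it.
# The schema content below is invented for demonstration.
schema = ["@context", "credentialSubject.holder.role",
          "credentialSubject.proxied.permissions"]

encoded = base64.urlsafe_b64encode(json.dumps(schema).encode()).decode()
decoded = json.loads(base64.urlsafe_b64decode(encoded))
```

Because the schema travels inside the credential, a verifier can interpret the fields without fetching a schema or credential definition from an external authority.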

    "},{"location":"concepts/0104-chained-credentials/#whats-not-different","title":"What's not different","text":"

    Proof of non-revocation uses the same mechanism as the underlying credentialing system. For ZKPs, this means that merkle tree or accumulator state is checked against the ledger or against any other source of truth that the root attester in the chain specifies; no conferring with upstream issuers is required. See ZKP Revocation in the reference section. For non-ZKP credentials, this probably means consulting revocation lists or similar.

    Offline mode works exactly the same way as it works for ordinary credentials, and with exactly the same latency and caching properties.

    Chained credentials may contain ordinary credential attributes that describe the holder or other subjects, including ZKP-style blinded link secrets. This allows chained credentials to be combined with other VCs in composite presentations.

    "},{"location":"concepts/0104-chained-credentials/#sample-credentials","title":"Sample credentials","text":"

Here is JSON that might embody credentials C1, C2, and C3 from our use case. Note that these examples suppress a few details that seem uninteresting, and they also introduce some new concepts that are described more fully in the Reference section.

    "},{"location":"concepts/0104-chained-credentials/#c1-delegates-management-of-car-to-ur-wheelz-houston","title":"C1 (delegates management of car to Ur Wheelz Houston)","text":"
{\n    \"@context\": [\"https://w3.org/2018/credentials/v1\", \"https://github.com/hyperledger/aries-rfcs/tree/main/concepts/0104-delegatable-credentials\"],\n    \"type\": [\"VerifiableCredential\", \"Proxy.D/CarRentalTF/1.0/subsidiary\"],\n    \"schema\": \"WwogICJAY29udGV4dCIsIC8vSlN... (clipped for brevity) ...ob2x\",\n    \"provenanceProofs\": [\n        [[\"authorization\"], {\n            // proof that Ur Wheelz National owns the car\n            }]\n    ],\n    // Optional. Might be used to identify the car in question.\n    \"credentialSubject.car.VIN\": \"1HGES26721L024785\",\n    \"credentialSubject.proxied.permissions\": {\n        \"grant\": [\"rent\", \"maintain\", \"sell\", \"drive\", \"delegate\"], \n        \"when\": { \"role\": \"regional_office\" } \n    },\n    // Optional. Binds the credential to a business name.\n    \"credentialSubject.holder.name\": \"Ur Wheelz Houston\",\n    // Optional. Binds the credential to the public DID of Houston office.\n    \"credentialSubject.holder.id\": \"did:example:12345\",\n    \"credentialSubject.holder.role\": \"regional_office\"\n}\n
    "},{"location":"concepts/0104-chained-credentials/#c2-delegates-permission-to-alice-to-drive-subdelegate","title":"C2 (delegates permission to Alice to drive, subdelegate)","text":"
{\n    // @context, type, schema are similar to previous\n    \"provenanceProofs\": [\n        [[\"authorization\"], {\n            // proof that Ur Wheelz Houston could delegate\n            }]\n    ],\n    // Optional. Might be used to identify the car in question.\n    \"credentialSubject.car.VIN\": \"1HGES26721L024785\",\n    \"credentialSubject.proxied.permissions\": {\n        \"grant\": [\"drive\", \"delegate\"], \n        \"when\": { \"role\": \"renter\" } \n    },\n    // Optional. Binds the credential to the holder's name.\n    \"credentialSubject.holder.name\": \"Alice Jones\",\n    // Optional. Binds the credential to the holder's DID.\n    \"credentialSubject.holder.id\": \"did:example:12345\",\n    \"credentialSubject.holder.role\": \"renter\",\n    // Limit dates when delegation is active\n    \"credentialSubject.holder.constraints.startTime\": \"2020-05-20T14:00Z\",\n    \"credentialSubject.holder.constraints.endTime\": \"2020-05-27T14:00Z\",\n    // Provide a boundary within which delegation is active\n    \"credentialSubject.holder.constraints.boundary\": \"USA:TX\"\n}\n
    "},{"location":"concepts/0104-chained-credentials/#c3-delegates-permission-to-valet-to-drive","title":"C3 (delegates permission to valet to drive)","text":"
    {\n    // @context, type, schema are similar to previous\n    \"delegationProof\": {\n        [[\"authorization\"], {\n            // proof that Alice could delegate\n            }]\n    },\n    // Optional. Might be used to identify the car in question.\n    \"credentialSubject.car.VIN\": \"1HGES26721L024785\",\n    \"credentialSubject.proxied.permissions\": {\n        \"grant\": [\"drive\"], \n        \"when\": { \"role\": \"valet\" } \n    }\n    // Optional. Binds the credential to a business name.\n    \"credentialSubject.holder.name\": \"Alice Jones\",\n    // Optional. Binds the credential to the public DID of Houston office.\n    \"credentialSubject.holder.id\": \"did:example:12345\",\n    \"credentialSubject.holder.role\": \"valet\",\n    \"credentialSubject.holder.constraints.startTime\": \"2020-05-25T04:00Z\",\n    \"credentialSubject.holder.constraints.endTime\": \"2020-05-25T06:00Z\",\n    // Give a place where delegation is active.\n    \"credentialSubject.holder.constraints.pointOfOrigin\": \"@29.7690295,-95.5293445,12z\",\n    \"credentialSubject.holder.constraints.radiusKm\": 0.1,\n}\n
    "},{"location":"concepts/0104-chained-credentials/#reference","title":"Reference","text":""},{"location":"concepts/0104-chained-credentials/#delegating-permissions","title":"Delegating Permissions","text":"

    In theory, we could just enumerate permissions in delegate credentials in a special VC field named permissions. To delegate the drive and delegate privileges to Alice, this would mean we'd need a credential field like this:

    {\n    // ... rest of credential fields ...\n\n    \"permissions\": [\"drive\", \"delegate\"]\n}\n

    Such a technique is adequate for many delegation use cases, and is more or less how ZCAP-LD works. However, it has two important limitations:

    To address these additional requirements, delegate credentials split the granting of permissions into two fields instead of one:

    1. The permission model that provides context for the credential is expressed in a special field named credentialSubject.proxied.permissions. This field contains an SGL rule that embodies the semantics of the delegation.
    2. The holder (delegate) is given a named role in that overall permission scheme in a special field named credentialSubject.holder.role. This role has to reference something from ...permissions.

    In our Ur Wheelz / Alice use case, the extra expressive power of these two fields is not especially interesting. The credential that Alice carries might look like this:

    {\n    // ... rest of credential fields ...\n\n    \"credentialSubject.proxied.permissions\": { \n        \"grant\": [\"drive\"], \n        \"when\": { \"role\": \"renter\" } \n    }\n    \"credentialSubject.holder.role\": [\"renter\"]\n}\n

    Since credentialSubject.holder.role says that Alice has the renter role, the grant of drive applies to her. We expect permissions to always apply directly to the holder in simple cases like this.

    But in the case of a corporation that wants to delegate signing privileges to 3 board members, the benefit of the two-field approach is clearer. Each board member gets a delegate credential that looks like this:

    {\n    // ... rest of credential fields ...\n\n    \"credentialSubject.proxied.permissions\": { \n        \"grant\": [\"sign\"], \n        \"when\": { \"role\": \"board\", \"n\": 3 } \n    }\n    \"credentialSubject.holder.role\": [\"board\"]\n}\n

    Now a verifier can say to one credential-holding board member, \"I see that you have part of the signing privilege. Can you find me two other board members who agree with this action?\"
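The two-field pattern above can be sketched as a small evaluation routine. This is an illustrative sketch only: `satisfies` is a hypothetical helper, not part of any Aries library, and real SGL evaluation is richer than this.

```python
# Sketch of evaluating an SGL-style "grant ... when" rule against a set of
# credential holders. The rule shapes mirror the examples above.

def satisfies(rule, holders):
    """Return the granted privileges if the holders satisfy the rule's condition."""
    when = rule["when"]
    required_role = when["role"]
    threshold = when.get("n", 1)  # "n" defaults to 1: a single holder suffices
    matching = [h for h in holders if required_role in h.get("roles", [])]
    if len(matching) >= threshold:
        return set(rule["grant"])
    return set()

# Alice alone satisfies the simple "renter" rule ...
renter_rule = {"grant": ["drive"], "when": {"role": "renter"}}
assert satisfies(renter_rule, [{"roles": ["renter"]}]) == {"drive"}

# ... but one board member alone cannot satisfy the 3-of-board rule.
board_rule = {"grant": ["sign"], "when": {"role": "board", "n": 3}}
assert satisfies(board_rule, [{"roles": ["board"]}]) == set()
assert satisfies(board_rule, [{"roles": ["board"]}] * 3) == {"sign"}
```

The separation lets the same rule appear in every board member's credential while the threshold is enforced only at verification time.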

    "},{"location":"concepts/0104-chained-credentials/#privacy-considerations","title":"Privacy Considerations","text":"

    Non-ZKP-based chained credentials reveal the public identity of the immediate downstream holder to each issuer (delegator) -- and they reveal the public identity of all upstream members of the chain to the holder.

    ZKP-based chained credentials offer more granular choices. See ZKP Variants and their privacy implications below.

    "},{"location":"concepts/0104-chained-credentials/#embedded-schema","title":"Embedded schema","text":"

    Often, the schema of a chained credential might be decided (or created) by the issuer. In some cases, the schema might be decided by the delegatee or specified fully or partially in a trust framework.

    It is the responsibility of each issuer to ensure that the special schema attribute is present and that the credential matches it.

    "},{"location":"concepts/0104-chained-credentials/#zkp-revocation","title":"ZKP Revocation","text":"

    When a chained credential is issued, a unique credential id is assigned to it by its issuer and then the revocation registry is updated to track which credential id was issued by which issuer. During proof presentation, the prover proves in zero knowledge that its credential is not revoked. When a credential is to be revoked, the issuer of the credential sends a signed message to the revocation registry asking it to mark the credential id as revoked. Note that this allows only the issuer of the credential to revoke the credential and does not allow, for example, the delegator to revoke any credential that was issued by its delegatee. However, this can be achieved by the verifier mandating that each credential in the chain of credentials is non-revoked. When a PCF decides to revoke the PTR credential, every subsequent credential should be considered revoked.
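The issuer-only revocation rule and the chain-wide check a verifier can mandate are sketched below. The registry layout is illustrative, not a normative encoding; a real registry is a ledger construct, not an in-memory dict.

```python
# Minimal sketch of the revocation rules described above: only the issuer of
# a credential may revoke it, and a verifier treats the whole chain as
# invalid if any link in it is revoked.

registry = {}  # credential_id -> {"issuer": ..., "revoked": bool}

def register(cred_id, issuer):
    registry[cred_id] = {"issuer": issuer, "revoked": False}

def revoke(cred_id, requester):
    entry = registry[cred_id]
    if requester != entry["issuer"]:
        raise PermissionError("only the issuing party may revoke this credential")
    entry["revoked"] = True

def chain_is_valid(chain):
    """A chain is valid only if every credential id in it is unrevoked."""
    return all(not registry[c]["revoked"] for c in chain)

register("ptr-1", "PCF")
register("c1", "UrWheelzNational")
register("c2", "UrWheelzHouston")
assert chain_is_valid(["ptr-1", "c1", "c2"])
revoke("ptr-1", "PCF")                             # PCF revokes the root PTR
assert not chain_is_valid(["ptr-1", "c1", "c2"])   # so the whole chain fails
```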

    In practice, the revocation registry associates more attributes with the credential id than just the public key. The registry also tracks the timestamps of issuance and revocation of the credential id, and the prover is able to prove statements about those data points in zero knowledge as well. We imagine revocation being implemented as a merkle tree with each leaf corresponding to a credential id: for a binary tree of height 8 there are 2^8 = 256 leaves, and leaf number 1 corresponds to credential id 1, leaf number 2 to credential id 2, and so on. The data at each leaf consists of the public key of the issuer, the issuance timestamp and the revocation timestamp. We imagine using the Bulletproofs merkle tree gadget to perform such proofs, as we plan to do for the upcoming version of anonymous credentials.
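The merkle commitment underneath that design can be sketched in plain code. A real registry would prove leaf statements in zero knowledge with the Bulletproofs gadget; this sketch only shows the ordinary hash tree and a membership proof for one leaf.

```python
import hashlib

def h(*parts):
    return hashlib.sha256(b"".join(parts)).digest()

# Leaves indexed by credential id: (issuer public key, issued-at, revoked-at).
leaves = [h(pk, iss, rev) for pk, iss, rev in [
    (b"pk-acme", b"2020-01-01", b""),            # credential id 0, not revoked
    (b"pk-acme", b"2020-02-01", b"2020-03-01"),  # credential id 1, revoked
    (b"pad", b"", b""), (b"pad", b"", b"")]]     # padding to 2^2 leaves

def root(nodes):
    while len(nodes) > 1:
        nodes = [h(nodes[i], nodes[i + 1]) for i in range(0, len(nodes), 2)]
    return nodes[0]

# Membership proof for leaf 0: the sibling hashes from leaf to root.
proof = [leaves[1], h(leaves[2], leaves[3])]
acc = leaves[0]
for sibling in proof:
    acc = h(acc, sibling)
assert acc == root(leaves)   # leaf 0 is committed under this registry root
```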

    "},{"location":"concepts/0104-chained-credentials/#zkp-variants-and-their-privacy-implications","title":"ZKP Variants and their privacy implications","text":"

    There are two general categories of chained anonymous credentials, distinguished by the tradeoff they make between privacy and efficiency. Choosing between them should depend on whether privacy between intermediate issuers is required.

    The more efficient category provides privacy only from verifiers, not among the issuers. Suppose the holder, say Alice, requests a chained credential from the root attester, say Acme Corp., and further delegates it to a downstream issuer Bob, who in turn delegates to another downstream issuer Carol. Here Carol knows the identity (a public key) of Bob, and both Carol and Bob know the identity of Alice, but when Carol or Bob uses its credential to create a proof and sends it to the verifier, the verifier only learns the identity of the root attester.

    Less efficient but more private schemes (isolating attestors more completely) also exist.

    The first academic paper in the following list describes a scheme which does not allow for privacy between attestors, but that is more efficient; the second and third papers make the opposite tradeoff.

    1. Practical UC-Secure Delegatable Credentials with Attributes and Their Application to Blockchain.
    2. Delegatable Attribute-based Anonymous Credentials from Dynamically Malleable Signatures
    3. Delegatable Anonymous Credentials from Mercurial Signatures

    In the first scheme, each issuer passes on its received credentials to the issuer it is delegating to. In the Acme Corp., Alice, Bob and Carol example above, when Alice delegates to Bob, she gives Bob a new credential but also a copy of the credential she received from Acme Corp. And when Bob delegates to Carol, he gives Carol a new credential but also copies of the credential he got from Alice and the one Alice got from Acme Corp. A verifier receiving a proof from, say, Carol does not learn about Alice, Bob or Carol, but does learn that there were 2 issuers between Acme Corp and the proof presenter. It also learns the number of attributes in each credential in the chain of credentials.
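The pass-everything-downstream pattern of the first scheme can be modeled in a few lines. What the verifier "learns" is reduced here to just the chain length and per-credential attribute counts; all names and values are illustrative.

```python
# Sketch of the first scheme's delegation pattern: each delegator hands the
# delegatee a new credential plus copies of every upstream credential.

def delegate(upstream_chain, new_credential):
    return upstream_chain + [new_credential]

acme_to_alice = delegate([], {"attrs": {"role": "employee", "dept": "fleet"}})
alice_to_bob  = delegate(acme_to_alice, {"attrs": {"role": "driver"}})
bob_to_carol  = delegate(alice_to_bob, {"attrs": {"role": "valet"}})

def verifier_view(chain):
    # Identities stay hidden; only structure leaks.
    return {"intermediate_issuers": len(chain) - 1,
            "attribute_counts": [len(c["attrs"]) for c in chain]}

assert verifier_view(bob_to_carol) == {
    "intermediate_issuers": 2, "attribute_counts": [2, 1, 1]}
```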

    In the second and third schemes, during delegation the delegator gives the delegatee only one credential, derived from its own, and the delegatee randomizes its identity each time. The second scheme's efficiency is comparable to the first scheme's, but it has a trusted authority which can deanonymize any issuer given a proof created from that issuer's credential. This might be acceptable in cases where the PCF can safely be made the trusted authority and is not assumed to be colluding with the verifiers to deanonymize the users.

    The third scheme has the additional limitation that non-root issuers cannot add more attributes to the credential than the root issuer did.

    "},{"location":"concepts/0104-chained-credentials/#drawbacks","title":"Drawbacks","text":"

    If the trust framework is not properly defined, malicious parties might be able to obtain credentials from delegators, leading to privilege escalation.

    "},{"location":"concepts/0104-chained-credentials/#rationale-and-alternatives","title":"Rationale and alternatives","text":"

    An expensive alternative to delegatable credentials is for the holder to get a credential directly from the root issuer. The expense is not just computational but operational too.

    "},{"location":"concepts/0104-chained-credentials/#prior-art","title":"Prior art","text":"

    Delegatable anonymous credentials have been explored for over a decade; the first (somewhat) efficient construction came in 2009 from Belenkiy et al. in \"Randomizable proofs and delegatable anonymous credentials\". Although this was a significant efficiency improvement over previous work, it was still impractical. Chase et al. gave a conceptually novel construction of delegatable anonymous credentials in 2013 in \"Complex unary transformations and delegatable anonymous credentials\", but the resulting construction was essentially as inefficient as that of Belenkiy et al.

    "},{"location":"concepts/0104-chained-credentials/#implementations","title":"Implementations","text":"

    The following lists the implementations (if any) of this RFC. Please do a pull request to add your implementation. If the implementation is open source, include a link to the repo or to the implementation within the repo. Please be consistent in the \"Name\" field so that a mechanical processing of the RFCs can generate a list of all RFCs supported by an Aries implementation.

    Name / Link Implementation Notes"},{"location":"concepts/0104-chained-credentials/contrast-zcap-ld/","title":"Contrast zcap ld","text":""},{"location":"concepts/0104-chained-credentials/contrast-zcap-ld/#why-not-zcap-ld","title":"Why not ZCAP-LD?","text":"

    The object capability model is great, and ZCAP-LD is an interesting solution that exposes that goodness to the VC ecosystem. However, we had the following concerns when we first encountered its spec (originally entitled \"OCAP-LD\"):

    For these reasons, we spent some time working out a somewhat similar mechanism. We hope we can reconcile the two at some point. For now, though, this doc just describes our alternative path.

    "},{"location":"concepts/0167-data-consent-lifecycle/","title":"Aries RFC 0167: Data Consent Lifecycle","text":""},{"location":"concepts/0167-data-consent-lifecycle/#table-of-contents","title":"Table of Contents","text":""},{"location":"concepts/0167-data-consent-lifecycle/#summary","title":"Summary","text":"

    This RFC illustrates a reference implementation for generating a consent proof for use with DLT (Distributed Ledger Technology). It presents a person-controlled data control architecture in which supply chain permissions are linked to a single consent proof.

    The objective of this RFC is to move this reference implementation, once comments are processed, to a working implementation RFC, demonstrating a proof of consent for DLT.

    This RFC breaks down the key components needed to generate an explicit consent directive using a personal data processing notice (PDP-N) specification, which is provided with this RFC as a template for smart privacy. Appendix - PDP - Notice Spec (DLC Extension for CR v2)

    This reference RFC utilises a unified legal data control vocabulary for notification and consent records and receipts (see Appendix A), which is actively maintained by the W3C Data Privacy Vocabulary Control Community Group (DPV).

    This RFC modularizes data capture to make the mappings interchangeable with overlays (OCA-Ref), to facilitate scaling of data control sets across contexts, domains and jurisdictions.

    "},{"location":"concepts/0167-data-consent-lifecycle/#motivation","title":"Motivation","text":"

    A key challenge with privacy, personal data sharing and self-initiated consent is establishing trust; there is little trust in the personal-data-based economy. GDPR Article 25, Data Protection by Design and by Default, lists recommendations on how private data is processed. Here we list the technology changes required to implement that GDPR article. Note that the RFC focuses on formalizing the processing agreement associated with the consent, rather than on an informal consent dialogue.

    Hyperledger Aries provides the perfect framework for managing personal data, especially personally identifiable information (PII), where necessary data is restricted to protect the identity of the individual or data subject. Currently, the privacy policy that is agreed to when signing up for a new service dictates how personal data is processed and for which purpose. There is no clear technology to hold a company accountable to its privacy policy. By using blockchain and the data consent receipt, accountability for a privacy policy can be reached. The data consent is not limited to a single data controller (or institution) and data subject (or individual), but extends to the series of institutions that process the data from the original data subject. The beauty of the proposal in this RFC is that accountability is extended to ALL parties using the data subject's personal data. When the data subject withdraws consent, the data consent receipt agreement is withdrawn, too.

    GDPR lacks specifics regarding how technology should or can be used to enforce obligations. This RFC provides a viable alternative, with mechanisms that bring accountability while at the same time protecting personal data.

    "},{"location":"concepts/0167-data-consent-lifecycle/#overview","title":"Overview","text":"

    Three key components need to be in place:

    1. Schema bases/overlays

    2. Consent Lifecycle

    3. Wallet

    Schema bases/overlays describe a standard approach to data capture that separates raw schema building blocks from additional semantic layers such as data entry business logic and constraints, knowledge about data sensitivity, and so forth (refer to [RFC 0013: Overlays] for details). The data consent lifecycle covers the data consent receipt certificate, proof request and revocation. The wallet is where all data is stored, which requires a high level of security and control by the individual or institution. This RFC will cover the consent lifecycle.

    The Concepts section below explains the RFC in GDPR terms. There is an attempt to align with the vocabulary in the W3C Data Privacy Vocabulary specification.

    The consent lifecycle will be based on self-sovereign identity (SSI) to ensure that the individual (data subject) has full control of their personal information. To help illustrate how SSI is applied, several use cases, along with a reference implementation, show the relation between the data subject, data controller and data processor.

    "},{"location":"concepts/0167-data-consent-lifecycle/#concepts","title":"Concepts","text":"

    These are some concepts that are important to understand when reviewing this RFC.

    Secondary Data Controller: The terms \"data subject\" and \"data controller\" (see GDPR Article 4, items 1 and 7) should be well understood. The data controller is responsible for the data that is shared beyond their control. A data controller which does not itself collect data but receives it from another controller is termed a 'secondary' data controller. Even though the secondary data controller is independent in its processing of personal data, GDPR requires the primary or original data controller to be responsible for sharing data under the given consent. The 3rd party becomes a secondary controller under the responsibility of the original data controller. It is important to note that if a 3rd party does not share the collected data back to the original data controller, then the 3rd party is considered an independent data controller (add reference to CJEU).

    Opt-in / Opt-out: These terms describe a request to use personal data beyond the limits of the legitimate reasons for conducting a service. If, for example, the data is shared with a 3rd party, a consent or opt-in is required. At any point the data subject may withdraw the consent through an opt-out.

    Expiration: The consent may have time limitations requiring it to be renewed; it does not renew automatically. The data subject may have a yearly subscription, or, for purposes of a trial, there needs to be a mechanism to ensure the consent is limited to the duration of the service.

    Storage limitation: PII data should not be stored indefinitely and needs to have a clear storage limitation. Storage limitation, as defined by GDPR, limits how long PII data is kept to fulfill the legitimate reasons of a service.

    Processing TTL: Indy currently supports proof only at a specific point in time. For companies that collect data over time, checking a proof every minute is not a viable solution. The processing TTL allows data ingestion to continue for an extended period without performing a new proof request. Examples below explain the usage of the term.
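The expiration and processing-TTL concepts above reduce to two date checks. The sketch below is illustrative: the function names are hypothetical, and the field semantics follow the consent schema attributes (expiration, validityTTL) defined later in this RFC.

```python
from datetime import date, timedelta

# Two time checks: a hard consent expiration, and a processing TTL that
# bounds how long a past proof may be relied on before a fresh proof
# request is required.

def consent_is_active(expiration, today):
    return today <= expiration

def proof_is_fresh(last_proof, validity_ttl_days, today):
    return today - last_proof <= timedelta(days=validity_ttl_days)

today = date(2020, 6, 1)
assert consent_is_active(date(2020, 12, 31), today)
assert proof_is_fresh(date(2020, 5, 15), 30, today)     # proved 17 days ago
assert not proof_is_fresh(date(2020, 4, 1), 30, today)  # stale: re-verify
```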

    "},{"location":"concepts/0167-data-consent-lifecycle/#use-cases","title":"Use Cases","text":"

    These are the use cases to help understand the implementation guide. A reference implementation will help in the development.

    1. Alice (data subject) gives data consent by accepting a privacy agreement.

    2. Acme (3rd party data controller) requests proof that data consent was given

    3. Alice terminates privacy agreement, thus withdrawing her data consent.

    Note: additional use cases may be developed based on contributions to this RFC.

    "},{"location":"concepts/0167-data-consent-lifecycle/#implementation-guidelines","title":"Implementation Guidelines","text":""},{"location":"concepts/0167-data-consent-lifecycle/#collect-personal-data","title":"Collect Personal Data","text":"

    These are the steps covered with collect personal data:

    The [Blinding Identity Taxonomy] provides a comprehensive list of data points that are considered sensitive and shall be handled with a higher level of security.

    This section will expand on the explanation of personally identifiable and quasi-identifiable terms.

    "},{"location":"concepts/0167-data-consent-lifecycle/#personal-data-processing-schema","title":"Personal Data Processing Schema","text":"

    The personal data processing (PDP) schema captures attributes that define the conditions for collecting data and the conditions under which data may be shared or used.

    These are the PDP schema attributes:

    Category Attribute Brief description Comment Data subset DID of associated schema or overlay Data object identifier All data objects Industry Scope [1] A predefined description of the industry scope of the issuer. All data objects Storage (raw) Expiration Date The definitive date on which data revocation throughout the chain of engaged private data lockers of all Data Controllers and sub-Data Controllers will automatically occur. In other words, when the PDP expires. Access-Window Limitation (Restricted-Time) How long data is kept in the system before being removed. Unlike the expiration date attribute, limitation indicates how long personal data may be used after the PDP expires. A request to be forgotten supersedes the limitation. Access-Window PII pseudonymization Data stored with pseudonymization. Conditions of access are given under the purpose attribute of the \"Access\" category. Encryption Method of pseudonymization Specify the algorithm used for performing anonymisation that is acceptable. Encryption Geographic restriction The data storage has geo location restrictions (country). Demarcation No share The data shall not be shared outside of the Data Controller responsibility. When set, no 3rd party or Secondary Data Controller is allowed. Demarcation Access (1-n) Purpose The purpose for processing data shall be specified (refer to GDPR Article 4, clause 2, for details on processing). Applies to both a Data Controller and Secondary Data Controller. Access-Window policyUrl Reference to a privacy policy URL that describes the policy in human readable form. Access-Window Requires 3PP PDP [2] A PDP is required between Data Controller and Secondary Data Controller in the form of a code of conduct agreement. Access-Window Single Use The data is shared only for the purpose of completing the interaction at hand. \"Expiration-Date\" is set to the date of interaction completion. Access-Window PII anonymisation Data stored with no PII association. 
Encryption [3] Method of anonymisation Specify the algorithm used for performing anonymisation that is acceptable. Encryption Multi-attribute anonymisation Quasi-identifiable data may be combined to create a fingerprint of the data subject. When set, a method of multi-attribute anonymisation is applied to the data. Encryption Method of multi-attribute anonymisation Specify the algorithm used for performing anonymisation that is acceptable (K-anonymity). Encryption Ongoing Use The data is shared for repeated use by the recipient, with no end date or end conditions. However, the data subject may alter the terms of use in the future, and if the alteration in terms is unacceptable to the data controller, the data controller acknowledges that it will thereby incur a duty to delete. In other words, the controller uses the data at the ongoing sufferance of its owner. Access-Window Collection Frequency (Refresh) How frequently the data can be accessed. The collection may be limited to once a day or 1 hour. The purpose of this attribute is to protect the data subject from having a profile of behavior created. Access-Window Validity TTL If collection is continuous, the validity TTL specifies when to perform a new verification. Verification checks that the customer has not withdrawn consent. Note this is a method for revocation. Access-Window No correlation No correlation is allowed for the subset. This means no external data, for example a public data record of the data subject, shall be combined. Correlation Inform correlation The correlation, and what data was combined, is shared with the data subject. Correlation Open correlation Correlation is open and does not need to be disclosed to the data subject. Correlation"},{"location":"concepts/0167-data-consent-lifecycle/#notes","title":"Notes","text":""},{"location":"concepts/0167-data-consent-lifecycle/#1","title":"1","text":"

    As the PDP schema may be the only compulsory linked schema specified in every schema metadata block, we have an opportunity to store the \"Framework Description\" - a description of the business framework of the issuer.

    Predefined values could be imported from the GICS \"Description\" entries, or, where missing, NECS \"Description\" entries, courtesy of filtration through the Global Industry Classification Standard (GICS) or New Economy Classification Standard (NECS) ontologies.

    The predefined values could be determined by the next highest level code to the stored GICS \"Sub-industry\" code (or NECS \"SubSector\" code) held in the associated metadata attribute of the primary schema base to add flexibility of choice for the Issuer.

    "},{"location":"concepts/0167-data-consent-lifecycle/#2","title":"2","text":"

    If a PDP is required between the Data Controller (Issuer) and sub-Data Controller, we should have a field(s) to store the Public DID (or Private Data Locker ID) of the sub-Data Controller(s). This will be vital to ensure auto-revocation from all associated private data lockers on the date of expiry.

    "},{"location":"concepts/0167-data-consent-lifecycle/#3","title":"3","text":"

    As the \"PII Attribute\" schema object is already in place for Issuers to flag sensitive data according to the Blinding Identity Taxonomy (BIT), we already have a mechanism in place for PII. Once flagged, we can obviously encrypt sensitive data. Some considerations post PII flagging: (i.) In the Issuer's Private Data Locker: the default position should be to encrypt all sensitive elements. However, the issuer should be able to specify if any of the flagged sensitive elements should remain unencrypted in their private locker. (ii.) In a Public Data Store: all sensitive elements should always be encrypted.
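The flag-then-encrypt rule with an issuer-side allowlist can be sketched as below. This is a sketch only: `protect` is a hypothetical helper, and base64 encoding stands in for a real cipher purely so the example is runnable; it is NOT encryption.

```python
import base64

# BIT-flagged attributes are protected by default; the issuer may allowlist
# specific flagged elements to stay plaintext in its private data locker.

def protect(record, bit_flagged, keep_plain=()):
    out = {}
    for key, value in record.items():
        if key in bit_flagged and key not in keep_plain:
            out[key] = base64.b64encode(value.encode()).decode()  # cipher stand-in
        else:
            out[key] = value
    return out

record = {"name": "Alice Jones", "brthd": "1990-05-20", "city": "Houston"}
locker = protect(record, bit_flagged={"name", "brthd"}, keep_plain={"name"})
assert locker["name"] == "Alice Jones"   # issuer chose to keep it plain
assert locker["brthd"] != "1990-05-20"   # flagged element is protected
assert locker["city"] == "Houston"       # unflagged element untouched
```

For a public data store the `keep_plain` allowlist would simply be empty, matching consideration (ii.) above.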

    "},{"location":"concepts/0167-data-consent-lifecycle/#example-schemas","title":"Example: Schemas","text":"

    When defining a schema there will be a consent schema associated with it.

    SCHEMA = {\n    did: \"did:sov:3214abcd\",\n    name: 'Demographics',\n    description: \"Created by Faber\",\n    version: '1.0',\n    # MANDATORY KEYS\n    attr_names: {\n      brthd: Date,\n      ageic: Integer\n    },\n    consent: \"did:schema:27312381238123\",  # reference to consent schema\n    # Attributes flagged according to the Blinding Identity Taxonomy\n    # by the issuer of the schema\n    # OPTIONAL KEYS\n    frmsrc: \"DEM\"\n}\n

    The original schema will have a consent schema reference.

    CONSENT_SCHEMA = {\n    did: \"did:schema:27312381238123\",\n    name: 'Consent schema for consumer behaviour data',\n    description: \"Created by Faber\",\n    version: '1.0',\n    # MANDATORY KEYS\n    attr_names: {\n      expiration: Date,\n      limitation: Date,\n      dictatedBy: String,\n      validityTTL: Integer\n    }\n}\n

    The consent schema will have specific attributes for managing data.

    Attribute Purpose Type expiration How long consent valid for Date limitation How long is data kept Date dictatedBy Who sets expiration and limitation String validityTTL Duration proof is valid for purposes of data processing Integer
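The distinction between the two dates in the table can be made concrete. This is an illustrative sketch: `expiration` bounds the consent itself, `limitation` bounds data retention, and a request to be forgotten overrides both; the function names are hypothetical.

```python
from datetime import date

# A consent record using the schema attributes tabulated above.
consent = {"expiration": date(2023, 1, 1), "limitation": date(2022, 1, 1),
           "dictatedBy": "did:sov:issuer", "validityTTL": 30}

def may_process(today):
    """Is the consent itself still in force?"""
    return today <= consent["expiration"]

def must_delete(today, forgotten=False):
    """Has the data-retention window elapsed (or the subject asked to be forgotten)?"""
    return forgotten or today > consent["limitation"]

assert may_process(date(2022, 6, 1))     # consent still valid ...
assert must_delete(date(2022, 6, 1))     # ... yet the data is past retention
assert must_delete(date(2021, 6, 1), forgotten=True)
```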

    The issuer may optionally define an overlay that sets the consent schema values without input from the data subject.

    CONSENT_RECEIPT_OVERLAY = {\n  did: \"did:sov:5678abcd\",\n  type: \"spec/overlay/1.0/consent_entry\",\n  name: \"Consent receipt entry overlay for clinical trial\",\n  default_values: [\n    :expiration => 3 years,\n    :limitation => 2 years,\n    :dictatedBy => <reference to issuer>,  # ??? Should the issuer's DID be used?\n    :validityTTL => 1 month\n    ]\n}\n
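How such an overlay's defaults would be applied when a consent entry is created can be sketched as a simple merge. The field names mirror the overlay pseudocode above in spirit; durations are expressed in days here since the pseudocode's "3 years" is not a concrete type, and the helper name is hypothetical.

```python
# Issuer-set defaults from the consent-receipt overlay.
overlay_defaults = {"expiration_days": 3 * 365, "limitation_days": 2 * 365,
                    "dictatedBy": "did:sov:issuer", "validityTTL_days": 30}

def new_consent(overrides=None):
    entry = dict(overlay_defaults)   # start from the overlay's defaults
    entry.update(overrides or {})    # subject-supplied values win, where allowed
    return entry

assert new_consent()["validityTTL_days"] == 30
assert new_consent({"expiration_days": 365})["expiration_days"] == 365
```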

    If some attributes are identified as sensitive based on the Blinding Identity Taxonomy, then a sensitivity overlay is created.

    SENSITIVE_OVERLAY = {\n    did: \"did:sov:12idksjabcd\",\n  type: \"spec/overlay/1.0/bit\",\n  name: \"Sensitive data for private entity\",\n  attributes: [\n      :ageic\n  ]\n}\n

    To finalise a consent, a proof schema has to be created which lists the schemas, overlays and values that were agreed upon. The proof is kept off-ledger in the wallet.

    PROOF_SCHEMA = {\n    did: \"did:schema:12341dasd\",\n    name: 'Credential Proof schema',\n    description: \"Created by Rosche\",\n    version: '1.0',\n    # MANDATORY KEYS\n    attr_names: {\n      createdAt: DateTime,           # When the proof was created.\n      proof_key: \"<crypto asset>\",   # Cryptographic proof material.\n      # Include all the schema did that were agreed upon\n      proof_of: [ \"did:sov:3214abcd\", \"did:sov:1234abcd\"]\n    }\n}\n
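A wallet finalising a consent might check that the off-ledger proof record covers every schema DID that was agreed upon. This is an illustrative sketch; the field names follow the PROOF_SCHEMA pseudocode above and the helper name is hypothetical.

```python
# Off-ledger proof record as held in the wallet.
proof_record = {
    "createdAt": "2020-05-20T14:00Z",
    "proof_of": ["did:sov:3214abcd", "did:sov:1234abcd"],
}

def covers(proof, required_dids):
    """True if the proof record references every required schema DID."""
    return set(required_dids) <= set(proof["proof_of"])

assert covers(proof_record, ["did:sov:3214abcd"])
assert not covers(proof_record, ["did:sov:9999ffff"])
```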
    "},{"location":"concepts/0167-data-consent-lifecycle/#blockchain-prerequisites","title":"Blockchain Prerequisites","text":"

    These are the considerations when setting up the ledger:

    "},{"location":"concepts/0167-data-consent-lifecycle/#data-consent-receipt-certificate","title":"Data Consent Receipt Certificate","text":"

    These are the steps covered with data consent receipt certificate:

    "},{"location":"concepts/0167-data-consent-lifecycle/#initial-agreement-of-privacy-agreement","title":"Initial agreement of privacy agreement","text":"

    The following is the flow diagram for setting up the privacy agreement.

    "},{"location":"concepts/0167-data-consent-lifecycle/#proof-request","title":"Proof Request","text":"

    These are the steps covered with proof request:

    The proof request serves multiple purposes, the main one being that the conditions of access are auditable. If a data controller encounters a situation where they need to show that the consent and the conditions of accessing data are met, the proof request provides the evidence. The data subject also has more control over the proof request, and in situations where the revocation of the certificate is not performed, this becomes an extra safeguard. An important aspect of the proof request is that it can be done without sharing any personal data.

    "},{"location":"concepts/0167-data-consent-lifecycle/#performing-proof-request","title":"Performing Proof Request","text":"

    The following is the flow diagram for performing a proof request.

    "},{"location":"concepts/0167-data-consent-lifecycle/#certification-revocation","title":"Certification Revocation","text":"

    These are the steps covered with certification revocation:

    "},{"location":"concepts/0167-data-consent-lifecycle/#implementation-reference","title":"Implementation Reference","text":"

    A Python Jupyter notebook is available as a reference implementation to help with implementation. It is based on the getting-started Jupyter notebook. To run the example, take the following steps.

    1. Clone indy-sdk: \

       git clone https://github.com/hyperledger/indy-sdk.git\n
    2. Copy the following files to doc/getting-started: \
      1. consent-flow.ipynb
      2. docker-compose.yml *

      Note * - The reason for changing docker-compose.yml is to be able to view consent-flow.ipynb.

    3. Start docker-compose: \

      docker-compose up
    4. Open the HTML link and run consent-flow.ipynb.

    "},{"location":"concepts/0167-data-consent-lifecycle/#reference","title":"Reference","text":"

    Provide guidance for implementers, procedures to inform testing, interface definitions, formal function prototypes, error codes, diagrams, and other technical details that might be looked up. Strive to guarantee that:

    "},{"location":"concepts/0167-data-consent-lifecycle/#annex-a-pdp-schema-mapping-to-kantara-consent-receipt","title":"Annex A: PDP Schema mapping to Kantara Consent Receipt","text":"

    Kantara has defined a Consent Receipt with a list of mandatory and optional attributes. This annex maps the attributes to the PDP. Many of the attributes are supported through the ledger and are not directly included in the PDP.

    Note: The draft used for this annex was file \"Consent receipt annex for 29184.docx\".

    | Kantara attribute | Hyperledger Indy mapping |
    | --- | --- |
    | Version | Schema registration |
    | Jurisdiction | Agent registration |
    | Consent Timestamp | PDP signed certificate |
    | Collection Method | - |
    | Consent Receipt ID | PDP signed certificate |
    | Public Key | Ledger |
    | Language | Overlays |
    | PII Principal ID | Schema/Agent registration |
    | PII Controller | Agent registration |
    | On Behalf | Agent registration (1) |
    | PII Controller Contract | Agent registration (2) |
    | PII Controller Address | Agent registration |
    | PII Controller Email | Agent registration |
    | PII Controller Phone | Agent registration |
    | PII Controller URL [OPTIONAL] | - |
    | Privacy Policy | PDP |
    | services | PDP |
    | purposes | PDP |
    | Purpose Category | - |
    | Consent Type | PDP |
    | PII Categories | - |
    | Primary Purpose | PDP |
    | Termination | Ledger |
    | Third Party Name | PDP |
    | Sensitive PII | Schema base |

    Notes

    (1) An agent may be of type Cloud Agent, which works on behalf of an Issuer (Data Controller). When an institution registers on the blockchain, it should make clear on whose behalf it is registering.

    (2) The Controller Contact may change over time and is not a good reference to use when accepting a consent. If required, we suggest including it as part of Agent registration (or as a requirement).

    "},{"location":"concepts/0167-data-consent-lifecycle/#prior-art","title":"Prior art","text":""},{"location":"concepts/0167-data-consent-lifecycle/#etl-process","title":"ETL process","text":"

    Current data processing of PII data is not based on blockchain. Data is processed through ETL routines (e.g., AWS API Gateway and Lambda) with a data warehouse (e.g., AWS Redshift). The enforcement of GDPR is based on adding configuration routines to enforce storage limitations. Most data warehouses do not implement pseudonymization and may instead opt for a very short storage limitation of a couple of months. The current practice is to collect as much data as possible, which goes against data minimisation.

    "},{"location":"concepts/0167-data-consent-lifecycle/#personal-data-terms-and-conditions","title":"Personal Data Terms and Conditions","text":"

    The Customer Commons initiative (customercommons.org) has developed [terms and conditions] for personal data usage. The implementation of these terms and conditions will be tied to the schema and overlay definitions. The overlay will specify the conditions of sharing. For broader conditions, the schema will have new attributes for actual consent for data sharing. The Hyperledger Aries and Customer Commons efforts complement each other.

    "},{"location":"concepts/0167-data-consent-lifecycle/#unresolved-questions","title":"Unresolved questions","text":""},{"location":"concepts/0167-data-consent-lifecycle/#implementations","title":"Implementations","text":"

    The following lists the implementations (if any) of this RFC. Please do a pull request to add your implementation. If the implementation is open source, include a link to the repo or to the implementation within the repo. Please be consistent in the \"Name\" field so that a mechanical processing of the RFCs can generate a list of all RFCs supported by an Aries implementation.

    Name / Link Implementation Notes"},{"location":"concepts/0167-data-consent-lifecycle/#plan","title":"Plan","text":""},{"location":"concepts/0167-data-consent-lifecycle/#todo","title":"ToDo","text":""},{"location":"concepts/0167-data-consent-lifecycle/#comments","title":"Comments","text":"Question From Date Answer Where is consent recorded? Harsh 2019-07-31 There are several types of consent listed below. Where the actual consent is recorded needs to be determined: Specialised Consent (legal), Generic Consent (legal), General Data Processing Consent"},{"location":"concepts/0207-credential-fraud-threat-model/","title":"0207: Credential Fraud Threat Model","text":""},{"location":"concepts/0207-credential-fraud-threat-model/#summary","title":"Summary","text":"

    Provides a model for analyzing and preventing fraud with verifiable credentials.

    "},{"location":"concepts/0207-credential-fraud-threat-model/#motivation","title":"Motivation","text":"

    Cybersecurity experts often view technology through the lens of a threat model that helps implementers methodically discover and remediate vulnerabilities.

    Verifiable credentials are a new technology that has enormous potential to shape the digital landscape. However, when used carelessly, they could bring to digital, remote interactions many of the same abuse possibilities that criminals have exploited for generations in face-to-face interactions.

    We need a base threat model for the specific subdiscipline of verifiable credentials, so implementations and deployments have a clear view of how vulnerabilities might arise, and how they can be eliminated. More specific threat models can build atop this general foundation.

    "},{"location":"concepts/0207-credential-fraud-threat-model/#tutorial","title":"Tutorial","text":""},{"location":"concepts/0207-credential-fraud-threat-model/#scope","title":"Scope","text":"

    Verifiable credentials are a way to establish trust. They provide value for login, authorization, reputation, and data sharing, and they enable an entire ecosystem of loosely cooperating parties that use different software, follow different business processes, and require different levels of assurance.

    This looseness and variety presents a challenge. Exhaustively detailing every conceivable abuse in such an ecosystem would be nearly as daunting as trying to model all risk on the internet.

    This threat model therefore takes a narrower view. We assume the digital landscape (e.g., the internet) as context, with all its vulnerabilities and mitigating best practices. We focus on just the ways that the risks and mitigations for verifiable credential fraud are unique.

    "},{"location":"concepts/0207-credential-fraud-threat-model/#definition","title":"Definition","text":"

    Fraud: intentional deception to secure unfair or unlawful gain, or to hurt a victim. Contrast hoax, which is deception for annoyance or entertainment. (paraphrase from Wikipedia)

    "},{"location":"concepts/0207-credential-fraud-threat-model/#relation-to-familiar-methods","title":"Relation to familiar methods","text":"

    There are many methods for constructing threat models, including STRIDE, PASTA, LINDDUN, CVSS, and so forth. These are excellent tools. We use insights from them to construct what's offered here, and we borrow some terminology. We recommend them to any issuer, holder, or verifier that wants to deepen their expertise. They are an excellent complement to this RFC.

    However, this RFC is an actual model, not a method. Also, early exploration of the threat space suggests that with verifiable credentials, patterns of remediation grow more obvious if we categorize vulnerabilities in a specialized way. Therefore, what follows is more than just the mechanical expansion of the STRIDE algorithm or the PASTA process.

    "},{"location":"concepts/0207-credential-fraud-threat-model/#data-flow-diagram","title":"Data Flow Diagram","text":"

    Data flows in a verifiable credential ecosystem in approximately the following way:

    Some verifiable credential models include an additional flow (arrow) directly from issuers to verifiers, if they call for revocation to be tested by consulting a revocation list maintained by the issuer.

    "},{"location":"concepts/0207-credential-fraud-threat-model/#key-questions","title":"Key Questions","text":"

    Fraud could be categorized in many ways--for example, by how much damage it causes, how easy it is to detect, or how common it is. However, we get predictive power and true insight when we focus on characteristics that lead to different risk profiles and different remediations. For verifiable credentials, this suggests a focus on the following 4 questions:

    1. Who is the perpetrator?
    2. Who is directly deceived?
    3. When is the deception committed?
    4. Where (on which fact) is the deception focused?

    We can think of these questions as orthogonal dimensions, where each question is like an axis that has many possible positions or answers. We will enumerate as many answers to these questions as we can, and assign each answer a formal name. Then we can use a terse, almost mathematical notation in the form (w + x + y + z) (where w is an answer to question 1, x is an answer to question 2, and so forth) to identify a fraud potential in 4-dimensional space. For example, a fraud where the holder fools the issuer at time of issuance about subject data might be given by the locus: (liar-holder + fool-issuer + issuance-time + bad-subject-claims).

    What follows is an exploration of each question and a beginning set of associated answers. We provide at least one example of a situation that embodies each answer, notated with \u21e8. Our catalog is unlikely to be exhaustive; criminal creativity will find new expressions as we eliminate potential in the obvious places. However, these answers are complete enough to provide significant insight into the risks and remediations in the ecosystem.
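    The 4-dimensional locus notation described above can be sketched as a simple data structure. This is an illustrative sketch only; the `FraudLocus` name and field names are our own, not defined by the RFC, though the answer values come from the example above.

    ```python
    # Illustrative sketch of the 4-dimensional fraud locus notation.
    # The class and field names are hypothetical; the answer values are
    # taken from the RFC's own example.
    from typing import NamedTuple

    class FraudLocus(NamedTuple):
        perpetrator: str  # answer to "Who is the perpetrator?"
        deceived: str     # answer to "Who is directly deceived?"
        when: str         # answer to "When is the deception committed?"
        where: str        # answer to "Where (on which fact) is it focused?"

        def __str__(self) -> str:
            # Render in the RFC's (w + x + y + z) notation.
            return f"({self.perpetrator} + {self.deceived} + {self.when} + {self.where})"

    locus = FraudLocus("liar-holder", "fool-issuer", "issuance-time", "bad-subject-claims")
    print(locus)  # (liar-holder + fool-issuer + issuance-time + bad-subject-claims)
    ```

    Because the four axes are orthogonal, a catalog of fraud potentials can be enumerated as the cross product of the answer sets for each question.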

    "},{"location":"concepts/0207-credential-fraud-threat-model/#1-who-is-the-perpetrator","title":"1. Who is the perpetrator?","text":""},{"location":"concepts/0207-credential-fraud-threat-model/#third-parties","title":"third parties","text":""},{"location":"concepts/0207-credential-fraud-threat-model/#combinations","title":"combinations","text":""},{"location":"concepts/0207-credential-fraud-threat-model/#2-who-is-directly-deceived","title":"2. Who is directly deceived?","text":"

    Combinations of the above?

    "},{"location":"concepts/0207-credential-fraud-threat-model/#3-when-is-the-deception-committed","title":"3. When is the deception committed?","text":""},{"location":"concepts/0207-credential-fraud-threat-model/#4-where-on-which-fact-is-the-deception-focused","title":"4. Where (on which fact) is the deception focused?","text":""},{"location":"concepts/0207-credential-fraud-threat-model/#identity","title":"identity","text":""},{"location":"concepts/0207-credential-fraud-threat-model/#claims","title":"claims","text":""},{"location":"concepts/0207-credential-fraud-threat-model/#context","title":"context","text":""},{"location":"concepts/0207-credential-fraud-threat-model/#reference","title":"Reference","text":"

    Provide guidance for implementers, procedures to inform testing, interface definitions, formal function prototypes, error codes, diagrams, and other technical details that might be looked up. Strive to guarantee that:

    "},{"location":"concepts/0207-credential-fraud-threat-model/#drawbacks","title":"Drawbacks","text":"

    Why should we not do this?

    "},{"location":"concepts/0207-credential-fraud-threat-model/#rationale-and-alternatives","title":"Rationale and alternatives","text":""},{"location":"concepts/0207-credential-fraud-threat-model/#prior-art","title":"Prior art","text":"

    Discuss prior art, both the good and the bad, in relation to this proposal. A few examples of what this can include are:

    This section is intended to encourage you as an author to think about the lessons from other implementers, provide readers of your proposal with a fuller picture. If there is no prior art, that is fine - your ideas are interesting to us whether they are brand new or if they are an adaptation from other communities.

    Note that while precedent set by other communities is some motivation, it does not on its own motivate an enhancement proposal here. Please also take into consideration that Aries sometimes intentionally diverges from common identity features.

    "},{"location":"concepts/0207-credential-fraud-threat-model/#unresolved-questions","title":"Unresolved questions","text":""},{"location":"concepts/0207-credential-fraud-threat-model/#implementations","title":"Implementations","text":"

    The following lists the implementations (if any) of this RFC. Please do a pull request to add your implementation. If the implementation is open source, include a link to the repo or to the implementation within the repo. Please be consistent in the \"Name\" field so that a mechanical processing of the RFCs can generate a list of all RFCs supported by an Aries implementation.

    Name / Link Implementation Notes"},{"location":"concepts/0217-linkable-message-paths/","title":"Aries RFC 0217: Linkable Message Paths","text":""},{"location":"concepts/0217-linkable-message-paths/#summary","title":"Summary","text":"

    Describes how to hyperlink to specific elements of specific DIDComm messages.

    "},{"location":"concepts/0217-linkable-message-paths/#motivation","title":"Motivation","text":"

    It must be possible to refer to specific pieces of data in specific DIDComm messages. This allows a message later in a protocol to refer to data in a message that preceded it, which is useful for stitching together subprotocols, debugging, error handling, logging, and various other scenarios.

    "},{"location":"concepts/0217-linkable-message-paths/#tutorial","title":"Tutorial","text":"

    There are numerous approaches to the general problem of referencing/querying a piece of data in a JSON document. We have chosen JSPath as our solution to that part of the problem; see Prior Art for a summary of that option and a comparison to alternatives.

    What we need, over and above JSPath, is a URI-oriented way to refer to an individual message, so the rest of the referencing mechanism has a JSON document to start from.

    "},{"location":"concepts/0217-linkable-message-paths/#didcomm-message-uris","title":"DIDComm Message URIs","text":"

    A DIDComm message URI (DMURI) is a string that references a sent/received message, using standard URI syntax as specified in RFC 3986. It takes one of the following forms:

    1. didcomm://<thid>/<msgid>
    2. didcomm://./<msgid> or didcomm://../<msgid>
    3. didcomm:///<msgid> (note 3 slashes)
    4. didcomm://<sender>@<thid>/<senderorder>

    Here, <msgid> is replaced with the value of the @id property of a plaintext DIDComm message; <thid> is replaced with the ~thread.thid property, <sender> is replaced with a DID, and <senderorder> is replaced with a zero-based index (the Nth message emitted in the thread by that sender).

    Form 1 is called absolute form, and is the preferred form of DMURI to use when talking about messages outside the context of an active thread (e.g., in log files).

    Form 2 is called relative form, and is a convenient way for one message to refer to another within an ongoing interaction. It is relatively explicit and terse. It uses 1 or 2 dots to reference the current or parent thread, and then provides the message id with that thread as context. Referencing more distant parent threads is done with absolute form.

    Form 3 is called simple form. It omits the thread id entirely. It is maximally short and usually clear enough. However, it is slightly less preferred than forms 1 and 2 because it is possible that some senders might not practice good message ID hygiene that guarantees global message ID uniqueness. When that happens, a message ID could get reused, making this form ambiguous. The most recent message that is known to match the message id must be assumed.

    Form 4 is called ordered form. It is useful for referencing a message that was never received, making the message's internal @id property unavailable. It might be used to request a resend of a lost message that is uncovered by the gap detection mechanism in DIDComm's message threading.

    Only parties who have sent or received messages can dereference DMURIs. However, the URIs should be transmittable through any number of third parties who do not understand them, without any loss of utility.
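    The four forms above can be distinguished by inspecting the authority component of the URI. The following is a minimal sketch, not a full RFC 3986 parser; the function name is illustrative.

    ```python
    import re

    # Rough classifier for the four DMURI forms; a sketch under the
    # assumption that the authority is everything between "didcomm://"
    # and the next slash.
    def classify_dmuri(uri: str) -> str:
        m = re.fullmatch(r"didcomm://([^/]*)/(.+)", uri)
        if not m:
            raise ValueError(f"not a DMURI: {uri}")
        authority = m.group(1)
        if authority == "":
            return "simple"    # form 3: didcomm:///<msgid>
        if authority in (".", ".."):
            return "relative"  # form 2: current or parent thread
        if "@" in authority:
            return "ordered"   # form 4: <sender>@<thid>/<senderorder>
        return "absolute"      # form 1: <thid>/<msgid>

    print(classify_dmuri("didcomm:///e56085f9-4fe5-40a4-bf15-6438751b3ae8"))  # simple
    ```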

    "},{"location":"concepts/0217-linkable-message-paths/#combining-a-dmuri-with-a-jspath","title":"Combining a DMURI with a JSPath","text":"

    A JSPath is concatenated to a DMURI by using an intervening slash delimiter:

    didcomm:///e56085f9-4fe5-40a4-bf15-6438751b3ae8/.~timing.expires_time

    If a JSPath uses characters from RFC 3986's reserved characters list in a context where they have special meaning, they must be percent-encoded.
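    The concatenation and percent-encoding rules can be sketched as follows. This is an illustrative helper, not a normative implementation; the function name is our own.

    ```python
    from urllib.parse import quote

    # Sketch: append a JSPath to a DMURI with a slash delimiter,
    # percent-encoding RFC 3986 reserved characters in the path expression.
    def dmuri_with_jspath(dmuri: str, jspath: str) -> str:
        # quote() leaves unreserved characters (letters, digits, "-._~")
        # alone; safe="" forces reserved characters like "[" and "?" to be
        # percent-encoded.
        return dmuri + "/" + quote(jspath, safe="")

    ref = dmuri_with_jspath(
        "didcomm:///e56085f9-4fe5-40a4-bf15-6438751b3ae8",
        ".~timing.expires_time",
    )
    print(ref)
    # didcomm:///e56085f9-4fe5-40a4-bf15-6438751b3ae8/.~timing.expires_time
    ```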

    "},{"location":"concepts/0217-linkable-message-paths/#reference","title":"Reference","text":"

    Provide guidance for implementers, procedures to inform testing, interface definitions, formal function prototypes, error codes, diagrams, and other technical details that might be looked up. Strive to guarantee that:

    "},{"location":"concepts/0217-linkable-message-paths/#drawbacks","title":"Drawbacks","text":"

    Why should we not do this?

    "},{"location":"concepts/0217-linkable-message-paths/#rationale-and-alternatives","title":"Rationale and alternatives","text":""},{"location":"concepts/0217-linkable-message-paths/#prior-art","title":"Prior art","text":""},{"location":"concepts/0217-linkable-message-paths/#unresolved-questions","title":"Unresolved questions","text":""},{"location":"concepts/0217-linkable-message-paths/#implementations","title":"Implementations","text":"

    The following lists the implementations (if any) of this RFC. Please do a pull request to add your implementation. If the implementation is open source, include a link to the repo or to the implementation within the repo. Please be consistent in the \"Name\" field so that a mechanical processing of the RFCs can generate a list of all RFCs supported by an Aries implementation.

    Name / Link Implementation Notes"},{"location":"concepts/0231-biometric-service-provider/","title":"Aries RFC 0231: Biometric Service Provider","text":""},{"location":"concepts/0231-biometric-service-provider/#summary","title":"Summary","text":"

    Biometric services for Identity Verification, Authentication, Recovery and other use cases referred to in Aries RFCs including DKMS.

    "},{"location":"concepts/0231-biometric-service-provider/#motivation","title":"Motivation","text":"

    Biometrics play a special role in many identity use cases because of their ability to intrinsically identify a unique individual, but their use depends on a variety of factors including liveness, matching accuracy, ease of acquisition, security and privacy. Use of biometrics is already well established in most countries for domestic and international travel, banking and law enforcement. In banking, know-your-customer (KYC) and anti-money laundering (AML) laws require some form of biometric(s) when establishing accounts.

    In this specification, we characterize the functions and schema that biometric service providers (BSPs) must implement to ensure a uniform interface to clients: wallets and agents. For example, current Automated Biometric Information Systems (ABIS) and other standards (IEEE 2410, FIDO) provide a subset of services but often require proprietary adaptors due to the fragmented history of the biometric market: different modalities (face, fingerprint, iris, etc.) require different functions, schema, and registration information. More recently, standards have begun to specify functions and schema across biometric modalities. This specification will adopt these approaches and treat biometric data within an encrypted envelope across modalities.

    "},{"location":"concepts/0231-biometric-service-provider/#tutorial","title":"Tutorial","text":"

    One goal of the Biometric Service Provider (BSP) specification is to allow for self-sovereign biometric credentials in a holder's wallet or cloud agent trusted by issuers and verifiers:

    An issuer may collect biometric information from a holder in order to issue credentials (biometric or not). Likewise, a verifier may require biometric matching against the holder's credentials for authentication. In either case, issuers, holders and verifiers may need to rely on 3rd party services to perform biometric matching functions for comparison to authoritative databases.

    "},{"location":"concepts/0231-biometric-service-provider/#basics","title":"Basics","text":"

    In general, biometrics are collected during registration from a person and stored for later comparisons. The registration data is called the Initial Biometric Vector (IBV). During subsequent sessions, a biometric reading is taken called the Candidate Biometric Vector (CBV) and \"matched\" to the IBV:

    Both the IBV and CBV must be securely stored on a mobile device or server often with the help of hardware-based encryption mechanisms such as a Trusted Execution Environment (TEE) or Hardware Security Module (HSM). The CBV is typically ephemeral and discarded (using secure erasure) following the match operation.

    If the IBV and/or CBV are used on a server, any exchange must use strong encryption between client and server if transmitted over public or private networks in case of interception. Failure to properly protect the collection, transmission, storage and processing of biometric data is a serious offense in most countries and violations are subject to severe fines and/or imprisonment.

    "},{"location":"concepts/0231-biometric-service-provider/#example-aadhaar","title":"Example: Aadhaar","text":"

    The Aadhaar system is an operational biometric that provides identity proofing and identity verification services for over 1 billion people in India. Aadhaar is comprised of many elements with authentication as the most common use case:

    Authentication Service Agents (ASAs) are licensed by the Government of India to pass the verification request via secure channels to the Unique Identification Authority of India (UIDAI) data centre where IBVs are retrieved and matched to incoming CBVs from Authentication User Agencies (AUA) that broker user authentication sessions from point-of-sale (PoS) terminals:

    "},{"location":"concepts/0231-biometric-service-provider/#use-cases","title":"Use Cases","text":"

    A Biometric Service Provider (BSP) supports the following use cases. In each case, we distinguish whether the use case requires one-to-one (1:1) matching or one-to-many (1:N) matching:

    1. Device Unlocking - primarily introduced to solve the inconvenience of typing a password on a small mobile device, face and single-digit fingerprint recognition were introduced to mobile devices to protect access to device resources. This is a 1:1 match operation.

    2. Authentication - the dominant use case for biometrics. Users must prove they sufficiently match the IBV created during registration in order to access local and remote resources including doors, cars, servers, etc. This is a 1:1 match operation.

    3. Identification - an unknown person presents for purposes of determining their identity against a database of registered persons. This is a 1:N match operation because the database(s) must be searched for all IBVs of matching identities.

    4. Identity Verification - a person claims a specific identity with associated metadata (e.g., name, address, etc.) and provides a CBV for match against that person's registered biometric data to confirm the claim. This is a 1:1 match operation.

    5. Identity Proofing - a person claims a specific identity with associated metadata (e.g., name, address, etc.) and provides a CBV for match against all persons in database(s) in order to determine the efficacy of their claims and any counter-claims. This is a 1:N match operation because the database(s) must be searched for all IBVs of matching identities.

    6. Deduplication - given a CBV, match against IBVs of all registered identities to determine if already present or not in the database(s). This is a 1:N matching operation.

    7. Fraud prevention - A match operation could return confidence score(s) (0..1) rather than a simple boolean. Confidence scores express the probability that the candidate is not an imposter and could be used in risk analysis engines. This may be a use case for BSP clients.

    8. Recovery - Using biometric shards and secret sharing, it is possible to recover lost private keys associated with a credential from one's biometrics. This may be a use case for BSP clients.
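    The secret-sharing idea behind the recovery use case can be sketched with a toy Shamir scheme: a key is split into n shards so that any k of them reconstruct it. This is an illustration of the mathematics only, not a biometric-shard protocol; the prime and API are our own choices.

    ```python
    import random

    # Toy Shamir secret sharing over a prime field, for illustration.
    PRIME = 2**127 - 1  # a Mersenne prime, large enough for this sketch

    def make_shares(secret: int, k: int, n: int):
        # Random polynomial of degree k-1 with the secret as constant term.
        coeffs = [secret] + [random.randrange(PRIME) for _ in range(k - 1)]
        def f(x):
            return sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME
        return [(x, f(x)) for x in range(1, n + 1)]

    def recover(shares):
        # Lagrange interpolation at x = 0 recovers the constant term.
        secret = 0
        for i, (xi, yi) in enumerate(shares):
            num, den = 1, 1
            for j, (xj, _) in enumerate(shares):
                if i != j:
                    num = num * (-xj) % PRIME
                    den = den * (xi - xj) % PRIME
            secret = (secret + yi * num * pow(den, -1, PRIME)) % PRIME
        return secret

    shares = make_shares(123456789, k=3, n=5)
    assert recover(shares[:3]) == 123456789   # any 3 of 5 shards suffice
    assert recover(shares[1:4]) == 123456789
    ```

    In a biometric variant, release of each shard would be gated by a successful biometric match rather than by possession alone.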

    The previous diagram describing the IBV and CBV collection and matching during registration and presentation did not specify where the IBV is persisted nor where the match operation is performed. In general, we can divide the use cases into 4 categories depending on where the IBV is persisted and where the match must occur:

    Mobile-Mobile: The IBV is stored on the mobile device and the match with the CBV occurs on the mobile device

    Mobile-Server: The IBV is stored on the mobile device, but the match occurs on a server

    Server-Mobile: The IBV is stored on a server, but the match occurs on a mobile device

    Server-Server: The IBV is stored on a server and the match occurs on a server

    "},{"location":"concepts/0231-biometric-service-provider/#use-case-1-identity-proofing","title":"Use case 1: Identity Proofing","text":""},{"location":"concepts/0231-biometric-service-provider/#use-case-2-recovery","title":"Use case 2: Recovery","text":""},{"location":"concepts/0231-biometric-service-provider/#reference","title":"Reference","text":"

    The NIST 800-63-3 publications are guidelines that establish levels of assurance (LOA) for identity proofing (Volume A), authentication (Volume B), and federation (Volume C). The Biometric Service Provider (BSP) specification deals primarily with identity proofing and authentication.

    A common misconception is that a biometric is like a password, but cannot be replaced upon loss or compromise. A biometric is private but not secret, whereas a password is secret and private. Used correctly, biometrics require presentation attack detection (PAD), also called liveness, to ensure that the sensor is presented with a live face, fingerprints, etc. of a subject rather than a spoof, i.e., a photo, fake fingertips, etc. Indeed, NIST 800-63-3B requires presence of a person in front of a witness for Identity Assurance Level 3 (IAL3) in identity proofing use cases. NIST characterizes the identity proofing process as follows:

    Remote use of biometrics is increasing as well to streamline on-boarding and recovery processes without having to present to an official. NIST 800-63-3A introduced remote identity proofing for IAL2 in 2017 with some form of PAD strongly recommended (by reference to NIST 800-63-3B). Typically, additional measures are combined with biometrics including knowledge-based authentication (KBA), risk scoring and document-based verification to reduce fraud.

    "},{"location":"concepts/0231-biometric-service-provider/#protection","title":"Protection","text":"

    Biometric data is highly sensitive and must be protected wherever and whenever it is collected, transmitted, stored and processed. In general, some simple rules of thumb include:

    "},{"location":"concepts/0231-biometric-service-provider/#issues","title":"Issues","text":""},{"location":"concepts/0231-biometric-service-provider/#drawbacks","title":"Drawbacks","text":"

    Biometrics are explicitly required in many global regulations including NIST (USA), Aadhaar (India), INE (Mexico), and RENIEC (Peru) but also standardized by international organizations for travel (IATA) and finance (FATF).

    "},{"location":"concepts/0231-biometric-service-provider/#rationale-and-alternatives","title":"Rationale and alternatives","text":"

    By addressing biometrics, we seek to provide explicit guidance to developers who will undoubtedly encounter them in many identity credentialing and authentication processes.

    "},{"location":"concepts/0231-biometric-service-provider/#prior-art","title":"Prior art","text":"

    Several biometric standards exist that provide frameworks for biometric services including the FIDO family of standards and IEEE 2410. Within each biometric modality, standards exist to encode representations of biometric information. For example, fingerprints can be captured as raw images in JPEG or PNG format but also represented as vectors of minutiae encoded in the WSQ format.

    "},{"location":"concepts/0231-biometric-service-provider/#unresolved-questions","title":"Unresolved questions","text":""},{"location":"concepts/0231-biometric-service-provider/#implementations","title":"Implementations","text":"

    The following lists the implementations (if any) of this RFC. Please do a pull request to add your implementation. If the implementation is open source, include a link to the repo or to the implementation within the repo. Please be consistent in the \"Name\" field so that a mechanical processing of the RFCs can generate a list of all RFCs supported by an Aries implementation.

    Name / Link Implementation Notes"},{"location":"concepts/0250-rich-schemas/","title":"RFC 0250: Rich Schema Objects","text":""},{"location":"concepts/0250-rich-schemas/#summary","title":"Summary","text":"

    A high-level description of the components of an anonymous credential ecosystem that supports rich schemas, W3C Verifiable Credentials and Presentations, and correspondingly rich presentation requests. Rich schemas are hierarchically composable graph-based representations of complex data. For these rich schemas to be incorporated into the Aries anonymous credential ecosystem, we also introduce such objects as mappings, encodings, presentation definitions and their associated contexts.

    Though the goal of this RFC is to describe how rich schemas may be used with anonymous credentials, it will be noted that many of the objects described here may be used to allow any credential system to make use of rich schemas.

    This RFC provides a brief description of each rich schema object. Future RFCs will provide greater detail for each individual object and will be linked to from this document. The further RFCs will contain examples for each object.

    "},{"location":"concepts/0250-rich-schemas/#motivation","title":"Motivation","text":""},{"location":"concepts/0250-rich-schemas/#standards-compliance","title":"Standards Compliance","text":"

    The W3C Verifiable Claims Working Group (VCWG) will soon be releasing a verifiable credential data model. This proposal introduces Aries anonymous credentials and presentations which are in compliance with that standard.

    "},{"location":"concepts/0250-rich-schemas/#interoperability","title":"Interoperability","text":"

    Compliance with the VCWG data model introduces the possibility of interoperability with other credentials that also comply with the standard. The verifiable credential data model specification is limited to defining the data structure of verifiable credentials and presentations. This includes defining extension points, such as \"proof\" or \"credentialStatus.\"

    The extensions themselves are outside the scope of the current specification, so interoperability beyond the data model layer will require shared understanding of the extensions used. Work on interoperability of the extensions will be an important aspect of maturing the data model specification and associated protocols.

    Additionally, the new rich schemas are compatible with or the same as existing schemas defined by industry standards bodies and communities of interest. This means that the rich schemas should be interoperable with those found on schema.org, for example. Schemas can also be readily defined for those organizations that have standards for data representation, but who do not have an existing formal schema representation.

    "},{"location":"concepts/0250-rich-schemas/#shared-semantic-meaning","title":"Shared Semantic Meaning","text":"

    The rich schemas and associated constructs are linked data objects that have an explicitly shared context. This allows for all entities in the ecosystem to operate with a shared vocabulary.

    Because rich schemas are composable, the potential data types that may be used for field values are themselves specified in schemas that are linked to in the property definitions. The shared semantic meaning gives greater assurance that the meaning of the claims in a presentation is in harmony with the semantics the issuer intended to attest when they signed the credential.

    "},{"location":"concepts/0250-rich-schemas/#improved-predicate-proofs","title":"Improved Predicate Proofs","text":"

    Introducing standard encoding methods for most data types will enable predicate proof support for floating point numbers, dates and times, and other assorted measurements. We also introduce a mapping object that ties intended encoding methods to each schema property that may be signed so that an issuer will have the ability to canonically specify how the data they wish to sign maps to the signature they provide.

    "},{"location":"concepts/0250-rich-schemas/#use-of-json-ld","title":"Use of JSON-LD","text":"

    Rich schema objects primarily wish to benefit from the accessibility of ordinary JSON, but require more sophisticated JSON-LD-driven patterns when the need arises.

    Each rich schema object will specify the extent to which it supports JSON-LD functionality, and the extent to which JSON-LD processing may be required.

    "},{"location":"concepts/0250-rich-schemas/#what-the-casual-developer-needs-to-know","title":"What the Casual Developer Needs to Know","text":""},{"location":"concepts/0250-rich-schemas/#details","title":"Details","text":"

    Compatibility with JSON-LD was evaluated against version 1.1 of the JSON-LD spec, current in early 2019. If material changes in the spec are forthcoming, a new analysis may be worthwhile. Our current understanding follows.

    "},{"location":"concepts/0250-rich-schemas/#type","title":"@type","text":"

    The type of a rich schema object, or of an embedded object within a rich schema object, is given by the JSON-LD @type property. JSON-LD requires this value to be an IRI.

    "},{"location":"concepts/0250-rich-schemas/#id","title":"@id","text":"

    The identifier for a rich schema object is given by the JSON-LD @id property. JSON-LD requires this value to be an IRI.

    "},{"location":"concepts/0250-rich-schemas/#context","title":"@context","text":"

    This is JSON-LD\u2019s namespacing mechanism. It is active in rich schema objects, but can usually be ignored for simple processing, in the same way namespaces in XML are often ignored for simple tasks.

    Every rich schema object has an associated @context, but for many of them we have chosen to follow the procedure described in section 6 of the JSON-LD spec, which focuses on how ordinary JSON can be interpreted as JSON-LD.

    Contexts are JSON objects. They are the standard mechanism for defining shared semantic meaning among rich schema objects. Contexts allow schemas, mappings, presentations, etc. to use a common vocabulary when referring to common attributes, i.e. they provide an explicit shared semantic meaning.
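    As an illustration, a minimal context might map the short attribute names used by a schema to full IRIs. This is a hypothetical sketch (shown as a Python dict for concreteness; the vocabulary terms are illustrative, not taken from any normative Aries context):

    ```python
    import json

    # Hypothetical @context: maps the short names a schema uses to full IRIs,
    # so every consumer resolves "name" and "birthDate" to the same vocabulary.
    context = {
        "@context": {
            "schema": "https://schema.org/",
            "name": "schema:name",
            "birthDate": "schema:birthDate",
        }
    }

    # Serializing and re-parsing demonstrates it is ordinary JSON underneath.
    serialized = json.dumps(context, indent=2)
    parsed = json.loads(serialized)
    ```

    Simple processors can treat this as plain JSON and ignore the namespacing, exactly as the surrounding text describes.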

    "},{"location":"concepts/0250-rich-schemas/#ordering","title":"Ordering","text":"

    JSON-LD specifies that the order of items in arrays is NOT significant, and notes that this is the opposite of the standard assumption for plain JSON. This makes sense when viewed through the lens of JSON-LD\u2019s role as a transformation of RDF, and is a concept supported by rich schema objects.

    "},{"location":"concepts/0250-rich-schemas/#tutorial","title":"Tutorial","text":"

    The object ecosystem for anonymous credentials that make use of rich schemas has a lot of familiar items: credentials, credential definitions, schemas, and presentations. Each of these objects has been changed, some slightly, some more significantly, in order to take advantage of the benefits of contextually rich linked schemas and W3C verifiable credentials. More information on each of these objects can be found below.

    In addition to the familiar objects, we introduce some new objects: contexts, mappings, encodings, and presentation definitions. These serve to bridge between our current powerful signatures and the rich schemas, as well as to take advantage of some of the new capabilities that are introduced.

    Relationship graph of rich schema objects

    "},{"location":"concepts/0250-rich-schemas/#verifiable-credentials","title":"Verifiable Credentials","text":"

    The Verifiable Claims Working Group of the W3C is working to publish a Verifiable Credentials data model specification. Put simply, the goal of the new data format for anonymous credentials is to comply with the W3C specification.

    The data model introduces some standard properties and a shared vocabulary so that different producers of credentials can better inter-operate.

    "},{"location":"concepts/0250-rich-schemas/#rich-schemas","title":"Rich Schemas","text":"

    The proposed rich schemas are JSON-LD objects. This allows credentials issued according to them to have a clear semantic meaning, so that the verifier can know what the issuer intended. They also support explicitly typed properties and semantic inheritance. A schema may include other schemas as property types, or extend another schema with additional properties. For example a schema for \"employee\" may inherit from the schema for \"person.\"

    Rich schemas are objects that may be used by any verifiable credential system.

    "},{"location":"concepts/0250-rich-schemas/#mappings","title":"Mappings","text":"

    Rich schemas are complex, hierarchical, and possibly nested objects. The Camenisch-Lysyanskaya signature scheme used in anonymous credentials requires the attributes to be represented by an array of 256-bit integers. Converting data specified by a rich schema into a flat array of integers requires a mapping object.

    Mappings serve as a bridge between rich schemas and the flat array of signed integers. A mapping specifies the order in which attributes are transformed and signed. It consists of a set of graph paths and the encoding used for the attribute values specified by those graph paths. Each claim in a mapping has a reference to an encoding, and those encodings are defined in encoding objects.
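    A mapping object along these lines might be sketched as follows. This is a hypothetical illustration (the field names and identifiers are made up, not a normative format): each attribute pairs a graph path into the rich schema with a reference to an encoding object, and the list order fixes the signing order.

    ```python
    # Hypothetical mapping object: field names and DIDs are illustrative.
    # Each entry pairs a graph path with a reference to an encoding object.
    mapping = {
        "schema": "did:example:schema:driver-license",
        "attributes": [
            {"path": "person.name", "encoding": "did:example:enc:sha256"},
            {"path": "person.birthDate", "encoding": "did:example:enc:date-ordinal"},
        ],
    }

    # The order of this list determines the order in which attribute values
    # are transformed and placed into the flat integer array that is signed.
    signing_order = [attr["path"] for attr in mapping["attributes"]]
    ```

    A verifier checking a presentation would walk the same list to confirm the attributes were encoded and ordered as the credential definition's mapping requires.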

    Mappings are written to a data registry so they can be shared by multiple credential definitions. They need to be discoverable. When a mapping has been created or selected by an issuer, it is made part of the credential definition.

    The mappings serve as a vital part of the verification process. The verifier, upon receipt of a presentation must not only check that the array of integers signed by the issuer is valid, but that the attribute values were transformed and ordered according to the mapping referenced in the credential definition.

    Note: The anonymous credential signature scheme introduced here is Camenisch-Lysyanskaya signatures. It is the use of this signature scheme in combination with rich schema objects that necessitates a mapping object. If another signature scheme is used which does not have the same requirements, a mapping object may not be necessary or a different mapping object may need to be defined.

    "},{"location":"concepts/0250-rich-schemas/#encodings","title":"Encodings","text":"

    All attribute values to be signed in an anonymous credential must be transformed into 256-bit integers in order to support the current Camenisch-Lysyanskaya signature scheme.

    The introduction of rich schemas and their associated range of possible attribute value data types require correspondingly rich encoding algorithms. The purpose of the encoding object is to specify the algorithm used to perform transformations for each attribute value data type. The encoding algorithms will also allow for extending the cryptographic schemes and various sizes of encodings (256-bit, 384-bit, etc.). The encoding algorithms will allow for broad use of predicate proofs, and avoid hashed values where they are not needed, as hashed values do not support predicate proofs.

    Encodings, at their heart, describe an algorithm for converting data from one format to another, in a deterministic way. They can therefore be used in myriad ways, not only for the values of attributes within anonymous credentials.
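    Two hypothetical encoding algorithms can illustrate the trade-off the text describes: a hash-based encoding for strings supports only equality, while an order-preserving encoding for dates enables predicate proofs. Both sketches are illustrative, not normative encodings:

    ```python
    import hashlib
    from datetime import date

    def encode_string(value: str) -> int:
        # Hash-based encoding: deterministic and fits in 256 bits, but it
        # destroys ordering, so it supports equality checks only, not
        # predicate proofs such as "greater than".
        return int.from_bytes(hashlib.sha256(value.encode()).digest(), "big")

    def encode_date(iso_date: str) -> int:
        # Order-preserving encoding (days since year 1): comparisons on the
        # encoded integers mirror comparisons on the dates themselves,
        # enabling predicates such as "born before 2000-01-01".
        y, m, d = map(int, iso_date.split("-"))
        return date(y, m, d).toordinal()
    ```

    Because `encode_date` preserves order, a predicate proof over the encoded integer is meaningful; the hashed string value, by contrast, can only be matched for equality.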

    Encoding objects are written to a data registry. Encoding objects also allow for a means of extending the standard set of encodings.

    "},{"location":"concepts/0250-rich-schemas/#credential-definitions","title":"Credential Definitions","text":"

    Credential definitions provide a method for issuers to specify a schema and mapping object, and provide public key data for anonymous credentials they issue. This ties the schema and public key data values to the issuer. The verifier uses the credential definition to check the validity of each signed credential attribute presented to the verifier.

    "},{"location":"concepts/0250-rich-schemas/#presentation-definitions","title":"Presentation Definitions","text":"

    A presentation definition is the means whereby a verifier asks for data from a holder. It contains a set of named desired proof attributes with corresponding restrictions that limit the potential sources for the attribute data according to the desired source schema, issuer DID, credential definition, etc. A presentation definition also contains a similar set of requested predicate proofs, with named attributes and restrictions.

    It may be helpful to think of a presentation definition as the mirror image of a mapping object. Where a mapping object specifies the graph paths of the attributes to be signed, a presentation definition specifies the graph query that may be fulfilled by such graph paths. The presentation definition does not need to concern itself with specifying a particular mapping that contains the desired graph paths, any mapping that contains those graph paths may be acceptable. The fact that multiple graph paths might satisfy the query adds some complexity to the presentation definition. The query may also restrict the acceptable set of issuers and credential definitions and specify the desired predicates.

    A presentation definition is expressed using JSON-LD and may be stored in a data registry. This supports re-use, interoperability, and a much richer set of communication options. Multiple verifiers can use the same presentation definitions. A community may specify acceptable presentation definitions for its verifiers, and this acceptable set may be adopted by other communities. Credential offers may include the presentation definition the issuer would like fulfilled by the holder before issuing them a credential. Presentation requests may also be more simply negotiated by pointing to alternative acceptable presentation definitions. Writing a presentation definition to a data registry also allows it to be publicly reviewed for privacy and security considerations and gain or lose reputation.

    Presentation definitions specify the set of information that a verifier wants from a holder. This is useful regardless of the underlying credential scheme.

    "},{"location":"concepts/0250-rich-schemas/#presentations","title":"Presentations","text":"

    The presentation object that makes use of rich schemas is defined by the W3C Verifiable Credentials Data Model, and is known in the specification as a verifiable presentation. The verifiable presentation is defined as a way to present multiple credentials to a verifier in a single package.

    As with most rich schema objects, verifiable presentations will be useful for credential systems beyond anonymous credentials.

    The claims that make up a presentation are specified by the presentation definition. For anonymous credentials, the credentials from which these claims originate are used to create new derived credentials that only contain the specified claims and the cryptographic material necessary for proofs.

    The type of claims in derived credentials is also specified by the presentation definition. These types include revealed and predicate proof claims, for those credential systems which support them.

    The presentation contains the cryptographic material needed to support a proof that source credentials are all held by the same entity. For anonymous credentials, this is accomplished by proving knowledge of a link secret.

    A presentation refers to the presentation definition it fulfills. For anonymous credentials, it also refers to the credential definitions on the data registry associated with the source credentials. A presentation is not stored on a data registry.

    The following image illustrates the relationship between anonymous credentials and presentations:

    "},{"location":"concepts/0250-rich-schemas/#presentation-description","title":"Presentation Description","text":"

    There may be a number of ways a presentation definition can be used by a holder to produce a presentation, based on the graph queries and other restrictions in the presentation definition. A presentation description describes the source credentials and the process that was used to derive a presentation from them.

    "},{"location":"concepts/0250-rich-schemas/#reference","title":"Reference","text":"

    This document draws on a number of other documents, most notably the W3C verifiable credentials and presentation data model.

    The signature types used for anonymous credentials are the same as those currently used in Indy's anonymous credential and Fabric's idemix systems. Here is the paper that defines Camenisch-Lysyanskaya signatures. They are the source for Indy's AnonCreds protocol.

    "},{"location":"concepts/0250-rich-schemas/#drawbacks","title":"Drawbacks","text":""},{"location":"concepts/0250-rich-schemas/#rationale-and-alternatives","title":"Rationale and alternatives","text":"

    This design has the following benefits: - It complies with the upcoming Verifiable Credentials standard. - It allows for interoperability with existing schemas, such as those found on schema.org. - It adds security guarantees by providing means for validation of attribute encodings. - It allows for a broad range of value types to be used in predicate proofs. - It introduces presentation definitions that allow for proof negotiation, rich presentation specification, and an assurance that the presentation requested complies with security and privacy concerns. - It supports discoverability of schemas, mappings, encodings, presentation definitions, etc.

    "},{"location":"concepts/0250-rich-schemas/#unresolved-questions","title":"Unresolved questions","text":"

    This technology is intended for implementation at the SDK API level. It does not address UI tools for the creation or editing of these objects.

    Variable length attribute lists are only partially addressed using mappings. Variable lists of attributes may be specified by a rich schema, but the maximum number of attributes that may be signed as part of the list must be determined at the time of mapping creation.

    "},{"location":"concepts/0250-rich-schemas/#implementations","title":"Implementations","text":"

    The following lists the implementations (if any) of this RFC. Please do a pull request to add your implementation. If the implementation is open source, include a link to the repo or to the implementation within the repo. Please be consistent in the \"Name\" field so that a mechanical processing of the RFCs can generate a list of all RFCs supported by an Aries implementation.

    Name / Link Implementation Notes"},{"location":"concepts/0257-private-credential-issuance/","title":"Aries RFC 0257: Private Credential Issuance","text":""},{"location":"concepts/0257-private-credential-issuance/#summary","title":"Summary","text":"

    This document describes an approach that lets private individuals issue credentials without needing a public DID or credential definition on the ledger and, more importantly, without disclosing their identity to the credential receiver or the verifier. The idea is for the private individual to anchor its identity in a public entity (DID), such as an organization. The public entity issues a credential to the private individual which acts as a permission for the private individual to issue credentials on behalf of the public entity. To say it another way, the public entity is delegating the issuance capability to the private individual. The receiver of the delegated credential (from the private individual) does not learn the identity of the private individual, but only learns that the public entity has allowed this private individual to issue credentials on its behalf. When such a credential is used for a proof, the verifier's knowledge of the issuer is the same as the credential receiver's: it knows only the identity of the public entity. This contrasts with the current anonymous credential scheme used by Aries, where the credential receiver and proof verifier know the identity of the credential issuer. Additionally, using the same cryptographic techniques, the private individual can delegate issuance rights further, if allowed by the public entity.

    "},{"location":"concepts/0257-private-credential-issuance/#motivation","title":"Motivation","text":"

    As they\u2019ve been implemented so far, verifiable credentials in general, and Indy-style credentials in particular, are not well suited to helping private individuals issue. Here are some use cases we don\u2019t address:

    "},{"location":"concepts/0257-private-credential-issuance/#recommendations","title":"Recommendations","text":"

    Alice wants to give Bob a credential saying that he did good work for her as a plumber.

    "},{"location":"concepts/0257-private-credential-issuance/#testimony","title":"Testimony","text":"

    Alice isn\u2019t necessarily recommending Bob, but she\u2019s willing to say that he was physically present at her house at 9 am on July 31.

    "},{"location":"concepts/0257-private-credential-issuance/#payment-receipts","title":"Payment receipts","text":"

    Bob, a private person selling a car, wants to issue a receipt to Alice, confirming that she paid him the price he was asking.

    "},{"location":"concepts/0257-private-credential-issuance/#agreements","title":"Agreements","text":"

    Alice wants to issue a receipt to Carol, acknowledging that she is taking custody of a valuable painting and accepting responsibility for its safety. Essentially, this is Alice formalizing her half of a contract between peers. Carol wants to issue a receipt to Alice, formalizing her agreement to the contract as well. Note that consent receipts, whether they be for data sharing or medical procedures, fall into this category, but the category is broader than consent.

    "},{"location":"concepts/0257-private-credential-issuance/#delegation","title":"Delegation","text":"

    Alice wants to let Darla, a babysitter, have the right to seek medical care for her children in Alice\u2019s absence.

    The reasons why these use cases aren\u2019t well handled are:

    "},{"location":"concepts/0257-private-credential-issuance/#issuers-are-publicly-disclosed","title":"Issuers are publicly disclosed.","text":"

    Alice would have to create a wholly public persona and DID for her issuer role--and all issuance she did with that DID would be correlatable. This endangers privacy. (Non-Indy credentials have exactly this same problem; there is nothing about ZKPs that makes this problem arise. But proponents of other credential ecosystems don't consider this risk a concern, so they may not think their credentialing solution has a problem.)

    "},{"location":"concepts/0257-private-credential-issuance/#issuance-requires-tooling-setup-and-ongoing-maintenance","title":"Issuance requires tooling, setup, and ongoing maintenance.","text":"

    An issuer needs to register a credential definition and a revocation registry on the ledger, and needs to maintain revocation status. This is an expensive hassle for private individuals. (Setup for credential issuance in non-ZKP ecosystems is also a problem, particularly for revocation. However, it may be more demanding for Indy due to the need for a credential definition and due to the more sophisticated revocation model.)

    "},{"location":"concepts/0257-private-credential-issuance/#tutorial","title":"Tutorial","text":""},{"location":"concepts/0257-private-credential-issuance/#delegatable-credentials-as-a-tool","title":"Delegatable credentials as a tool","text":"

    Delegatable Credentials are a useful tool that we can use to solve this problem. They function like special Object Capabilities (OCAP) tokens, and may offer the beginnings of a solution. They definitely address the delegation use cases, at least. Their properties include:

    "},{"location":"concepts/0257-private-credential-issuance/#applying-delegatable-credentials-to-other-use-cases","title":"Applying Delegatable Credentials to Other Use Cases","text":"

    Here is how we might apply delegatable credentials to the private-individuals-can-issue problem.

    A new kind of issuer is needed, called a private credential facilitator (PCF). The job of a PCF is to eliminate some of the setup and maintenance hassle for private individual issuers by acting as a root issuer in a delegatable credential chain.

    On demand, a PCF is willing to issue a personal trust root (PTR) credential to any individual who asks. A PTR is a delegatable credential that points to a delegation trust framework where particular delegation patterns and credential schemas are defined. The PTR grants all privileges in that trust framework to its holder. It may also contain fields that describe the holder in certain ways (e.g., the holder is named Alice, the holder has a particular birth date or passport number or credit card number, the holder has a blinded link secret with a certain value, etc), based on things that the individual holder has proved to the PCF. The PCF is not making any strong claim about holder attributes when it issues these PTR credentials; it's just adding a few attributes that can be easily re-proved by Alice in the future, and that can be used to reliably link the holder to more traditional credentials with higher bars for trust. In some ways the PCF acts like a notary by endorsing or passing along credential attributes that originated elsewhere.

    For example, Alice might approach a PCF and ask for a PTR that she can use as a homeowner who wishes to delegate certain privileges in her smart home to AirBnB guests. The PCF would (probably for a fee) ask Alice to prove her name, address, and home ownership with either verifiable or non-digital credentials, agree with Alice on a trust framework that's useful for AirBnB scenarios, and create a PTR for Alice that gives Alice all privileges for her home under that trust framework.

    With this PTR in hand, Alice can now begin to delegate or subdivide permissions in whatever way she chooses, without a public DID and without going through any issuer setup herself. She issues (delegates) credentials to each guest, allowing them to adjust the thermostat and unlock the front doors, but not to schedule maintenance on the furnace. Each delegated credential she issues traces its trust back to the PTR and from there, to the PCF.

    Alice can revoke any credential she has delegated in this way, without coordinating either upstream or downstream. The PCF she contracted with gave her access to do this by either configuring their own revocation registry on the ledger so it was writable by Alice's DID as well as their own, or by providing a database or other source of truth where revocation info could be stored and edited by any of its customers.

    This use of delegatable credentials is obvious, and helpful. But what's cooler and less obvious is that Alice can also use the PTR and delegatable credential mechanism to address non-delegation use cases. For example, she can issue a degenerate delegated credential to Bob the plumber, granting him zero privileges but attesting to Alice's 5-star rating for the job he did. Bob can use this credential to build his reputation, and can prove that each recommendation is unique because each such recommendation credential is bound to a different link secret, which in turn traces back to a unique human due to the PCF's vetting of Alice when Alice enrolled in the service. If Alice agrees to include information about herself in the recommendation credential, Bob can even display credential-based recommendations (and proofs derived therefrom) on his website, showing that recommendation A came from a woman named Alice who lived in postal code X, whereas recommendation B came from a man named Bob who lived in postal code Y.

    Let's consider another case where an employee issues a delegated credential on the basis of a credential issued by the employer. Let's say the PCF is an employer. The PCF issues a PTR credential to each of its employees, which the employee can use to issue recommendation credentials to different 3rd party service providers associated with the employer. The recommender (employee), while issuing a recommendation credential, proves that he has a valid non-revoked PTR credential from the PCF. The credential contains the id of the employee, the rating, and other data, and is signed by the employee's private key. The 3rd party service provider can discover the employee's public key from the employer's hosted database. Now the service provider can use this credential to create proofs which do not reveal the identity of the employee but only the employer. If the verifier wanted more protection, he could demand that the service provider verifiably encrypt the employee ID from the PTR credential for the employer, so that in case of any dispute the employer could, if it wishes, deanonymize the employee by decrypting the encrypted employee ID.

    Alice can issue testimony credentials in the same way she issues recommendation credentials. And she can issue payment receipts the same way.

    "},{"location":"concepts/0257-private-credential-issuance/#more-about-reputation-management","title":"More about Reputation Management","text":"

    Reputation requires a tradeoff with privacy; we haven't figured out anonymous reputation yet. If Alice's recommendation of Bob as a plumber (or her testimony that Bob was at her house yesterday) is going to carry any weight, people who see it need to know that the credential used as evidence truly came from a woman named Alice--not from Bob himself. And they need to know that Alice couldn't distort reputation by submitting dozens of recommendations or eyewitness accounts herself.

    Therefore, issuance by private individuals should start by carefully answering this question:

    What characteristic(s) of the issuer will make this credential useful?

    The characteristics might include:

    Weighting factors are probably irrelevant to payment receipts and agreements; proofs in these use cases are about binary matching, not degree.

    All of our use cases for individual issuance care about distinguishing factors. Sometimes the distinguishing factors can be fuzzy (enough to tell that Alice-1 recommending Bob as a plumber is different from Alice-2, but not enough to strongly identify); other times they have to be exact. For payment receipts and agreements, the distinguishing factors need to be strongly identifying, whereas for recommendations or testimony, fuzzier distinguishing factors may suffice, and weighting factors probably do not matter.

    Distinguishing factors and weighting factors should be embedded in each delegated credential, to the degree that they will be needed in downstream use to facilitate reputation. In some cases, we may want to use verifiable encryption to embed some of them. This would allow Alice to give an eyewitness testimony credential to Bob, to still remain anonymous from Bob, but to prove to Bob at the time of private issuance that Alice's strong personal identifiers are present, and could be revealed by Alice's PCF (or a designated 3rd party) if Bob comes up with a compelling reason.

    "},{"location":"concepts/0257-private-credential-issuance/#reference","title":"Reference","text":""},{"location":"concepts/0257-private-credential-issuance/#todo","title":"TODO","text":""},{"location":"concepts/0257-private-credential-issuance/#drawbacks","title":"Drawbacks","text":""},{"location":"concepts/0257-private-credential-issuance/#todo_1","title":"TODO","text":""},{"location":"concepts/0257-private-credential-issuance/#rationale-and-alternatives","title":"Rationale and alternatives","text":""},{"location":"concepts/0257-private-credential-issuance/#todo_2","title":"TODO","text":""},{"location":"concepts/0257-private-credential-issuance/#prior-art","title":"Prior art","text":""},{"location":"concepts/0257-private-credential-issuance/#todo_3","title":"TODO","text":""},{"location":"concepts/0257-private-credential-issuance/#unresolved-questions","title":"Unresolved questions","text":""},{"location":"concepts/0257-private-credential-issuance/#todo_4","title":"TODO","text":""},{"location":"concepts/0257-private-credential-issuance/#implementations","title":"Implementations","text":"

    The following lists the implementations (if any) of this RFC. Please do a pull request to add your implementation. If the implementation is open source, include a link to the repo or to the implementation within the repo. Please be consistent in the \"Name\" field so that a mechanical processing of the RFCs can generate a list of all RFCs supported by an Aries implementation.

    Name / Link Implementation Notes"},{"location":"concepts/0268-unified-didcomm-agent-deeplinking/","title":"Aries RFC 0268: Unified DIDCOMM Deeplinking","text":""},{"location":"concepts/0268-unified-didcomm-agent-deeplinking/#summary","title":"Summary","text":"

    A set of specifications for mobile agents to standardize around to provide better interoperable support for DIDCOMM-compliant messages. Standards around the way agents interpret these encoded messages allow increased user choice when picking agents.

    This RFC lists a series of standards which must be followed by an Aries-compatible agent for it to be considered interoperable with other agents.

    "},{"location":"concepts/0268-unified-didcomm-agent-deeplinking/#motivation","title":"Motivation","text":"

    As more and more mobile agents come to market, the user base for these wallets will become increasingly fragmented. As one of the core tenets of SSI is interoperability, we want to ensure that messages passed to users from these wallets are in formats that any wallet can digest. We want to ensure that the onboarding experience for new users is as seamless and unified as possible.

    "},{"location":"concepts/0268-unified-didcomm-agent-deeplinking/#tutorial","title":"Tutorial","text":"

    Alice wants to invite Bob to connect with her. Alice sends Bob an invitation link generated by her Mobile Agent (a wallet provided by ACME Corp).

    The invitation URL takes the form \"www.acmecorp.com/invite?d_m=XXXXX\", where the text following the query parameter \"d_m\" is the base64url-encoded invitation.
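    The encoding and decoding round trip can be sketched as follows. The invitation fields below are illustrative placeholders (real invitations carry more data); the base64url handling with stripped padding is the assumed convention for the d_m parameter:

    ```python
    import base64
    import json
    from urllib.parse import urlencode, urlparse, parse_qs

    # Hypothetical minimal invitation payload; real invitations carry more fields.
    invitation = {"@type": "https://didcomm.org/connections/1.0/invitation",
                  "label": "Alice"}

    # Sender side: base64url-encode the JSON (padding stripped, as is common
    # for URL-safe payloads) and attach it as the d_m query parameter.
    encoded = base64.urlsafe_b64encode(
        json.dumps(invitation).encode()).decode().rstrip("=")
    url = "https://www.acmecorp.com/invite?" + urlencode({"d_m": encoded})

    # Receiver side: a wallet extracts d_m, restores padding, and decodes.
    d_m = parse_qs(urlparse(url).query)["d_m"][0]
    decoded = json.loads(base64.urlsafe_b64decode(d_m + "=" * (-len(d_m) % 4)))
    ```

    Because the payload travels in the query string, any wallet that registers the domain (or handles the fallback page) can recover the full invitation without contacting the sender.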

    Bob receives this link and opens it on his phone. Since he doesn't have an Aries wallet, he gets directed to the webpage \"acmecorp.com/invite\", where there's a list of wallets for each platform that he can pick from. On the page is also the ACME Corp official wallet.

    Bob decides to download the ACME Corp wallet and clicks on the link again. Because the ACME Corp wallet registered 'www.acmecorp.com' as its deep link, Bob gets prompted to open it in the ACME Corp app.

    Alice sends a similar invite to Charlie. Charlie uses a wallet distributed by Open Corp. Open Corp does not have the \"acmecorp.com\" URI registered as its deep link, because it does not own that domain.

    When Charlie lands on that page, along with the offer for wallets is a QR code with the encoded invitation and a button that states \"Open in App\". This button launches the didcomm:// custom protocol, which is registered by ALL Aries-compatible wallets (in the same way e-mail apps all register mailto:).

    Pressing the button prompts Charlie's phone to open the app that can handle didcomm://, which happens to be the wallet app by Open Corp.

    In both instances, Alice does not need to worry about what wallet the counterparty is using and can send DIDComm messages with the assurance that the counterparty will have an onboarding experience waiting for them even if they don't have a wallet already.

    "},{"location":"concepts/0268-unified-didcomm-agent-deeplinking/#reference","title":"Reference","text":""},{"location":"concepts/0268-unified-didcomm-agent-deeplinking/#uri-registration","title":"URI Registration","text":"

    Each mobile agent should register its own URI to open in app. These URIs should point to a landing invitation page.

    An example of such a URI page/invitation: \"www.spaceman.id/invite?d_m=\" In this case, if the recipient of this URL has an app that has registered spaceman.id as its domain (likely a wallet published by spaceman.id), then it will open the invitation in app. If the recipient does not have the app installed, a page will open in their mobile browser with suggestions for DIDComm-compliant wallets.

    "},{"location":"concepts/0268-unified-didcomm-agent-deeplinking/#invitation-page","title":"Invitation Page","text":"

    The Invitation page users land on must have a list of DIDComm-compliant agents for each platform (iOS, Android). A list of these can be found here:

    The Invitation page must also show the encoded message as a scannable QR code and have a button (\"Open in App\") to manually launch the didcomm:// protocol. The QR code helps interactions between web and mobile agent wallets.

    The button to manually launch the didcomm:// protocol allows other Aries wallets on the phone to handle the message, even if they haven't registered that specific URI. Alternatively, a library can be used to automatically launch the didcomm:// prefix when the webpage is opened.

    The Invitation page should also run a URL shortener service. This would make it easy to pass messages between services without needing to pass massive strings around. It also prevents polluting closed-source proprietary services with links.

    "},{"location":"concepts/0268-unified-didcomm-agent-deeplinking/#deeplink-prefix","title":"Deeplink Prefix","text":"

    There must exist a common prefix for mobile agents to register for DIDComm messages. This vastly improves interoperability between agents and messages, as they can be opened by any wallet. As the messages are all of the DIDComm family, we think the prefix that is best suited is didcomm://

    All Mobile Agents should register didcomm:// as affiliated with their app on both iOS and Android. This will enable users to be prompted to use their wallet when they receive a DIDComm message.
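    On Android, for example, a wallet can claim the didcomm:// scheme with an intent filter in its manifest (a sketch; iOS uses the CFBundleURLTypes key in Info.plist for the same purpose):

```xml
<intent-filter>
    <action android:name="android.intent.action.VIEW" />
    <category android:name="android.intent.category.DEFAULT" />
    <category android:name="android.intent.category.BROWSABLE" />
    <data android:scheme="didcomm" />
</intent-filter>
```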

    "},{"location":"concepts/0268-unified-didcomm-agent-deeplinking/#message-requirements","title":"Message Requirements","text":"

    Proposed is a change to the query parameter usually used for passing the message from 'c_i', which stands for connection invite, to the more inclusive 'd_m', which stands for DIDComm message.

    Furthermore, messages must also be base64url-encoded serialized JSON, stripped of any excess whitespace and kept as small as possible.

    "},{"location":"concepts/0268-unified-didcomm-agent-deeplinking/#drawbacks","title":"Drawbacks","text":"

    This puts extra work on wallet developers to ensure a good experience.

    On iOS only one app can be registered to handle didcomm:// at a time; the first one to be installed will prevent others from using this custom scheme.

    "},{"location":"concepts/0268-unified-didcomm-agent-deeplinking/#rationale-and-alternatives","title":"Rationale and alternatives","text":"

    This allows each wallet to define their own invite page (or use an existing page provided by the community) while providing a common protocol scheme (didcomm://) for all applications.

    If we don't do this, there's a chance that wallet applications become unable to communicate with each other effectively during the onboarding process, leading to fragmentation, much like in the IM world.

    "},{"location":"concepts/0268-unified-didcomm-agent-deeplinking/#prior-art","title":"Prior Art","text":""},{"location":"concepts/0268-unified-didcomm-agent-deeplinking/#unresolved-questions","title":"Unresolved Questions","text":""},{"location":"concepts/0268-unified-didcomm-agent-deeplinking/#implementations","title":"Implementations","text":"

    The following lists the implementations (if any) of this RFC. Please do a pull request to add your implementation. If the implementation is open source, include a link to the repo or to the implementation within the repo. Please be consistent in the \"Name\" field so that a mechanical processing of the RFCs can generate a list of all RFCs supported by an Aries implementation.

    Implementation Notes.

    Name / Link Implementation Notes"},{"location":"concepts/0270-interop-test-suite/","title":"0270: Interop Test Suite","text":""},{"location":"concepts/0270-interop-test-suite/#summary","title":"Summary","text":"

    Describes the goals, scope, and interoperability contract of the Aries Interop Test Suite. Does NOT serve as a design doc for the test suite code, or as a developer guide explaining how the test suite can be run; see the test suite codebase for that.

    "},{"location":"concepts/0270-interop-test-suite/#motivation","title":"Motivation","text":"

    The Aries Interop Test Suite makes SSI interoperability publicly and objectively measurable. It is a major deliverable of the Aries project as a whole--not a minor detail that only test zealots care about. It's important that the entire SSI community understand what it offers, how it works, what its results mean, and how it should be used.

    "},{"location":"concepts/0270-interop-test-suite/#tutorial","title":"Tutorial","text":"

    Interoperability is a buzzword in the SSI/decentralized identity space. We all want it.

    Without careful effort, though, interoperability is subjective and slippery. If products A and B implement the same spec, or if they demo cooperation in a single workflow, does that mean they can be used together? How much? For how long? Across which release boundaries? With what feature caveats?

    We need a methodology that gives crisp answers to questions like these--and it needs to be more efficient than continuously exercising every feature of every product against every feature of every other product.

    However, it's important to temper our ambitions. Standards, community specs, and reference implementations exist, and many of them come with tests or test suites of their own. Products can test themselves with these tools, and with custom tests written by their dev staffs, and make rough guesses about interoperability. The insight we're after is a bit different.

    "},{"location":"concepts/0270-interop-test-suite/#goals","title":"Goals","text":"

    What we need is a tool that achieves these goals:

    1. Evaluate practical interoperability of agents.

      Other software that offers SSI features should also be testable. Here, such components are conflated with agents for simplicity, but it's understood that the suite targets protocol participants no matter what their technical classification.

    Focus on remote interactions that deliver business value: high-level protocols built atop DIDComm, such as credential issuance, proving, and introducing, where each participant uses different software. DID methods, ledgers, crypto libraries, credential implementations, and DIDComm infrastructure should have separate tests that are out of scope here. None of these generate deep insight into whether packaged software is interoperable enough to justify purchase decisions; that's the gap we need to plug.

    2. Describe results in a formal, granular, reproducible way that supports comparison between agents A and B, and between A at two different points in time or in two different configurations.

      This implies a structured report, as well as support for versioning of the suite, the agents under test, and the results.

    3. Track the collective community state of the art, so measurements are comprehensive and up-to-date, and so new ideas automatically encounter pressure to be vetted for interoperability.

      The test suite isn't a compliance tool, and it should be unopinionated about what's important and what's not. However, it should embody a broad list of testable features--certainly, ones that are standard, and often, ones that are still maturing.

    "},{"location":"concepts/0270-interop-test-suite/#dos-and-donts","title":"Dos and Don'ts","text":"

    Based on the preceding context, the following rules guide our understanding of the test suite scope:

    "},{"location":"concepts/0270-interop-test-suite/#general-approach","title":"General Approach","text":"

    We've chosen to pursue these goals by maintaining a modular interop test suite as a deliverable of the Aries project. The test suite is an agent in its own right, albeit an agent with deliberate misbehaviors, a security model unsuitable for production deployment, an independent release schedule, and a desire to use every possible version of every protocol.

    Currently the suite lives in the aries-protocol-test-suite repo, but the location and codebase could change without invalidating this RFC; the location is an implementation detail.

    "},{"location":"concepts/0270-interop-test-suite/#contract-between-suite-and-agent-under-test","title":"Contract Between Suite and Agent Under Test","text":"

    The contract between the test suite and the agents it tests is:

    "},{"location":"concepts/0270-interop-test-suite/#suite-will","title":"Suite will...","text":"
    1. Be packaged for local installation.

      Packaging could take various convenient forms. Those testing an agent install the suite in an environment that they control, where their agent is already running, and then configure the suite to talk to their agent.

    2. Evaluate the agent under test by engaging in protocol interactions over a frontchannel, and control the interactions over a backchannel.

      Note: Initially, this doc stipulated that both channels should use DIDComm over HTTP. This has triggered some dissonance. If an agent doesn't want to talk HTTP, should it have to, just to be tested? If an agent wants to be controlled over a RESTful interface, shouldn't it be allowed to do that? Answers to the preceding two questions have been proposed (use a generic adapter to transform the protocol, but don't make the test suite talk on a different frontchannel; unless all agents expose the same RESTful interface, the only thing we can count on is that agents will have DIDComm support, and the only methodology we have for uniform specification is to describe a DIDComm-based protocol, so yes, backchannel should be DIDComm). These two incompatible opinions are both alive and well in the community, and we are not yet converging on a consensus. Therefore, the actual implementation of the frontchannel and backchannel remains a bit muddy right now. Perhaps matters will clarify as we think longer and/or as we gain experience with implementation.

      Over the frontchannel, the test suite and the agent under test look like ordinary agents in the ecosystem; any messages sent over this channel could occur in the wild, with no clue that either party is in testing mode.

      The backchannel is the place where testing mode manifests. It lets the agent's initial state be set and reset with precision, guarantees its choices at forks in a workflow, eliminates any need for manual interaction, and captures notifications from the agent about errors. Depending on the agent under test, this backchannel may be very simple, or more complex. For more details, see Backchannel below.

      Agents that interact over other transports on either channel can use transport adapters provided by the test suite, or write their own. HTTP is the least common denominator transport into which any other transports are reinterpreted. Adapting is the job of the agent developer, not the test suite--but the suite will try to make this as easy as possible.

    3. Not probe for agent features. Instead, it will just run whatever subset of its test inventory is declared relevant by the agent under test.

      This lets simple agents do simple integrations with the test suite, and avoid lots of needless error handling on both sides.

    4. Use a set of predefined identities and a set of starting conditions that all agents under test must be able to recognize on demand; these are referenced on the backchannel in control messages. See Predefined Inventory below.

    5. Run tests in arbitrary orders and combinations, but only run one test at a time.

      Some agents may support lots of concurrency, but the test suite should not assume that all agents do.

    6. Produce an interop profile for the agent under test, with respect to the tested features, for every successful run of the test suite.

      A \"successful\" run is one where the test suite runs to completion and believes it has valid data; it has nothing to do with how many tests are passed by the agent under test. The test suite will not emit profiles for unsuccessful runs.

      Interop profiles emitted by the test suite are the artifacts that should be hyperlinked in the Implementation Notes section of protocol RFCs. They could also be published (possibly in a prettified form) in release notes, distributed as a product or documentation artifact, or returned as an attachment with the disclose message of the Discover Features protocol.

    7. Have a very modest footprint in RAM and on disk, so running it in Docker containers, VMs, and CI/CD pipelines is practical.

    8. Run on modern desktop and server operating systems, but not necessarily on embedded or mobile platforms. However, since it interacts with the agent under test over a remote messaging technology, it should be able to test agents running on any platform that's capable of interacting over HTTP or over a transport that can be adapted to HTTP.

    9. Enforce reasonable timeouts unless configured not to do so (see note about user interaction below).

    "},{"location":"concepts/0270-interop-test-suite/#agent-under-test-will","title":"Agent under test will...","text":"
    1. Provide a consistent name for itself, and a semver-compatible version, so test results can be compared across test suite runs.

    2. Use the test suite configuration mechanism to make a claim about the tests that it believes are relevant, based on the features and roles it implements.

    3. Implement a distinction between test mode and non-test mode, such that:

      • Test mode causes the agent to expose and use a backchannel--but the backchannel does not introduce a risk of abuse in production mode.

      • Test mode either causes the agent to need no interaction with a user (preferred), or is combined with test suite config that turns off timeouts (not ideal but may be useful for debugging and mobile agents). This is necessary so the test suite can be automated, or so unpredictable timing on user interaction doesn't cause spurious results.

      The mechanism for implementing this mode distinction could be extremely primitive (conditional compilation, cmdline switches, config file, different binaries). It simply has to preserve ordinary control in the agent under test when it's in production, while ceding some control to the test suite as the suite runs.

    4. Faithfully create the start conditions implied by named states from the Predefined Inventory, when requested on the backchannel.

    5. Accurately report errors on the backchannel.

    "},{"location":"concepts/0270-interop-test-suite/#reference","title":"Reference","text":""},{"location":"concepts/0270-interop-test-suite/#releasing-and-versioning","title":"Releasing and Versioning","text":"

    Defining a release and versioning scheme is important, because the test suite's version is embedded in every interop profile it generates, and people who read test suite output need to reason about whether the results from two different test suites are comparable. By picking the right conventions, we can also avoid a lot of complexity and maintenance overhead.

    The test suite releases implicitly with every merged commit, and is versioned in a semver-compatible way as follows:

    The major version should change rarely, after significant community debate. The minor version should update on a weekly or monthly sort of timeframe as protocols accumulate and evolve in the community--with near-zero release effort by contributors to the test suite. The patch version is updated automatically with every commit. This is a very light process, but it still allows the test suite on Monday and the test suite on Friday to report versions like 1.39.5e22189 and 1.40.c5d8aaf, to know which version of the test suite is later, to know that both versions implement the same contract, and to know that the later version is backwards-compatible with the earlier one.
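    A minimal sketch (with illustrative helper names) of how a consumer of test results might compare two such versions; only major and minor are orderable, since the patch field is a commit hash:

```python
def parse_suite_version(version):
    # "1.39.5e22189" -> (1, 39, "5e22189"); the patch field is a
    # commit hash, so it identifies a build but cannot be ordered.
    major, minor, commit = version.split(".")
    return int(major), int(minor), commit

def is_later(a, b):
    # Order by (major, minor) only; the commit hash is ignored.
    return parse_suite_version(a)[:2] > parse_suite_version(b)[:2]
```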

    "},{"location":"concepts/0270-interop-test-suite/#test-naming-and-grouping","title":"Test Naming and Grouping","text":"

    Tests in the test suite are named in a comma-separated form that groups them by protocol, version, role, and behavior, in that order. For example, a test of the holder role in version 1.0 of the issue-credential protocol, that checks to see if the holder sends a proper ack at the end, might be named:

    issue-credential,1.0,holder,sends-final-ack\n

    Because of punctuation, this format cannot be reflected in function names in code, and it also will probably not be reflected in file names in the test suite codebase. However, it provides useful grouping behavior when sorted, and it is convenient for parsing. It lets agents under test declare patterns of relevant tests with wildcards. An agent that supports credential issuance but not holding, and that only supports the 1.1 version of the issue-credential protocol, can tell the test suite what's relevant with:

    issue-credential,1.1,issuer,*\n
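    Because the names are plain strings, selecting the relevant subset of the inventory can be as simple as shell-style pattern matching; a sketch, with an illustrative function name:

```python
from fnmatch import fnmatchcase

def relevant_tests(inventory, patterns):
    # Keep every test whose name matches at least one declared pattern.
    return [t for t in inventory if any(fnmatchcase(t, p) for p in patterns)]
```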
    "},{"location":"concepts/0270-interop-test-suite/#interop-profile","title":"Interop Profile","text":"

    The results of a test suite run are represented in a JSON object that looks like this:

    {\n    \"@type\": \"Aries Test Suite Interop Profile v1\"\n    \"suite_version\": \"1.39.5e22189\",\n    \"under_test_name\": \"Aries Static Agent Python\",\n    \"under_test_version\": \"0.9.3\",\n    \"test_time\": \"2019-11-23T18:59:06\", // when test suite launched\n    \"results\": [\n        {\"name\": \"issue-credential,1.0,holder,ignores-spurious-response\", \"pass\": false },\n        {\"name\": \"issue-credential,1.0,holder,sends-final-ack\", \"pass\": true },\n    ]\n}\n
    "},{"location":"concepts/0270-interop-test-suite/#backchannel","title":"Backchannel","text":"

    While the concept of a backchannel has been accepted by the community, there is not alignment with the definition of the backchannel provided here. Rather than maintaining this section as related work in the community evolves the concept, we're adding this note to say \"this section will likely change.\" Once backchannel implementations stabilize with a core definition, we'll refine this section as appropriate.

    The backchannel between test suite and agent under test is managed as a standard DIDComm protocol. The identifier for the message family is X. The messages include:

    "},{"location":"concepts/0270-interop-test-suite/#predefined-inventory","title":"Predefined Inventory","text":"

    TODO: link to the predefined identity for the test suite created by Daniel B, plus the RFC about other predefined DIDs. Any and all of these should be named as possible existing states in the KMS. Other initial states:

    "},{"location":"concepts/0270-interop-test-suite/#unresolved-questions","title":"Unresolved questions","text":""},{"location":"concepts/0270-interop-test-suite/#implementations","title":"Implementations","text":"

    The following lists the implementations (if any) of this RFC. Please do a pull request to add your implementation. If the implementation is open source, include a link to the repo or to the implementation within the repo. Please be consistent in the \"Name\" field so that a mechanical processing of the RFCs can generate a list of all RFCs supported by an Aries implementation.

    Implementation Notes may need to include a link to test results.

    Name / Link Implementation Notes"},{"location":"concepts/0289-toip-stack/","title":"0289: The Trust Over IP Stack","text":""},{"location":"concepts/0289-toip-stack/#summary","title":"Summary","text":"

    This Aries concept RFC introduces a complete architecture for Internet-scale digital trust that integrates cryptographic trust at the machine layer with human trust at the business, legal, and social layers.

    "},{"location":"concepts/0289-toip-stack/#motivations","title":"Motivations","text":"

    The importance of interoperability for the widespread adoption of an information network architecture has been proven by the dramatic rise to dominance of the Internet [1]. A key driver of that rise was the open source implementation of the TCP/IP stack in Version 4.2 of the Berkeley Software Distribution (BSD) of UNIX [2]. This widely-adopted open source implementation of the TCP/IP stack offered the capability for any two peer devices to form a connection and exchange data packets regardless of their local network. In addition, secure protocol suites such as the Secure Sockets Layer (SSL), and its modern version, Transport Layer Security (TLS), have been protecting Internet transactions since 1995.

    Without a doubt, implementations of the TCP/IP stack, followed by SSL/TLS, have driven a tremendous amount of innovation over the last 30 years. However, although protocols such as TLS offer world-class security, the architecture over which they have been built leaves a significant and widely-recognized gap: a means for any peer to establish trust over these digital connections. For example, while TLS does allow a user to trust that she is accessing the right website, it does not offer, at least in a usable way, a way for the user to log in, or prove her identity, to the website. This gap has often been referred to as \"the Internet's missing identity layer\" [3].

    The purpose of this Aries Concept RFC is to fill this gap by defining a standard information network architecture that developers can implement to establish trusted relationships over digital communications networks.

    "},{"location":"concepts/0289-toip-stack/#architectural-layering-of-the-trust-over-ip-stack","title":"Architectural Layering of the Trust over IP Stack","text":"

    Since the ultimate purpose of an \"identity layer\" is not actually to identify entities, but to facilitate the trust they need to interact, co-author John Jordan coined the term Trust over IP (ToIP) for this stack. Figure 1 is a diagram of its four layers:

    Figure 1: The ToIP stack

    Note that it is actually a \"dual stack\": two parallel stacks encompassing both technology and governance. This reflects the fact that digital trust cannot be achieved by technology alone, but only by humans and technology working together.

    Important: The ToIP stack does not define specific governance frameworks. Rather it is a metamodel for how to design and implement digital governance frameworks that can be universally referenced, understood, and consumed in order to facilitate transitive trust online. This approach to defining governance makes it easier for humans\u2014and the software agents that represent us at Layer Two\u2014to make trust decisions both within and across trust boundaries.

    The ToIP Governance Stack plays a special role in ToIP architecture. See the descriptions of the specialized governance frameworks at each layer and also the special section on Scaling Digital Trust.

    "},{"location":"concepts/0289-toip-stack/#layer-one-public-utilities-for-decentralized-identifiers-dids","title":"Layer One: Public Utilities for Decentralized Identifiers (DIDs)","text":"

    The ToIP stack is fundamentally made possible by new advancements in cryptography and distributed systems, including blockchains and distributed ledgers. Their high availability and cryptographic verifiability enable strong roots of trust that are decentralized so they will not serve as single points of failure.

    "},{"location":"concepts/0289-toip-stack/#dids","title":"DIDs","text":"

    Adapting these decentralized systems to be the base layer of the ToIP stack required a new type of globally unique identifier called a Decentralized Identifier (DID). Starting with a research grant from the U.S. Department of Homeland Security Science & Technology division, the DID specification [4] and the DID Primer [5] were contributed to the W3C Credentials Community Group in June 2017. In September 2019 the W3C launched the DID Working Group to complete the job of turning DIDs into a full W3C standard [6].

    DIDs are defined by an RFC 3986-compliant URI scheme designed to provide four core properties:

    1. Permanence. A DID effectively functions as a Uniform Resource Name (URN) [7], i.e., once assigned to an entity (called the DID subject), a DID is a persistent identifier for that entity that should never be reassigned to another entity.
    2. Resolvability. A DID resolves to a DID document\u2014a data structure (encoded in JSON or other syntaxes) describing the public key(s) and service endpoint(s) necessary to engage in trusted interactions with the DID subject.
    3. Cryptographic verifiability. A DID document contains the cryptographic material that enables the DID subject to prove cryptographic control of the DID.
    4. Decentralization. Because a DID is cryptographically generated and verified, it does not require a centralized registration authority such as those needed for phone numbers, IP addresses, or domain names today.

    Figure 2 shows the resemblance between DID syntax and URN syntax (RFC 8141).

    Figure 2: How DID syntax resembles URN syntax
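    As a sketch of the generic shape did:&lt;method-name&gt;:&lt;method-specific-id&gt;, a coarse pattern can distinguish DIDs from other URIs; the real grammar in the W3C specification is richer (percent-encoding, paths, fragments), so this regex is illustrative only:

```python
import re

# Coarse approximation of the generic DID scheme; not the full ABNF
# from the W3C DID specification.
DID_PATTERN = re.compile(r"^did:[a-z0-9]+:[A-Za-z0-9._:-]+$")
```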

    "},{"location":"concepts/0289-toip-stack/#did-methods","title":"DID Methods","text":"

    Like the URN specification, the DID specification also defines a generic URI scheme which is in turn used for defining other specific URI schemes. With DIDs, these are called DID methods. Each DID method is defined by its own DID method specification that must include:

    1. The target system (technically called a verifiable data registry) against which the DID method operates. In the ToIP stack this is called a utility. Note that a utility is not required to be implemented as a blockchain or distributed ledger. DID methods can be designed to work with any type of distributed database, file system, or other system that can anchor a cryptographic root of trust.
    2. The DID method name.
    3. The syntax of the DID method-specific string.
    4. The CRUD (Create, Read, Update, Delete) operations for DIDs and DID documents that conform to the specification.

    DIDs have already proved to be a popular solution for decentralized PKI (public key infrastructure) [8]. Over 40 DID methods have already been registered in the informal DID Method Registry [9] hosted by the W3C Credentials Community Group (which the W3C DID Working Group is planning to incorporate into a formal registry as one of its deliverables). The CCG DID Method Registry currently includes methods for:

    "},{"location":"concepts/0289-toip-stack/#utility-governance-frameworks","title":"Utility Governance Frameworks","text":"

    A Layer One public utility may choose any governance model suited to the constraints of its business model, legal model, and technical architecture. This is true whether the public utility is operated as a blockchain, distributed ledger, or decentralized file store, or whether it is permissioned, permissionless, or any hybrid. (Note that even permissionless blockchain networks still have rules\u2014formal or informal\u2014governing who can update the code.)

    All ToIP architecture requires is that the governance model conform to the requirements of the ToIP Governance Stack to support both interoperability and transitive trust. This includes transparent identification of the governance authority, the governance framework, and participant nodes or operators; transparent discovery of nodes and/or service endpoints; and transparent security, privacy, data protection, and other operational policies. See the Governance section below.

    Utility governance frameworks that conform to the ToIP Governance Stack model will support standard roles for all types of utility governance authorities. For example, the roles currently supported by public-permissioned utilities such as those based on Hyperledger Indy include:

    "},{"location":"concepts/0289-toip-stack/#layer-one-support-for-higher-layers","title":"Layer One Support for Higher Layers","text":"

    DIDs and DID documents are not the only cryptographic data structures needed to support the higher layers. Others include:

    In summary, the interoperability of Layer One is currently defined by the W3C DID specification and by Aries RFCs for the other cryptographic data structures listed above. Any DID registry that supports all of these data structures can work with any agent, wallet, and secure data store that operates at Layer Two.

    "},{"location":"concepts/0289-toip-stack/#layer-two-the-didcomm-protocol","title":"Layer Two: The DIDComm Protocol","text":"

    The second layer of the Trust over IP stack is defined by the DIDComm secure messaging standards [10]. This family of specifications, now being defined in the DIDComm Working Group at the Decentralized Identity Foundation, establishes a cryptographic means by which any two software agents (peers) can securely communicate, either directly edge-to-edge or via intermediate cloud agents (as shown in Figure 3).

    Figure 3: At Layer Two, agents communicate peer-to-peer using DIDComm standards

    "},{"location":"concepts/0289-toip-stack/#peer-dids-and-did-to-did-connections","title":"Peer DIDs and DID-to-DID Connections","text":"

    A fundamental feature of DIDComm is that by default all DID-to-DID connections are established and secured using pairwise pseudonymous peer DIDs as defined in the Peer DID Method Specification [11]. These DIDs are based on key pairs generated and stored by the local cryptographic key management system (KMS, aka \"wallet\") maintained by each agent. Agents then use the DID Exchange protocol to exchange peer DIDs and DID documents in order to establish and maintain secure private connections between each other\u2014including key rotation or revocation as needed during the lifetime of a trusted relationship.

    Because all of the components of peer DIDs and DID-to-DID connections are created, stored, and managed at Layer Two, there is no need for them to be registered in a Layer One public utility. In fact there are good privacy and security reasons not to\u2014these components can stay entirely private to the peers. As a general rule, the only ToIP actors who should need public DIDs at Layer One are:

    1. Credential issuers as explained in Layer Three below.
    2. Governance authorities at any layer as explained in the section on Scaling Digital Trust.

    This also means that, once formed, DID-to-DID connections can be used for any type of secure communications between the peers. Furthermore, these connections are capable of lasting literally forever. There are no intermediary service providers of any kind involved. The only reason a DID-to-DID connection needs to be broken is if one or both of the peers no longer wants it.

    "},{"location":"concepts/0289-toip-stack/#agents-and-wallets","title":"Agents and Wallets","text":"

    At Layer Two, every agent is paired with a digital wallet\u2014or more accurately a KMS (key management system). This KMS can be anything from a very simple static file on an embedded device to a highly sophisticated enterprise-grade key server. Regardless of the complexity, the job of the KMS is to safeguard sensitive data: key pairs, zero-knowledge proof blinded secrets, verifiable credentials, and any other cryptographic material needed to establish and maintain technical trust.

    This job includes the difficult challenge of recovery after a device is lost or stolen or a KMS is hacked or corrupted. This is the province of decentralized key management. For more details, see the Decentralized Key Management System (DKMS) Design and Architecture document [12], and Dr. Sam Smith's paper on KERI (Key Event Receipt Infrastructure) [13].

    "},{"location":"concepts/0289-toip-stack/#secure-data-stores","title":"Secure Data Stores","text":"

    Agents may also be paired with a secure data store\u2014a database with three special properties:

    1. It is controlled exclusively by the DID controller (person, organization, or thing) and not by any intermediary or third party.
    2. All the data is encrypted with private keys in the subject\u2019s KMS.
    3. If a DID controller has more than one secure data store, the set of stores can be automatically synchronized according to the owner\u2019s preferences.

    Work on standardizing secure data stores has been proceeding in several projects in addition to Hyperledger Aries\u2014primarily at the Decentralized Identity Foundation (DIF) and the W3C Credentials Community Group. This has culminated in the formation of the Secure Data Store (SDS) Working Group at DIF.

    "},{"location":"concepts/0289-toip-stack/#guardianship-and-guardian-agentswallets","title":"Guardianship and Guardian Agents/Wallets","text":"

    The ToIP stack cannot become a universal layer for digital trust if it ignores the one-third of the world's population that does not have smartphones or Internet access\u2014or the physical, mental, or economic capacity to use ToIP-enabled infrastructure. This underscores the need for the ToIP stack to robustly support the concept of digital guardianship\u2014the combination of a hosted cloud agent/wallet service and an individual or organization willing to take legal responsibility for managing that cloud agent/wallet on behalf of the person under guardianship, called the dependent.

    For more about all aspects of digital guardianship, see the Sovrin Foundation white paper On Guardianship in Self-Sovereign Identity [14].

    "},{"location":"concepts/0289-toip-stack/#provider-governance-frameworks","title":"Provider Governance Frameworks","text":"

    At Layer Two, governance is needed primarily to establish interoperability testing and certification requirements, including security, privacy, and data protection, for the following roles:

    "},{"location":"concepts/0289-toip-stack/#layer-two-support-for-higher-layers","title":"Layer Two Support for Higher Layers","text":"

    The purpose of Layer Two is to enable peers to form secure DID-to-DID connections so they can:

    1. Issue, exchange, and verify credentials over these connections using the data exchange protocols at Layer Three.
    2. Access the Layer One cryptographic data structures needed to issue and verify Layer Three credentials regardless of the public utility used by the issuer.
    3. Migrate and port ToIP data between agents, wallets, and secure data stores without restriction. This data portability is critical to the broad adoption and interoperability of ToIP.
    "},{"location":"concepts/0289-toip-stack/#layer-three-data-exchange-protocols","title":"Layer Three: Data Exchange Protocols","text":"

    Layer One and Layer Two together enable the establishment of cryptographic trust (also called technical trust) between peers. By contrast, the purpose of Layers Three and Four is to establish human trust between peers\u2014trust between real-world individuals and organizations and the things with which they interact (devices, sensors, appliances, vehicles, buildings, etc.).

    Part of the power of the DIDComm protocol at Layer Two is that it lays the foundation for secure, private agent-to-agent connections that can now \"speak\" any number of data exchange protocols. From the standpoint of the ToIP stack, the most important of these are protocols that support the exchange of verifiable credentials.

    "},{"location":"concepts/0289-toip-stack/#the-verifiable-credentials-data-model","title":"The Verifiable Credentials Data Model","text":"

    After several years of incubation led by Manu Sporny, David Longley, and other members of the W3C Credentials Community Group, the W3C Verifiable Claims Working Group (VCWG) was formed in 2017 and produced the Verifiable Credentials Data Model 1.0 which became a W3C Recommendation in September 2019 [15].

    Figure 4 is a diagram of the three core roles in verifiable credential exchange\u2014often called the \"trust triangle\". For more information see the Verifiable Credentials Primer [16].

    Figure 4: The three primary roles in the W3C Verifiable Credentials Data Model

    The core goal of the Verifiable Credentials standard is to enable us to finally have the digital equivalent of the physical credentials we store in our physical wallets to provide proof of our identity and attributes every day. This is why the presentation of a verifiable credential to a verifier is called a proof\u2014it is both a cryptographic proof and a proof of some set of attributes or relationships a verifier needs to make a trust decision.
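    For illustration, a minimal credential following the W3C data model might look like the sketch below. All DIDs, types, and attribute values here are hypothetical, and the signature value is elided:

```json
{
  "@context": ["https://www.w3.org/2018/credentials/v1"],
  "type": ["VerifiableCredential", "AlumniCredential"],
  "issuer": "did:example:university",
  "issuanceDate": "2019-09-01T00:00:00Z",
  "credentialSubject": {
    "id": "did:example:alice",
    "alumniOf": "Example University"
  },
  "proof": {
    "type": "Ed25519Signature2018",
    "created": "2019-09-01T00:00:00Z",
    "verificationMethod": "did:example:university#key-1",
    "proofPurpose": "assertionMethod",
    "jws": "..."
  }
}
```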

    "},{"location":"concepts/0289-toip-stack/#credential-proof-types","title":"Credential Proof Types","text":"

    The Verifiable Credentials Data Model 1.0 supports several different cryptographic proof types:

    1. JSON Web Tokens (JWTs) secured using JSON Web Signatures.
    2. Linked Data Signatures using JSON-LD.
    3. Zero Knowledge Proofs (ZKPs) using Camenisch-Lysyanskaya Signatures.

    All three proof types address specific needs in the market:

    To support all three of these credential proof types in the ToIP stack means:

    "},{"location":"concepts/0289-toip-stack/#credential-exchange-protocols","title":"Credential Exchange Protocols","text":"

    At Layer Three, the exchange of verifiable credentials is performed by agents using data exchange protocols layered over the DIDComm protocol. These data exchange protocol specifications are being published as part of the DIDComm suite [10]. Credential exchange protocols are unique to each credential proof type because the request and response formats are different. The goal of the ToIP technology stack is to standardize all supported credential exchange protocols so that any ToIP-compatible agent, wallet, and secure data store can work with any other agent, wallet, and secure data store.

    With fully interoperable verifiable credentials, any issuer may issue any set of claims to any holder who can then prove them to any verifier. Every verifier can decide which issuers and which claims it will trust. This is a fully decentralized system that uses the same trust triangle as the physical credentials we carry in our physical wallets today. This simple, universal trust model can be adapted to any set of requirements from any trust community. Even better, in most cases it does not require new policies or business relationships. Instead the same policies that apply to existing physical credentials can just be applied to a new, more flexible and useful digital format.

    "},{"location":"concepts/0289-toip-stack/#credential-governance-frameworks","title":"Credential Governance Frameworks","text":"

    Since Layer Three is where the ToIP stack crosses over from technical trust to human trust, this is the layer where governance frameworks become a critical component for interoperability and scalability of digital trust ecosystems. Credential governance frameworks can be used to specify:

    Standard roles that credential governance frameworks can define under the ToIP Governance Stack model include:

    "},{"location":"concepts/0289-toip-stack/#layer-three-support-for-higher-layers","title":"Layer Three Support for Higher Layers","text":"

    Layer Three enables human trust\u2014in the form of verifiable assertions about entities, attributes and relationships\u2014to be layered over the cryptographic trust provided by Layers One and Two. Layer Four is the application ecosystems that request and consume these verifiable credentials in order to support the specific trust models and policies of their own digital trust ecosystem.

    "},{"location":"concepts/0289-toip-stack/#layer-four-application-ecosystems","title":"Layer Four: Application Ecosystems","text":"

    Layer Four is the layer where humans interact with applications in order to engage in trusted interactions that serve a specific business, legal, or social purpose. Just as applications call the TCP/IP stack to communicate over the Internet, applications call the ToIP stack to register DIDs, form connections, obtain and exchange verifiable credentials, and engage in trusted data exchange using the protocols in Layers One, Two, and Three.

    The ToIP stack no more limits the applications that can be built on it than the TCP/IP stack limits the applications that can be built on the Internet. The ToIP stack simply defines the \"tools and rules\"\u2014technology and governance\u2014for those applications to interoperate within digital trust ecosystems that provide the security, privacy, and data protection that their members expect. The ToIP stack also enables the consistent user experience of trust decisions across applications and ecosystems that is critical to achieving widespread trust online\u2014just as a consistent user experience of the controls for driving a car (steering wheel, gas pedal, brakes, turn signals) is critical to the safety of drivers throughout the world.

    "},{"location":"concepts/0289-toip-stack/#ecosystem-governance-frameworks","title":"Ecosystem Governance Frameworks","text":"

    Layer Four is where humans will directly experience the ToIP Governance Stack\u2014specifically the trust marks and policy promises of ecosystem governance frameworks. These specify the purpose, principles, and policies that apply to all governance authorities and governance frameworks operating within that ecosystem\u2014at all four levels of the ToIP stack.

    The ToIP Governance Stack will define standard roles that can be included in an ecosystem governance framework (EGF) including:

    To fully understand the scope and power of ecosystem governance frameworks, let us dive deeper into the special role of the ToIP Governance Stack.

    "},{"location":"concepts/0289-toip-stack/#scaling-digital-trust","title":"Scaling Digital Trust","text":"

    The top half of Figure 5 below shows the basic trust triangle architecture used by verifiable credentials. The bottom half shows a second trust triangle\u2014the governance trust triangle\u2014that can solve a number of problems related to the real-world adoption and scalability of verifiable credentials and the ToIP stack.

    Figure 5: The special role of governance frameworks

    "},{"location":"concepts/0289-toip-stack/#governance-authorities","title":"Governance Authorities","text":"

    The governance trust triangle in Figure 5 represents the same governance model that exists for many of the most successful physical credentials we use every day: passports, driving licenses, credit cards, health insurance cards, etc.

    These credentials are \"backed\" by rules and policies that in many cases have taken decades to evolve. These rules and policies have been developed, published, and enforced by many different types of existing governance authorities\u2014private companies, industry consortia, financial networks, and of course governments.

    The same model can be applied to verifiable credentials simply by having these same governance authorities\u2014or new ones formed explicitly for ToIP governance\u2014publish digital governance frameworks. Any group of issuers who want to standardize, strengthen, and scale the credentials they offer can join together under the auspices of a sponsoring authority to craft a governance framework. No matter the form of the organization\u2014government, consortia, association, cooperative\u2014the purpose is the same: define the business, legal, and technical rules under which the members agree to operate in order to achieve trust.

    This of course is exactly how Mastercard and Visa\u2014two of the world\u2019s largest trust networks\u2014have scaled. Any bank or merchant can verify in seconds that another bank or merchant is a member of the network and thus bound by its rules.

    With the ToIP stack, this governance architecture can be applied to any set of roles and/or credentials, for any trust community, of any size, in any jurisdiction.

    As an historical note, some facets of the ToIP governance stack are inspired by the Sovrin Governance Framework (SGF) [17] developed starting in 2017 by the Sovrin Foundation, the governance authority for the Sovrin public ledger for self-sovereign identity (SSI).

    "},{"location":"concepts/0289-toip-stack/#defining-a-governance-framework","title":"Defining a Governance Framework","text":"

    In addition to the overall metamodel, the ToIP governance stack will provide an architectural model for individual governance frameworks at any level. This enables the components of the governance framework to be expressed in a standard, modular format so they can be easily indexed and referenced both internally and externally from other governance frameworks.

    Figure 6 shows this basic architectural model:

    Figure 6: Anatomy of a governance framework

    "},{"location":"concepts/0289-toip-stack/#discovery-and-verification-of-authoritative-issuers","title":"Discovery and Verification of Authoritative Issuers","text":"

    Verifiers often need to verify that a credential was issued by an authoritative issuer. The ToIP stack will give governance authorities multiple mechanisms for designating their set of authoritative issuers (these options are non-exclusive\u2014they can each be used independently or in any combination):

    1. DID Documents. The governance authority can publish the list of their DIDs in a DID document on one or more public utilities of its choice.
    2. Member Directories. A governance authority can publish a \"whitelist\" of DIDs via a whitelisting service available at a standard service endpoint published in the governance authority\u2019s own DID document.
    3. Credential registries. If search and discovery of authoritative issuers is desired, a governance authority can publish verifiable credentials containing both the DID and additional attributes for each authoritative issuer in a credential registry. Note that in this case the credential registry serves as a separate, cryptographically-verifiable holder of the credential\u2014a holder that is not the subject of the credential, but which can independently prove the validity of the credential.
    4. Verifiable credentials. As shown in Figure 5, the governance authority (or its designated auditors) can issue verifiable credentials to the authoritative issuers, which they in turn can present directly to verifiers or indirectly via credential holders.
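    As a minimal sketch of mechanism 2 above, a verifier could check an issuer's DID against a governance authority's published member directory. This assumes the directory has already been retrieved from the authority's service endpoint; the function name and DIDs are hypothetical, not part of any Aries framework API:

```python
def is_authoritative_issuer(issuer_did: str, member_directory: set) -> bool:
    """Check an issuer's DID against a governance authority's
    published member directory ("whitelist") of authoritative issuers."""
    return issuer_did in member_directory

# Hypothetical whitelist published by a governance authority.
trusted_issuers = {"did:example:health-ministry", "did:example:licensed-lab"}

print(is_authoritative_issuer("did:example:licensed-lab", trusted_issuers))   # True
print(is_authoritative_issuer("did:example:unknown-party", trusted_issuers))  # False
```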
    "},{"location":"concepts/0289-toip-stack/#discovery-and-verification-of-authoritative-verifiers","title":"Discovery and Verification of Authoritative Verifiers","text":"

    Holders often need to verify that a credential was requested by an authoritative verifier, e.g. as part of a \u2018machine readable governance framework\u2019. The ToIP stack will give governance authorities multiple mechanisms for designating their set of authoritative verifiers (these options are non-exclusive\u2014they can be used independently or in any combination):

    1. DID Documents. The governance authority can publish the list of their DIDs in a DID document on one or more verifiable data registries of its choice.
    2. Member Directories. A governance authority can publish a \"whitelist\" of DIDs via a whitelisting service available at a standard service endpoint published in the governance authority\u2019s own DID document.
    3. Credential registries. If search and discovery of authoritative verifiers is desired, a governance authority can publish verifiable credentials containing both the DID and additional attributes for each authoritative verifier in a credential registry. Note that in this case the credential registry serves as a separate, cryptographically-verifiable holder of the credential\u2014a holder that is not the subject of the credential, but which can independently prove the validity of the credential.
    4. Verifiable credentials. Similar to Figure 5, the governance authority (or its designated auditors) can issue verifiable credentials to the authoritative verifiers in the governance framework. Those verifiers can in turn provide proofs directly to holders.
    "},{"location":"concepts/0289-toip-stack/#countermeasures-against-coercion","title":"Countermeasures against coercion","text":"

    The concept of \"self-sovereign\" identity presumes that parties are free to enter a transaction, to share personal and confidential information, and to walk away when requests by the other party are deemed unreasonable or even unlawful. In practice, this is often not the case: \"What do you give an 800-pound gorilla?\", answer: \"Anything that it asks for\". Examples of such 800-pound gorillas are some big-tech websites, immigration offices and uniformed individuals alleging to represent law-enforcement [20][21]. Also the typical client-server nature of web transactions reinforces this power imbalance, where the human party behind its client agent feels coerced in surrendering personal data as otherwise they are denied access to a product, service or location. Point in case are the infamous cookie walls, where a visitor of a website get the choice between \"accept all cookies or go into the maze-without-exit\".

    Governance frameworks may be certified to implement one or more potential countermeasures against different types of coercion. In the case of a machine-readable governance framework, some of these countermeasures may be automatically enforced, safeguarding the user from being coerced into action against their own interest. Different governance frameworks may choose different balances between full self-sovereignty and tight control, depending on the interests that are at play as well as applicable legislation.

    The following are examples of potential countermeasures against coercion. The governance framework can stimulate or enforce that some verifiable credentials are only presented when the holder agent determines that certain requirements are satisfied. When a requirement is not fulfilled, the user is warned about the violation and the holder agent may refuse presentation of the requested verifiable credential.

    1. Require authoritative verifier. Verifiers would need to be authorized within the applicable governance framework; see also the section \u201cDiscovery and Verification of Authoritative Verifiers\u201d.
    2. Require evidence collection. Requests for presentation of verifiable credentials may hold up as evidence in court if the electronic signature on the requests is linked to the verifier in a non-repudiable way.
    3. Require enabling anonymous complaints. The above evidence collection may be compromised if the holder can be uniquely identified from the collected evidence. So a governance framework may require the blinding of holder information, as well as of instance-identifiable information about the evidence itself.
    4. Require remote/proxy verification. Verification only has value to a holder if it results in a positive decision by the verifier. Hence a holder should preferably only surrender personal data if doing so warrants a positive decision. This would save travel if the requested decision is access to a physical facility, and it would in any case prevent unnecessary disclosure of personal data. Some verifiers may consider their decision criteria confidential; hence, different governance frameworks may choose different balances between holder privacy and verifier confidentiality.
    5. Require complying holder agent. Some rogue holder agents may surrender personal data against the policies of the governance framework associated with that data. Issuers of such data may require verification of the compliance of the holder\u2019s agent before issuing.
    6. Require what-you-know authentication. Holders may be forced to surrender biometric authentication by rogue verifiers as well as by some state jurisdictions. This is the reason that many bank apps require \u201cwhat-you-know\u201d authentication next to biometric \u201cwhat-you-are\u201d or device-based \u201cwhat-you-have\u201d authentication. This may be needed even when the user views their own personal data in the app without electronic presentation, as some 800-pound gorillas require watching over the shoulder.

    "},{"location":"concepts/0289-toip-stack/#interoperability-with-other-governance-frameworks","title":"Interoperability with Other Governance Frameworks","text":"

    The ToIP governance stack is designed to be compatible with\u2014and an implementation vehicle for\u2014national governance frameworks such as the Pan-Canadian Trust Framework (PCTF) [18] being developed through a public/private sector collaboration with the Digital Identity and Authentication Council of Canada (DIACC). It should also interoperate with regional and local governance frameworks of all kinds. For example, the Province of British Columbia (BC) has implemented a ToIP-compatible verifiable credential registry service called OrgBook BC. OrgBook is a holder service for legally registered entities in BC that was built using Indy Catalyst and Hyperledger Aries Cloud Agent - Python. Other provinces such as Ontario and Alberta as well as the Canadian federal government have begun to experiment with these services for business credentials, giving rise to a new kind of network where trust is at the edge. For more information see the VON (Verifiable Organization Network) [19].

    "},{"location":"concepts/0289-toip-stack/#building-a-world-of-interoperable-digital-trust-ecosystems","title":"Building a World of Interoperable Digital Trust Ecosystems","text":"

    The Internet is a network of networks, where the interconnections between each network are facilitated through the TCP/IP stack. The ToIP-enabled Internet is a digital trust ecosystem of digital trust ecosystems, where the interconnections between each digital trust ecosystem are facilitated through the ToIP stack. The boundaries of each digital trust ecosystem are determined by the governance framework(s) under which its members are operating.

    This allows the ToIP-enabled Internet to reflect the same diversity and richness the Internet has today, but with a new ability to form and maintain trust relationships of any kind\u2014personal, business, social, academic, political\u2014at any distance. These trust relationships can cross trust boundaries as easily as IP packets can cross network boundaries today.

    "},{"location":"concepts/0289-toip-stack/#conclusion-a-trust-layer-for-the-internet","title":"Conclusion: A Trust Layer for the Internet","text":"

    The purpose of the ToIP stack is to define a strong, decentralized, privacy-respecting trust layer for the Internet. It leverages blockchain technology and other new developments in cryptography, decentralized systems, cloud computing, mobile computing, and digital governance to solve longstanding problems in establishing and maintaining digital trust.

    This RFC will be updated to track the evolution of the ToIP stack as it is further developed, both through Hyperledger Aries and via other projects at the Linux Foundation. We welcome comments and contributions.

    "},{"location":"concepts/0289-toip-stack/#references","title":"References","text":"
    1. Petros Kavassalis, Richard Jay Solomon, Pierre-Jean Benghozi, The Internet: a Paradigmatic Rupture in Cumulative Telecom Evolution, Industrial and Corporate Change, 1996; accessed September 5, 2019.
    2. FreeBSD, What, a real UNIX\u00ae?, accessed September 5, 2019.
    3. Kim Cameron, The Laws of Identity, May 2005; accessed November 2, 2019.
    4. Drummond Reed, Manu Sporny, Markus Sabadello, David Longley, Christopher Allen, Ryan Grant, Decentralized Identifiers (DIDs) v1.0, December 2019; accessed January 24, 2020.
    5. W3C Credentials Community Group, DID Primer, January 2019; accessed July 6, 2019.
    6. W3C DID Working Group, Home Page, September 2019; accessed November 2, 2019.
    7. Uniform Resource Names (URNs), RFC 8141, April 2017; accessed November 2, 2019.
    8. Greg Slepak, Christopher Allen, et al, Decentralized Public Key Infrastructure, December 2015, accessed January 24, 2020.
    9. W3C Credentials Community Group, DID Method Registry, June 2019; accessed July 6, 2019.
    10. Daniel Hardman, DID Communication, January 2019; accessed July 6, 2019.
    11. Daniel Hardman et al, Peer DID Method 1.0 Specification, July 2019; accessed July 6, 2019.
    12. Drummond Reed, Jason Law, Daniel Hardman, Mike Lodder, DKMS Design and Architecture V4, March 2019; accessed November 2, 2019.
    13. Samuel M. Smith, Key Event Receipt Infrastructure (KERI) , July 2019, accessed February 4, 2020.
    14. Sovrin Governance Framework Working Group, On Guardianship in Self-Sovereign Identity, December 2019, accessed April 10, 2020.
    15. Manu Sporny, Grant Noble, Dave Longley, Daniel C. Burnett, Brent Zundel, Verifiable Credentials Data Model 1.0, September 2019; accessed November 2, 2019.
    16. Manu Sporny, Verifiable Credentials Primer, February 2019; accessed July 6, 2019.
    17. Sovrin Foundation, Sovrin Governance Framework V2, March 2019; accessed December 21, 2019.
    18. DIACC, Pan-Canadian Trust Framework, May 2019; accessed July 6, 2019.
    19. Governments of British Columbia, Ontario, and Canada, Verifiable Organizations Network (VON), June 2019; accessed July 6, 2019.
    20. Oskar van Deventer et al, TNO, Netherlands, Self-Sovereign Identity - The Good, The Bad And The Ugly, May 2019.
    21. Oskar van Deventer (TNO), Alexander Blom (Bloqzone), Line Kofoed (Bloqzone), Verify the Verifier - anti-coercion by design, October 2020.
    "},{"location":"concepts/0289-toip-stack/#implementations","title":"Implementations","text":"

    The following lists the implementations (if any) of this RFC. Please do a pull request to add your implementation. If the implementation is open source, include a link to the repo or to the implementation within the repo. Please be consistent in the \"Name\" field so that a mechanical processing of the RFCs can generate a list of all RFCs supported by an Aries implementation.

    Implementation Notes may need to include a link to test results.

    Name / Link Implementation Notes"},{"location":"concepts/0302-aries-interop-profile/","title":"0302: Aries Interop Profile","text":""},{"location":"concepts/0302-aries-interop-profile/#summary","title":"Summary","text":"

    This RFC defines the process for the community of Aries agent builders to:

    \"Agent builders\" are organizations or teams that are developing open source code upon which agents can be built (e.g. aries-framework-dotnet), or deployable agents (e.g. Aries Mobile Agent Xamarin), or commercially available agents.

    An Aries Interop Profile (AIP) version provides a clearly defined set of versions of RFCs for Aries agent builders to target their agent implementation when they wish it to be interoperable with other agents supporting the same Aries Interop Profile version. The Aries Interop Profile versioning process is intended to provide clarity and predictability for Aries agent builders and others in the broader Aries community. The process is not concerned with proposing new, or evolving existing, RFCs, nor with the development of Aries code bases.

    At all times, the Reference section of this RFC defines one or more current Aries Interop Profile versions -- a number and set of links to specific commits of concept and feature RFCs, along with a list of all previous Aries Interop Profile versions. Several current Aries Interop Profile versions can coexist during periods when multiple major Aries Interop Profile versions are in active use (e.g. 1.x and 2.x). Each entry in the previous versions list includes a link to the commit of this RFC associated with that Aries Interop Profile version. The Reference section MAY include one <major>.next version for each existing current major Aries Interop Profile version. Such \"next\" versions are proposals for what is to be included in the next minor AIP version.

    Once a suitably populated Aries test suite is available, each Aries Interop Profile version will include a link to the relevant subset of test cases. The test cases will include only those targeting the specific versions of the concept and feature RFCs in that version of Aries Interop Profile. A process for maintaining the link between the Aries Interop Profile version and the test cases will be defined in this RFC once the Aries test suite is further evolved.

    This RFC includes a section maintained by Aries agent builders listing their Aries agents or agent deployments (whether open or closed source). This list SHOULD include the following information for each listed agent:

    An Aries agent builder SHOULD include an entry in the table per major version supported. Until there is a sufficiently rich test suite that produces linkable results, builders SHOULD link to and maintain a page that summarizes any exceptions and extensions to the agent's AIP support.

    The type of the agent MUST be selected from an enumerated list above the table of builder agents.

    "},{"location":"concepts/0302-aries-interop-profile/#motivation","title":"Motivation","text":"

    The establishment of Aries Interop Profile versions defined by the Aries agent builder community allows the independent creation of interoperable Aries agents by different Aries agent builders. Whether building open or closed source implementations, an agent that aligns with the set of RFC versions listed as part of an Aries Interop Profile version should be interoperable with any other agent built to align with that same version.

    "},{"location":"concepts/0302-aries-interop-profile/#tutorial","title":"Tutorial","text":"

    This RFC MUST contain the current Aries Interop Profile versions as defined by a version number and a set of links to concept and feature RFCs which have been agreed to by a community of Aries agent builders. \"Agreement\" is defined as when the community agrees to merge a Pull Request (PR) to this RFC that affects an Aries Interop Profile version number and/or any of the links to concept and feature RFCs. PRs that do not impact the Aries Interop Profile version number or links can (in general) be merged with less community scrutiny.

    Each link to a concept or feature RFC MUST be to a specific commit of that RFC. RFCs in the list MAY be flagged as deprecated. Linked RFCs that reference external specs or standards MUST refer to as specific a version of the external resource as possible.

    Aries Interop Profile versions SHOULD have a link (or links) to a version (specific commit) of a test suite (or test cases) which SHOULD be used to verify compliance with the corresponding version of Aries Interop Profile. Aries agent builders MAY self-report their test results as part of their entries in the list of agents.

    Aries Interop Profile versions MUST evolve at a pace determined by the Aries agent builder community. This pace SHOULD be at a regular time interval so as to facilitate the independent but interoperable release of Aries Agents. Aries agent builders are encouraged to propose either updates to the list of RFCs supported by Aries Interop Profile through GitHub Issues or via a Pull Request. Such updates MAY trigger a change in the Aries Interop Profile version number.

All previous versions of Aries Interop Profile MUST be listed in the Previous Versions section of this RFC and must include a link to the latest commit of this RFC at the time that version was active.

    A script in the /code folder of this repo can be run to list RFCs within an AIP version that have changed since the AIP version was set. For script usage information run the following from the root of the repo:

    python code/aipUpdates.py --help

    "},{"location":"concepts/0302-aries-interop-profile/#sub-targets","title":"Sub-targets","text":"

    AIP 2.0 is organized into a set of base requirements, and additional optional targets. These requirements are listed below. When indicating levels of support for AIP 2.0, subtargets are indicated in this format: AIP 2.0/INDYCREDS/MEDIATE with the subtargets listed in any order.

Any RFCs within a single AIP Version and its subtargets MUST refer to the exact same version of the RFC.

    "},{"location":"concepts/0302-aries-interop-profile/#discover-features-usage","title":"Discover Features Usage","text":"

AIP Targets can be disclosed in the discover features protocol, using the feature-type of aip. The feature's id is AIP<major>.<minor> for base compatibility, and AIP<major>.<minor>/<subtarget> for subtargets, each subtarget being included individually.

    Example:

    {\n  \"@type\": \"https://didcomm.org/discover-features/2.0/disclosures\",\n  \"disclosures\": [\n    {\n      \"feature-type\": \"aip\",\n      \"id\": \"AIP2.0\",\n    },\n    {\n      \"feature-type\": \"aip\",\n      \"id\": \"AIP2.0/INDYCRED\"\n    }\n  ]\n}\n
    "},{"location":"concepts/0302-aries-interop-profile/#reference","title":"Reference","text":"

    The Aries Interop Profile version number and links to other RFCs in this section SHOULD only be updated with the agreement of the Aries agent builder community. There MAY be multiple active major Aries Interop Profile versions. A list of previous versions of Aries Interop Profile are listed after the current version(s).

    "},{"location":"concepts/0302-aries-interop-profile/#aries-interop-profile-version-10","title":"Aries Interop Profile Version: 1.0","text":"

The initial version of Aries Interop Profile, based on existing implementations such as aries-cloudagent-python, aries-framework-dotnet, Open Source Mobile Agent and Streetcred.id's iOS agent. Agents adhering to AIP 1.0 should be able to establish connections, exchange credentials and complete a connection-less proof-request/proof transaction.

    RFC Type RFC/Link to RFC Version Concept 0003-protocols Concept 0004-agents Concept 0005-didcomm Concept 0008-message-id-and-threading Concept 0011-decorators Concept 0017-attachments Concept 0020-message-types Concept 0046-mediators-and-relays Concept 0047-json-LD-compatibility Concept 0050-wallets Concept 0094-cross-domain messaging Feature 0015-acks Feature 0019-encryption-envelope Feature 0160-connection-protocol Feature 0025-didcomm-transports Feature 0035-report-problem Feature 0036-issue-credential Feature 0037-present-proof Feature 0056-service-decorator"},{"location":"concepts/0302-aries-interop-profile/#changelog-aip-10","title":"Changelog - AIP 1.0","text":"

    The original commit used in the definition of AIP 1.0 was: 64e5e55

    The following clarifications have been made to RFCs that make up AIP 1.0:

    "},{"location":"concepts/0302-aries-interop-profile/#aip-v10-test-suite","title":"AIP v1.0 Test Suite","text":"

    To Do: Link(s) to version(s) of the test suite/test cases applicable to this Aries Interop Profile version.

    "},{"location":"concepts/0302-aries-interop-profile/#aries-interop-profile-version-20","title":"Aries Interop Profile Version: 2.0","text":"

    The following are the goals used in selecting RFC versions for inclusion in AIP 2.0, and the RFCs added as a result of each goal:

    "},{"location":"concepts/0302-aries-interop-profile/#aip-20-changelog-by-pull-requests","title":"AIP 2.0 Changelog by Pull Requests","text":"

    Since approval of the AIP 2.0 profile, the following RFCs have been clarified by updating the commit in the link to the RFC:

    "},{"location":"concepts/0302-aries-interop-profile/#aip-20-changelog-by-clarifications","title":"AIP 2.0 Changelog by Clarifications","text":"

    The original commit used in the definition of AIP 2.0 was: b3a3942ef052039e73cd23d847f42947f8287da2

    The following clarifications have been made to RFCs that make up AIP 2.0. This list excludes commits changed solely because of status changes:

    "},{"location":"concepts/0302-aries-interop-profile/#base-requirements","title":"Base Requirements","text":"RFC Type RFC/Link to RFC Version Note Concept 0003-protocols AIP V1.0, Reformatted Concept 0004-agents AIP V1.0, Unchanged Concept 0005-didcomm AIP V1.0, Minimally Updated Concept 0008-message-id-and-threading AIP V1.0, Updated Concept 0011-decorators AIP V1.0, Updated Concept 0017-attachments AIP V1.0, Updated Concept 0020-message-types AIP V1.0, UpdatedMandates message prefix https://didcomm.org for Aries Protocol messages. Concept 0046-mediators-and-relays AIP V1.0, Minimally Updated Concept 0047-json-LD-compatibility AIP V1.0, Minimally Updated Concept 0050-wallets AIP V1.0, Unchanged Concept 0094-cross-domain messaging AIP V1.0, Updated Concept 0519-goal-codes Feature 0015-acks AIP V1.0, Updated Feature 0019-encryption-envelope AIP V1.0, UpdatedSee envelope note below Feature 0023-did-exchange Feature 0025-didcomm-transports AIP V1.0, Minimally Updated Feature 0035-report-problem AIP V1.0, Updated Feature 0044-didcomm-file-and-mime-types Feature 0048-trust-ping Feature 0183-revocation-notification Feature 0360-use-did-key Feature 0434-outofband Feature 0453-issue-credential-v2 Update to V2 Protocol Feature 0454-present-proof-v2 Update to V2 Protocol Feature 0557-discover-features-v2"},{"location":"concepts/0302-aries-interop-profile/#mediate-mediator-coordination","title":"MEDIATE: Mediator Coordination","text":"RFC Type RFC/Link to RFC Version Note Feature 0211-route-coordination Feature 0092-transport-return-route"},{"location":"concepts/0302-aries-interop-profile/#indycred-indy-based-credentials","title":"INDYCRED: Indy Based Credentials","text":"RFC Type RFC/Link to RFC Version Note Feature 0592-indy-attachments Evolved from AIP V1.0 Concept 0441-present-proof-best-practices"},{"location":"concepts/0302-aries-interop-profile/#ldcred-json-ld-based-credentials","title":"LDCRED: JSON-LD Based Credentials","text":"RFC Type RFC/Link to RFC Version 
Note Feature 0593-json-ld-cred-attach Feature 0510-dif-pres-exch-attach"},{"location":"concepts/0302-aries-interop-profile/#bbscred-bbs-based-credentials","title":"BBSCRED: BBS+ Based Credentials","text":"RFC Type RFC/Link to RFC Version Note Feature 0593-json-ld-cred-attach Feature 0646-bbs-credentials Feature 0510-dif-pres-exch-attach"},{"location":"concepts/0302-aries-interop-profile/#chat-chat-related-features","title":"CHAT: Chat related features","text":"RFC Type RFC/Link to RFC Version Note Feature 0095-basic-message"},{"location":"concepts/0302-aries-interop-profile/#aip-20-rfcs-removed","title":"AIP 2.0 RFCs Removed","text":"

    [!WARNING] After discussion amongst the Aries implementers, the following RFCs initially in AIP 2.0 have been removed as both never implemented (as far as we know) and/or impractical to implement. Since the RFCs have never been implemented, their removal does not have a practical impact on implementations. Commentary below the table listing the removed RFCs provides the reasoning for the removal of each RFC.

    RFC Type RFC/Link to RFC Version Note Feature 0317-please-ack Removed from AIP 2.0 Feature 0587-encryption-envelope-v2 Removed from AIP 2.0 Feature 0627-static-peer-dids The use of static peer DIDs in Aries has evolved and all AIP 2.0 implementations should be using DID Peer types 4 (preferred), 1 or 2. "},{"location":"concepts/0302-aries-interop-profile/#aip-v20-test-suite","title":"AIP v2.0 Test Suite","text":"

    The Aries Agent Test Harness has a set of tests tagged to exercise AIP 1.0 and AIP 2.0, including the extended targets.

    "},{"location":"concepts/0302-aries-interop-profile/#implementers-note-about-didcomm-envelopes-and-the-accept-element","title":"Implementers Note about DIDComm Envelopes and the ACCEPT element","text":"

    [!WARNING] The following paragraph is struck out as no longer relevant, since the 0587-encryption-envelope-v2 RFC has been removed from AIP 2.0. The upcoming (to be defined) AIP 3.0 will include the transition from DIDComm v1 to the next DIDComm generation, and at that time, the 0587-encryption-envelope-v2 will again be relevant.

AIP 2.0 contains two RFCs that reference envelopes: 0019-encryption-envelope and 0587-encryption-envelope-v2 (links above). The important feature that Aries implementers should understand to differentiate which envelope format can or is being used by an agent is the accept element of the DIDComm service endpoint and the out-of-band invitation message. If the accept element is not present, the agent can only use the RFC 0019-encryption-envelope format. If it is present, the values indicate the envelope format(s) the agent does support. See the RFCs for additional details.

    "},{"location":"concepts/0302-aries-interop-profile/#previous-versions","title":"Previous Versions","text":"

Each previous version will be listed as its version number, as a link to the latest commit of this RFC while that version was current.

    "},{"location":"concepts/0302-aries-interop-profile/#aries-agent-builders-and-agents","title":"Aries Agent Builders and Agents","text":"

A list of agents that claim compatibility with versions of Aries Interop Profile. An entry can be included per agent and per major Aries Interop Profile version.

    The agent type MUST be one of the following:

    Name / Version / Link Agent Type Builder / Link Aries Interop Profile Version Test Results Notes"},{"location":"concepts/0302-aries-interop-profile/#drawbacks","title":"Drawbacks","text":"

    It may be difficult to agree on the exact list of RFCs to support in a given version.

    "},{"location":"concepts/0302-aries-interop-profile/#rationale-and-alternatives","title":"Rationale and alternatives","text":"

    Continuing with the current informal discussions of what agents/frameworks should support and when is an ineffective way of enabling independent building of interoperable agents.

    "},{"location":"concepts/0302-aries-interop-profile/#prior-art","title":"Prior art","text":"

    This is a typical approach to creating an early protocol certification program.

    "},{"location":"concepts/0302-aries-interop-profile/#unresolved-questions","title":"Unresolved questions","text":""},{"location":"concepts/0302-aries-interop-profile/#implementations","title":"Implementations","text":"

    The following lists the implementations (if any) of this RFC. Please do a pull request to add your implementation. If the implementation is open source, include a link to the repo or to the implementation within the repo. Please be consistent in the \"Name\" field so that a mechanical processing of the RFCs can generate a list of all RFCs supported by an Aries implementation.

    Implementation Notes may need to include a link to test results.

    Name / Link Implementation Notes"},{"location":"concepts/0345-community-coordinated-update/","title":"0345: Community Coordinated Update","text":""},{"location":"concepts/0345-community-coordinated-update/#summary","title":"Summary","text":"

    This RFC describes the recommended process for coordinating a community update. This is not a mandate; this process should be adapted as useful to the circumstances of the update being performed.

    "},{"location":"concepts/0345-community-coordinated-update/#motivation","title":"Motivation","text":"

    Occasionally, an update will be needed that requires a coordinated change to be made across the community. These should be rare, but are inevitable. The steps in this process help avoid a coordinated software deployment, where multiple teams must fit a tight timeline of software deployment to avoid compatibility problems. Tightly coordinated software deployments are difficult and problematic, and should be avoided whenever possible.

    "},{"location":"concepts/0345-community-coordinated-update/#tutorial","title":"Tutorial","text":"

This process describes how to move from OLD to NEW. OLD and NEW represent the required change, where OLD represents the item being replaced, and NEW represents the item OLD will be replaced with. Often, these will be strings.

In brief, we first accept OLD and NEW while still defaulting to OLD, then we default to NEW (while still accepting OLD), and then we remove support for OLD. These steps are coordinated with the community with a generous timeline to allow for development cycles and deployment ease.
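The three steps above can be sketched as a pair of functions, one normalizing inbound values and one selecting the outbound value. This is an illustrative sketch only; the OLD and NEW values and function names are hypothetical, not from this RFC:

```python
# Hypothetical sketch of the OLD -> NEW transition; values are examples only.
OLD = "old-value"
NEW = "new-value"

def normalize_inbound(value):
    """Steps 1 and 2: accept both OLD and NEW, converting OLD to the
    internal form (NEW) at the point where values are received."""
    return NEW if value == OLD else value

def outbound_value(step, peer_supports_new=False):
    """Choose which form to send, by transition step."""
    if step == 1:
        # Default to OLD; optionally send NEW once the peer has sent NEW.
        return NEW if peer_supports_new else OLD
    # Steps 2 and 3 default to NEW (step 3 no longer accepts OLD at all).
    return NEW
```

Constraining the conversion to the receive path, as Step 1 recommends, keeps the rest of the software working with a single internal value.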

    "},{"location":"concepts/0345-community-coordinated-update/#prerequisite-community-agreement-on-change","title":"Prerequisite: Community agreement on change.","text":"

    Before these steps are taken, the community MUST agree on the change to be made.

    "},{"location":"concepts/0345-community-coordinated-update/#step-1-accept-old-and-new","title":"Step 1: Accept OLD and NEW","text":"

    The first step of the process is to accept both OLD and NEW from other agents. Typically, this is done by detecting and converting one string to the other in as few places in the software as possible. This allows the software to use a common value internally, and constrains the change logic to where the values are received.

    OLD should still be sent on outbound communication to other agents.

    During step 1, it is acceptable (but optional) to begin sending NEW when receiving NEW from the other agent. OLD should still be sent by default when the other Agent's support is unknown.

This step is formalized by writing an RFC detailing which changes are expected in this update. This step is scheduled in the community by including the update RFC in a new version of the Interop Profile and setting a community target date. The schedule should allow a generous time for development, generally between 1 and 3 months.

    Step 1 Coordination: This is the most critical coordination step. The community should have completed step 1 before moving to step 2.

    "},{"location":"concepts/0345-community-coordinated-update/#step-2-default-to-new","title":"Step 2: Default to NEW","text":"

    The second step changes the outbound value in use from OLD to NEW. Communication will not break with agents who have completed Step 1.

    OLD must still be accepted during step 2. OLD becomes deprecated.

    During step 2, it is acceptable (but optional) to keep sending OLD when receiving OLD from the other agent. NEW should still be sent by default when the other Agent's support is unknown.

    This step is formalized by writing an RFC detailing which changes are expected in this update. This step is scheduled by including the update RFC in a new version of the Interop Profile and setting a community target date. The schedule should allow a generous time for development, generally between 1 and 3 months.

    Step 2 Coordination: The community should complete step 2 before moving to step 3 to assure that OLD is no longer being sent prior to removing support.

    "},{"location":"concepts/0345-community-coordinated-update/#step-3-remove-support-for-old","title":"Step 3: Remove support for OLD.","text":"

Software will be updated to remove support for OLD. Continued use is expected to result in a failure or error, as appropriate.

This step is formalized by writing an RFC detailing which changes are expected in this update. Upon acceptance of the RFC, OLD is considered invalid. At this point, nobody should be sending OLD.

    Step 3 Coordination: The deadline for step 3 is less important than the previous steps, and may be scheduled at the convenience of each development team.

    "},{"location":"concepts/0345-community-coordinated-update/#reference","title":"Reference","text":"

    This process should only be used for changes that are not detectable via the Discover Features protocol, either because the Discover Features Protocol cannot yet be run or the Discover Features Protocol does not reveal the change.

    "},{"location":"concepts/0345-community-coordinated-update/#changes-not-applicable-to-this-process","title":"Changes NOT applicable to this process","text":"

    Any changes that can be handled by increasing the version of a protocol should do so. The new version can be scheduled via Interop Profile directly without this process.

    Example proper applications of this process include switching the base common Message Type URI, and DID Doc Service Types.

    "},{"location":"concepts/0345-community-coordinated-update/#pace","title":"Pace","text":"

The pace for Steps 1 and 2 should be appropriate for the change in question, but should allow generous time for developer scheduling, testing, and production deployment schedules. App store approval processes sometimes take a bit of time. A generous time allowance eases the burden of implementing the change.

    "},{"location":"concepts/0345-community-coordinated-update/#drawbacks","title":"Drawbacks","text":"

    This approach invites the drawbacks of sanity, unpanicked deployments, and steady forward community progress.

    "},{"location":"concepts/0345-community-coordinated-update/#rationale-and-alternatives","title":"Rationale and alternatives","text":""},{"location":"concepts/0345-community-coordinated-update/#prior-art","title":"Prior art","text":"

    This process was discussed in Issue 318 and in person at the 2019 December Aries Connectathon.

    "},{"location":"concepts/0345-community-coordinated-update/#unresolved-questions","title":"Unresolved questions","text":""},{"location":"concepts/0345-community-coordinated-update/#implementations","title":"Implementations","text":"

    The following lists the implementations (if any) of this RFC. Please do a pull request to add your implementation. If the implementation is open source, include a link to the repo or to the implementation within the repo. Please be consistent in the \"Name\" field so that a mechanical processing of the RFCs can generate a list of all RFCs supported by an Aries implementation.

    Implementation Notes may need to include a link to test results.

    Name / Link Implementation Notes"},{"location":"concepts/0346-didcomm-between-two-mobile-agents/","title":"0346: DIDComm Between Two Mobile Agents Using Cloud Agent Mediator","text":""},{"location":"concepts/0346-didcomm-between-two-mobile-agents/#summary","title":"Summary","text":"

    Explains how one mobile edge agent can send messages to another mobile edge agent through cloud agents. The sender edge agent also determines the route of the message. The recipient, on the other hand, can consume messages at its own pace and time.

    "},{"location":"concepts/0346-didcomm-between-two-mobile-agents/#motivation","title":"Motivation","text":"

DIDComm between two mobile edge agents should be easy and intuitive for a beginner to visualize and implement.

    "},{"location":"concepts/0346-didcomm-between-two-mobile-agents/#scenario","title":"Scenario","text":"

    Alice sends a connection request message to Bob and Bob sends back an acceptance response. For simplicity's sake, we will only consider the cloud agents in play while sending and receiving a message for Alice.

    "},{"location":"concepts/0346-didcomm-between-two-mobile-agents/#cloud-agent-registration-process","title":"Cloud Agent Registration Process","text":"

A registration process is necessary for an edge agent to discover cloud agents that it can use to route messages. Cloud agents in the simplest form are routers hosted as a web application that solves the problem of availability by providing a persistent IP address. The web server has a wallet of its own storing its private key as a provisioning record, along with any information needed to forward messages to other agents. Alice wants to accept a connection invitation from Bob. But before doing so, Alice needs to register herself with one or more cloud agents. The more cloud agents she registers with, the more cloud agents she can use in transporting her message to Bob. To register herself with a cloud agent, she visits the website of a cloud agent and simply scans a QR code.

The cloud agent registration invite looks like the following:

{\n    \"@type\": \"https://didcomm.org/didexchange/1.0/cloudagentregistrationinvitation\",\n    \"@id\": \"12345678900987654321\",\n    \"label\": \"CloudAgentA\",\n    \"recipientKeys\": [\"8HH5gYEeNc3z7PYXmd54d4x6qAfCNrqQqEB3nS7Zfu7K\"],\n    \"serviceEndpoint\": \"https://cloudagenta.com/endpoint\",\n    \"responseEndpoint\": \"https://cloudagenta.com/response\",\n    \"consumer\": \"b1004443feff4f3cba25c45ef35b492c\",\n    \"consumerEndpoint\": \"https://cloudagenta.com/consume\"\n}\n

The registration data is base64url-encoded and is added to a link as part of the c_a_r query param. The recipient key is the public key of \"Cloud Agent A\". The service endpoint is where the edge agent should send messages. The response endpoint is where a response being sent to Alice should go. For example, if Bob wants to send a message to Alice, then Bob should send the message to the response endpoint. The consumer endpoint is where Alice's edge agent should consume the messages that are sent to her. The \"consumer\" is an identifier used by cloud agent \"A\" to identify Alice's edge agent. This identifier is different with each cloud agent and hence provides low correlation risk. Each time an invitation QR code is generated, a new consumer id is generated. No acknowledgment is required to be sent to the cloud agent or vice versa, as the consumer id generated is never repeated.

All the endpoint data and the public keys of the cloud agents are then stored as non-secret records in Alice's wallet with the tag \"cloud-agent\".
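As a hedged sketch, the decoding an edge agent might perform on a registration link could look like the following. The link shape and the round-trip example are assumptions for illustration; only the c_a_r param name and the base64url encoding come from the description above:

```python
import base64
import json
from urllib.parse import parse_qs, urlparse

def decode_registration(link):
    """Extract and decode the base64url-encoded registration data
    carried in the c_a_r query param of a registration link."""
    params = parse_qs(urlparse(link).query)
    encoded = params["c_a_r"][0]
    # Restore any base64url padding stripped from the link.
    encoded += "=" * (-len(encoded) % 4)
    return json.loads(base64.urlsafe_b64decode(encoded))

# Round-trip example using field names from the invite above
# (the link URL here is hypothetical).
invite = {"label": "CloudAgentA", "consumer": "b1004443feff4f3cba25c45ef35b492c"}
encoded = base64.urlsafe_b64encode(json.dumps(invite).encode()).decode().rstrip("=")
link = "https://cloudagenta.com/invite?c_a_r=" + encoded
```

The decoded dict would then be stored in the wallet as a non-secret record tagged \"cloud-agent\".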

    "},{"location":"concepts/0346-didcomm-between-two-mobile-agents/#how-connection-request-from-alice-flows-to-bob","title":"How connection request from Alice flows to Bob","text":"

When Alice scans Bob's QR code invitation, her edge agent starts preparing the connection request message. It first queries the wallet record service for records tagged with \"cloud-agent\" and puts them in a list. The edge agent then randomly chooses one from the list (say Cloud Agent \"A\") and creates a new list without the cloud agent already chosen. Alice's edge agent creates the connection request message JSON and adds the service endpoint as the chosen cloud agent's response endpoint together with its consumer id.

    \"serviceEndpoint\": \"https://cloudagenta.com/response/b1004443feff4f3cba25c45ef35b492c\"\n

It then packs this message with Bob's recipient key and creates another JSON message structure like the below, using the forward message type:

{\n    \"@type\": \"https://didcomm.org/routing/1.0/forward\",\n    \"@id\": \"12345678900987654321\",\n    \"msg\": \"<Encrypted message for Bob here>\",\n    \"to\": \"<Service endpoint of Bob>\"\n}\n

    It then packs it with the public key of cloud agent \"A\".

It then randomly chooses a cloud agent from the new list and repeats the process of wrapping the message in a forward request.

For example, say the next random cloud agent it chooses is Cloud Agent \"C\". It now creates another forward JSON structure as below:

{\n    \"@type\": \"https://didcomm.org/routing/1.0/forward\",\n    \"@id\": \"12345678900987654321\",\n    \"msg\": \"<Encrypted message for Cloud Agent A>\",\n    \"to\": \"<Service endpoint of Cloud Agent A>\"\n}\n
It then packs this with Cloud Agent \"C\"'s public key.

This process repeats until it has exhausted all the cloud agents in the list, and the message is then sent to the service endpoint of the last cloud agent chosen (say Cloud Agent \"B\"). For example, the message could have randomly been packed for this path, B->C->A, where A is one of Bob's cloud agents that stores the message on the distributed log.
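The repeated wrapping above can be sketched as a loop. This is an illustrative sketch only: pack() is a toy stand-in for DIDComm packing (a real agent encrypts with the recipient's public key via its wallet API), and the keys and endpoints are hypothetical:

```python
import json

def pack(message, recipient_key):
    # Toy stand-in for DIDComm message packing.
    return {"packed_for": recipient_key, "payload": json.dumps(message)}

def build_route(inner_packed, first_to, hops):
    """Wrap an already-packed message in a forward message per hop.
    hops is ordered from the hop nearest the recipient outward,
    e.g. [A, C, B] for the path B -> C -> A described above."""
    packed, to = inner_packed, first_to
    for key, endpoint in hops:
        packed = pack({
            "@type": "https://didcomm.org/routing/1.0/forward",
            "msg": packed,
            "to": to,
        }, key)
        to = endpoint
    return packed, to  # send `packed` to the outermost endpoint `to`

# Path B -> C -> A: A is nearest Bob, B is the entry point.
msg_for_bob = pack({"hello": "bob"}, "bob-key")
hops = [("key-A", "https://a/endpoint"),
        ("key-C", "https://c/endpoint"),
        ("key-B", "https://b/endpoint")]
outer, entry = build_route(msg_for_bob, "https://bob/endpoint", hops)
```

Each layer can only be opened by the intended hop, so no single cloud agent sees both the full route and the plaintext.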

    "},{"location":"concepts/0346-didcomm-between-two-mobile-agents/#message-forwarding-process-by-cloud-agents","title":"Message Forwarding process by cloud agents","text":"

When the message reaches cloud agent \"B\", it is first unpacked with cloud agent \"B\"'s private key. The agent then sees that the message type is \"forward\" and processes the message by taking the value of the \"msg\" attribute in the decrypted JSON and sending it to the URI in the \"to\" attribute.

Thus Cloud Agent \"B\" unpacks the message and forwards it to Cloud Agent \"C\", who again unpacks and forwards it to Cloud Agent \"A\". Cloud Agent \"A\" ultimately unpacks and forwards it to Bob's edge agent (for simplicity's sake, we are not describing how the message reaches Bob through Bob's registered cloud agents).
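A cloud agent's side of this exchange can be sketched as follows. Again a hedged toy model: unpack() stands in for real DIDComm unpacking, and the keys and endpoint are hypothetical:

```python
import json

FORWARD = "https://didcomm.org/routing/1.0/forward"

def unpack(packed, my_key):
    # Toy stand-in for unpacking with this agent's private key.
    if packed["packed_for"] != my_key:
        raise ValueError("not packed for this agent")
    return json.loads(packed["payload"])

def handle(packed, my_key, send):
    """Unpack an inbound message; if it is a forward, send the wrapped
    "msg" on to the "to" endpoint, otherwise deliver it locally."""
    msg = unpack(packed, my_key)
    if msg.get("@type") == FORWARD:
        send(msg["to"], msg["msg"])
        return "forwarded"
    return "delivered"

# Example: one forward hop addressed to this agent ("key-B").
inner = {"packed_for": "key-A", "payload": json.dumps({"@type": "other"})}
outer = {"packed_for": "key-B",
         "payload": json.dumps({"@type": FORWARD,
                                "to": "https://a/endpoint",
                                "msg": inner})}
sent = []
result = handle(outer, "key-B", lambda to, m: sent.append((to, m)))
```

Each hop repeats exactly this handling until the innermost message reaches the recipient's edge agent.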

    "},{"location":"concepts/0346-didcomm-between-two-mobile-agents/#bob-returns-a-response-back","title":"Bob returns a response back","text":"

When Bob receives the connection request message from Alice, he creates a connection acceptance response and sends it back to Alice at her service endpoint, which is

    \"serviceEndpoint\": \"https://cloudagenta.com/response/b1004443feff4f3cba25c45ef35b492c\"\n

For simplicity's sake, we are not describing how the message ends up at the above endpoint from Bob after multiple routing hops through Bob's cloud agents. When the message actually ends up at the service endpoint mentioned by Alice, which is the response endpoint of cloud agent \"A\", the cloud agent simply stores it in a distributed log (NEEDS A LINK TO KAFKA INBOX RFC) using the consumer id as a key.

    "},{"location":"concepts/0346-didcomm-between-two-mobile-agents/#alice-consumes-connection-accepted-response-from-bob","title":"Alice consumes connection accepted response from Bob","text":"

Alice's edge agent periodically checks the consumer endpoint of all the cloud agents it has registered with. For each cloud agent, Alice passes the unique consumer id that was used in registration so that the cloud agent can return the correct messages. When it does the same for cloud agent \"A\", it simply consumes the message from the distributed log.
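The store-then-consume step can be modeled minimally as a log keyed by consumer id. The storage shape here is an assumption for illustration; the RFC leaves the distributed log implementation open:

```python
# Toy model: the cloud agent's log of pending responses, keyed by the
# consumer id issued at registration (value from the invite example above).
log = {"b1004443feff4f3cba25c45ef35b492c": ["<packed response from Bob>"]}

def consume(log, consumer_id):
    """Return and clear any pending messages for this consumer id.
    Unknown or already-drained consumer ids yield an empty list."""
    return log.pop(consumer_id, [])
```

Because each cloud agent issues a different consumer id, a cloud agent can serve Alice's polls without being able to correlate her across the other agents she uses.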

    "},{"location":"concepts/0346-didcomm-between-two-mobile-agents/#drawbacks-and-alternatives","title":"Drawbacks and Alternatives","text":"

In other suggested message forwarding protocols, Alice would provide a list of routing keys and the endpoint of the first hop in the chain of cloud agents. That gives Alice confidence that Bob is forced to use the path she has provided. The proposed routing in this RFC lacks that confidence. In contrast, routing with a list of routing keys requires a lot of overhead to set up before establishing a connection. The proposed routing simplifies that overhead and provides more flexibility.

    "},{"location":"concepts/0346-didcomm-between-two-mobile-agents/#related-art","title":"Related art","text":"

Aries RFC 0046: Mediators and Relays

    "},{"location":"concepts/0346-didcomm-between-two-mobile-agents/#prior-art","title":"Prior art","text":""},{"location":"concepts/0346-didcomm-between-two-mobile-agents/#unresolved-questions","title":"Unresolved questions","text":"

    Does separation of a \"service endpoint\" and \"Consumer endpoint\" provide a point of correlation that can be avoided by handling all messages through a single service endpoint?

Could a cloud agent have its own pool of servers that looks into a registry of servers, randomly chooses an entry node, an exit node, and a set of hops, and passes the message along, with the exit node then passing the message to the next cloud agent?

    "},{"location":"concepts/0346-didcomm-between-two-mobile-agents/#implementations","title":"Implementations","text":"

    The following lists the implementations (if any) of this RFC. Please do a pull request to add your implementation. If the implementation is open source, include a link to the repo or to the implementation within the repo. Please be consistent in the \"Name\" field so that a mechanical processing of the RFCs can generate a list of all RFCs supported by an Aries implementation.

    Implementation Notes may need to include a link to test results.

    Name / Link Implementation Notes"},{"location":"concepts/0420-rich-schemas-common/","title":"0420: Rich Schema Objects Common","text":""},{"location":"concepts/0420-rich-schemas-common/#summary","title":"Summary","text":"

    A low-level description of the components of an anonymous credential ecosystem that supports rich schemas, W3C Verifiable Credentials and Presentations, and correspondingly rich presentation requests.

    Please see 0250: Rich Schema Objects for high-level description.

    This RFC provides more low-level description of Rich Schema objects defining how they are identified and referenced. It also defines a general template and common part for all Rich Schema objects.

    "},{"location":"concepts/0420-rich-schemas-common/#motivation","title":"Motivation","text":"

    Please see 0250: Rich Schema Objects for use cases and high-level description of why Rich Schemas are needed.

    This RFC serves as a low-level design of common parts between all Rich Schema objects, and can help developers to properly implement Rich Schema transactions on the Ledger and the corresponding client API.

    "},{"location":"concepts/0420-rich-schemas-common/#tutorial-general-principles","title":"Tutorial: General Principles","text":"

By Rich Schema objects we mean all objects related to the Rich Schema concept (Context, Rich Schema, Encoding, Mapping, Credential Definition, Presentation Definition).

Let's discuss a number of items common to all Rich Schema objects.

    "},{"location":"concepts/0420-rich-schemas-common/#components-and-repositories","title":"Components and Repositories","text":"

The complete architecture for every Rich Schema object involves three separate components: - aries-vdri: This is the location of the aries-verifiable-data-registry-interface. Changes to this code will enable users of any data registry with an aries-vdri-compatible data manager to handle Rich Schema objects. - Specific Verifiable Data Registry implementation (for example, indy-vdr). It needs to comply with the interface described by the aries-verifiable-data-registry-interface and is built to plug in to the aries ecosystem. It contains the code to communicate with a specific data registry (ledger).

    "},{"location":"concepts/0420-rich-schemas-common/#immutability-of-rich-schema-objects","title":"Immutability of Rich Schema Objects","text":"

    The following Rich Schema objects are immutable: - Context - Rich Schema - Encoding - Mapping

    The following Rich Schema objects can be mutable: - Credential Definition - Presentation Definition

    Credential Definition and Presentation Definition should be immutable in most of the cases, but some applications may consider them as mutable objects.

    Credential Definition can be considered a mutable object since the Issuer may rotate the keys it contains. However, rotation of the Issuer's keys should be done carefully, as it invalidates all credentials issued against the old key.

    Presentation Definition can be considered a mutable object since the restrictions on Issuers, Schemas, and Credential Definitions to be used in a proof may evolve. For example, an Issuer's key for a given Credential Definition may be compromised, so the Presentation Definition can be updated to exclude that Credential Definition from the list of recommended ones.

    Please note that some ledgers (the Indy Ledger, for example) have configurable auth rules which can restrict the mutability of particular objects, so it can be up to applications and network administrators to decide whether Credential Definition and Presentation Definition are mutable.

    "},{"location":"concepts/0420-rich-schemas-common/#identification-of-rich-schema-objects","title":"Identification of Rich Schema Objects","text":"

    The suggested identification scheme provides a unique identifier for any Rich Schema object. The DID's method name (for example, did:sov) makes it possible to identify Rich Schema objects with equal content within different data registries (ledgers).

    "},{"location":"concepts/0420-rich-schemas-common/#referencing-rich-schema-objects","title":"Referencing Rich Schema Objects","text":""},{"location":"concepts/0420-rich-schemas-common/#relationship","title":"Relationship","text":"

    A presentation definition may use only a subset of the attributes of a schema.

    "},{"location":"concepts/0420-rich-schemas-common/#usage-of-json-ld","title":"Usage of JSON-LD","text":"

    The following Rich Schema objects must be in JSON-LD format: - Schema - Mapping - Presentation Definition

    Context object can also be in JSON-LD format.

    If a Rich Schema object is a JSON-LD object, the content's @id field must be equal to the id.

    More details about JSON-LD usage may be found in the RFCs for specific rich schema objects.

    "},{"location":"concepts/0420-rich-schemas-common/#how-rich-schema-objects-are-stored-in-the-data-registry","title":"How Rich Schema objects are stored in the Data Registry","text":"

    Any write request for a Rich Schema object has the same fields:

    'id': <Rich Schema object's ID>                # DID string \n'content': <Rich Schema object as JSON>        # JSON-serialized string\n'rs_name': <rich schema object name>           # string\n'rs_version': <rich schema object version>     # string\n'rs_type': <rich schema object type>           # string enum (currently one of `ctx`, `sch`, `map`, `enc`, `cdf`, `pdf`)\n'ver': <format version>                        # string                              \n
    - id is a unique ID (for example, a DID with an id-string being the base58 representation of the SHA2-256 hash of the content field) - The content field here contains a Rich Schema object in JSON-LD format (see 0250: Rich Schema Objects). It's passed and stored as-is. The content field must be serialized in the canonical form. The canonicalization scheme we recommend is the IETF draft JSON Canonicalization Scheme (JCS). - metadata contains additional fields which can be used for human-readable identification - ver defines the version of the format. It defines what fields and metadata are present, how id is generated, what hash function is used, etc. - The Author's and Endorser's DIDs are also passed as common metadata fields for any Request.

    If a Rich Schema object is a JSON-LD object, the content's @id field must be equal to the id.
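    A minimal sketch of this identifier scheme. The JCS canonicalization is approximated here with sorted-key, minimal-separator JSON serialization (adequate for objects without floats), and the base58 encoder is hand-rolled to stay self-contained; the `did:sov` method name is just the example used above.

    ```python
    import hashlib
    import json

    BASE58_ALPHABET = "123456789ABCDEFGHJKLMNPQRSTUVWXYZabcdefghijkmnopqrstuvwxyz"

    def base58_encode(data: bytes) -> str:
        # Convert big-endian bytes to a base58 string.
        num = int.from_bytes(data, "big")
        encoded = ""
        while num > 0:
            num, rem = divmod(num, 58)
            encoded = BASE58_ALPHABET[rem] + encoded
        # Preserve any leading zero bytes as '1' characters.
        pad = len(data) - len(data.lstrip(b"\x00"))
        return "1" * pad + encoded

    def rich_schema_id(content: dict, method: str = "sov") -> str:
        # Approximation of JCS: deterministic serialization with sorted keys.
        canonical = json.dumps(content, sort_keys=True,
                               separators=(",", ":"), ensure_ascii=False)
        # id-string = base58(SHA2-256(canonical content)), per the scheme above.
        digest = hashlib.sha256(canonical.encode("utf-8")).digest()
        return f"did:{method}:{base58_encode(digest)}"
    ```

    Because the content is canonicalized before hashing, two writers who serialize the same object with different key orderings derive the same id, which is what makes the id comparable across ledgers.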

    "},{"location":"concepts/0420-rich-schemas-common/#querying-rich-schema-objects-from-the-data-registry","title":"Querying Rich Schema objects from the Data Registry","text":"

    The following information is returned from the Ledger in a reply for any get request of a Rich Schema object:

    'id': <Rich Schema object's ID>              # DID string \n'content': <Rich Schema object as JSON>      # JSON-serialized string\n'rs_name': <rich schema object name>         # string\n'rs_version': <rich schema object version>   # string\n'rs_type': <rich schema object type>         # string enum (currently one of `ctx`, `sch`, `map`, `enc`, `cdf`, `pdf`)\n'ver': <format version>                      # string\n'from': <author DID>,                        # DID string\n'endorser': <endorser DID>,                  # DID string\n

    Common fields specific to a Ledger are also returned.

    "},{"location":"concepts/0420-rich-schemas-common/#aries-data-registry-interface","title":"Aries Data Registry Interface","text":"

    We can have a unified API to write and read Rich Schema objects from a Data Registry. Three methods are sufficient to handle all Rich Schema types: - write_rich_schema_object - read_rich_schema_object_by_id - read_rich_schema_object_by_metadata

    "},{"location":"concepts/0420-rich-schemas-common/#write_rich_schema_object","title":"write_rich_schema_object","text":"

    Writes a Rich Schema object to the ledger.\n\n#Params\nsubmitter: information about submitter\ndata: {\n    id: Rich Schema object's unique ID for example a DID with an id-string being\n        base58 representation of the SHA2-256 hash of the `content` field),\n    content: Rich Schema object as a JSON or JSON-LD string,\n    rs_name: Rich Schema object name,\n    rs_version: Rich Schema object version,\n    rs_type: Rich schema object type's enum string (currently one of `ctx`, `sch`, `map`, `enc`, `cdf`, `pdf`),\n    ver: the version of the generic object template\n},\nregistry: identifier for the registry\n\n#Returns\nregistry_response: result as json,\nerror: {\n    code: aries common error code,\n    description:  aries common error description\n}\n
    The combination of rs_type, rs_name, and rs_version must be unique among all rich schema objects on the ledger.

    "},{"location":"concepts/0420-rich-schemas-common/#read_rich_schema_object_by_id","title":"read_rich_schema_object_by_id","text":"
    Reads a Rich Schema object from the ledger by its unique ID.\n\n#Params\nsubmitter (optional): information about submitter\ndata: {\n    id: Rich Schema object's ID (as a DID for example),\n    ver: the version of the generic object template\n},\nregistry: identifier for the registry\n\n#Returns\nregistry_response: result as json,\nerror: {\n    code: aries common error code,\n    description:  aries common error description\n}\n
    "},{"location":"concepts/0420-rich-schemas-common/#read_rich_schema_object_by_metadata","title":"read_rich_schema_object_by_metadata","text":"
    Reads a Rich Schema object from the ledger by its unique combination of (name, version, type)\n\n#Params\nsubmitter (optional): information about submitter\ndata: {\n    rs_name: Rich Schema object name,\n    rs_version: Rich Schema object version,\n    rs_type: Rich schema object type's enum string (currently one of `ctx`, `sch`, `map`, `enc`, `cdf`, `pdf`),\n    ver: the version of the generic object template\n},\nregistry: identifier for the registry\n\n#Returns\nregistry_response: result as json,\nerror: {\n    code: aries common error code,\n    description:  aries common error description\n}\n
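    The three methods above can be sketched as follows, using an in-memory dict in place of a real ledger; the class name and error codes are illustrative, not part of any Aries API. The sketch also enforces the uniqueness of the (rs_type, rs_name, rs_version) combination required above.

    ```python
    import json

    class InMemoryRichSchemaRegistry:
        """Toy stand-in for a Data Registry implementing the unified interface."""

        def __init__(self):
            self._by_id = {}        # id -> stored record
            self._by_metadata = {}  # (rs_type, rs_name, rs_version) -> id

        def write_rich_schema_object(self, submitter, data):
            key = (data["rs_type"], data["rs_name"], data["rs_version"])
            # Both the id and the (type, name, version) triple must be unique.
            if data["id"] in self._by_id or key in self._by_metadata:
                return None, {"code": "DUPLICATE",
                              "description": "id or (type, name, version) already on ledger"}
            record = dict(data, from_did=submitter)
            self._by_id[data["id"]] = record
            self._by_metadata[key] = data["id"]
            return json.dumps(record), None

        def read_rich_schema_object_by_id(self, data, submitter=None):
            record = self._by_id.get(data["id"])
            if record is None:
                return None, {"code": "NOT_FOUND", "description": "no such id"}
            return json.dumps(record), None

        def read_rich_schema_object_by_metadata(self, data, submitter=None):
            key = (data["rs_type"], data["rs_name"], data["rs_version"])
            obj_id = self._by_metadata.get(key)
            if obj_id is None:
                return None, {"code": "NOT_FOUND",
                              "description": "no such (type, name, version)"}
            return json.dumps(self._by_id[obj_id]), None
    ```

    Each method returns a (registry_response, error) pair, mirroring the response shape in the pseudocode above.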
    "},{"location":"concepts/0420-rich-schemas-common/#reference","title":"Reference","text":""},{"location":"concepts/0420-rich-schemas-common/#drawbacks","title":"Drawbacks","text":"

    Rich schema objects introduce more complexity.

    "},{"location":"concepts/0420-rich-schemas-common/#rationale-and-alternatives","title":"Rationale and alternatives","text":""},{"location":"concepts/0420-rich-schemas-common/#rich-schema-object-id","title":"Rich Schema object ID","text":"

    There are several options for how a Rich Schema object can be identified: - a DID unique to each Rich Schema - a DID URL with the origin (issuer's) DID as a base - a DID URL with a unique (not issuer-related) DID as a base - a UUID or other unique ID

    A UUID doesn't provide global resolvability. We cannot say which ledger a Rich Schema object belongs to by looking at its UUID.

    DIDs and DID URLs give persistence, global resolvability, and decentralization. We can resolve the DID and determine which ledger the Rich Schema object belongs to. We can also see that objects with the same id-string on different ledgers are the same object (if the id-string is calculated from a hash of the canonicalized content).

    However, a Rich Schema's DID doesn't have the cryptographic verifiability property of common DIDs, so in the general case it is a DID not associated with keys. This DID belongs to neither a person, an organization, nor a thing.

    Using the Issuer's DID (origin DID) as a base for a DID URL may be too Indy-specific, as other ledgers may not have an Issuer DID. It also links a Rich Schema object to an Issuer belonging to a particular ledger.

    So, we propose to use a unique DID for each Rich Schema object, as this gives a more natural way to identify an entity in the distributed ledger world.

    "},{"location":"concepts/0420-rich-schemas-common/#rich-schema-object-as-did-doc","title":"Rich Schema object as DID DOC","text":"

    If Rich Schema objects are identified by a unique DID, then a natural question is whether each Rich Schema object needs to be presented as a DID DOC and resolved by a DID in a generic way.

    We do not require Rich Schema objects to be defined as DID DOCs for now. We may reconsider this in the future once the DID DOC format is finalized.

    "},{"location":"concepts/0420-rich-schemas-common/#unresolved-questions","title":"Unresolved questions","text":""},{"location":"concepts/0420-rich-schemas-common/#implementations","title":"Implementations","text":"

    The following lists the implementations (if any) of this RFC. Please do a pull request to add your implementation. If the implementation is open source, include a link to the repo or to the implementation within the repo. Please be consistent in the \"Name\" field so that a mechanical processing of the RFCs can generate a list of all RFCs supported by an Aries implementation.

    Name / Link Implementation Notes"},{"location":"concepts/0430-machine-readable-governance-frameworks/","title":"Aries RFC 0430: Machine-Readable Governance Frameworks","text":""},{"location":"concepts/0430-machine-readable-governance-frameworks/#summary","title":"Summary","text":"

    Explains how governance frameworks are embodied in formal data structures, so it's possible to react to them with software, not just with human intelligence.

    "},{"location":"concepts/0430-machine-readable-governance-frameworks/#motivation","title":"Motivation","text":"

    We need to be able to write software that reacts to arbitrary governance frameworks in standard ways. This will allow various desirable features.

    "},{"location":"concepts/0430-machine-readable-governance-frameworks/#tutorial","title":"Tutorial","text":"

    A governance framework (also called a trust framework in some contexts) is a set of rules that establish trust about process (and indirectly, about outcomes) in a given context. For example, the rules that bind buyers, merchants, vendors, and a global credit card company like Mastercard or Visa constitute a governance framework in a financial services context \u2014 and they have a corresponding trust mark to make the governance framework's relevance explicit. The rules by which certificate authorities are vetted and accepted by browser manufacturers, and by which CAs issue derivative certificates, constitute a governance framework in a web context. Trust frameworks are like guy wires: they balance opposing forces to produce careful alignment and optimal behavior.

    Decentralized identity doesn't eliminate all forms of centralized authority, but its opt-in collaboration, openness, and peer orientation makes the need for trust rules particularly compelling. Somehow, a community needs to agree on answers to questions like these:

    "},{"location":"concepts/0430-machine-readable-governance-frameworks/#sample-questions-answered-in-a-trust-framework","title":"Sample Questions Answered in a Trust Framework","text":"

    Many industry groups are exploring these questions, and are building careful documentation of the answers they produce. It is not the purpose of this RFC to duplicate or guide such work. Rather, it's our goal to answer a secondary question:

    "},{"location":"concepts/0430-machine-readable-governance-frameworks/#the-question-tackled-by-this-rfc","title":"The Question Tackled By This RFC","text":"

    How can answers to these questions be represented so they are consumable as artifacts of software policy?

    When we have good answers to this question, we can address feature requests like the following:

    "},{"location":"concepts/0430-machine-readable-governance-frameworks/#desirable-features","title":"Desirable Features","text":""},{"location":"concepts/0430-machine-readable-governance-frameworks/#sample-data-structure","title":"Sample Data Structure","text":"

    Trust frameworks generally begin as human-friendly content. They have to be created, reviewed, and agreed upon by experts from various disciplines: legal, business, humanitarian, government, trade groups, advocacy groups, etc. Developers can help by surfacing how rules are (or are not) susceptible to modeling in formal data structures. This can lead to an iterative process, where data structures and human conversations create refinement pressure on each other until the framework is ready for release.

    [TODO: The following blurb of JSON is one way to embody what we're after. I can imagine other approaches but haven't thought them through in detail. I'm less interested in the details of the JSON, for now, than in the concepts we're trying to communicate and automate. So have a conversation about whether this format works for us, or should be tweaked/replaced.]

    Each problem domain will probably have unique requirements. Therefore, we start with a general governance framework recipe, but plan for extension. We use JSON-LD for this purpose. Here we present a simple example for the problem domain of university credentials in Germany. It manifests just the components of a governance framework that are common across all contexts; additional JSON-LD @context values can be added to introduce more structure as needed. (See Field Details for explanatory comments.)

    {\n    \"@context\": [\n        // The first context must be this RFC's context. It defines core properties.\n        \"https://github.com/hyperledger/aries-rfcs/blob/main/concepts/0430-machine-readable-governance-frameworks/context.jsonld\", \n        // Additional contexts can be added to extend.\n        \"https://kmk.org/uni-accred-trust-fw\"\n    ],\n    \"name\": \"Universit\u00e4tsakkreditierung\",\n    \"version\": \"1.0\",\n    \"logo\": \"http://kmk.org/uni-accred-trust-fw/logo.png\",\n    \"description\": \"Governs accredited colleges and universities in Germany.\",\n    \"docs_uri\": \"https://kmk.org/uni-accred-trust-fw/v1\",\n    \"data_uri\": \"https://kmk.org/uni-accred-trust-fw/v1/tf.json\",\n    \"topics\": [\"education\"],\n    \"jurisdictions\": [\"de\", \"eu\"],\n    \"geos\": [\"Deutschland\"],\n    \"roles\": [\"accreditor\", \"school\", \"graduate\", \"safe-verifier\"],\n    \"privileges\": [\n        {\"name\": \"accredit\", \"uri\": \"http://kmk.org/tf/accredit\"},\n        {\"name\": \"issue-edu\", \"uri\": \"http://kmk.org/tf/issue-edu\"},\n        {\"name\": \"hold-edu\", \"uri\": \"http://kmk.org/tf/hold-edu\"},\n        {\"name\": \"request-proof\", \"uri\": \"http://kmk.org/tf/request-proof\"}\n    ],\n    \"duties\": [\n        {\"name\": \"safe-accredit\", \"uri\": \"http://kmk.org/tf/responsible-accredit\"},\n        {\"name\": \"GDPR-dat-control\", \"uri\": \"http://europa.eu/gdpr/trust-fw/gdpr-data-controller\"},\n        {\"name\": \"GDPR-edu-verif\", \"uri\": \"http://kmk.org/tf/gdpr-verif\"},\n        {\"name\": \"accept-kmk-tos\", \"uri\": \"http://kmk.org/tf/tos\"}\n    ],\n    \"define\": [\n        {\"name\": \"KMK\", \"id\": \"did:example:abc123\"},\n        {\"name\": \"KMK\", \"id\": \"did:anotherexample:def456\"}\n    ], \n    \"rules\": [\n        {\"grant\": [\"accredit\"], \"when\": {\"name\": \"KMK\"},\n            \"duties\": [\"safe-accredit\"]},\n        {\"grant\": [\"issue-edu\"], 
\"when\": {\n                // Proof request (see RFC 0037) specifying that\n                // institution is accredited by KMK.\n            },\n            // Any party who fulfills these criteria is considered\n            // to have the \"school\" role.\n            \"thus\": [\"school\"],\n            // And is considered to have the \"GDPR-dat-control\" duty.\n            \"duties\": [\"GDPR-dat-control\", \"accept-kmk-tos\"]\n        },\n        {\"grant\": \"hold-edu\", \"when\": {\n                // Proof request specifying that holder is a human.\n                // The presence of this item in the GF means that\n                // conforming issuers are supposed to verify\n                // humanness before issuing. Issuers can impose\n                // additional criteria; this is just the base\n                // requirement.\n            },\n            // Any party who fulfills these criteria is considered\n            // to qualify for the \"graduate\" role.\n            \"thus\": \"graduate\",\n            \"duties\": [\"accept-kmk-tos\"]\n        },\n        // In this governance framework, anyone can request proof based\n        // on credentials, as long as they demonstrate that they possess\n        // an \"approved verifier\" credential.\n        {\n            \"grant\": \"request-proof\", \"when\": {\n                // Proof request specifying that the party must possess\n                // a credential that makes them an approved verifier.\n                // The presence of this item in the GF means that\n                // provers should, at a minimum, verify the verifiers\n                // in this way before sharing proof. 
    Provers can impose\n                // additional criteria of their own; this is just the\n                // base requirement.\n            }, \"thus\": \"safe-verifier\",\n            \"duties\": [\"GDPR-edu-verif\", \"accept-kmk-tos\"]\n        }\n    ],\n    // Is there an authority that audits interactions?\n    \"audit\": {\n        // Where should reports be submitted via http POST?\n        \"uri\": \"http://kmk.org/audit\",\n        // How likely is it that a given interaction needs to\n        // be audited? Each party in the interaction picks a\n        // random number between 0 and 1, inclusive; if the number\n        // is <= this number, then that party submits a report about it.\n        \"probability\": \"0.01\"\n    },\n    // Is there an authority to whom requests for redress can\n    // be made, if one party feels like another violates\n    // the governance framework? \n    \"redress\": {\n        \"uri\": \"http://kmk.org/redress\"\n    }\n}\n
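    The audit sampling rule described in the sample (each party draws a random number between 0 and 1 and submits a report when the draw is <= the configured probability) can be sketched as follows; the function name is illustrative, and note that the sample stores the probability as a string, so it must be converted first.

    ```python
    import random

    def should_audit(probability: float, draw=random.random) -> bool:
        # Each party in the interaction independently draws a random number
        # in [0, 1); it submits an audit report iff the draw is <= probability.
        return draw() <= probability

    # Usage against the governance framework data structure (hypothetical dict):
    # framework = {"audit": {"uri": "http://kmk.org/audit", "probability": "0.01"}}
    # if should_audit(float(framework["audit"]["probability"])):
    #     ... POST a report to framework["audit"]["uri"] ...
    ```

    With probability 0.01, roughly one interaction in a hundred gets reported by each party, which keeps audit load low while still sampling the ecosystem.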
    "},{"location":"concepts/0430-machine-readable-governance-frameworks/#using-the-sample","title":"Using the Sample","text":"

    Let's look at how the above structure can be used to influence behavior of verifiable credential management software, and the parties that use it.

    We begin by noticing that KMK (KultusMinisterKonferenz), the accrediting body for universities in Germany, has a privileged role in this governance framework. It is given the right to operate as \"KMK\" as long as it proves control of one of the two DIDs named in the define array.

    We posit an issuer, Faber College, that wants to issue credentials compliant with this governance framework. This means that Faber College wants the issue-edu privilege defined at http://kmk.org/tf/issue-edu (see the second item in the privileges array). It wants to create credentials that contain the following field: \"trust_framework\": \"https://kmk.org/uni-accred-trust-fw/v1/tf.json\" (see the data_uri field). It wants to have a credential from KMK proving its accreditation (see the second item in the rules array).

    Faber is required by this governance framework to accept the terms of service published at http://kmk.org/tf/tos, because it can't get the issue-edu privilege without incurring that duty (see the accept-kmk-tos duty in the second item in the rules array). KMK by implication incurs the obligation to enforce these terms of service when it issues a credential attesting Faber's accreditation and compliance with the governance framework.

    Assuming that Faber proceeds and satisfies KMK, Faber is now considered a school as far as this governance framework is concerned.

    Now, let us suppose that Alice, a student at Faber, wants to get a diploma as a verifiable credential. In addition to whatever else Faber does before it gives Alice a diploma, Faber is obligated by the governance framework to challenge Alice to prove she's a human being (see when in the third item of the rules array). Hopefully this is easy, and was done long before graduation. :-) It is also obligated to introduce Alice to the terms of service for KMK, since Alice will be acquiring the graduate role and this rule has the accept-kmk-tos duty. How Faber does this is something that might be clarified in the terms of service that Faber already accepted; we'll narrate one possible approach.

    Alice is holding a mobile app that manages credentials for her. She clicks an invitation to receive a credential in some way. What she sees next on her screen might look something like this:

    Her app knew to display this message because the issuer, Faber College, communicated its reliance on this governance framework (by referencing its data_uri) as part of an early step in the issuance process (e.g., in the invitation or in the offer-credential message). Notice how metadata from the governance framework \u2014 its title, version, topics, and descriptions \u2014 show up in the prompt. Notice as well that governance frameworks have reputations. This helps users determine whether the rules are legitimate and worth using. The \"More Info\" tab would link to the governance framework's docs_uri page.

    Alice doesn't have to re-accept the governance framework if she's already using it (e.g., if she already activated it in her mobile app because it's relevant to other credentials she holds). As a person works regularly within a particular credential domain, decisions like these will become cached and seamless. However, we're showing the step here, for completeness.

    Suppose that Alice accepts the proposed rules. The governance framework requires that she also accept the KMK terms of service. These might require her to report any errors in her credential promptly, and clarify that she has the right to appeal under certain conditions (see the redress section of the governance framework data structure). They might also discuss the KMK governance framework's requirement for random auditing (see the audit section).

    A natural way to introduce Alice to these topics might be to combine them with a normal \"Accept terms of service\" screen for Faber itself. Many issuers are likely to ask holders to agree to how they want to manage revocation, privacy, and GDPR compliance; including information about terms that Faber inherited from the governance framework would be an easy addition.

    Suppose, therefore, that Alice is next shown a \"Terms of Service\" screen like the following.

    Note the hyperlink back to the governance framework; if Alice already accepted the governance framework in another context, this helps her know what governance framework is in effect for a given credential.

    After Alice accepts the terms, she now proceeds with the issuance workflow. For the most part, she can forget about the governance framework attached to her credential \u2014 but the software doesn't. Some of the screens it might show her, because of information that it reads in the governance framework, include things like:

    Or, alternatively:

    In either case, proof of the issuer's qualifications was requested automatically, using canned criteria (see the second item in the governance framework's rules array).

    A similar kind of check can be performed on verifiers:

    Or, alternatively:

    Trust framework knowledge can also be woven into other parts of a UI, as for example:

    And:

    And:

    The point here is not the specifics in the UI we're positing. Different UX designers may make different choices. Rather, it's that by publishing a carefully versioned, machine-readable governance framework, such UIs become possible. The user's experience becomes less about individual circumstances, and more about general patterns that have known reputations, dependable safeguards, and so forth.

    "},{"location":"concepts/0430-machine-readable-governance-frameworks/#versioning","title":"Versioning","text":"

    Trust framework data structures follow semver rules:

    "},{"location":"concepts/0430-machine-readable-governance-frameworks/#localization","title":"Localization","text":"

    Trust frameworks can offer localized alternatives of text using the same mechanism described in RFC 0043: l10n; treat the governance framework JSON as a DIDComm message and use decorators as it describes.

    "},{"location":"concepts/0430-machine-readable-governance-frameworks/#reference","title":"Reference","text":"

    We've tried to make the sample JSON above self-describing. All fields are optional except the governance framework's name, version, data_uri, and at least one define or rules item to confer some trust.

    "},{"location":"concepts/0430-machine-readable-governance-frameworks/#field-details","title":"Field Details","text":""},{"location":"concepts/0430-machine-readable-governance-frameworks/#name","title":"name","text":"

    A short descriptive string that explains the governance framework's purpose and focus. Extends http://schema.org/name.

    "},{"location":"concepts/0430-machine-readable-governance-frameworks/#version","title":"version","text":"

    A semver-formatted value. Typically only major and minor segments are used, but patch should also be supported if present. Extends http://schema.org/version with the major/minor semantics discussed under Versioning above.
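    A sketch of how a consumer might parse and compare such values. The compatibility rule here (same major segment means no breaking changes) is the common semver reading, assumed rather than quoted from this RFC's rule list; function names are illustrative.

    ```python
    def parse_version(ver: str) -> tuple:
        # Accepts "1.0" or "1.0.3"; the patch segment is optional
        # and defaults to 0 when absent.
        parts = ver.split(".")
        return (int(parts[0]), int(parts[1]),
                int(parts[2]) if len(parts) > 2 else 0)

    def compatible(known: str, offered: str) -> bool:
        # Common semver reading: a matching major segment signals no
        # breaking changes, so software written against `known` can
        # process a framework published at `offered`.
        return parse_version(offered)[0] == parse_version(known)[0]
    ```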

    "},{"location":"concepts/0430-machine-readable-governance-frameworks/#logo","title":"logo","text":"

    A URI that references something visually identifying for this framework, suitable for display to a user. Extends http://schema.org/logo.

    "},{"location":"concepts/0430-machine-readable-governance-frameworks/#description","title":"description","text":"

    Longer explanatory comment about the purpose and scope of the framework. Extends http://schema.org/description.

    "},{"location":"concepts/0430-machine-readable-governance-frameworks/#docs_uri","title":"docs_uri","text":"

    Where is this governance framework officially published in human-readable form? A human should be able to browse here to learn more. Extends http://schema.org/url.

    "},{"location":"concepts/0430-machine-readable-governance-frameworks/#data_uri","title":"data_uri","text":"

    Where is this governance framework officially published as a machine-readable data structure? A computer should be able to GET this JSON (MIME type = application/json) at the specified URI.

    "},{"location":"concepts/0430-machine-readable-governance-frameworks/#topics","title":"topics","text":"

    In which problem domains is this governance framework relevant? Think of these like hash tags; they constitute a loose, overlapping topic cloud rather than a normative taxonomy; the purpose is to facilitate search.

    "},{"location":"concepts/0430-machine-readable-governance-frameworks/#geos","title":"geos","text":"

    In which geographies is this governance framework relevant? May be redundant with jurisdictions in many cases.

    "},{"location":"concepts/0430-machine-readable-governance-frameworks/#jurisdictions","title":"jurisdictions","text":"

    In which legal jurisdictions is this governance framework relevant? Values here should use an ISO 3166-1 alpha-2 country code, possibly narrowed to a standard province and even county/city using > as the narrowing character, plus standard abbreviations where useful: us>tx>houston for \"Houston, Texas, USA\" or ca>qc for the province of Quebec in Canada.
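    Splitting such a value on the > narrowing character is straightforward; the key names below (country/province/locality) are illustrative labels chosen for this sketch, not defined by the RFC.

    ```python
    def parse_jurisdiction(value: str) -> dict:
        # Split on the ">" narrowing character; narrower segments are
        # optional, so shorter values simply yield fewer keys.
        labels = ("country", "province", "locality")
        return dict(zip(labels, value.split(">")))
    ```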

    "},{"location":"concepts/0430-machine-readable-governance-frameworks/#roles","title":"roles","text":"

    Names all the roles that are significant to understanding interactions in this governance framework. These map to X in rules like \"X can do Y if Z.\"

    "},{"location":"concepts/0430-machine-readable-governance-frameworks/#privileges","title":"privileges","text":"

    Names all the privileges that are significant to understanding interactions in this governance framework. These map to Y in rules like \"X can do Y if Z.\" Each privilege is defined for humans at the specified URI, so a person can understand what it entails.

    "},{"location":"concepts/0430-machine-readable-governance-frameworks/#duties","title":"duties","text":"

    Names all the duties that are significant to understanding interactions in this governance framework. Each duty is defined for humans at the specified URI, so a person can understand what it entails.

    "},{"location":"concepts/0430-machine-readable-governance-frameworks/#define","title":"define","text":"

    Uses an array of {\"name\":x, \"id\": did value} objects to define key participants in the ecosystem.

    "},{"location":"concepts/0430-machine-readable-governance-frameworks/#rules","title":"rules","text":"

    Uses SGL syntax to describe role-based rules of behavior like \"X can do Y if Z,\" where Z is a criterion following \"when\".

    Another sample governance framework (including the human documentation that would accompany the data structure) is presented as part of the discussion of guardianship in RFC 0103.

    "},{"location":"concepts/0430-machine-readable-governance-frameworks/#drawbacks","title":"Drawbacks","text":""},{"location":"concepts/0430-machine-readable-governance-frameworks/#timing","title":"Timing?","text":"

    It may be early in the evolution of the ecosystem to attempt to standardize governance framework structure. (On the other hand, if we don't standardize now, we may be running the risk of unwise divergence.)

    "},{"location":"concepts/0430-machine-readable-governance-frameworks/#overkill","title":"Overkill?","text":"

    Joe Andrieu has pointed out on W3C CCG mailing list discussions that some important use cases for delegation involve returning to the issuer of a directed capability to receive the intended privilege. This contrasts with the way verifiable credentials are commonly used (across trust domain boundaries).

    Joe notes that governance frameworks are unnecessary (and perhaps counterproductive) for the simpler, within-boundary case; if the issuer of a directed capability is also the arbiter of trust in the end, credentials may be overkill. To the extent that Joe's insight applies, it may suggest that formalizing governance framework data structures is also overkill in some use cases.

    "},{"location":"concepts/0430-machine-readable-governance-frameworks/#rationale-and-alternatives","title":"Rationale and alternatives","text":""},{"location":"concepts/0430-machine-readable-governance-frameworks/#prior-art","title":"Prior art","text":"

    Some of the work on consent receipts, both in the Kantara Initiative and here in RFC 0167, overlaps to a small degree. However, this effort and that one are mainly complementary rather than conflicting.

    "},{"location":"concepts/0430-machine-readable-governance-frameworks/#unresolved-questions","title":"Unresolved questions","text":""},{"location":"concepts/0430-machine-readable-governance-frameworks/#implementations","title":"Implementations","text":"

    The following lists the implementations (if any) of this RFC. Please do a pull request to add your implementation. If the implementation is open source, include a link to the repo or to the implementation within the repo. Please be consistent in the \"Name\" field so that a mechanical processing of the RFCs can generate a list of all RFCs supported by an Aries implementation.

    Implementation Notes may need to include a link to test results.

    Name / Link Implementation Notes"},{"location":"concepts/0430-machine-readable-governance-frameworks/gov-fw-covid-19/","title":"Gov fw covid 19","text":"
    {\n    \"@context\": [\n        \"https://github.com/hyperledger/aries-rfcs/blob/main/concepts/0430-machine-readable-governance-frameworks\", \n        \"https://fightthevirus.org/covid19-fw\"\n    ],\n    \"name\": \"COVID-19 Creds\"\n    \"1.0\",\n    \"description\": \"Which health-related credentials can be trusted for which levels of assurance, given which assumptions.\",\n    \"docs_uri\": \"http://fightthevirus.org/covid19-fw/v1\",\n    \"data_uri\": \"http://fightthevirus.org/covid19-fw/v1/tf.json\",\n    \"topics\": [\"health\", \"public safety\"],\n    \"jurisdictions\": [\"us\", \"uk\", \"eu\"],\n    \"roles\": [\"healthcare-provider\", \"healthcare-worker\", \"patient\"],\n    \"privileges\": [\n        {\"name\": \"travel\", \"uri\": \"http://ftv.org/tf/travel\"},\n        {\"name\": \"receive-healthcare\", \"uri\": \"http://ftv.org/tf/be-patient\"},\n        {\"name\": \"tlc-fragile\", \"uri\": \"http://ftv.org/tf/tlc\"},\n        {\"name\": \"visit-hot-zone\", \"uri\": \"http://ftv.org/tf/visit\"}\n    ],\n    // Name all the duties that are significant to understanding\n    // interactions in this governance framework. Each duty is defined for humans\n    // at the specified URI, so a person can understand what it\n    // entails.\n    \"duties\": [\n        {\"name\": \"safe-accredit\", \"uri\": \"http://kmk.org/tf/responsible-accredit\"},\n        {\"name\": \"GDPR-dat-control\", \"uri\": \"http://europa.eu/gdpr/trust-fw/gdpr-data-controller\"}\n        {\"name\": \"GDPR-edu-verif\", \"uri\": \"http://kmk.org/tf/gdpr-verif\"}\n        {\"name\": \"accept-kmk-tos\", \"uri\": \"http://kmk.org/tf/tos\"}\n    ],\n    // Use DIDs to define key participants in the ecosystem. 
    KMK is\n    // the accreditation authority for higher education in Germany.\n    // Here we show it using two different DIDs.\n    \"define\": [\n        {\"name\": \"KMK\", \"id\": \"did:example:abc123\"},\n        {\"name\": \"KMK\", \"id\": \"did:anotherexample:def456\"}\n    ], \n    // Describe role-based rules of behavior like \"X can do Y if Z,\"\n    // where Z is a criterion following \"when\".\n    \"rules\": [\n        {\"grant\": [\"accredit\"], \"when\": {\"name\": \"KMK\"},\n            \"duties\": [\"safe-accredit\"]},\n        {\"grant\": [\"issue-edu\"], \"when\": {\n                // Proof request (see RFC 0037) specifying that\n                // institution is accredited by KMK.\n            },\n            // Any party who fulfills these criteria is considered\n            // to have the \"school\" role.\n            \"thus\": [\"school\"],\n            // And is considered to have the \"GDPR-dat-control\" duty.\n            \"duties\": [\"GDPR-dat-control\", \"accept-kmk-tos\"]\n        },\n        {\"grant\": \"hold-edu\", \"when\": {\n                // Proof request specifying that holder is a human.\n                // The presence of this item in the TF means that\n                // conforming issuers are supposed to verify\n                // humanness before issuing. Issuers can impose\n                // additional criteria; this is just the base\n                // requirement.\n            },\n            // Any party who fulfills these criteria is considered\n            // to qualify for the \"graduate\" role.\n            \"thus\": \"graduate\",\n            \"duties\": [\"accept-kmk-tos\"]\n        },\n        // In this governance framework, anyone can request proof based\n        // on credentials. 
    No criteria are tested to map an entity\n        // to the \"anyone\" role.\n        {\n            \"grant\": \"request-proof\", \"thus\": \"anyone\",\n            \"duties\": [\"GDPR-edu-verif\", \"accept-kmk-tos\"]\n        }\n    ],\n    // Is there an authority that audits interactions?\n    \"audit\": {\n        // Where should reports be submitted via http POST?\n        \"uri\": \"http://kmk.org/audit\",\n        // How likely is it that a given interaction needs to\n        // be audited? Each party in the interaction picks a\n        // random number between 0 and 1, inclusive; if the number\n        // is <= this number, then that party submits a report about it.\n        \"probability\": \"0.01\"\n    },\n    // Is there an authority to whom requests for redress can\n    // be made, if one party feels like another violates\n    // the governance framework? \n    \"redress\": {\n        \"uri\": \"http://kmk.org/redress\"\n    }\n}   \n
    "},{"location":"concepts/0440-kms-architectures/","title":"0440: KMS Architectures","text":""},{"location":"concepts/0440-kms-architectures/#summary","title":"Summary","text":"

    A Key Management Service (KMS) is designed to protect sensitive agent information such as keys, credentials, protocol state, and other data. User authentication, access control policies, and cryptography are used in various combinations to mitigate threat models and minimize risk. However, how to do this in practice is not intuitive, and doing it incorrectly results in flawed or weak designs. This RFC proposes best practices for designing a KMS that offers implementers reasonable tradeoffs in flexibility along with strong data security and privacy guarantees.

    "},{"location":"concepts/0440-kms-architectures/#motivation","title":"Motivation","text":"

    A KMS needs to be flexible to support the various needs that arise when implementing agents. Mobile device needs are very different from an enterprise server environment, but ultimately the secrets still need to be protected in all environments. Some KMSs have already been implemented but fail to consider all the threat models that exist within their designs. Some overlook good authentication schemes. Some misuse cryptography, making the implementation insecure. A good KMS should provide the ability to configure alternative algorithms that are validated against specific standards like the Federal Information Processing Standards (FIPS). This RFC is meant to reduce the chances that an insecure implementation could be deployed while raising awareness of the principles used in more secure designs.

    "},{"location":"concepts/0440-kms-architectures/#tutorial","title":"Tutorial","text":"

    A KMS can be broken into three main components with each component having potential subcategories. These components are designed to handle specific use cases and should be plug-and-play. The components are listed below and described in detail in the following sections:

    1. Enclave -
      • Safeguards cryptographic keys
      • Key generation
      • Encryption
      • Digital signatures
      • Key exchange
      • Proof generation and verification
    2. Persistence -
      • Stores non-key data
        • Verifiable credentials
        • Protocol states
        • DID documents
        • Other metadata
    3. LOX -
      • User authentication
      • Access control enforcement
      • Session/context establishment and management for the other two layers
    "},{"location":"concepts/0440-kms-architectures/#architecture","title":"Architecture","text":"

    LOX sits between clients and the other subsystems. LOX asks the Enclave to perform specific cryptographic operations and may pass the results to clients or to Persistence, or may consume the results itself. The persistence layer and the enclave layer never interact with each other directly.

    "},{"location":"concepts/0440-kms-architectures/#lox","title":"LOX","text":"

    LOX is the first layer KMS consumers will encounter and where the bulk of KMS work for implementers happens. LOX is divided into the following subcomponents that are not mutually exclusive:

    1. Authentication - Credentials for accessing the KMS and how to communicate with it. Username/passwords, PINs, cryptographic keys, verifiable credentials, key fobs, key cards, and OpenID Connect are common methods for this layer. Any sensitive data handled in this layer should be cleared from memory promptly to minimize its footprint.
    2. Access control - Policies that indicate who can access the data and how data are handled.
    3. Audit - Logging of who does what and when, and how verbose the details are

    Connecting to a KMS is usually done using functional system or library API calls, physical means like USB, or networks like Bluetooth, SSH, or HTTPS. These connections should be secured using encryption techniques like TLS, SSH, or Signal to prevent eavesdropping on end users' authentication credentials. This is often the most vulnerable part of the system because it's easy to design something with weak security, like 6-character passwords sent in plaintext. It is preferable to use keys and multi-factor authentication techniques for connecting to LOX. Since password-based sign-ins are the most common, the following is a list of good methods for handling them.

    "},{"location":"concepts/0440-kms-architectures/#use-hashing-specially-designed-for-passwords","title":"Use hashing specially designed for passwords","text":"

    Password-based hashes are designed to be slow so the outputs are not easily subjected to a brute-force dictionary attack. Simply hashing the password with cryptographic algorithms like SHA-2/SHA-3/BLAKE2 is not good enough. Below is a list of approved algorithms:

    1. PBKDF2
    2. Bcrypt
    3. Scrypt
    4. Argon2

    The settings for each of these algorithms should be chosen so that even the strongest hardware takes about 1-2 seconds to compute a hash. The recommended settings in each section also apply to mobile devices.

    PBKDF2

    Many applications use PBKDF2, which is NIST approved. However, PBKDF2 can use any SHA-family hash algorithm, so it can be made weak if paired with SHA-1, for example. When using PBKDF2, choose SHA2-512, which is significantly slower on current GPUs. The PBKDF2 parameters are the underlying hash function, the salt, the iteration count, and the derived key length.
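
As a concrete sketch, PBKDF2-HMAC-SHA512 password hashing can be done with the Python standard library alone. The 210,000 iteration count below is an illustrative starting point (not a value from this RFC); calibrate it so one derivation takes about 1-2 seconds on the strongest hardware you expect.

```python
import hashlib
import os

def hash_password(password: str, salt: bytes = None, iterations: int = 210_000):
    """Derive a 64-byte digest with PBKDF2-HMAC-SHA512; returns (salt, digest)."""
    if salt is None:
        salt = os.urandom(16)  # unique random salt per password
    digest = hashlib.pbkdf2_hmac("sha512", password.encode(), salt, iterations)
    return salt, digest

# Store (salt, digest); verification repeats the derivation with the stored salt.
salt, digest = hash_password("correct horse battery staple")
_, check = hash_password("correct horse battery staple", salt)
assert digest == check
```

Because each password gets a fresh random salt, hashing the same password twice yields different digests, which defeats precomputed dictionary tables.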

    Bcrypt

    Bcrypt is a functional variant of the Blowfish cipher designed specifically for password hashing. Unfortunately, passwords are limited to the first 72 bytes; any extra bytes are ignored. The recommended number of rounds is \u226514; Bcrypt can support up to 31 rounds. Bcrypt is less resistant to ASIC and GPU attacks because it uses constant memory, making it easier to build hardware-accelerated password crackers. When properly configured, Bcrypt is considered secure and is widely used in practice.

    Scrypt

    Scrypt is designed to make large-scale custom hardware attacks costly by requiring large amounts of memory. It is memory-intensive on purpose to prevent GPU, ASIC, and FPGA attacks. Scrypt takes three parameters: the CPU/memory cost N, the block size r, and the parallelization factor p.

    The memory in Scrypt is accessed in strongly dependent order at each step, so the memory access speed is the algorithm's bottleneck. The memory required is calculated as 128 * N * r * p bytes. Example: 128*16384*8*1 = 16MB

    Choosing parameters depends on how much waiting is desired and what level of security (cracking resistance) should be achieved.

    MyEtherWallet uses N=8192, r=8, p=1. This is not considered strong enough for crypto wallets. Parameters of N=16384, r=8, p=1 (RAM = 16MB) typically take around 0.5 seconds and are used for interactive sign-ins. This doesn't hurt server-side performance too much, so many users can log in at the same time. N=1048576, r=8, p=1 (RAM = 1GB) takes around 2-3 seconds. Scrypt is considered highly secure when properly configured.
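
A minimal sketch with Python's standard library `hashlib.scrypt` illustrates the interactive-login parameters and the `128 * N * r * p` memory formula from the text (the passphrase and `maxmem` headroom below are illustrative assumptions):

```python
import hashlib
import os

N, r, p = 16384, 8, 1                # interactive sign-in parameters
required = 128 * N * r * p           # memory formula from the text
assert required == 16 * 1024 * 1024  # 16 MB

salt = os.urandom(16)
# maxmem gives OpenSSL headroom above the 16 MB the parameters demand.
key = hashlib.scrypt(b"a long passphrase", salt=salt, n=N, r=r, p=p,
                     maxmem=64 * 1024 * 1024, dklen=32)
assert len(key) == 32
```

Raising N to 1048576 pushes the same formula to 1 GB, which is why that setting is reserved for the most sensitive, non-interactive derivations.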

    Argon2

    Argon2 is optimized for the x86 architecture and exploits the cache and memory layouts of modern Intel and AMD processors. It won the Password Hashing Competition and is recommended over PBKDF2, Bcrypt, and Scrypt. It is not recommended for ARM ABIs, where performance tends to be much slower. That performance hit may seem desirable, but what tends to happen is that parameters are configured lower to keep ARM environments reasonable, and the resulting hashes can then be cracked 2-3 times faster when brute-forced on x86. Argon2 comes in three flavors (Argon2d, Argon2i, and Argon2id) and uses three parameters: parallelism p, memory size m, and iteration count n.

    Parameters p=2, m=65536, n=128 typically take around 0.5 seconds and are used for interactive sign-ins. Moderate parameters (p=3, m=262144, n=192) typically take around 2-3 seconds, and sensitive parameters are p=4, m=1048576, n=256. Always time them in your own environments.

    "},{"location":"concepts/0440-kms-architectures/#session-establishment","title":"Session establishment","text":"

    Upon client authentication, LOX should establish connections to the enclave and persistence components; these connections should appear opaque to the client. LOX may need to authenticate to the enclave or persistence component, depending on where implementers store client access credentials. It is preferable to store these in keychains or keystores where possible, where access is mediated by the operating system and can include stronger mechanisms like TouchID or FaceID and hardware tokens in addition to passwords or PINs. As described above, the credentials for accessing the enclave and persistence layers can then be retrieved, or generated and stored in a secure manner if the client is new.

    "},{"location":"concepts/0440-kms-architectures/#using-os-keychain","title":"Using OS Keychain","text":""},{"location":"concepts/0440-kms-architectures/#using-password-based-key-derivation","title":"Using Password based key derivation","text":""},{"location":"concepts/0440-kms-architectures/#session-management","title":"Session management","text":"

    Active connections to these other layers may be pooled for efficiency reasons, but care must be taken to avoid accidental permission grants. For example, Alice must not be able to use Bob's connection, nor Bob Alice's. The same database connection credentials might be used transparently for both Alice and Bob, in which case the database connection can be reused, but this should be an exception, as credential sharing is strongly discouraged: auditing in the database might then be unable to determine whether Alice or Bob performed a specific query. Connections to enclaves and persistence usually require session or context objects to be handled. These must not be returned to clients, but rather maintained by LOX. When a client connection is closed, they must be securely deleted and/or closed.

    "},{"location":"concepts/0440-kms-architectures/#enclave","title":"Enclave","text":"

    The enclave handles as many operations related to cryptography and key management as possible to ensure keys are adequately protected. Keys can potentially be stolen by means of side channel attacks on memory, disk, operational timing, voltage, heat, extraction, and others. Each enclave has been designed with certain threat models in mind to mitigate these risks. For example, a Thales HSM is very different from a Yubico HSM or an Intel SGX enclave. The correct mental model is to think about the formal guarantees that are needed and pick, choose, or design the enclave layer to suit those needs. Build the system that meets the definition(s) of security, then prove it meets the requirements. An enclave functions as a specialized cryptography component. It provides APIs for passing in data to be operated on, executes its cryptographic operations internally, and only returns results to callers. The following is a list of operations that enclaves can support; the list will vary depending on the vendor.

    1. Generate asymmetric key
    2. Generate symmetric key
    3. Generate random
    4. Put key
    5. Delete key
    6. Wrap key
    7. Unwrap key
    8. Export wrapped key
    9. Import wrapped key
    10. List keys
    11. Update key metadata/capability
    12. Get key info/metadata
    13. Derive key agreement
    14. Sign
    15. Verify
    16. Encrypt
    17. Decrypt
    18. Get enclave info - metadata about the enclave like device info, version, manufacturer
    19. Query enclave capability
    20. Audit - e.g. enable/disable auditing
    21. Log - e.g. read audit logs
    22. Attestation - e.g. generate proofs about the enclave

    Most hardware implementations do not allow key material to be passed through the API, even in encrypted form, so a system of external references is required that allows keys to be referenced in a way that supports:

    1. Consistency - The same ID refers to the same key every time.
    2. Naming schemes - such as RSA_PSS_2048_8192_SHA256

    Some enclaves do support passing key material through the API. Where allowed, keys are formatted as key blocks, which define how a key is represented when passed into or out of the enclave. See here.

    In keeping with the drive for enclaves to be simple and hard to misuse, the proposal is to make key IDs in enclave storage simple UTF-8 string names, and leave the underlying provider implementation to deal with the complexities of translation, key rollover, duplication, and so on. Each operation uses different parameters, and the enclave should specify what it allows and what it does not. If code needs to discover capabilities on the fly, it is much more efficient to ask the enclave whether it supports a given operation than to return a list of capabilities that is searched externally.
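
One way to realize this is a provider interface whose only key handles are UTF-8 string IDs. The sketch below is a hypothetical illustration, not a standard Aries API: `SoftEnclave` is an invented toy provider that keeps secrets internally and answers capability queries one operation at a time.

```python
import hashlib
import hmac
import os
from abc import ABC, abstractmethod

class Enclave(ABC):
    """Keys never cross this boundary; callers hold only string key IDs."""

    @abstractmethod
    def generate_key(self, key_id: str, algorithm: str) -> None: ...

    @abstractmethod
    def sign(self, key_id: str, message: bytes) -> bytes: ...

    @abstractmethod
    def supports(self, operation: str) -> bool:
        """Query one capability rather than enumerating them all."""

class SoftEnclave(Enclave):
    """Toy software provider; a real enclave keeps material in hardware."""

    def __init__(self):
        self._keys = {}  # key_id -> (algorithm, secret); never exposed

    def generate_key(self, key_id, algorithm):
        self._keys[key_id] = (algorithm, os.urandom(32))

    def sign(self, key_id, message):
        _, secret = self._keys[key_id]  # secret is used but never returned
        return hmac.new(secret, message, hashlib.sha256).digest()

    def supports(self, operation):
        return operation in {"generate_key", "sign"}

enclave = SoftEnclave()
enclave.generate_key("alice-conn-1", "hmac-sha256")
signature = enclave.sign("alice-conn-1", b"ping")
```

The caller only ever sees the string `"alice-conn-1"` and the signature; rollover or duplication behind that name is the provider's problem.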

    Enclaves also store metadata with each key. This metadata consists of

    1. Attributes - e.g. key id, label, tag
    2. Access Constraints - e.g. require passcode or biometric auth to use
    3. Access Controls - e.g. can decrypt, can verify, can sign, exportable when wrapped
    "},{"location":"concepts/0440-kms-architectures/#attributes","title":"Attributes","text":"

    Attributes describe a key and do not enforce any particular permission about it. Such attributes typically include

    1. Identifier - e.g. f12f149d-e8d9-427c-85c7-f116f87f2e70 or 5a71028a74f1ad9f3f39 or 9s5m7EEJq1zZyc or 158
    2. Alias - e.g. fcb9ec81-d613-4c0f-a023-470155f38f92 or 1BC6HUi1soNML. Useful for sharing a key under a different id; audits would show which one was used.
    3. Label or Description - e.g. Alice to Bob's DID key
    4. Class - e.g. public, private, symmetric
    5. Tag - e.g. sign, verify, did
    6. Type - e.g. aes, rsa, ecdsa, ed25519
    7. Size in bits - e.g. 256, 2048
    8. Creator - i.e. the original creator/owner
    9. Creation date
    10. Last modification date
    11. Always sensitive - i.e. was created with sensitive flag and never removed this control
    12. Never exported - i.e. never left the enclave
    13. Derived from - e.g. use PBKDF2 on this value to generate the key, or reference another key as a seed derived using HKDF

    Most of these attributes, like Size in bits, cannot change and are read-only. Aliases, Labels, and Tags are the only attributes that can change. Enclaves allow one to many aliases and tags but only one label. The enclave should specify how many tags and aliases may be used; a common limit is 5 for both.

    "},{"location":"concepts/0440-kms-architectures/#access-constraints","title":"Access Constraints","text":"

    Constraints restrict key access to certain conditions: who is accessing the key (the owner(s) or group(s)), password or biometric authentication, only while the host is unlocked (as on mobile devices), or always accessible. Constraints must be honored by the enclave for consumers to have confidence and trust. Possible constraints are:

    1. Owner(s) - e.g. who is allowed to access and use the key.
    2. User presence - e.g. require additional authentication like a passcode or biometric auth (TouchID/FaceID). Authentication is on a per-key basis rather than per owner: the caller must be the owner and meet the additional authentication requirements.
    3. Biometric - e.g. require biometric authentication
    4. Passcode - e.g. require passcode authentication
    5. Fresh Interval - e.g. the allowed time between additional authentications; 10 minutes is acceptable for extra-sensitive keys

    Each of these can be mixed in combinations of AND and OR conjunctions. For example, key ABC might have the following constraints

    OR ( AND (Owner, User presence, Passcode), AND (Owner, User presence, Biometric) ) This requires the owner to pass either an additional passcode or biometric authentication to access the key.
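
Such AND/OR combinations form a small constraint tree. A minimal sketch (the tuple encoding and factor names are illustrative assumptions, not a defined format) shows how an enclave could evaluate the key-ABC policy above against the authentication factors a caller actually presented:

```python
def satisfied(constraint, presented):
    """Evaluate an AND/OR constraint tree against the set of presented factors."""
    if isinstance(constraint, str):         # leaf: one required factor
        return constraint in presented
    op, *terms = constraint                 # node: ("AND", ...) or ("OR", ...)
    combine = all if op == "AND" else any
    return combine(satisfied(t, presented) for t in terms)

# The key-ABC policy from the text.
key_abc = ("OR",
           ("AND", "owner", "user-presence", "passcode"),
           ("AND", "owner", "user-presence", "biometric"))

assert satisfied(key_abc, {"owner", "user-presence", "biometric"})
assert not satisfied(key_abc, {"owner", "user-presence"})
```

Either AND branch is sufficient, but owner and user presence alone are not: an extra passcode or biometric factor is required.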

    "},{"location":"concepts/0440-kms-architectures/#access-controls","title":"Access Controls","text":"

    Enclaves use access controls to restrict what operations keys are allowed to perform and who is allowed to use them. Controls are set during key generation and may or may not be permitted to change depending on the vendor or settings. Controls are stored with the key and enforced by the enclave. Possible access controls are:

    1. Can Key Agreement - e.g. Diffie-Hellman
    2. Can Derive - e.g. serve as seed to other keys
    3. Can Decrypt - e.g. can decrypt data for private keys and symmetric keys
    4. Can Encrypt - e.g. can encrypt data for public keys and symmetric keys
    5. Can Wrap - e.g. can be used to wrap another key to be exported
    6. Can Unwrap - e.g. can be used to unwrap a key that was exported
    7. Can Sign - e.g. can create digital signatures for private keys and MACs for symmetric keys
    8. Can Verify - e.g. can verify digital signatures for public keys and MACs for symmetric keys
    9. Can Attest - e.g. can be used to prove information about the enclave
    10. Is Exportable - e.g. can be exported from the enclave.
    11. Is Sensitive - e.g. can only be exported in an encrypted format.
    12. Is Synchronizable - e.g. can only be exported directly to another enclave
    13. Is Modifiable - e.g. can any controls be changed
    14. Is Visible - e.g. is the key available to clients outside the enclave or used internally.
    15. Valid Until Date - e.g. can use until this date, afterwards the key cannot be used

    To mitigate certain attacks like key material leaking through derive and encrypt functions, keys should be limited as much as possible to one task (see here and here). For example, allowing concatenation of a base key to another key should be discouraged, as it has the potential to enable key extraction attacks (see Clulow). Clulow shows that a key with both decrypt and wrap capabilities can export a key and then be used to decrypt it; this applies both to symmetric keys with decrypt and wrap and to the variant where the wrapping key is a public key and the decryption key is the corresponding private key. The correct mental model for enclave implementers is an intruder that can call any API command in any order using any values that he knows. Sensitive and unexportable keys should never be directly readable and should not be changeable into nonsensitive and exportable. If a key can wrap, it should not be allowed to decrypt. Some of these controls, like sensitive, should be considered sticky (they cannot be changed) to mitigate these attacks; this is especially useful in combination with conflicting controls like wrap and decrypt.
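
The wrap-plus-decrypt rule can be enforced mechanically at key-creation time. A minimal sketch (the conflict table holds only the combination the text names from Clulow; a real enclave would maintain a fuller policy):

```python
# Control pairs that together enable key-extraction attacks (Clulow:
# wrap a key out, then use the same key's decrypt capability to read it).
CONFLICTING = [
    {"wrap", "decrypt"},
]

def validate_controls(controls):
    """Reject control sets that combine capabilities known to conflict."""
    for pair in CONFLICTING:
        if pair <= set(controls):
            raise ValueError("conflicting controls: " + ", ".join(sorted(pair)))

validate_controls({"sign", "verify"})   # accepted: no conflicting pair present
```

Running the same check on `{"wrap", "decrypt", "sign"}` raises `ValueError`, which is exactly the point: the conflict is rejected before the key ever exists.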

    Exporting key requirements

    The most secure model is to not allow keys to leave an enclave. However, in practice this is not always reasonable: backups and replicas must be made, and keys must be shared between two enclaves for data to be shared by two parties. When a key is lifted from the enclave, its attributes, constraints, and controls must be correctly bound to it. When a key lands in another enclave, that enclave must honor the attributes, constraints, and controls with which it came. The wrapping format should be used to correctly bind key attributes, constraints, and controls to the key. This prevents attacks where the key is wrapped and unwrapped twice with conflicting metadata, as described by Delaune et al., Cachin et al., Cortier and Steel, and Bortolozzo.
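
The binding idea can be illustrated with a stdlib MAC over the wrapped blob plus its canonicalized metadata. This is only a sketch: real key-block formats use authenticated encryption with the metadata as associated data, not a bare HMAC, and all names below are invented for illustration.

```python
import hashlib
import hmac
import json
import os

def export_key(wrapped_key: bytes, metadata: dict, binding_key: bytes) -> dict:
    """Bind attributes/constraints/controls to the wrapped key blob with a MAC."""
    canonical = json.dumps(metadata, sort_keys=True).encode()
    tag = hmac.new(binding_key, wrapped_key + canonical, hashlib.sha256).hexdigest()
    return {"key": wrapped_key.hex(), "meta": metadata, "tag": tag}

def import_key(blob: dict, binding_key: bytes) -> bool:
    """The receiving enclave recomputes the MAC before honoring the metadata."""
    canonical = json.dumps(blob["meta"], sort_keys=True).encode()
    expect = hmac.new(binding_key, bytes.fromhex(blob["key"]) + canonical,
                      hashlib.sha256).hexdigest()
    return hmac.compare_digest(expect, blob["tag"])

binding_key = os.urandom(32)
blob = export_key(os.urandom(48), {"exportable": False, "can": ["sign"]}, binding_key)
assert import_key(blob, binding_key)
blob["meta"]["exportable"] = True      # tampering with the controls...
assert not import_key(blob, binding_key)  # ...is detected on import
```

Because the MAC covers key and metadata together, an attacker cannot re-wrap the same key under more permissive controls, which is the double-wrap attack the citations describe.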

    "},{"location":"concepts/0440-kms-architectures/#templates","title":"Templates","text":"

    Some enclaves support creating templates such that keys can be generated and wrapped following secure guidelines in a reproducible way. Define separate templates for key generation and key wrapping.

    "},{"location":"concepts/0440-kms-architectures/#final-enclave-notes","title":"Final enclave notes","text":"

    Hardware enclaves typically have limited storage space, often just a few megabytes. A hardware enclave can be used to protect a software enclave that has a much higher storage capacity, and a KMS is not limited to just one enclave. Cloud access security brokers (cloud enclaves) like Hashicorp Vault, Amazon\u2019s Cloud HSM, Iron Core Labs, Azure Key Vault, and Box Keysafe require trusting that a SaaS vendor will store keys in a place that is not vulnerable to data breaches. Even then, there is no assurance that the vendor, or one of their partners, won\u2019t access the secret material. This doesn't belittle their value; it's just another point to consider when using SaaS enclaves. Keys should be shared as little as possible, if at all. Keys should be as short-lived as possible and have a single purpose. This limits the need to replicate keys to other agents, whether for dual functionality or recovery purposes, as well as the damage in the event of a compromise.

    "},{"location":"concepts/0440-kms-architectures/#persistence","title":"Persistence","text":"

    This layer is meant to store the other data in a KMS, like credentials and protocol state. It could be optional for static agents, which store very little if anything. Credentials that access the persistence layer should be stored in the enclave layer or with LOX in keychains. For example, if the persistence layer is a Postgres database, the username/password or keypair used to authenticate to the database could be stored in the enclave rather than in a config file. Upon a successful authentication to LOX, these credentials are retrieved to connect to Postgres and put into a connection pool. This is more secure than storing credentials in config files or environment variables, or prompting for them from user input.

    The most common storage mechanism is a SQL database. Designers should consider their system requirements in terms of performance, environments, user access control, and system administrator access, then read Kamara\u2019s blog on how to develop encrypted databases (see 1, 2, 3, 4). Mobile and enterprise environments differ, for example: a mobile environment using secure storage probably won\u2019t face a network adversary or an honest-but-curious adversary, whereas an enterprise environment will. Should the query engine be able to decrypt data prior to filtering, or should it run on encrypted data while the querier performs decryption? Neither is wrong per se, but each comes with trade-offs. With query engine decryption, there is no need to write a separate query mechanism, and databases execute as they normally do with a slight performance decrease from decryption and encryption; however, the data reader must trust the query engine not to leak data or encryption keys. If the querier performs encryption/decryption, no trust is placed in the query engine, but additional work must be done before handing data to the engine, and this searchable encryption is still vulnerable to access pattern leakage. What is practical engineering vs design vs theory? Theory is about what can and can\u2019t be done and why. Design is about using efficient primitives and protocols. Engineering is about effective and secure implementation.
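
For the querier-side-encryption approach, one named technique for keeping equality queries is a blind index: store a keyed, deterministic hash next to each ciphertext and query by recomputing it. A minimal stdlib sketch (the key, normalization rule, and placeholder ciphertexts are illustrative assumptions); as the text notes, this still leaks search and access patterns:

```python
import hashlib
import hmac

INDEX_KEY = b"held-by-the-querier-not-the-db!!"  # hypothetical 32-byte key

def blind_index(value: str) -> str:
    """Keyed, deterministic hash: supports equality search over ciphertext."""
    return hmac.new(INDEX_KEY, value.strip().lower().encode(),
                    hashlib.sha256).hexdigest()

# The store holds only ciphertext plus opaque index tokens; the database
# matches tokens without ever seeing plaintext or the index key.
rows = {blind_index("alice@example.com"): "<ciphertext-1>",
        blind_index("bob@example.com"): "<ciphertext-2>"}
assert rows[blind_index("Alice@Example.com")] == "<ciphertext-1>"
```

The query engine never decrypts anything, at the cost of supporting only exact-match lookups and revealing which rows match the same token, i.e. the search-pattern leakage listed below.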

    Implementers should consider the following threats and adversary sections.

    Threats

    * Memory access pattern leakage - Attackers can infer the contents and/or importance of data from how, and how frequently, it is accessed; the attacker learns the set of matching records.
    * Volume leakage - The attacker learns the number of records/responses.
    * Search pattern leakage - Attackers can infer the contents and/or importance of data from search patterns; the attacker can easily judge whether any two queries were generated from the same keywords.
    * Rank leakage - The attacker can infer or learn which data was queried.
    * Side channel leakage - Attackers can access or learn the value of the secret material used to protect data confidentiality and integrity through side channels like memory inspection, RAM scraping attacks with swap access, and timing attacks [8].
    * Microarchitectural attacks - The attacker is able to learn secrets through covert channels that target processors (Spectre/Meltdown).

    Adversary

    * Network adversary - Observes traffic on the network. In addition to snooping and lurking, they can also perform fingerprinting attacks to learn who the endpoints are and correlate them to identical entities.
    * Snapshot adversary - Breaks into the server and snapshots system memory or storage. Sees a copy of the encrypted data but no transcripts related to any queries. There is a trade-off between functionality and security.
    * Persistent adversary - Corrupts the server for a period of time and sees all communication transcripts.
    * Honest-but-curious adversary - A system administrator who can watch accesses to the data.

    Disk Encryption

    This feature can help with encryption-at-rest requirements but only protects the data while the disk is off. Once the system is booted, an attacker can read the data as easily as a system administrator can, so it provides very little protection; data can and usually is stolen via other methods like software vulnerabilities or viruses. It is useful when the storage hardware is not virtualized, as in the cloud, and is mobile, like a laptop, phone, or USB drive. If storage is in the cloud or on a network, it is worth more to invest in host-based intrusion prevention, intrusion detection systems, and cloud and file encryption (see Sastry, Yegulalp, and Why Full Disk Encryption Isn't Enough).

    Application vs Database Encryption

    Databases provide varying levels of built-in encryption. SQL Server and SQLCipher are examples of databases that provide Transparent Data Encryption: users don\u2019t even know the data is encrypted; it is transparent to them. This works similarly to disk encryption in that it mostly protects the database data at rest, but as soon as a user is connected, the data can be read in plaintext. A further abstraction, in Postgres and SQL Server, allows database keys to be partially managed by the database to encrypt columns or cells, while the user supplies either all or some of the keys and manages them separately. The last approach is for the application to manage all encryption, which has the advantage of being storage agnostic. Postgres permits the keys to be stored separately from the encrypted columns; when the data are queried, the key is passed to the query engine and the data are decrypted temporarily in memory to check the query.

    If the application handles encryption, the query engine only operates on encrypted data rather than being allowed to decrypt and read the data directly. The tradeoff is that a different query method/language must be used than the one provided by the persistence layer.

    Many databases are networked which requires another layer of protection like TLS or SSH.

    "},{"location":"concepts/0440-kms-architectures/#data-storage","title":"Data storage","text":"

    Metadata for stored data includes access controls and constraints. Controls dictate what can be done with the data; constraints dictate who can access it. These can be managed by the underlying persistence application, or enforced by LOX before returning the data to the client. The constraints and controls indicate permissions to end clients, not necessarily to anything outside the persistence layer; this is not like designing an enclave. Persistence is meant for more general-purpose data that may or may not be sensitive. As in the enclave, metadata about the data includes attributes, constraints, and controls.

    "},{"location":"concepts/0440-kms-architectures/#access-constraints_1","title":"Access constraints","text":"

    Constraints restrict data access to certain conditions, such as who is accessing the data, e.g., its owner(s) or group(s). Constraints must be honored by the persistence layer or LOX for consumers to have confidence and trust. Possible constraints are:

    1. Identity or roles
    2. Context - i.e., in what contexts this data can be used. For example, a credential may be restricted to use only in a work environment if desired.
    "},{"location":"concepts/0440-kms-architectures/#access-controls_1","title":"Access controls","text":"
    1. Crypto Protection - e.g. which key id(s) and algorithm are used to protect this data. This allows data to be re-encrypted, or transformed via functional encryption, in the future when keys are rotated or ciphers are deemed weak or insecure.
    2. Is Exportable - e.g. can the data leave the KMS
    3. Is Modifiable
    4. Can Delete
    5. Valid Until Date
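    The constraints and controls above can be modeled as a small metadata shape. The following is a hypothetical sketch (names are illustrative, not from any Aries API): constraints say who may access the data and in which contexts; controls say what may be done with it.

```typescript
// Hypothetical metadata shapes for stored data. The field names below
// are assumptions mirroring the numbered lists above, not a real schema.
interface AccessConstraints {
  identitiesOrRoles: string[]; // constraint 1: identity or roles
  contexts?: string[];         // constraint 2: e.g. ["work"] restricts use to work contexts
}

interface AccessControls {
  keyIds: string[];            // control 1: crypto protection (keys guarding the data)
  algorithm: string;           //            ...and the algorithm used
  isExportable: boolean;       // control 2: may the data leave the KMS?
  isModifiable: boolean;       // control 3
  canDelete: boolean;          // control 4
  validUntil?: Date;           // control 5
}

const example: { constraints: AccessConstraints; controls: AccessControls } = {
  constraints: { identitiesOrRoles: ["owner"], contexts: ["work"] },
  controls: {
    keyIds: ["key-1"],
    algorithm: "XChaCha20-Poly1305",
    isExportable: false,
    isModifiable: true,
    canDelete: false,
  },
};
```

    Whether such metadata is enforced by the persistence layer or by LOX is the design question the preceding section raises.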
    "},{"location":"concepts/0440-kms-architectures/#reference","title":"Reference","text":"

    Indy Wallet implements this in part and is one of the first attempts at this architecture. Indy Wallet doesn't yet use LOX, but it functions similarly to an enclave in that it does not give direct access to private keys and uses key ids to execute operations. It supports a flexible persistence layer, either SQLite or Postgres. The top layer encrypts data before it is queried or sent to the persistence layer and decrypts it when returned. Aries Mayaguez is another implementation.

    "},{"location":"concepts/0440-kms-architectures/#drawbacks","title":"Drawbacks","text":"

    There are additional complexities related to handling keys and other data as two distinct entities and it might be faster to combine them with a potential security tradeoff.

    "},{"location":"concepts/0440-kms-architectures/#prior-art","title":"Prior art","text":"

    PKCS#11 and KMIP were developed strictly for key management in enclaves. These design patterns are not limited to key management but apply to any sensitive data.

    "},{"location":"concepts/0440-kms-architectures/#unresolved-questions","title":"Unresolved questions","text":"

    Is providing access constraints to the persistence layer necessary? Could this be removed? What are the consequences? Are there any missing constraints and controls for the enclave or persistence layer?

    "},{"location":"concepts/0440-kms-architectures/#implementations","title":"Implementations","text":"

    The following lists the implementations (if any) of this RFC. Please do a pull request to add your implementation. If the implementation is open source, include a link to the repo or to the implementation within the repo. Please be consistent in the \"Name\" field so that a mechanical processing of the RFCs can generate a list of all RFCs supported by an Aries implementation.

    Implementation Notes

    Name / Link Implementation Notes"},{"location":"concepts/0441-present-proof-best-practices/","title":"0441: Prover and Verifier Best Practices for Proof Presentation","text":""},{"location":"concepts/0441-present-proof-best-practices/#summary","title":"Summary","text":"

    This work prescribes best practices for provers in credential selection (toward proof presentation), for verifiers in proof acceptance, and for both regarding non-revocation interval semantics in fulfilment of the Present Proof protocol RFC0037. Of particular interest is behaviour against presentation requests and presentations in their various non-revocation interval profiles.

    "},{"location":"concepts/0441-present-proof-best-practices/#motivation","title":"Motivation","text":"

    Agents should behave consistently in automatically selecting credentials and presenting proofs.

    "},{"location":"concepts/0441-present-proof-best-practices/#tutorial","title":"Tutorial","text":"

    The subsections below introduce constructs and outline best practices for provers and verifiers.

    "},{"location":"concepts/0441-present-proof-best-practices/#presentation-requests-and-non-revocation-intervals","title":"Presentation Requests and Non-Revocation Intervals","text":"

    This section prescribes norms and best practices in formulating and interpreting non-revocation intervals on proof requests.

    "},{"location":"concepts/0441-present-proof-best-practices/#semantics-of-non-revocation-interval-presence-and-absence","title":"Semantics of Non-Revocation Interval Presence and Absence","text":"

    The presence of a non-revocation interval applicable to a requested item (see below) in a presentation request signifies that the verifier requires proof of non-revocation status of the credential providing that item.

    The absence of any non-revocation interval applicable to a requested item signifies that the verifier has no interest in its credential's non-revocation status.

    A revocable or non-revocable credential may satisfy a presentation request with or without a non-revocation interval. The presence of a non-revocation interval conveys that if the prover presents a revocable credential, the presentation must include proof of non-revocation. Its presence does not convey any restriction on the revocability of the credential to present: in many cases the verifier cannot know whether a prover's credential is revocable or not.

    "},{"location":"concepts/0441-present-proof-best-practices/#non-revocation-interval-applicability-to-requested-items","title":"Non-Revocation Interval Applicability to Requested Items","text":"

    A requested item in a presentation request is an attribute or a predicate, proof of which the verifier requests presentation. A non-revocation interval within a presentation request is specifically applicable, generally applicable, or inapplicable to a requested item.

    Within a presentation request, a top-level non-revocation interval is generally applicable to all requested items. A non-revocation interval defined particularly for a requested item is specifically applicable to that requested attribute or predicate but inapplicable to all others.

    A non-revocation interval specifically applicable to a requested item overrides any generally applicable non-revocation interval: no requested item may have both.

    For example, in the following (indy) proof request

    {\n    \"name\": \"proof-request\",\n    \"version\": \"1.0\",\n    \"nonce\": \"1234567890\",\n    \"requested_attributes\": {\n        \"legalname\": {\n            \"name\": \"legalName\",\n            \"restrictions\": [\n                {\n                    \"issuer_did\": \"WgWxqztrNooG92RXvxSTWv\"\n                }\n            ]\n        },\n        \"regdate\": {\n            \"name\": \"regDate\",\n            \"restrictions\": [\n                {\n                    \"issuer_did\": \"WgWxqztrNooG92RXvxSTWv\"\n                }\n            ],\n            \"non_revoked\": {\n                \"from\": 1600001000,\n                \"to\": 1600001000\n            }\n        }\n    },\n    \"requested_predicates\": {\n    },\n    \"non_revoked\": {\n        \"from\": 1600000000,\n        \"to\": 1600000000\n    }\n}\n

    the non-revocation interval on 1600000000 is generally applicable to the referent \"legalname\" while the non-revocation interval on 1600001000 is specifically applicable to the referent \"regdate\".
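    The applicability rules above reduce to a small lookup: an item-specific interval wins, otherwise the top-level interval applies. The following is a hypothetical helper (not part of any Aries API) applied to the example request above:

```typescript
interface Interval { from?: number; to: number; }

interface ProofRequest {
  non_revoked?: Interval; // generally applicable interval
  requested_attributes: Record<string, { non_revoked?: Interval }>;
}

// Return the interval applicable to a requested attribute referent:
// a specifically applicable interval overrides the general one.
function applicableInterval(req: ProofRequest, referent: string): Interval | undefined {
  return req.requested_attributes[referent]?.non_revoked ?? req.non_revoked;
}

// The example proof request above, reduced to its interval structure:
const req: ProofRequest = {
  non_revoked: { from: 1600000000, to: 1600000000 },
  requested_attributes: {
    legalname: {},
    regdate: { non_revoked: { from: 1600001000, to: 1600001000 } },
  },
};
```

    Here `applicableInterval(req, "legalname")` yields the general interval on 1600000000, while `applicableInterval(req, "regdate")` yields the specific interval on 1600001000.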

    "},{"location":"concepts/0441-present-proof-best-practices/#semantics-of-non-revocation-interval-endpoints","title":"Semantics of Non-Revocation Interval Endpoints","text":"

    A non-revocation interval contains \"from\" and \"to\" (integer) EPOCH times. For historical reasons, any timestamp within this interval is technically acceptable in a non-revocation subproof. However, these semantics allow for ambiguity in cases where revocation occurs within the interval, and in cases where the ledger supports reinstatement. These best practices require the \"from\" value, should the prover specify it, to equal the \"to\" value: this approach fosters deterministic outcomes.

    A missing \"from\" specification defaults to the same value as the interval's \"to\" value. In other words, the non-revocation intervals

    {\n    \"to\": 1234567890\n}\n

    and

    {\n    \"from\": 1234567890,\n    \"to\": 1234567890\n}\n

    are semantically equivalent.
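    The default rule can be expressed as a one-line normalization. This is a hypothetical helper, assuming EPOCH-second integers as above:

```typescript
interface NonRevocationInterval { from?: number; to: number; }

// A missing "from" defaults to the interval's "to" value, making the
// two interval forms shown above semantically equivalent.
function normalize(i: NonRevocationInterval): { from: number; to: number } {
  return { from: i.from ?? i.to, to: i.to };
}
```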

    "},{"location":"concepts/0441-present-proof-best-practices/#verifier-non-revocation-interval-formulation","title":"Verifier Non-Revocation Interval Formulation","text":"

    The verifier MUST specify, as current INDY-HIPE 11 notes, the same integer EPOCH time for both ends of the interval, or else omit the \"from\" key and value. In effect, where the presentation request specifies a non-revocation interval, the verifier MUST request a non-revocation instant.

    "},{"location":"concepts/0441-present-proof-best-practices/#prover-non-revocation-interval-processing","title":"Prover Non-Revocation Interval Processing","text":"

    In querying the nodes for revocation status, given a revocation interval on a single instant (i.e., on \"from\" and \"to\" the same, or \"from\" absent), the prover MUST query the ledger for all germane revocation updates from registry creation through that instant (i.e., from zero through \"to\" value): if the credential has been revoked prior to the instant, the revocation necessarily will appear in the aggregate delta.
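    As a sketch (hypothetical helper; a real prover would pass this range to its ledger client), the query range for a non-revocation instant is:

```typescript
// For an interval on a single instant ("from" === "to", or "from" absent),
// the prover queries all revocation registry deltas from registry
// creation (represented here as epoch 0) through the "to" value.
function ledgerQueryRange(interval: { from?: number; to: number }): { from: number; to: number } {
  return { from: 0, to: interval.to };
}
```

    Any revocation prior to the instant necessarily appears in the resulting aggregate delta.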

    "},{"location":"concepts/0441-present-proof-best-practices/#provers-presentation-proposals-and-presentation-requests","title":"Provers, Presentation Proposals, and Presentation Requests","text":"

    In fulfilment of the RFC0037 Present Proof protocol, provers may initiate with a presentation proposal or verifiers may initiate with a presentation request. In the former case, the prover has both a presentation proposal and a presentation request; in the latter case, the prover has only a presentation request.

    "},{"location":"concepts/0441-present-proof-best-practices/#credential-selection-best-practices","title":"Credential Selection Best Practices","text":"

    This section specifies a prover's best practices in matching a credential to a requested item. The specification pertains to automated credential selection: obviously, a human user may select any credential in response to a presentation request; it is up to the verifier to verify the resulting presentation as satisfactory or not.

    Note that where a prover selects a revocable credential for inclusion in response to a requested item with a non-revocation interval in the presentation request, the prover MUST create a corresponding sub-proof of non-revocation at a timestamp within that non-revocation interval (insofar as possible; see below).

    "},{"location":"concepts/0441-present-proof-best-practices/#with-presentation-proposal","title":"With Presentation Proposal","text":"

    If prover initiated the protocol with a presentation proposal specifying a value (or predicate threshold) for an attribute, and the presentation request does not require a different value for it, then the prover MUST select a credential matching the presentation proposal, in addition to following the best practices below regarding the presentation request.

    "},{"location":"concepts/0441-present-proof-best-practices/#preference-for-irrevocable-credentials","title":"Preference for Irrevocable Credentials","text":"

    In keeping with the specification above, presentation of an irrevocable credential ipso facto constitutes proof of non-revocation. Provers MUST always prefer irrevocable credentials to revocable credentials, when the wallet has both satisfying a requested item, whether the requested item has an applicable non-revocation interval or not. Note that if a non-revocation interval is applicable to a credential's requested item in the presentation request, selecting an irrevocable credential for presentation may lead to a missing timestamp at the verifier (see below).

    If only revocable credentials are available to satisfy a requested item with no applicable non-revocation interval, the prover MUST present such a credential for proof. As per above, the absence of a non-revocation interval signifies that the verifier has no interest in its revocation status.

    "},{"location":"concepts/0441-present-proof-best-practices/#verifiers-presentations-and-timestamps","title":"Verifiers, Presentations, and Timestamps","text":"

    This section prescribes verifier best practices concerning a received presentation by its timestamps against the corresponding presentation request's non-revocation intervals.

    "},{"location":"concepts/0441-present-proof-best-practices/#timestamp-for-irrevocable-credential","title":"Timestamp for Irrevocable Credential","text":"

    A presentation's inclusion of a timestamp pertaining to an irrevocable credential evinces tampering: the verifier MUST reject such a presentation.

    "},{"location":"concepts/0441-present-proof-best-practices/#missing-timestamp","title":"Missing Timestamp","text":"

    A presentation with no timestamp for a revocable credential purporting to satisfy a requested item in the corresponding presentation request, where the requested item has an applicable non-revocation interval, evinces tampering: the verifier MUST reject such a presentation.

    It is licit for a presentation to have no timestamp for an irrevocable credential: the applicable non-revocation interval is superfluous in the presentation request.

    "},{"location":"concepts/0441-present-proof-best-practices/#timestamp-outside-non-revocation-interval","title":"Timestamp Outside Non-Revocation Interval","text":"

    A presentation may include a timestamp outside of the non-revocation interval applicable to the requested item that a presented credential purports to satisfy. If the latest timestamp from the ledger for a presented credential's revocation registry predates the non-revocation interval, but the timestamp is not in the future (relative to the instant of presentation proof, with a reasonable allowance for clock skew), the verifier MUST log and continue the proof verification process.

    Any timestamp in the future (relative to the instant of presentation proof, with a reasonable allowance for clock skew) evinces tampering: the verifier MUST reject a presentation with a future timestamp. Similarly, any timestamp predating the creation of its corresponding credential's revocation registry on the ledger evinces tampering: the verifier MUST reject a presentation with such a timestamp.
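    The timestamp rules above can be sketched as a single check. All names here are illustrative assumptions; a real verifier derives these values from the presentation and the ledger:

```typescript
type TimestampCheck = "reject" | "log-and-continue" | "ok";

// Hypothetical verifier-side check combining the rules above, with
// EPOCH-second inputs and a clock-skew allowance in seconds.
function checkTimestamp(
  timestamp: number,        // timestamp in the non-revocation subproof
  registryCreation: number, // creation time of the revocation registry on the ledger
  intervalTo: number,       // "to" endpoint of the applicable interval
  now: number,              // instant of presentation proof verification
  clockSkew = 300
): TimestampCheck {
  if (timestamp > now + clockSkew) return "reject";      // future timestamp: evinces tampering
  if (timestamp < registryCreation) return "reject";     // predates registry: evinces tampering
  if (timestamp < intervalTo) return "log-and-continue"; // predates interval: log, then proceed
  return "ok";
}
```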

    "},{"location":"concepts/0441-present-proof-best-practices/#dates-and-predicates","title":"Dates and Predicates","text":"

    This section prescribes issuer and verifier best practices concerning representing dates for use in predicate proofs (e.g., proving Alice is over 21 without revealing her birth date).

    "},{"location":"concepts/0441-present-proof-best-practices/#dates-in-credentials","title":"Dates in Credentials","text":"

    In order for dates to be used in a predicate proof they MUST be expressed as an Int32. While unix timestamps could work for this, they have several drawbacks: they can't represent dates outside of the years 1901-2038, they aren't human readable, and they are overly precise, in that birth time down to the second is generally not needed for an age check. To address these issues, date attributes SHOULD be represented as integers in the form YYYYMMDD (e.g., 19991231). This addresses the issues with unix timestamps (or any seconds-since-epoch system) while still allowing date values to be compared with < > operators. Note that this system won't work for general date math (e.g., adding or subtracting days), but it will work for predicate proofs, which just require comparisons. In order to make it clear that this format is being used, the attribute name SHOULD have the suffix _dateint. Since most datetime libraries don't include this format, here are some examples of helper functions written in typescript.
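    For example, conversion helpers might look like the following. This is a hedged sketch of the idea, not the RFC's own helper code:

```typescript
// Convert a JS Date to the YYYYMMDD integer ("dateint") form described above.
function dateToDateInt(d: Date): number {
  return d.getFullYear() * 10000 + (d.getMonth() + 1) * 100 + d.getDate();
}

// Convert a YYYYMMDD integer back to a JS Date (local time).
function dateIntToDate(dateInt: number): Date {
  const year = Math.floor(dateInt / 10000);
  const month = Math.floor((dateInt % 10000) / 100);
  const day = dateInt % 100;
  return new Date(year, month - 1, day);
}
```

    Because the digits are ordered most-significant-first (year, month, day), numeric comparison of two dateints agrees with chronological comparison of the dates.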

    "},{"location":"concepts/0441-present-proof-best-practices/#dates-in-presentations","title":"Dates in Presentations","text":"

    When constructing a proof request, the verifier SHOULD express the minimum/maximum date as an integer in the form YYYYMMDD. For example, if today is Jan 1, 2021, then the verifier would request that birthdate_dateint is before or equal to Jan 1, 2000, so <= 20000101. The holder MUST construct a predicate proof with a YYYYMMDD-represented birth date less than or equal to that value to satisfy the proof request.
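    A verifier might compute that threshold as follows. This is a hypothetical helper; note the naive year subtraction ignores edge cases such as a Feb 29 birth date:

```typescript
// Latest acceptable birth date, as a YYYYMMDD integer, for someone who
// must be at least minAgeYears old on the given day.
function maxBirthDateInt(today: Date, minAgeYears: number): number {
  const year = today.getFullYear() - minAgeYears; // naive: no leap-day handling
  return year * 10000 + (today.getMonth() + 1) * 100 + today.getDate();
}
```

    With `today` as Jan 1, 2021 and a minimum age of 21, this yields 20000101, matching the example above.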

    "},{"location":"concepts/0441-present-proof-best-practices/#reference","title":"Reference","text":""},{"location":"concepts/0441-present-proof-best-practices/#implementations","title":"Implementations","text":"

    The following lists the implementations (if any) of this RFC. Please do a pull request to add your implementation. If the implementation is open source, include a link to the repo or to the implementation within the repo. Please be consistent in the \"Name\" field so that a mechanical processing of the RFCs can generate a list of all RFCs supported by an Aries implementation.

    Implementation Notes may need to include a link to test results.

    Name / Link Implementation Notes"},{"location":"concepts/0478-coprotocols/","title":"Aries RFC 0478: Coprotocols","text":""},{"location":"concepts/0478-coprotocols/#summary","title":"Summary","text":"

    Explains how one protocol can invoke and interact with others, giving inputs and receiving outputs and errors.

    "},{"location":"concepts/0478-coprotocols/#motivation","title":"Motivation","text":"

    It's common for complex business workflows to be composed from smaller, configurable units of logic. It's also common for multiple processes to unfold in interrelated ways, such that a complex goal is choreographed from semi-independent tasks. Enabling flexible constructions like this is one of the major goals of protocols built atop DIDComm. We need a standard methodology for doing so.

    "},{"location":"concepts/0478-coprotocols/#tutorial","title":"Tutorial","text":"

    A protocol is any recipe for a stateful interaction. DIDComm itself is a protocol, as are many primitives atop which it is built, such as HTTP, Diffie-Hellman key exchange, and so forth. However, when we talk about protocols in decentralized identity, without any qualifiers, we usually mean application-level interactions like credential issuance, feature discovery, third-party introductions, and so forth. These protocols are message-based interactions that use DIDComm.

    We want these protocols to be composable. In the middle of issuing credentials, we may want to challenge the potential holder for proof -- and in the middle of challenging for proof, maybe we want to negotiate payment. We could build proving into issuing, and payment into proving, but this runs counter to the DRY principle and to general best practice in encapsulation. A good developer writing a script to issue credentials would probably isolate payment and proving logic in separate functions or libraries, and would strive for loose coupling so each could evolve independently.

    Agents that run protocols have goals like those of the script developer. How we achieve them is the subject of this RFC.

    "},{"location":"concepts/0478-coprotocols/#subroutines","title":"Subroutines","text":"

    In the world of computer science, a subroutine is a vital abstraction for complex flows. It breaks logic into small, reusable chunks that are easy for a human to understand and document, and it formalizes their interfaces. Code calls a subroutine by referencing it via name or address, providing specified arguments as input. The subroutine computes on this input, eventually producing an output; the details don't interest the caller. While the subroutine is busy, the caller typically waits. Callers can often avoid recompilation when details inside subroutines change. Subroutines can come from pluggable libraries. These can be written by different programmers in different programming languages, as long as a calling convention is shared.

    Thinking of protocols as analogs to subroutines suggests some interesting questions:

    "},{"location":"concepts/0478-coprotocols/#coroutines","title":"Coroutines","text":"

    Before we answer these questions, let's think about a generalization of subroutines that's slightly less familiar to some programmers: coroutines. Coroutines achieve the same encapsulation and reusability as subroutines, but as a category they are more flexible and powerful. Coroutines may be, but aren't required to be, call-stack \"children\" of their callers; they may have complex lifecycles that begin or end outside the caller's lifespan. Coroutines may receive inputs at multiple points, not just at launch. They may yield outputs at multiple points, too. Subroutines are just the simplest variant of coroutines.

    The flexibility of coroutines gives options to programmers, and it explains why most programming languages evolve to offer them as first-class constructs when they encounter demanding requirements for asynchronicity, performance, or scale. For example, early versions of python lacked the concept of coroutines; if you wrote a loop over range(1, 1000000), python allocated and filled a container holding 1 million numbers, and then iterated over the container. When generators (a type of coroutine) were added to the language, the underlying logic changed. Now range(1, 1000000) is a coroutine invocation that trades execution state back and forth with its sibling caller routine. The first time it is invoked, it receives and stores its input values, then produces one output (the lower bound of the range). Each subsequent time through the loop it is invoked again; it increments its internal state and yields a new output back to the caller. No allocations occur, and an early break from the loop wastes nothing.
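    The same lazy-range behavior can be illustrated with a generator (a kind of coroutine) in TypeScript:

```typescript
// A generator yields one value per request instead of allocating a
// million-element container up front.
function* range(start: number, stop: number): Generator<number> {
  for (let i = start; i < stop; i++) {
    yield i; // execution suspends here until the caller asks for the next value
  }
}

// An early break wastes nothing: only the consumed values are ever computed.
const first: number[] = [];
for (const n of range(1, 1000000)) {
  first.push(n);
  if (first.length === 3) break;
}
```

    Each pass through the loop resumes the generator, which increments its internal state and yields the next value back to the caller, exactly the trade of execution state described above.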

    If we want to choose one conceptual parallel for how protocols relate to one another, we should think of them as coroutines, not subroutines; doing so constrains us less. Although payment as a subroutine inside credential issuance sounds plausible at first glance, it turns out to be clumsy under deeper analysis. A payment protocol yields more than one output -- typically a preauthorization at an intermediate stage, then a final outcome when it completes. At the preauthorization stage, it should accept graceful cancellation (a second input, after launch). And high-speed, bulk issuance of credentials is likely to benefit from payment and issuance being partly parallelized instead of purely sequential.

    Similarly, a handshake protocol like DID Exchange or Connection is best framed as a coprotocol of Introduce; this makes it easy for Introduce to complete as soon as the handshake begins, instead of waiting for the handshake to finish as if it were a subroutine.

    By thinking of cross-protocol interactions like coroutine interactions, we get the best of both worlds: where the interaction is just subroutine-like, the model lets us simplify; where we need more flexibility and power, the model still fits.

    Protocols don't have to support the types of coprotocol interactions we're describing here; protocols developed by Aries developers have already proven their value even without it. But to unlock their full potential, adding coprotocol support to new and existing protocol definitions may be worthwhile. This requires only a modest update to a protocol RFC, and creates little extra work for implementers.

    "},{"location":"concepts/0478-coprotocols/#the-simple-approach-that-falls-apart","title":"The simple approach that falls apart","text":"

    When the DIDComm community first began thinking about one protocol invoking another, we imagined that the interface to the called coprotocol would simply be its first message. For example, if verifiable credential issuer Acme Corp wanted to demand payment for a credential during an issuance protocol with Bob, Acme would send to Bob a request_payment message that constituted the first message in a make_payment protocol. This would create an instance of the payment protocol running alongside issuance; issuance could then wait until it completed before proceeding. And Bob wouldn't need to lift a finger to make it work, if he already supported the payment protocol.

    Unfortunately, this approach looks less attractive after study:

    "},{"location":"concepts/0478-coprotocols/#general-interface-needs","title":"General Interface Needs","text":"

    What we want, instead, is a formal declaration of something a bit like a coprotocol's \"function signature.\" It needs to describe the inputs that launch the protocol, and the outputs and/or errors emitted as it finishes. It should hide implementation details and remain stable across irrelevant internal changes.

    We need to bind compatible coprotocols to one another using the metadata in these declarations. And since coprotocol discovery may have to satisfy a remote party, not just a local one, our binding needs to work well dynamically, and late, and with optional, possibly overlapping plugins providing implementations. This suggests that our declarations must be rich and flexible about binding criteria \u2014 it must be possible to match on something more than just a coprotocol name and/or arg count+type.

    An interesting divergence from the function signature parallel is that we may have to describe inputs and outputs (and errors) at multiple interaction points, not just the coprotocol's initial invocation.

    Another subtlety is that protocol interfaces need to be partitioned by role; the experience of a payer and a payee with respect to a payment protocol may be quite different. The interface offered by a coprotocol must vary by which role the invoked coprotocol instance embodies.

    Given all these considerations, we choose to describe coprotocol interfaces using a set of function-like signatures, not just one. We use a function-like notation to make them as terse and intuitive as possible for developers.

    "},{"location":"concepts/0478-coprotocols/#example","title":"Example","text":"

    Suppose we are writing a credential issuance protocol, and we want to use coprotocols to add support for situations where the issuer expects payment partway through the overall flow. We'd like it to be possible for our payment step to use Venmo/Zelle, or cryptocurrency, or traditional credit cards, or anything else that issuers and holders agree upon. So we want to encapsulate the payment problem as a pluggable, discoverable, negotiable coprotocol.

    We do a little research and discover that many DIDComm-based payment protocols exist. Three of them advertise support for the same coprotocol interface:

    goal: aries.buy.make-payment\npayee:\n  get:\n      - invoke(amount: float, currency: str, bill_of_sale: str) @ null\n      - proceed(continue: bool) @ requested:, waiting-for-commit\n  give:\n      - preauth(code: str) @ waiting-for-commit\n      - return(confirmation_code: str) @ finalizing\n

    In plain English, the declared coprotocol semantics are:

    This is a coprotocol interface for protocols that facilitate the aries.buy.make-payment goal code. The payee role in this coprotocol gets input at two interaction points, \"invoke\" and \"proceed\". Invoke happens when state is null (at launch); \"proceed\" happens when state is \"requested\" or \"waiting-for-commit.\" At invoke, the caller of the co-protocol provides 3 inputs: an amount, a currency, and a bill of sale. At proceed, the caller decides whether to continue. Implementations of this coprotocol interface also give output at two interaction points, \"preauth\" and \"return.\" At preauth, the output is a string that's a preauth code; at return, the output is a confirmation code.

    "},{"location":"concepts/0478-coprotocols/#simplified-description-only","title":"Simplified description only","text":"

    It's important to understand that this interface is NOT the same as the protocol's direct interface (the message family and state machine that a protocol impl must provide to implement the protocol as documented). It is, instead, a simplified encapsulation -- just like a function signature is a simplified encapsulation of a coroutine. A function impl can rename its args for internal use. It can have steps that the caller doesn't know about. The same is true for protocols: their role names, state names, message types and versions, and field names in messages don't need to be exposed directly in a coprotocol interface; they just need a mapping that the protocol understands internally. The specific payment protocol implementation might look like this (don't worry about details; the point is just that some might exist):

    When we describe this as a coprotocol, we omit most of its details, and we change some verbiage. The existence of the payee, gateway and blockchain roles is suppressed (though we now have an implicit new role -- the caller of the coprotocol that gives what the protocol gets, and gets what the protocol gives). Smart contracts disappear. The concept of handle to pending txn is mapped to the coprotocol's preauth construct, and txn hash is mapped to the coprotocol's confirmation_code. As a coprotocol, the payee can interact according to a far simpler understanding, where the caller asks the payee to engage in a payment protocol, expose some simple hooks, and notify on completion:

    "},{"location":"concepts/0478-coprotocols/#calling-convention","title":"Calling Convention","text":"

    More details are needed to understand exactly how the caller and the coprotocol communicate. There are two sources of such details:

    1. Proprietary methods
    2. Standard Aries-style DIDComm protocol

    Proprietary methods allow aggressive optimization. They may be appropriate when it's known that the caller and the coprotocol will share the same process space on a single device, and the code for both will come from a single codebase. In such cases, there is no need to use DIDComm to communicate.

    Answer 2 may be more chatty, but is better when the coprotocol might be invoked remotely (e.g., Acme's server A is in the middle of issuance and wants to invoke payment to run on server B), or where the codebases for each party to the interaction need some independence.

    The expectation is that coprotocols share a compatible trust domain; that is, coprotocol interactions occur within the scope of one identity rather than across identity boundaries. Thus, interoperability is not a strong requirement. Nonetheless, approaching this question as a standard protocol problem leads to a clean, loosely coupled architecture with little incremental cost in an agent. Therefore, a protocol for coprotocol coordination has been developed. This is the subject of sister document Aries RFC 0482: Coprotocol Protocol.

    "},{"location":"concepts/0478-coprotocols/#reference","title":"Reference","text":"

    More about optional fields and syntax in a coprotocol declaration.

    How to add a coprotocol decl to a protocol.

    "},{"location":"concepts/0478-coprotocols/#drawbacks","title":"Drawbacks","text":"

    Why should we not do this?

    "},{"location":"concepts/0478-coprotocols/#rationale-and-alternatives","title":"Rationale and alternatives","text":""},{"location":"concepts/0478-coprotocols/#prior-art","title":"Prior art","text":"

    Coroutines \u2014 the computer science scaffolding against which coprotocols are modeled \u2014 are extensively discussed in the literature of various compiler developer communities. The discussion about adding support for this feature in Rust is particularly good background reading: https://users.rust-lang.org/t/coroutines-and-rust/9058

    "},{"location":"concepts/0478-coprotocols/#unresolved-questions","title":"Unresolved questions","text":""},{"location":"concepts/0478-coprotocols/#implementations","title":"Implementations","text":"

    The following lists the implementations (if any) of this RFC. Please do a pull request to add your implementation. If the implementation is open source, include a link to the repo or to the implementation within the repo. Please be consistent in the \"Name\" field so that a mechanical processing of the RFCs can generate a list of all RFCs supported by an Aries implementation.

    Implementation Notes may need to include a link to test results.

    Name / Link Implementation Notes"},{"location":"concepts/0519-goal-codes/","title":"0519: Goal Codes","text":""},{"location":"concepts/0519-goal-codes/#summary","title":"Summary","text":"

    Explain how different parties in an SSI ecosystem can communicate about their intentions in a way that is understandable by humans and by automated software.

    "},{"location":"concepts/0519-goal-codes/#motivation","title":"Motivation","text":"

    Agents exist to achieve the intents of their owners. Those intents largely unfold through protocols. Sometimes intelligent action in these protocols depends on a party declaring their intent. We need a standard way to do that.

    "},{"location":"concepts/0519-goal-codes/#tutorial","title":"Tutorial","text":"

    Our early learnings in SSI focused on VC-based proving with a very loose, casual approach to context. We did demos where Alice connects with a potential employer, Acme Corp -- and we assumed that each of the interacting parties had a shared understanding of one another's needs and purposes.

    But in a mature SSI ecosystem, where unknown agents can contact one another for arbitrary reasons, this context is not always easy to deduce. Acme Corp's agent may support many different protocols, and Alice may interact with Acme in the capacity of customer or potential employee or vendor. Although we have feature discovery to learn what's possible, and we have machine-readable governance frameworks to tell us what rules might apply in a given context, we haven't had a way to establish the context in the first place. When Alice contacts Acme, a context is needed before a governance framework is selectable, and before we know which features are desirable.

    The key ingredient in context is intent. If Alice says to Acme, \"I'd like to connect,\" Acme wants to be able to trigger different behavior depending on whether Alice's intent is to be a customer, apply for a job, or audit Acme's taxes. This is the purpose of a goal code.

    "},{"location":"concepts/0519-goal-codes/#the-goal-code-datatype","title":"The goal code datatype","text":"

    To express intent, this RFC formally introduces the goal code datatype. When a field in a DIDComm message contains a goal code, its semantics and format match the description given here. (Goal codes are often declared via the ~thread decorator, but may also appear in ordinary message fields. See the Scope section below. Convention is to name this field \"goal_code\" where possible; however, this is only a convention, and individual protocols may adapt it as they see fit.)

    TODO: should we make a decorator out of this, so protocols don't have to declare it, and so any message can have a goal code? Or should we just let protocols declare a field in whatever message makes sense?

    Protocols use fields of this type as a way to express the intent of the message sender, thus coloring the larger context. In a sense, goal codes are to DIDComm what the subject: field is to email -- except that goal codes have formalized meanings to make them recognizable to automation.

    Goal codes use a standard format. They are lower-cased, kebab-punctuated strings. ASCII and English are recommended, as they are intended to be read by the software developer community, not by human beings; however, full UTF-8 is allowed. They support hierarchical dotted notation, where more general categories are to the left of a dot, and more specific categories are to the right. Example goal codes appear throughout this RFC (e.g., aries.sell.goods.consumer or cci.healthcare.procedure.schedule).
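The lexical convention above (lower-cased, kebab-punctuated segments joined by dots) can be checked mechanically. A minimal sketch follows; the function name and regex are illustrative, not part of the RFC, and cover only the recommended ASCII form.

```python
import re

# One dot-separated hierarchy of lower-cased, kebab-punctuated segments.
# The RFC also allows full UTF-8, so a production validator could be
# more permissive than this ASCII-only pattern.
GOAL_CODE_RE = re.compile(r"^[a-z0-9]+(-[a-z0-9]+)*(\.[a-z0-9]+(-[a-z0-9]+)*)*$")

def is_valid_goal_code(code: str) -> bool:
    """Check a goal code against the ASCII kebab-and-dot convention."""
    return GOAL_CODE_RE.fullmatch(code) is not None
```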

    Goals are inherently self-attested. Thus, goal codes don't represent objective fact that a recipient can rely upon in a strong sense; subsequent interactions can always yield surprises. Even so, goal codes let agents triage interactions and find misalignments early; there's no point in engaging if their goals are incompatible. This has significant benefits for spam prevention, among other things.

    "},{"location":"concepts/0519-goal-codes/#verbs","title":"Verbs","text":"

    Notice the verbs in the examples: sell, date, hire, and arrange. Goals typically involve action; a complete goal code should have one or more verbs in it somewhere. Turning verbs into nouns (e.g., employment.references instead of employment.check-references) is considered bad form. (Some namespaces may put the verbs at the end; some may put them in the middle. That's a purely stylistic choice.)

    "},{"location":"concepts/0519-goal-codes/#directionality","title":"Directionality","text":"

    Notice, too, that the verbs may imply directionality. A goal with the sell verb implies that the person announcing the goal is a would-be seller, not a buyer. We could imagine a more general verb like engage-in-commerce that would allow either behavior. However, that would often be a mistake. The value of goal codes is that they let agents align around intent; announcing that you want to engage in general commerce without clarifying whether you intend to sell or buy may be too vague to help the other party make decisions.

    It is conceivable that this would lead to parallel branches of a goal ontology that differ only in the direction of their verb. Thus, we could imagine sell.A and sell.B being shadowed by buy.A and buy.B. This might be necessary if a family of protocols allow either party to initiate an interaction and declare the goal, and if both parties view the goals as perfect mirror images. However, practical considerations may make this kind of parallelism unlikely. A random party contacting an individual to sell something may need to be quite clear about the type of selling they intend, to make it past a spam filter. In contrast, a random individual arriving at the digital storefront of a mega retailer may be quite vague about the type of buying they intend. Thus, the buy.* side of the namespace may need much less detail than the sell.* side.

    "},{"location":"concepts/0519-goal-codes/#goals-for-others","title":"Goals for others","text":"

    Related to directionality, it may occasionally be desirable to propose goals to others, rather than advocating your own: \"Let <parties = us = Alice, Bob, and Carol> <goal = hold an auction> -- I nominate Carol to be the <role = auctioneer> and get us started.\" The difference between a normal message and an unusual one like this is not visible in the goal code; it should be exposed in additional fields that associate the goal with a particular identifier+role pair. Essentially, you are proposing a goal to another party, and these extra fields clarify who should receive the proposal, and what role/perspective they might take with respect to the goal.

    Making proposals like this may be a feature in some protocols. Where it is, the protocols determine the message field names for the goal code, the role, and the DID associated with the role and goal.

    "},{"location":"concepts/0519-goal-codes/#matching","title":"Matching","text":"

    The goal code cci.healthcare is considered a more general form of the code cci.healthcare.procedure, which is more general than cci.healthcare.procedure.schedule. Because these codes are hierarchical, wildcards and fuzzy matching are possible for either a sender or a recipient of a message. Filename-style globbing semantics are used.

    A sender agent can specify that their owner's goal is just meetupcorp.personal without clarifying more; this is like specifying that a file is located under a folder named \"meetupcorp/personal\" without specifying where; any file \"under\" that folder -- or the folder itself -- would match the pattern. A recipient agent can have a policy that says, \"Reject any attempts to connect if the goal code of the other party is aries.sell.*.\" Notice how this differs from aries.sell*; the former looks for things \"inside\" aries.sell; the latter looks for things \"inside\" aries that have names beginning with sell.
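Since the RFC specifies filename-style globbing semantics, standard glob matching illustrates the distinction between the two patterns above. A sketch, assuming a hypothetical helper name:

```python
from fnmatch import fnmatchcase

def goal_matches(code: str, pattern: str) -> bool:
    """Filename-style glob matching of a goal code against a policy pattern."""
    return fnmatchcase(code, pattern)

# aries.sell.* matches only codes "inside" aries.sell ...
print(goal_matches("aries.sell.goods.consumer", "aries.sell.*"))   # True
# ... while aries.sell* also matches sibling codes that merely begin with "sell".
print(goal_matches("aries.selling-spree", "aries.sell*"))          # True
print(goal_matches("aries.selling-spree", "aries.sell.*"))         # False
```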

    "},{"location":"concepts/0519-goal-codes/#scope","title":"Scope","text":"

    When is a declared goal known to color interactions, and when is it undefined?

    We previously noted that goal codes are a bit like the subject: header on an email; they contextualize everything that follows in that thread. We don't generally want to declare a goal outside of a thread context, because that would prevent an agent from engaging in two goals at the same time.

    Given these two observations, we can say that a goal applies as soon as it is declared, and it continues to apply to all messages in the same thread. It is also inherited by implication through a thread's pthid field; that is, a parent thread's goal colors the child thread unless/until overridden.
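The inheritance rule just described -- a thread's own goal wins, else walk the pthid chain upward -- can be sketched as follows. The function and data shapes are hypothetical; real agents would read thid/pthid from the ~thread decorator of stored messages.

```python
def effective_goal(thread_id, goals, parents):
    """Resolve the goal that colors a thread: the thread's own declared goal,
    or the nearest ancestor's goal reached via the pthid chain.

    goals:   dict mapping thread id -> declared goal code (if any)
    parents: dict mapping thread id -> parent thread id (pthid), if any
    """
    tid = thread_id
    while tid is not None:
        if tid in goals:
            return goals[tid]       # declared here, or overridden here
        tid = parents.get(tid)      # otherwise inherit through pthid
    return None                     # no goal declared anywhere in the ancestry
```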

    "},{"location":"concepts/0519-goal-codes/#namespacing","title":"Namespacing","text":"

    To avoid collision and ambiguity in code values, we need to support namespacing in our goal codes. Since goals are only a coarse-grained alignment mechanism, however, we don't need perfect decentralized precision. Confusion isn't much more than an annoyance; the worst that could happen is that two agents discover one or two steps into a protocol that they're not as aligned as they supposed. They need to be prepared to tolerate that outcome in any case.

    Thus, we follow the same general approach that's used in Java's packaging system, where organizations and communities use a self-declared prefix for their ecosystem as the leftmost segment or segments of a family of identifiers (goal codes) they manage. Unlike Java, though, these need not be tied to DNS in any way. We recommend a single segment namespace that is a unique string, and that is an alias for a URI identifying the origin ecosystem. (In other words, you don't need to start with \"com.yourcorp.yourproduct\" -- \"yourcorp\" is probably fine.)

    The aries namespace alias is reserved for goal codes defined in Aries RFCs. The URI aliased by this name is TBD. See the Reference section for more details.

    "},{"location":"concepts/0519-goal-codes/#versioning","title":"Versioning","text":"

    Semver-style semantics don't map to goals in a simple way; it is not obvious what constitutes a \"major\" versus a \"minor\" difference in a goal, or a difference that's not worth tracking at all. The content of a goal \u2014 the only thing that might vary across versions \u2014 is simply its free-form description, and that varies according to human judgment. Many different versions of a protocol are likely to share the goal to make a payment or to introduce two strangers. A goal is likely to be far more stable than the details of how it is accomplished.

    Because of these considerations, goal codes do not impose an explicit versioning mechanism. However, one is reserved for use, in the unusual cases where it may be helpful. It is to append -v plus a numeric suffix: my-goal-code-v1, my-goal-code-v2, etc. Goal codes that vary only by this suffix should be understood as ordered-by-numeric-suffix evolutions of one another, and goal codes that do not intend to express versioning should not use this convention for something else. A variant of the goal code without any version suffix is equivalent to a variant with the -v1 suffix. This allows human intuition about the relatedness of different codes, and it allows useful wildcard matching across versions. It also treats all version-like changes to a goal as breaking (semver \"major\") changes, which is probably a safe default.
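The reserved suffix convention (a bare code is equivalent to its -v1 form) can be sketched with a small parser; the function name is illustrative, not part of the RFC.

```python
import re

# Matches a trailing "-v<digits>" version suffix on a goal code.
_VER_RE = re.compile(r"^(?P<base>.+?)-v(?P<num>[0-9]+)$")

def split_version(code: str):
    """Split a goal code into (base, version); no suffix is equivalent to -v1."""
    m = _VER_RE.match(code)
    if m:
        return m.group("base"), int(m.group("num"))
    return code, 1   # unversioned codes are treated as version 1

print(split_version("my-goal-code-v2"))  # ('my-goal-code', 2)
print(split_version("my-goal-code"))     # ('my-goal-code', 1)
```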

    Families of goal codes are free to use this convention if they need it, or to invent a non-conflicting one of their own. However, we repeat our observation that versioning in goal codes is often inappropriate and unnecessary.

    "},{"location":"concepts/0519-goal-codes/#declaring-goal-codes","title":"Declaring goal codes","text":""},{"location":"concepts/0519-goal-codes/#standalone-rfcs-or-similar-sources","title":"Standalone RFCs or Similar Sources","text":"

    Any URI-referencable document can declare families or ontologies of goal codes. In the context of Aries, we encourage standalone RFCs for this purpose if the goals seem likely to be relevant in many contexts. Other communities may of course document goal codes in their own specs -- either dedicated to goal codes, or as part of larger topics. The following block is a sample of how we recommend that such goal codes be declared. Note that each code is individually hyperlink-able, and each is associated with a brief human-friendly description in one or more languages. This description may be used in menuing mechanisms such as the one described in Action Menu Protocol.

    "},{"location":"concepts/0519-goal-codes/#goal-codes","title":"goal codes","text":""},{"location":"concepts/0519-goal-codes/#ariessell","title":"aries.sell","text":"

    en: Sell something. Assumes two parties (buyer/seller). es: Vender algo. Asume que dos partes participan (comprador/vendedor).

    "},{"location":"concepts/0519-goal-codes/#ariessellgoodsconsumer","title":"aries.sell.goods.consumer","text":"

    en: Sell tangible goods of interest to general consumers.

    "},{"location":"concepts/0519-goal-codes/#ariessellservicesconsumer","title":"aries.sell.services.consumer","text":"

    en: Sell services of interest to general consumers.

    "},{"location":"concepts/0519-goal-codes/#ariessellservicesenterprise","title":"aries.sell.services.enterprise","text":"

    en: Sell services of interest to enterprises.

    "},{"location":"concepts/0519-goal-codes/#in-didcomm-based-protocol-specs","title":"In DIDComm-based Protocol Specs","text":"

    Occasionally, goal codes may have meaning only within the context of a specific protocol. In such cases, it may be appropriate to declare the goal codes directly in a protocol spec. This can be done using a section of the RFC as described above.

    More commonly, however, a protocol will accomplish one or more goals (e.g., when the protocol is fulfilling a co-protocol interface), or will require a participant to identify a goal at one or more points in a protocol flow. In such cases, the goal codes are probably declared external to the protocol. If they can be enumerated, they should still be referenced (hyperlinked to their respective definitions) in the protocol RFC.

    "},{"location":"concepts/0519-goal-codes/#in-governance-frameworks","title":"In Governance Frameworks","text":"

    Goal codes can also be (re-)declared in a machine-readable governance framework.

    "},{"location":"concepts/0519-goal-codes/#reference","title":"Reference","text":""},{"location":"concepts/0519-goal-codes/#known-namespace-aliases","title":"Known Namespace Aliases","text":"

    No central registry of namespace aliases is maintained; you need not register with an authority to create a new one. Just pick an alias with good enough uniqueness, and socialize it within your community. For convenience of collision avoidance, however, we maintain a table of aliases that are typically used in global contexts, and welcome PRs from anyone who wants to update it.

    alias used by URI aries Hyperledger Aries Community TBD"},{"location":"concepts/0519-goal-codes/#well-known-goal-codes","title":"Well-known goal codes","text":"

    The following goal codes are defined here because they already have demonstrated utility, based on early SSI work in Aries and elsewhere.

    "},{"location":"concepts/0519-goal-codes/#ariesvc","title":"aries.vc","text":"

    Participate in some form of VC-based interaction.

    "},{"location":"concepts/0519-goal-codes/#ariesvcissue","title":"aries.vc.issue","text":"

    Issue a verifiable credential.

    "},{"location":"concepts/0519-goal-codes/#ariesvcverify","title":"aries.vc.verify","text":"

    Verify or validate VC-based assertions.

    "},{"location":"concepts/0519-goal-codes/#ariesvcrevoke","title":"aries.vc.revoke","text":"

    Revoke a VC.

    "},{"location":"concepts/0519-goal-codes/#ariesrel","title":"aries.rel","text":"

    Create, maintain, or end something that humans would consider a relationship. This may be accomplished by establishing, updating or deleting a DIDComm messaging connection that provides a secure communication channel for the relationship. The DIDComm connection itself is not the relationship, but would be used to carry out interactions between the parties to facilitate the relationship.

    "},{"location":"concepts/0519-goal-codes/#ariesrelbuild","title":"aries.rel.build","text":"

    Create a relationship. Carries the meaning implied today by a LinkedIn invitation to connect or a Facebook \"Friend\" request. Could be as limited as creating a DIDComm Connection.

    "},{"location":"concepts/0519-goal-codes/#ariesvcverifieronce","title":"aries.vc.verifier.once","text":"

    Create a DIDComm connection for the sole purpose of doing the one-time execution of a Present Proof protocol. Once the protocol execution is complete, both sides SHOULD delete the connection, as it will not be used again by either side.

    The purpose of the goal code flow is to accomplish the equivalent of a \"connection-less\" present proof by having the agents establish a DIDComm connection, execute the present proof protocol, and delete the connection. This goal code is needed when an actual connection-less present proof cannot be used because the out-of-band (OOB) message (including the presentation request) is too large for the transport being used--most often a QR code (although it may be useful for Bluetooth scenarios as well)--and a URL shortener option is not available. By using a one-time connection, the OOB message is small enough to fit easily into a QR code, the present proof protocol can be executed using the established connection, and at the end of the interaction, no connection remains for either side to use or manage.

    "},{"location":"concepts/0519-goal-codes/#implementations","title":"Implementations","text":"

    The following lists the implementations (if any) of this RFC. Please do a pull request to add your implementation. If the implementation is open source, include a link to the repo or to the implementation within the repo. Please be consistent in the \"Name\" field so that a mechanical processing of the RFCs can generate a list of all RFCs supported by an Aries implementation.

    Implementation Notes may need to include a link to test results.

    Name / Link Implementation Notes"},{"location":"concepts/0559-pppu/","title":"Aries RFC 0559: Privacy-Preserving Proof of Uniqueness","text":""},{"location":"concepts/0559-pppu/#summary","title":"Summary","text":"

    Documents two techniques that, while preserving holder privacy, can guarantee a single use of a verifiable credential by any given unique holder -- the so-called \"one person one vote\" outcome that's often desirable in VC use cases.

    "},{"location":"concepts/0559-pppu/#motivation","title":"Motivation","text":"

    Many actions need to be constrained such that a given actor (usually, a human being) can only perform the action once. In government and stockholder elections, we want each voter to cast a single vote. At national borders, we want a visa to allow entrance only a single time before a visitor leaves. In refugee camps, homeless shelters, and halfway houses, we want each guest to access food or medication a single time per distribution event.

    Solving this problem without privacy is relatively straightforward. We require credentials that disclose a person\u2019s identity, and we track the identities to make sure each is authorized once. This pattern can be used with physical credentials, or with their digital equivalent.

    The problem is that each actor\u2019s behavior is tracked with this method, because it requires the recording of identity. Instead of just enforcing one-person-one-vote, we create a history of every instance when person X voted, which voting station they attended, what time they cast their vote, and so forth. We create similar records about personal travel or personal medication usage. Such information can be abused to surveil, to harass, to intrude, or to spam.

    What we need is a way to prove that an action is associated with a unique actor, and thus enforce the one-actor-one-action constraint, without disclosing that actor\u2019s identity in a way that erodes privacy. Although we began with examples of privacy for humans, we also want a solution for groups or institutions wishing to remain anonymous, or for devices, software entities, or other internet-of-things actors that have a similar need.

    "},{"location":"concepts/0559-pppu/#tutorial","title":"Tutorial","text":""},{"location":"concepts/0559-pppu/#solution-1","title":"Solution 1","text":"

    This solution allows uniqueness to be imposed on provers during an arbitrary context chosen by the verifier, with no unusual setup at issuance time. For example, a verifier could decide to constrain a particular credential holder to proving something only once per hour, or once during a given contest or election. The price of this flexibility is that the credential holder must have a digital credential that already has important uniqueness guarantees (e.g., a driver's license, a passport, etc).

    In contrast, solution 2 imposes uniqueness at issuance time, but requires no other credential with special guarantees.

    "},{"location":"concepts/0559-pppu/#components","title":"Components","text":"

    The following components are required to solve this problem:

    "},{"location":"concepts/0559-pppu/#a","title":"A","text":"

    one issuance to identified holder \u2014 A trustworthy process that issues verifiable credentials exactly once to an identified holder. (This is not new. Governments have such processes today to prevent issuing two driver\u2019s licenses or two passports to the same person.)

    "},{"location":"concepts/0559-pppu/#b","title":"B","text":"

    one issuance to anonymous holder \u2014 A method of issuing a credential only once to an anonymous holder. (This is not new. Scanning a biometric from an anonymous party, and then checking it against a list of known patterns with no additional metadata, is one way to do this. There are other, more cryptographic methods, as discussed below.)

    "},{"location":"concepts/0559-pppu/#c","title":"C","text":"

    strong binding \u2014 A mechanism for strongly associating credentials with a specific credential holder, such that they are not usable by anyone other than the proper holder. (This is not new. Embedding a biometric such as a fingerprint or a photo in a credential is a simple example of such a mechanism.)

    "},{"location":"concepts/0559-pppu/#d","title":"D","text":"

    linking mechanism \u2014 A mechanism for proving that it is valid to combine information from multiple credentials because they describe the same credential holder, without revealing the common link between those credentials. (An easy and familiar way to prove combinability is to embed a common characteristic in each credential. For example, two credentials that are both about a person with the same social security number can be assumed to describe the same person. What is required here goes one step further--we need a way to prove that two credentials contain the same data, or were built from the same data, without revealing that data at all. This is also not new. Cryptographic commitments provide one possible answer.)
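The bind-then-compare idea behind component D can be illustrated with a simple hash-based commitment. This is only a sketch: it shows how a holder-known secret can yield linkable commitment values without revealing the secret, but it is not zero-knowledge and not hiding in the strong sense; real deployments use ZK-friendly commitments (e.g., Pedersen commitments, as discussed under Solution 2).

```python
import hashlib
import secrets

def commit(secret, nonce=None):
    """Commit to a holder-known secret. The commitment value does not reveal
    the secret, yet the holder can reproduce it in a second credential to
    show both derive from the same underlying data."""
    nonce = nonce if nonce is not None else secrets.token_bytes(32)
    digest = hashlib.sha256(nonce + secret).hexdigest()
    return digest, nonce

# The same secret, committed with the same nonce, yields the same value --
# embedding this value in two credentials links them to one holder.
secret = b"holder link secret"
c1, nonce = commit(secret)
c2, _ = commit(secret, nonce)
assert c1 == c2
```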

    "},{"location":"concepts/0559-pppu/#e","title":"E","text":"

    proving without revealing \u2014 A method for proving the correctness of information derived from a credential, without sharing the credential information itself. (This is not new. In cryptographic circles, one such technique is known as a zero-knowledge proof. It allows Alice to hold a credential that contains her birthdate, but to prove she is over 65 years old instead of revealing the birthdate itself.)

    "},{"location":"concepts/0559-pppu/#walkthru","title":"Walkthru","text":"

    We will describe how this solution uses components A-E to help a fictional voter, Alice, in an interaction with a fictional government, G. Alice wishes to remain anonymous but still cast her vote in an election; G wishes to guarantee one-citizen-one-vote, but to retain no additional information that would endanger Alice\u2019s privacy. Extrapolating from a voting scenario to other situations that require uniqueness is left as an exercise for the reader.

    The solution works like this:

    1. Alice receives a voter credential, C1, from G. C1 strongly identifies Alice, perhaps containing her name, address, birthdate, and so forth. It is possession of this credential that proves a right to vote. G issues only one such credential to each actor. (component A)

    2. C1 is bound to Alice so it can\u2019t be used by anyone else. (component C)

    3. C1 also contains data provided by Alice, and derived from a secret that only Alice knows, such that Alice can link C1 to other credentials with similarly derived data because she knows the secret. (component D)

      Steps 1-3: Alice receives a voter credential from G.

    4. Alice arrives to vote and asserts her privilege to a different government agency, G\u2019, that administers the election.

    5. G\u2019 chooses a random identifier, X, for the anonymous person (Alice) that wants to vote.

    6. G\u2019 asks this anonymous voter (Alice) to provide data suitable for embedding in a new credential, such that the new credential and her old credential can be proved combinable. (component D).

    7. G\u2019 verifies that it has not issued a credential to this anonymous person previously. (component B)

    8. G\u2019 issues a new credential, C2, to the anonymous voter. C2 contains the random identifier X, plus the data that Alice provided in step 6. (This means the party playing the role of Verifier temporarily becomes a JIT Issuer.)

      Steps 4-8: Anonymous (Alice) receives a unique credential from G\u2019.

    9. G\u2019 asks the anonymous voter to prove, without revealing any identifying information from C1 (component E) the following assertions:

      • They possess a C1 and C2 that are combinable (component D)
      • The C2 possessed by this anonymous voter contains the randomly-generated value X that was just chosen and embedded in the C2 issued by G\u2019. At this point X is revealed.

      Step 9: Alice proves C1 and C2 are combinable and C2 contains X.

    This solves the problem because:

    Both credentials are required. If a person only has C1, then there is no way to enforce single usage while remaining anonymous. If a person only has C2, then there is no reason to believe the unique person who shows up to vote actually deserves the voting privilege. It is the combination that proves uniqueness of the person in the voting event, plus privilege to cast a vote. Zero-knowledge proving is also required, or else the strongly identifying information in C1 may leak.

    As mentioned earlier, this same mechanism can be applied to scenarios besides voting, and can be used by actors other than individual human beings. G (the issuer of C1) and G\u2019 (the verifier of C1 and issuer of C2) do not need to be related entities, as long as G\u2019 trusts G. What is common to all applications of the technique is that uniqueness is proved in a context chosen by the verifier, privilege is based on previously issued and strongly identifying credentials, and yet the anonymity of the credential holder is preserved.

    "},{"location":"concepts/0559-pppu/#building-in-aries","title":"Building in Aries","text":"

    Ingredients to build this solution are available in Aries or other Hyperledger projects (Ursa, Indy) today:

    "},{"location":"concepts/0559-pppu/#solution-2","title":"Solution 2","text":"

    This is another solution that accomplishes approximately the same goal as solution 1. It is particularly helpful in voting. It has much in common with the earlier approach, but differs in that uniqueness must be planned for at time of issuance (instead of being imposed just in time at verification). The issuer blind-signs a serial number for each unique holder, and the holder then makes a Pedersen Commitment to their unique serial number while the voting is open. The holder cannot vote twice or change their vote. The voter\u2019s privacy is preserved.
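A Pedersen commitment has the hiding and binding properties this solution relies on. The following is a toy sketch over a tiny prime-order subgroup, purely to show the algebra C = g^m * h^r mod p; the parameters are insecure and illustrative (real systems use large elliptic-curve groups, and the discrete log of h with respect to g must be unknown).

```python
import secrets

# Toy group parameters (NOT secure): p = 2q + 1 with p, q prime;
# g and h generate the order-q subgroup of quadratic residues mod p.
p, q = 2579, 1289
g, h = 4, 9

def pedersen_commit(m, r=None):
    """C = g^m * h^r mod p: hiding (random r) and binding (discrete log)."""
    r = secrets.randbelow(q) if r is None else r
    return (pow(g, m, p) * pow(h, r, p)) % p, r

# The holder commits to serial number m while voting is open, and can later
# open the commitment by revealing (m, r).
C, r = pedersen_commit(42)
assert C == pedersen_commit(42, r)[0]
```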

    "},{"location":"concepts/0559-pppu/#walkthru_1","title":"Walkthru","text":"

    Suppose a poll is being conducted with p options m1, m2, m3, ..., mp, and each poll has a unique id I. Acme Corp is conducting the poll, and Alice is considered an eligible voter by Acme Corp because Alice has a credential C from Acme Corp.

    "},{"location":"concepts/0559-pppu/#goals","title":"Goals","text":"

    Additional condition: In some cases the entity conducting the poll -- Acme Corp, in this case -- may be accused of creating Sybil identities that vote to influence the poll. This can be mitigated if an additional constraint is enforced where only those are eligible to vote who can prove that their credential C was issued before the poll started (or at least some time t before the poll started), i.e. Alice should be able to prove to anyone that her credential C was issued before the poll started.

    "},{"location":"concepts/0559-pppu/#setup","title":"Setup","text":"

    Acme Corp hosts an application AS that maintains a Merkle tree, and the application follows some rules to update the tree. This application should be auditable, meaning that anyone should be able to check whether the application is updating the tree as per the rules and the incoming data. Thus this application could be hosted on a blockchain, or within a trusted execution environment like SGX. The application server also maintains a dynamic set in which set-membership checks are efficient. The Merkle tree is readable by all poll participants.

    Two different functions, F1 and F2, are defined. Both take two inputs and return one output, and neither is invertible; even knowing one input and the output should not reveal the other input. The outputs of the two functions on the same inputs must differ. Thus these can be modeled as different hash functions, such as SHA2 and SHA3, or as SHA2 with domain separation. But we want these functions to be R1CS-friendly, so we choose a hash function like MiMC with domain separation.
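The domain-separation idea can be shown with any hash function. The sketch below uses SHA-256 purely for illustration (the setup above prefers an R1CS-friendly hash like MiMC): a fixed tag prefixed to the input makes F1 and F2 behave as distinct, non-invertible two-input functions.

```python
import hashlib

# Two distinct two-input functions modeled as SHA-256 with domain separation:
# a fixed tag makes F1 and F2 differ on every input pair, while each remains
# non-invertible. (MiMC with domain separation would be used in practice for
# R1CS-friendliness; SHA-256 here only illustrates the construction.)
def F1(a, b):
    return hashlib.sha256(b"F1" + a + b).digest()

def F2(a, b):
    return hashlib.sha256(b"F2" + a + b).digest()

assert F1(b"x", b"y") != F2(b"x", b"y")
```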

    "},{"location":"concepts/0559-pppu/#basic-idea","title":"Basic idea","text":"

    Alice generates a serial number and gets a blind signature from Acme Corp over the serial number. Then Alice creates her vote and sends the \"encrypted\" vote with the serial number and signature to the application server. The application server accepts the vote if the signature is valid and it has not seen that serial number before. It then updates the Merkle tree with the \"encrypted\" vote and returns a signed proof of the update to Alice. When the poll terminates, Alice submits the decryption key to the application server, which can then decrypt the vote and compute the tally.
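The server-side acceptance rule (valid signature, unseen serial number) can be sketched as follows. The class and its members are hypothetical; signature verification is supplied externally because the real scheme is a blind signature, and the Merkle-tree update and signed proof are stubbed.

```python
class PollServer:
    """Minimal sketch of the acceptance rule: a vote is recorded only if its
    signature verifies and its serial number has not been seen before."""

    def __init__(self, verify_signature):
        self.verify_signature = verify_signature  # blind-signature check, supplied externally
        self.seen_serials = set()                 # efficient set-membership structure
        self.encrypted_votes = []                 # stand-in for the Merkle-tree updates

    def submit(self, serial, signature, encrypted_vote):
        if not self.verify_signature(serial, signature):
            return False  # reject: invalid signature
        if serial in self.seen_serials:
            return False  # reject: double-vote attempt with a reused serial
        self.seen_serials.add(serial)
        self.encrypted_votes.append((serial, encrypted_vote))
        return True       # full design: return a signed proof of the tree update
```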

    "},{"location":"concepts/0559-pppu/#detailed-description","title":"Detailed description","text":""},{"location":"concepts/0559-pppu/#rationale-and-alternatives","title":"Rationale and alternatives","text":""},{"location":"concepts/0559-pppu/#prior-art","title":"Prior art","text":""},{"location":"concepts/0559-pppu/#unresolved-questions","title":"Unresolved questions","text":""},{"location":"concepts/0559-pppu/#implementations","title":"Implementations","text":"

    The following lists the implementations (if any) of this RFC. Please do a pull request to add your implementation. If the implementation is open source, include a link to the repo or to the implementation within the repo. Please be consistent in the \"Name\" field so that a mechanical processing of the RFCs can generate a list of all RFCs supported by an Aries implementation.

    Implementation Notes may need to include a link to test results.

    Name / Link Implementation Notes"},{"location":"concepts/0566-issuer-hosted-custodidal-agents/","title":"0566: Issuer-Hosted Custodial Agents","text":"

    In the fully realized world of Self-Sovereign Identity, credential holders are equipped with capable agents to help them manage credentials and other SSI interactions. Before we arrive in that world, systems that facilitate the transition from the old model of centralized systems to the new decentralized models will be necessary and useful.

    One of the common points for a transition system is the issuance of credentials. Today's centralized systems contain information within an information silo. Issuing credentials requires the recipient to have an agent capable of receiving and managing the credential. Until the SSI transition is complete, some users will not have an agent of their own.

    Some users don't have the technology or the skills to use an agent, and there may be users who don't want to participate.

    In spite of the difficulties, there are huge advantages to transitioning to a decentralized system. Even when users don't understand the technology, they do care about the benefits it provides.

    This situation leaves the issuer with a choice: Maintain both a centralized system AND a decentralized SSI one, or enable their users to participate in the decentralized world.

    This paper addresses the second option: How to facilitate a transition to a decentralized world by providing issuer-hosted custodial agents.

    "},{"location":"concepts/0566-issuer-hosted-custodidal-agents/#issuer-hosted-custodial-agents","title":"Issuer-Hosted Custodial Agents","text":"

    A custodial agent is an agent hosted on behalf of someone else. This model is common in the cryptocurrency space. An Issuer-Hosted Custodial Agent is exactly what it sounds like: an agent hosted for the holder of a credential by the issuer of the credential.

    This custodial arrangement involves managing the credentials for the user, but also managing the keys for the user. Key management on behalf of another is often called guardianship.

    An alternative to hosting the agent directly is to pay for the hosting by a third party provider. This arrangement addresses some, but not all, of the issues in this paper.

    This custodial arrangement is only necessary for the users without their own agents. Users running their own agents (often a mobile app), will manage their own keys and their own credentials.

    For the users with their own agents, the decentralized world has taken full effect: they have their own data, and can participate fully in the SSI ecosystem.

    For the users with hosted custodial agents, they have only made a partial transition. The data is still hosted by the issuer. With appropriate limits, this storage model is no worse than a centralized system. Despite the data storage being the same, a hosted agent provides the ability to migrate to another agent if the user desires.

    Hosting agents for users might sound like a costly endeavor, but hosted agents have an advantage: most hosted agents will only be used by their owners for a small amount of time, likely similar to their interaction with the centralized system they replace. This means that the costs are substantially lower than hosting a full agent.

    "},{"location":"concepts/0566-issuer-hosted-custodidal-agents/#hosted-agent-interaction","title":"Hosted Agent Interaction","text":"

    Hosted agents have some particular challenges in providing effective user interaction. Detailed below are several options that can be used alone or in combination.

    "},{"location":"concepts/0566-issuer-hosted-custodidal-agents/#browser-based","title":"Browser Based","text":"

    Providing a browser based user interface for a user is a common solution when the user will have access to a computer. Authentication will likely use something familiar like a username and password.

    "},{"location":"concepts/0566-issuer-hosted-custodidal-agents/#authorizing-actions","title":"Authorizing Actions","text":"

    The user will often need a way to authorize actions that their agent will perform. A good option for this is via the use of a basic cell phone through SMS text messages or voice prompts. Less urgent actions can use an email sent to the user, prompting the user to login and authorize the actions.

    "},{"location":"concepts/0566-issuer-hosted-custodidal-agents/#offline-paper-based","title":"Offline / Paper based","text":"

    At times the user will have no available technology for their use. In this case, providing QR codes printed on paper with accompanying instructions will allow the user to facilitate verifier (and perhaps another issuer) access to their cloud agent. QR codes, such as those detailed in the Out Of Band Protocol, can contain both information for connecting to the agent AND an interaction to perform. Presenting the QR code for scanning can serve as a form of consent for the prescribed action within the QR code. Printed QR codes can be provided by the issuer at the time of custodial agent creation, or from within a web interface available to the user.

    "},{"location":"concepts/0566-issuer-hosted-custodidal-agents/#kiosk-based","title":"Kiosk based","text":"

    Kiosks can be useful to provide onsite interaction with a hosted agent. Kiosk authentication might take place via username and password, smartcard, or USB crypto key, with the possible inclusion of a biometric. Kiosks must be careful to fully remove any cached data when a session closes. Any biometric data used must be carefully managed between the kiosk and the hosted agent.

    "},{"location":"concepts/0566-issuer-hosted-custodidal-agents/#smartphone-app","title":"Smartphone App","text":"

    While it is common for a smartphone app to be an agent by itself, there are cases where a smartphone app can act as a remote for the hosted agent. In this interaction, keys, credentials, and other wallet related data are held in the custodial agent. The mobile app acts as a remote viewer and a way for the user to authorize actions taken by the custodial agent.

    "},{"location":"concepts/0566-issuer-hosted-custodidal-agents/#best-practices","title":"Best Practices","text":"

    The following best practices should be followed to ensure proper operation and continued transition to a fully realized SSI architecture. Most of these practices depend upon and support one another.

    "},{"location":"concepts/0566-issuer-hosted-custodidal-agents/#defend-the-ssi-architecture","title":"Defend the SSI architecture","text":"

    When issuers host custodial agents, care must be taken to avoid shortcuts that would violate SSI architecture. Deviations will frequently lead to incompatibilities.

    "},{"location":"concepts/0566-issuer-hosted-custodidal-agents/#didcomm-protocol-based-integration","title":"DIDComm Protocol based Integration","text":"

    Communication between hosted agents and the credential issuing agent must be based on published DIDComm protocols. Any communication which eliminates the use of a DID must be avoided. Whenever possible, these should be well adopted community protocols. If a new protocol is needed for a particular interaction, it must be fully documented and published, to allow other agents to become compatible by adopting the new protocol.

    "},{"location":"concepts/0566-issuer-hosted-custodidal-agents/#allow-bring-your-own-agents","title":"Allow bring-your-own agents","text":"

    The onboarding process must allow users to bring their own compatible agents. This will be possible as long as any communication is protocol based. No features available to hosted agents should be blocked from user provided agents.

    "},{"location":"concepts/0566-issuer-hosted-custodidal-agents/#limit-wallet-scope-to-data-originating-from-the-issuer","title":"Limit wallet scope to data originating from the issuer","text":"

    Issuer hosted agents should have limits placed on them to prevent general use. This will prevent the agent from accepting additional credentials and data outside the scope of the issuer, which would introduce responsibility for data that was never intended. This limitation must not limit the user in how they use the credentials issued, only in the acceptance of credentials and data from other issuers or parties. Policies and filters should be used to limit the types of credentials that can be held, which issuers should be allowed, and which protocols are enabled. None of these restrictions are necessary for bring-your-own agents provided by users.

    "},{"location":"concepts/0566-issuer-hosted-custodidal-agents/#allow-migrate-from-hosted-to-bring-your-own","title":"Allow migrate from hosted to bring-your-own","text":"

    Users must be allowed to transition from an issuer-hosted agent to an agent of their choosing. This can happen either via a backup in a standard format, or via re-issuing relevant credentials.

    "},{"location":"concepts/0566-issuer-hosted-custodidal-agents/#transparent-to-the-verifier","title":"Transparent to the verifier","text":"

    A verifier should not be able to tell the difference between a custodial hosted agent vs a bring-your-own agent.

    "},{"location":"concepts/0566-issuer-hosted-custodidal-agents/#action-log","title":"Action Log","text":"

    All actions taken by the wallet should be preserved in a log viewable to the user. This includes how actions were authorized, such as a named policy or confirmation via text message.

    "},{"location":"concepts/0566-issuer-hosted-custodidal-agents/#encrypted-wallets","title":"Encrypted Wallets","text":"

    Hosted wallet data should be encrypted at rest.

    "},{"location":"concepts/0566-issuer-hosted-custodidal-agents/#independant-key-management","title":"Independent key management","text":"

    Keys used for hosted agents should have key management isolated from the issuer keys. Access to the keys for hosted agents should be carefully limited to the minimum required personnel. All key access should be logged.

    "},{"location":"concepts/0566-issuer-hosted-custodidal-agents/#hosted-agent-isolation","title":"Hosted Agent Isolation","text":"

    Agents must be sufficiently isolated from each other to prevent a malicious user from accessing another user's agent or data or causing interruptions to the operation of another agent.

    "},{"location":"concepts/0566-issuer-hosted-custodidal-agents/#implementations","title":"Implementations","text":"

    The following lists the implementations (if any) of this RFC. Please do a pull request to add your implementation. If the implementation is open source, include a link to the repo or to the implementation within the repo. Please be consistent in the \"Name\" field so that a mechanical processing of the RFCs can generate a list of all RFCs supported by an Aries implementation.

    Implementation Notes may need to include a link to test results.

    Name / Link Implementation Notes"},{"location":"concepts/0700-oob-through-redirect/","title":"Aries RFC 0700: Out-of-Band through redirect","text":""},{"location":"concepts/0700-oob-through-redirect/#summary","title":"Summary","text":"

    Describes how one party can redirect to another party by passing an out-of-band message as a query string, and also recommends how to redirect back once the protocol is over.

    "},{"location":"concepts/0700-oob-through-redirect/#motivation","title":"Motivation","text":"

    In present-day e-commerce applications, users performing checkout are usually presented with various payment options, like direct payment options or payment gateways. The user then chooses an option, gets redirected to a payment application, and then gets redirected back once the transaction is over.

    Similarly, sending an out-of-band invitation through redirect plays an important role in web based applications, where an inviter who is aware of the invitee application or a selection service should be able to send an invitation through redirect. Once the invitee accepts the invitation and the protocol is over, the invitee should also be able to redirect back to a URL shared through a DIDComm message during protocol execution. The redirect can happen within the same device (e.g. clicking a link) or between devices (e.g. scanning a QR code).

    "},{"location":"concepts/0700-oob-through-redirect/#scenario","title":"Scenario","text":"

    The best example scenario is an issuer or verifier application trying to connect to a holder application to perform the present proof or issue credential protocol. A user who visits an issuer application can click on a link or scan a QR code to redirect to a holder application with an out-of-band message in the query string (or redirect to a selection service showing available holder applications to choose from). The user's holder application decodes the invitation from the query string, performs the issue credential protocol, and redirects the user back to the URL it received through a DIDComm message from the issuer during execution of the protocol.

    "},{"location":"concepts/0700-oob-through-redirect/#tutorial","title":"Tutorial","text":"

    There are 2 roles in this flow:

    "},{"location":"concepts/0700-oob-through-redirect/#redirect-invitation-url","title":"Redirect Invitation URL","text":"

    A redirect URL from the inviter can consist of the following elements:

    "},{"location":"concepts/0700-oob-through-redirect/#sample-1-redirect-invitation","title":"Sample 1: redirect invitation","text":"

    Invitation:

    {\n  \"@type\": \"https://didcomm.org/out-of-band/1.0/invitation\",\n  \"@id\": \"69212a3a-d068-4f9d-a2dd-4741bca89af3\",\n  \"label\": \"Faber College\",\n  \"goal_code\": \"issue-vc\",\n  \"goal\": \"To issue a Faber College Graduate credential\",\n  \"handshake_protocols\": [\"https://didcomm.org/didexchange/1.0\", \"https://didcomm.org/connections/1.0\"],\n  \"services\": [\"did:sov:LjgpST2rjsoxYegQDRm7EL\"]\n}\n

    Whitespace removed:

    {\"@type\":\"https://didcomm.org/out-of-band/1.0/invitation\",\"@id\":\"69212a3a-d068-4f9d-a2dd-4741bca89af3\",\"label\":\"Faber College\",\"goal_code\":\"issue-vc\",\"goal\":\"To issue a Faber College Graduate credential\",\"handshake_protocols\":[\"https://didcomm.org/didexchange/1.0\",\"https://didcomm.org/connections/1.0\"],\"services\":[\"did:sov:LjgpST2rjsoxYegQDRm7EL\"]}\n

    Base 64 URL Encoded:

    eyJAdHlwZSI6Imh0dHBzOi8vZGlkY29tbS5vcmcvb3V0LW9mLWJhbmQvMS4wL2ludml0YXRpb24iLCJAaWQiOiI2OTIxMmEzYS1kMDY4LTRmOWQtYTJkZC00NzQxYmNhODlhZjMiLCJsYWJlbCI6IkZhYmVyIENvbGxlZ2UiLCAiZ29hbF9jb2RlIjoiaXNzdWUtdmMiLCJnb2FsIjoiVG8gaXNzdWUgYSBGYWJlciBDb2xsZWdlIEdyYWR1YXRlIGNyZWRlbnRpYWwiLCJoYW5kc2hha2VfcHJvdG9jb2xzIjpbImh0dHBzOi8vZGlkY29tbS5vcmcvZGlkZXhjaGFuZ2UvMS4wIiwiaHR0cHM6Ly9kaWRjb21tLm9yZy9jb25uZWN0aW9ucy8xLjAiXSwic2VydmljZSI6WyJkaWQ6c292OkxqZ3BTVDJyanNveFllZ1FEUm03RUwiXX0\n

    Example URL: targeting recipient 'recipient.example.com'

    http://recipient.example.com/handle?oob=eyJAdHlwZSI6Imh0dHBzOi8vZGlkY29tbS5vcmcvb3V0LW9mLWJhbmQvMS4wL2ludml0YXRpb24iLCJAaWQiOiI2OTIxMmEzYS1kMDY4LTRmOWQtYTJkZC00NzQxYmNhODlhZjMiLCJsYWJlbCI6IkZhYmVyIENvbGxlZ2UiLCAiZ29hbF9jb2RlIjoiaXNzdWUtdmMiLCJnb2FsIjoiVG8gaXNzdWUgYSBGYWJlciBDb2xsZWdlIEdyYWR1YXRlIGNyZWRlbnRpYWwiLCJoYW5kc2hha2VfcHJvdG9jb2xzIjpbImh0dHBzOi8vZGlkY29tbS5vcmcvZGlkZXhjaGFuZ2UvMS4wIiwiaHR0cHM6Ly9kaWRjb21tLm9yZy9jb25uZWN0aW9ucy8xLjAiXSwic2VydmljZSI6WyJkaWQ6c292OkxqZ3BTVDJyanNveFllZ1FEUm03RUwiXX0\n
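The Sample 1 steps can be sketched as follows (illustrative, with the invitation trimmed to a few of the RFC's fields): serialize the invitation without whitespace, base64url-encode it, and append it as the `oob` query parameter; the recipient reverses the process.

```python
import base64
import json

# Trimmed invitation using values from the RFC's example.
invitation = {
    "@type": "https://didcomm.org/out-of-band/1.0/invitation",
    "@id": "69212a3a-d068-4f9d-a2dd-4741bca89af3",
    "label": "Faber College",
}

# Sender: compact JSON, base64url-encode, strip padding, build the URL.
compact = json.dumps(invitation, separators=(",", ":"))
encoded = base64.urlsafe_b64encode(compact.encode()).decode().rstrip("=")
redirect_url = "http://recipient.example.com/handle?oob=" + encoded

# Recipient: restore padding, decode, parse the invitation back out.
padded = encoded + "=" * (-len(encoded) % 4)
decoded = json.loads(base64.urlsafe_b64decode(padded))
```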

    Out-of-band invitation redirect URLs can be transferred via text message, email, SMS, posting on a website, or QR Code.

    Example URL encoded as a QR Code:

    "},{"location":"concepts/0700-oob-through-redirect/#sample-2-redirect-invitation-url","title":"Sample 2: redirect invitation URL","text":"

    Invitation URL from the requestor which resolves to an out-of-band invitation:

    https://requestor.example.com/ssi?id=5f0e3ffb-3f92-4648-9868-0d6f8889e6f3\n

    Base 64 URL Encoded:

    aHR0cHM6Ly9yZXF1ZXN0b3IuZXhhbXBsZS5jb20vc3NpP2lkPTVmMGUzZmZiLTNmOTItNDY0OC05ODY4LTBkNmY4ODg5ZTZmMw==\n

    Example URL: targeting recipient 'recipient.example.com'

    http://recipient.example.com/handle?oobid=aHR0cHM6Ly9yZXF1ZXN0b3IuZXhhbXBsZS5jb20vc3NpP2lkPTVmMGUzZmZiLTNmOTItNDY0OC05ODY4LTBkNmY4ODg5ZTZmMw==\n
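In the Sample 2 variant it is the invitation URL itself, not the invitation body, that gets base64url-encoded into the `oobid` parameter. A sketch:

```python
import base64

# The requestor's resolvable invitation URL from the RFC's example.
invitation_url = "https://requestor.example.com/ssi?id=5f0e3ffb-3f92-4648-9868-0d6f8889e6f3"

# Encode the URL (not the invitation JSON) and pass it as `oobid`.
oobid = base64.urlsafe_b64encode(invitation_url.encode()).decode()
redirect_url = "http://recipient.example.com/handle?oobid=" + oobid

# The recipient decodes `oobid`, then fetches the invitation from that URL.
resolved = base64.urlsafe_b64decode(oobid).decode()
```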

    Out-of-band invitation redirect URLs can be transferred via text message, email, SMS, posting on a website, or QR Code.

    Example URL encoded as a QR Code:

    "},{"location":"concepts/0700-oob-through-redirect/#web-redirect-decorator","title":"~web-redirect Decorator","text":"

    In some scenarios, the requestor requires the recipient to redirect back after protocol execution completes in order to proceed with further processing. For example, a verifier would request a holder application to redirect back once the present-proof protocol is over, so that it can show credential verification results to the user and navigate the user to further steps.

    The optional ~web-redirect decorator SHOULD be used in a DIDComm message sent by the requestor during protocol execution to send redirect information to the recipient if required.

    This decorator may not be needed in many cases where the requestor has control over the flow of the application based on protocol status. But it will be helpful in cases where an application has little or no control over the user's navigation; for example, in a web browser where the user is redirected from a verifier web application to their wallet application in the same window through some wallet selection wizard and third party logins. In this case, once the protocol execution is over, the verifier can send a URL to the wallet application requesting a redirect. This decorator is also useful for switching from a wallet mobile app to a verifier mobile app on a mobile device.

    \"~web-redirect\": {\n  \"status\": \"OK\",\n  \"url\": \"https://example.com/handle-success/51e63a5f-93e1-46ac-b269-66bb22591bfa\"\n}\n

    where,

    Some of the DIDComm messages which can use ~web-redirect details to send a redirect request:
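On the recipient side, handling the decorator amounts to checking for it and acting on a successful status. A minimal sketch (the function name is illustrative; the message shape follows the ~web-redirect example above):

```python
def extract_redirect(message: dict):
    """Return the redirect URL if the message carries a ~web-redirect
    decorator with an OK status, else None."""
    redirect = message.get("~web-redirect")
    if redirect and redirect.get("status") == "OK":
        return redirect.get("url")
    return None
```

A wallet could call this after the final protocol message and, if a URL comes back, navigate the user's browser there.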

    "},{"location":"concepts/0700-oob-through-redirect/#putting-all-together","title":"Putting all together","text":""},{"location":"concepts/0700-oob-through-redirect/#sending-invitation-to-recipient-through-redirect","title":"Sending Invitation to Recipient through redirect","text":""},{"location":"concepts/0700-oob-through-redirect/#sending-invitation-to-selection-service-through-redirect","title":"Sending Invitation to Selection Service through redirect","text":"

    This flow is similar to the previous flow, but the target domain and path of the invitation redirect URL will be a selection service that presents the user with various options to choose the recipient application of their choice. So in Step 3, the user is redirected to a selection service which guides the user to select the right recipient; for example, a scenario where the user is presented with various holder application providers to choose from while sharing or saving their verifiable credentials.

    "},{"location":"concepts/0700-oob-through-redirect/#reference","title":"Reference","text":""},{"location":"concepts/0700-oob-through-redirect/#rationale-and-alternatives","title":"Rationale and alternatives","text":"

    How different recipient applications register with a selection service, and establishing trust between the requestor, recipient, and selection service, are out of scope of this RFC.

    "},{"location":"concepts/0700-oob-through-redirect/#implementations","title":"Implementations","text":"

    The following lists the implementations (if any) of this RFC. Please do a pull request to add your implementation. If the implementation is open source, include a link to the repo or to the implementation within the repo. Please be consistent in the \"Name\" field so that a mechanical processing of the RFCs can generate a list of all RFCs supported by an Aries implementation.

    Name / Link Implementation Notes"},{"location":"concepts/0757-push-notification/","title":"0757: Push Notification","text":""},{"location":"concepts/0757-push-notification/#summary","title":"Summary","text":"

    This RFC describes the general concept of push notification as it applies to Aries Agents. There are a variety of push notification systems and methods, each of which is described in its own feature RFC.

    Note: These protocols operate only between a mobile app and its mediator(s). There is no requirement to use these protocols when mobile apps and mediator services are provided as a bundle. These protocols exist to facilitate cooperation between open source mediators and mobile apps not necessarily developed by the same parties.

    "},{"location":"concepts/0757-push-notification/#motivation","title":"Motivation","text":"

    Mobile agents typically require the use of Mediators to receive DIDComm Messages. When messages arrive at a mediator, it is optimal to send a push notification to the mobile device to signal that a message is waiting. This provides a good user experience and allows mobile agents to be responsive without sacrificing battery life by routinely checking for new messages.

    "},{"location":"concepts/0757-push-notification/#tutorial","title":"Tutorial","text":"

    Though push notification is common on mobile platforms, there are a variety of different systems with various requirements and mechanisms. Most of them follow a familiar pattern:

    "},{"location":"concepts/0757-push-notification/#setup-phase","title":"Setup Phase","text":"
    1. Notification Sender (mediator) registers with a push notification service. This typically involves some signup procedure.
    2. Notification Recipient (mobile app) registers with the push notification service. This typically involves some signup procedure. For some platforms, or for a mediator and mobile app by the same vendor, this will be accomplished in step 1.
    3. Notification Recipient (mobile app) adds code (with config values obtained in step 2) to connect to the push notification service.
    4. Notification Recipient (mobile app) communicates necessary information to the Notification Sender (mediator) for use in sending notifications.
    "},{"location":"concepts/0757-push-notification/#notification-phase","title":"Notification Phase","text":"
    1. A message arrives at the Notification Sender (mediator) destined for the Notification Recipient (mobile app).
    2. Notification Sender (mediator) calls an API associated with the push notification service with notification details, typically using the information obtained in step 4.
    3. Notification Recipient (mobile app) is notified (typically via a callback function) of the notification details.
    4. Notification Recipient (mobile app) then connects to the Notification Sender (mediator) and receives waiting messages.

    In spite of the flow similarities between the push notification platforms, the implementations, libraries used, and general code paths vary substantially. Each push notification method is described in its own protocol. This allows the protocol to fit the specific needs and terminology of the notification method it enables. Feature Discovery can be used between the Notification Sender and the Notification Recipient to discover push notification compatibility.
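The notification phase above can be sketched as a toy interaction (all class and field names hypothetical; real push services are vendor APIs, not in-process callbacks):

```python
class PushService:
    """Stand-in for a platform push service: routes notifications to apps."""

    def __init__(self):
        self.callbacks = {}                 # device token -> app callback

    def register(self, token, callback):
        self.callbacks[token] = callback    # step: recipient connects to service

    def notify(self, token, details):
        self.callbacks[token](details)      # step: service invokes app callback

class Mediator:
    """Stand-in Notification Sender: queues messages and pings the service."""

    def __init__(self, service):
        self.service = service
        self.queues = {}                    # token -> waiting DIDComm messages

    def deliver(self, token, message):
        self.queues.setdefault(token, []).append(message)
        # Notify with details only, never message content.
        self.service.notify(token, {"waiting": len(self.queues[token])})

    def fetch(self, token):
        return self.queues.pop(token, [])   # app retrieves waiting messages
```

Note the notification carries only a count, mirroring the pattern in which the app must connect back to the mediator to receive the actual messages.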

    "},{"location":"concepts/0757-push-notification/#public-mediators","title":"Public Mediators","text":"

    Some push notification methods require matching keys or secrets to be used in both sending and receiving notifications. This requirement makes these push notification methods unusable by public mediators.

    Public mediators SHOULD only implement push notification methods that do not require sharing secrets or keys with application implementations.

    "},{"location":"concepts/0757-push-notification/#push-notification-protcols","title":"Push Notification Protocols","text":"

    0699 - Push Notification APNS 1.0 (Apple Push Notification Service)

    "},{"location":"concepts/0757-push-notification/#implementations","title":"Implementations","text":"

    The following lists the implementations (if any) of this RFC. Please do a pull request to add your implementation. If the implementation is open source, include a link to the repo or to the implementation within the repo. Please be consistent in the \"Name\" field so that a mechanical processing of the RFCs can generate a list of all RFCs supported by an Aries implementation.

    Implementation Notes may need to include a link to test results.

    Name / Link Implementation Notes"},{"location":"concepts/0799-long-term-support/","title":"0799: Aries Long Term Support Releases","text":"

    Long Term Support Releases of Aries projects will assist those using the software to integrate within their development processes.

    "},{"location":"concepts/0799-long-term-support/#motivation","title":"Motivation","text":"

    Long Term Support releases allow stable use of projects without frequent code updates. Designating LTS releases frees projects to develop features without worry of disrupting those seeking feature stable deployments.

    "},{"location":"concepts/0799-long-term-support/#project-lts-releases","title":"Project LTS Releases","text":""},{"location":"concepts/0799-long-term-support/#lts-release-tagging","title":"LTS Release Tagging","text":""},{"location":"concepts/0799-long-term-support/#lts-support-timeline","title":"LTS Support Timeline","text":""},{"location":"concepts/0799-long-term-support/#lts-release-updates","title":"LTS Release Updates","text":""},{"location":"concepts/0799-long-term-support/#references","title":"References","text":"

    This policy is inspired by the Fabric LTS Policy https://hyperledger.github.io/fabric-rfcs/text/0005-lts-release-strategy.html

    "},{"location":"features/0015-acks/","title":"Aries RFC 0015: ACKs","text":""},{"location":"features/0015-acks/#summary","title":"Summary","text":"

    Explains how one party can send acknowledgment messages (ACKs) to confirm receipt and clarify the status of complex processes.

    "},{"location":"features/0015-acks/#change-log","title":"Change Log","text":""},{"location":"features/0015-acks/#motivation","title":"Motivation","text":"

    An acknowledgment or ACK is one of the most common procedures in protocols of all types. We need a flexible, powerful, and easy way to send such messages in agent-to-agent interactions.

    "},{"location":"features/0015-acks/#tutorial","title":"Tutorial","text":"

    Confirming a shared understanding matters whenever independent parties interact. We buy something on Amazon; moments later, our email client chimes to tell us of a new message with subject \"Thank you for your recent order.\" We verbally accept a new job, but don't rest easy until we've also emailed the signed offer letter back to our new boss. We change a password on an online account, and get a text at our recovery phone number so both parties know the change truly originated with the account's owner.

    When formal acknowledgments are missing, we get nervous. And rightfully so; most of us have a story of a package that was lost in the mail, or a web form that didn't submit the way we expected.

    Agents interact in very complex ways. They may use multiple transport mechanisms, across varied protocols, through long stretches of time. While we usually expect messages to arrive as sent, and to be processed as expected, a vital tool in the agent communication repertoire is the receipt of acknowledgments to confirm a shared understanding.

    "},{"location":"features/0015-acks/#implicit-acks","title":"Implicit ACKs","text":"

    Message threading includes a lightweight, automatic sort of ACK in the form of the ~thread.received_orders field. This allows Alice to report that she has received Bob's recent message that had ~thread.sender_order = N. We expect threading to be best practice in many use cases, and we expect interactions to often happen reliably enough and quickly enough that implicit ACKs provide high value. If you are considering ACKs but are not familiar with that mechanism, make sure you understand it, first. This RFC offers a supplement, not a replacement.

    "},{"location":"features/0015-acks/#explicit-acks","title":"Explicit ACKs","text":"

    Despite the goodness of implicit ACKs, there are many circumstances where a reply will not happen immediately. Explicit ACKs can be vital here.

    Explicit ACKs may also be vital at the end of an interaction, when work is finished: a credential has been issued, a proof has been received, a payment has been made. In such a flow, an implicit ACK meets the needs of the party who received the final message, but the other party may want explicit closure. Otherwise they can't know with confidence about the final outcome of the flow.

    Rather than inventing a new \"interaction has been completed successfully\" message for each protocol, an all-purpose ack message type is recommended. It looks like this:

    {\n  \"@type\": \"https://didcomm.org/notification/1.0/ack\",\n  \"@id\": \"06d474e0-20d3-4cbf-bea6-6ba7e1891240\",\n  \"status\": \"OK\",\n  \"~thread\": {\n    \"thid\": \"b271c889-a306-4737-81e6-6b2f2f8062ae\",\n    \"sender_order\": 4,\n    \"received_orders\": {\"did:sov:abcxyz\": 3}\n  }\n}\n

    It may also be appropriate to send an ack at other key points in an interaction (e.g., when a key rotation notice is received).

    "},{"location":"features/0015-acks/#adopting-acks","title":"Adopting acks","text":"

    As discussed in 0003: Protocols, a protocol can adopt the ack message into its own namespace. This allows the type of an ack to change from: https://didcomm.org/notification/1.0/ack to something like: https://didcomm.org/otherProtocol/2.0/ack. Thus, message routing logic can see the ack as part of the other protocol, and send it to the relevant handler--but still have all the standardization of generic acks.
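The namespace adoption described above is just a rewrite of the message's `@type` URI. A sketch (the helper name is illustrative; "otherProtocol" is the RFC's own placeholder):

```python
def adopt_ack_type(ack: dict, protocol: str, version: str) -> dict:
    """Return a copy of an ack retyped into the adopting protocol's namespace."""
    adopted = dict(ack)  # leave the original generic ack untouched
    adopted["@type"] = f"https://didcomm.org/{protocol}/{version}/ack"
    return adopted
```

Routing logic keyed on the protocol segment of `@type` will then dispatch the ack to that protocol's handler while the message body stays a standard ack.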

    "},{"location":"features/0015-acks/#ack-status","title":"ack status","text":"

    The status field in an ack tells whether the ack is final or not with respect to the message being acknowledged. It has 2 predefined values: OK (which means an outcome has occurred, and it was positive); and PENDING, which acknowledges that no outcome is yet known.

    There is not an ack status of FAIL. In the case of a protocol failure, a Report Problem message must be used to inform the other party(ies). For more details, see the next section.

    In addition, more advanced ack usage is possible. See the details in the Reference section.

    "},{"location":"features/0015-acks/#relationship-to-problem-report","title":"Relationship to problem-report","text":"

    Negative outcomes do not necessarily mean that something bad happened; perhaps Alice comes to hope that Bob rejects her offer to buy his house because she's found something better--and Bob does that, without any error occurring. This is not a FAIL in a problem sense; it's a FAIL in the sense that the offer to buy did not lead to the outcome Alice intended when she sent it.

    This raises the question of errors. Any time an unexpected problem arises, best practice is to report it to the sender of the message that triggered the problem. This is the subject of the problem reporting mechanism.

    A problem_report is inherently a sort of ACK. In fact, the ack message type and the problem_report message type are both members of the same notification message family. Both help a sender learn about status. Therefore, a requirement for an ack with a status of FAIL is satisfied by a problem_report message.

    However, there is some subtlety in the use of the two types of messages. Some acks may be sent before a final outcome, so a final problem_report may not be enough. As well, an ack request may be sent after a previous ack or problem_report was lost in transit. Because of these caveats, developers whose code creates or consumes acks should be thoughtful about where the two message types overlap, and where they do not. Carelessness here is likely to cause subtle, hard-to-duplicate surprises from time to time.

    "},{"location":"features/0015-acks/#custom-acks","title":"Custom ACKs","text":"

    This mechanism cannot address all possible ACK use cases. Some ACKs may require custom data to be sent, and some acknowledgment schemes may be more sophisticated or fine-grained than the simple settings offered here. In such cases, developers should write their own ACK message type(s) and maybe their own decorators. However, reusing the field names and conventions in this RFC may still be desirable, if there is significant overlap in the concepts.

    "},{"location":"features/0015-acks/#reference","title":"Reference","text":""},{"location":"features/0015-acks/#ack-message","title":"ack message","text":""},{"location":"features/0015-acks/#status","title":"status","text":"

    Required, values OK or PENDING. As discussed above, this tells whether the ack is final or not with respect to the message being acknowledged.

    "},{"location":"features/0015-acks/#threadthid","title":"~thread.thid","text":"

    Required. This links the ack back to the message that requested it.

    All other fields in an ack are present or absent per requirements of ordinary messages.
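    Putting the two required fields together, a minimal ack can be sketched as follows. This is an illustration, not a normative example: the @type URI follows the didcomm.org notification-family convention used elsewhere in these RFCs, and the @id value is made up.

```python
import json

# Minimal ack: "status" is required (OK or PENDING), and ~thread.thid
# is required to link the ack back to the message that requested it.
ack = {
    "@type": "https://didcomm.org/notification/1.0/ack",  # assumed type URI
    "@id": "06d474e0-20d3-4cbf-bea6-6ba7e1891240",        # illustrative id
    "status": "OK",
    "~thread": {"thid": "<@id of the message being acknowledged>"},
}

print(json.dumps(ack, indent=2))
```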

    "},{"location":"features/0015-acks/#drawbacks-and-alternatives","title":"Drawbacks and Alternatives","text":"

    None identified.

    "},{"location":"features/0015-acks/#prior-art","title":"Prior art","text":"

    See notes above about the implicit ACK mechanism in ~thread.received_orders.

    "},{"location":"features/0015-acks/#unresolved-questions","title":"Unresolved questions","text":""},{"location":"features/0015-acks/#implementations","title":"Implementations","text":"

    The following lists the implementations (if any) of this RFC. Please do a pull request to add your implementation. If the implementation is open source, include a link to the repo or to the implementation within the repo. Please be consistent in the \"Name\" field so that a mechanical processing of the RFCs can generate a list of all RFCs supported by an Aries implementation.

    Name / Link Implementation Notes RFC 0036: Issue Credential Protocol ACKs are adopted by this protocol. RFC 0037: Present Proof Protocol ACKs are adopted by this protocol. RFC 0193: Coin Flip Protocol ACKs are adopted as a subprotocol. Aries Cloud Agent - Python Contributed by the Government of British Columbia."},{"location":"features/0019-encryption-envelope/","title":"Aries RFC 0019: Encryption Envelope","text":""},{"location":"features/0019-encryption-envelope/#summary","title":"Summary","text":"

    There are two layers of messages that combine to enable interoperable self-sovereign agent-to-agent communication. At the highest level are DIDComm Plaintext Messages - messages sent between identities to accomplish some shared goal (e.g., establishing a connection, issuing a verifiable credential, sharing a chat). DIDComm Plaintext Messages are delivered via the second, lower layer of messaging - DIDComm Encrypted Envelopes. A DIDComm Encrypted Envelope is a wrapper (envelope) around a plaintext message to permit secure sending and routing. A plaintext message going from its sender to its receiver passes through many agents, and an encryption envelope is used for each hop of the journey.

    This RFC describes the DIDComm Encrypted Envelope format and the pack() and unpack() functions that implement this format.

    "},{"location":"features/0019-encryption-envelope/#motivation","title":"Motivation","text":"

    Encryption envelopes use a standard format built on JSON Web Encryption - RFC 7516. This format is not captive to Aries; it requires no special Aries worldview or Aries dependencies to implement. Rather, it is a general-purpose solution to the question of how to encrypt, decrypt, and route messages as they pass over any transport(s). By documenting the format here, we hope to provide a point of interoperability for developers of agents inside and outside the Aries ecosystem.

    We also document how Aries implements its support for the DIDComm Encrypted Envelope format through the pack() and unpack() functions. For developers of Aries, this is a sort of design doc; for those who want to implement the format in other tech stacks, it may be a useful reference.

    "},{"location":"features/0019-encryption-envelope/#tutorial","title":"Tutorial","text":""},{"location":"features/0019-encryption-envelope/#assumptions","title":"Assumptions","text":"

    We assume that each sending agent knows:

    The assumptions can be made because either the message is being sent to an agent within the sending agent's domain and so the sender knows the internal configuration of agents, or the message is being sent outside the sending agent's domain and interoperability requirements are in force to define the sending agent's behaviour.

    "},{"location":"features/0019-encryption-envelope/#example-scenario","title":"Example Scenario","text":"

    The example of Alice and Bob's sovereign domains is used for illustrative purposes in defining this RFC.

    In the diagram above:

    For the purposes of this discussion we are defining the Encryption Envelope agent message flow to be:

    1 \u2192 2 \u2192 8 \u2192 9 \u2192 3 \u2192 4

    However, that flow is just one of several that could match this configuration. What we know for sure is that:

    "},{"location":"features/0019-encryption-envelope/#encrypted-envelopes","title":"Encrypted Envelopes","text":"

    An encrypted envelope is used to transport any plaintext message from one agent directly to another. In our example message flow above, there are five encrypted envelopes sent, one for each hop in the flow. The process to send an encrypted envelope consists of the following steps:

    This is repeated with each hop, but the encrypted envelopes are nested, such that the plaintext is never visible until it reaches its final recipient.

    "},{"location":"features/0019-encryption-envelope/#implementation","title":"Implementation","text":"

    We will describe the pack and unpack algorithms, and their output, in terms of Aries' initial implementation, which may evolve over time. Other implementations could be built, but they would need to emit and consume similar inputs and outputs.

    The data structures emitted and consumed by these algorithms are described in a formal schema.

    "},{"location":"features/0019-encryption-envelope/#authcrypt-mode-vs-anoncrypt-mode","title":"Authcrypt mode vs. Anoncrypt mode","text":"

    When packing and unpacking are done in a way that the sender is anonymous, we say that we are in anoncrypt mode. When the sender is revealed, we are in authcrypt mode. Authcrypt mode reveals the sender to the recipient only; it is not the same as a non-repudiable signature. See the RFC about non-repudiable signatures, and this discussion about the theory of non-repudiation.

    "},{"location":"features/0019-encryption-envelope/#pack-message","title":"Pack Message","text":""},{"location":"features/0019-encryption-envelope/#pack_message-interface","title":"pack_message() interface","text":"

    packed_message = pack_message(wallet_handle, message, receiver_verkeys, sender_verkey)

    "},{"location":"features/0019-encryption-envelope/#pack_message-params","title":"pack_message() Params:","text":""},{"location":"features/0019-encryption-envelope/#pack_message-return-value-authcrypt-mode","title":"pack_message() return value (Authcrypt mode)","text":"

    This is an example of an output message, encrypted for two verkeys using Authcrypt.

    {\n    \"protected\": \"eyJlbmMiOiJ4Y2hhY2hhMjBwb2x5MTMwNV9pZXRmIiwidHlwIjoiSldNLzEuMCIsImFsZyI6IkF1dGhjcnlwdCIsInJlY2lwaWVudHMiOlt7ImVuY3J5cHRlZF9rZXkiOiJMNVhEaEgxNVBtX3ZIeFNlcmFZOGVPVEc2UmZjRTJOUTNFVGVWQy03RWlEWnl6cFJKZDhGVzBhNnFlNEpmdUF6IiwiaGVhZGVyIjp7ImtpZCI6IkdKMVN6b1d6YXZRWWZOTDlYa2FKZHJRZWpmenRONFhxZHNpVjRjdDNMWEtMIiwiaXYiOiJhOEltaW5zdFhIaTU0X0otSmU1SVdsT2NOZ1N3RDlUQiIsInNlbmRlciI6ImZ0aW13aWlZUkc3clJRYlhnSjEzQzVhVEVRSXJzV0RJX2JzeERxaVdiVGxWU0tQbXc2NDE4dnozSG1NbGVsTThBdVNpS2xhTENtUkRJNHNERlNnWkljQVZYbzEzNFY4bzhsRm9WMUJkREk3ZmRLT1p6ckticUNpeEtKaz0ifX0seyJlbmNyeXB0ZWRfa2V5IjoiZUFNaUQ2R0RtT3R6UkVoSS1UVjA1X1JoaXBweThqd09BdTVELTJJZFZPSmdJOC1ON1FOU3VsWXlDb1dpRTE2WSIsImhlYWRlciI6eyJraWQiOiJIS1RBaVlNOGNFMmtLQzlLYU5NWkxZajRHUzh1V0NZTUJ4UDJpMVk5Mnp1bSIsIml2IjoiRDR0TnRIZDJyczY1RUdfQTRHQi1vMC05QmdMeERNZkgiLCJzZW5kZXIiOiJzSjdwaXU0VUR1TF9vMnBYYi1KX0pBcHhzYUZyeGlUbWdwWmpsdFdqWUZUVWlyNGI4TVdtRGR0enAwT25UZUhMSzltRnJoSDRHVkExd1Z0bm9rVUtvZ0NkTldIc2NhclFzY1FDUlBaREtyVzZib2Z0d0g4X0VZR1RMMFE9In19XX0=\",\n    \"iv\": \"ZqOrBZiA-RdFMhy2\",\n    \"ciphertext\": \"K7KxkeYGtQpbi-gNuLObS8w724mIDP7IyGV_aN5AscnGumFd-SvBhW2WRIcOyHQmYa-wJX0MSGOJgc8FYw5UOQgtPAIMbSwVgq-8rF2hIniZMgdQBKxT_jGZS06kSHDy9UEYcDOswtoLgLp8YPU7HmScKHSpwYY3vPZQzgSS_n7Oa3o_jYiRKZF0Gemamue0e2iJ9xQIOPodsxLXxkPrvvdEIM0fJFrpbeuiKpMk\",\n    \"tag\": \"kAuPl8mwb0FFVyip1omEhQ==\"\n}\n

    The base64URL encoded protected decodes to this:

    {\n    \"enc\": \"xchacha20poly1305_ietf\",\n    \"typ\": \"JWM/1.0\",\n    \"alg\": \"Authcrypt\",\n    \"recipients\": [\n        {\n            \"encrypted_key\": \"L5XDhH15Pm_vHxSeraY8eOTG6RfcE2NQ3ETeVC-7EiDZyzpRJd8FW0a6qe4JfuAz\",\n            \"header\": {\n                \"kid\": \"GJ1SzoWzavQYfNL9XkaJdrQejfztN4XqdsiV4ct3LXKL\",\n                \"iv\": \"a8IminstXHi54_J-Je5IWlOcNgSwD9TB\",\n                \"sender\": \"ftimwiiYRG7rRQbXgJ13C5aTEQIrsWDI_bsxDqiWbTlVSKPmw6418vz3HmMlelM8AuSiKlaLCmRDI4sDFSgZIcAVXo134V8o8lFoV1BdDI7fdKOZzrKbqCixKJk=\"\n            }\n        },\n        {\n            \"encrypted_key\": \"eAMiD6GDmOtzREhI-TV05_Rhippy8jwOAu5D-2IdVOJgI8-N7QNSulYyCoWiE16Y\",\n            \"header\": {\n                \"kid\": \"HKTAiYM8cE2kKC9KaNMZLYj4GS8uWCYMBxP2i1Y92zum\",\n                \"iv\": \"D4tNtHd2rs65EG_A4GB-o0-9BgLxDMfH\",\n                \"sender\": \"sJ7piu4UDuL_o2pXb-J_JApxsaFrxiTmgpZjltWjYFTUir4b8MWmDdtzp0OnTeHLK9mFrhH4GVA1wVtnokUKogCdNWHscarQscQCRPZDKrW6boftwH8_EYGTL0Q=\"\n            }\n        }\n    ]\n}\n

    "},{"location":"features/0019-encryption-envelope/#pack-output-format-authcrypt-mode","title":"pack output format (Authcrypt mode)","text":"
        {\n        \"protected\": \"b64URLencoded({\n            \"enc\": \"xchachapoly1305_ietf\",\n            \"typ\": \"JWM/1.0\",\n            \"alg\": \"Authcrypt\",\n            \"recipients\": [\n                {\n                    \"encrypted_key\": base64URLencode(libsodium.crypto_box(my_key, their_vk, cek, cek_iv))\n                    \"header\": {\n                          \"kid\": \"base58encode(recipient_verkey)\",\n                           \"sender\" : base64URLencode(libsodium.crypto_box_seal(their_vk, base58encode(sender_vk)),\n                            \"iv\" : base64URLencode(cek_iv)\n                }\n            },\n            ],\n        })\",\n        \"iv\": <b64URLencode(iv)>,\n        \"ciphertext\": b64URLencode(encrypt_detached({'@type'...}, protected_value_encoded, iv, cek),\n        \"tag\": <b64URLencode(tag)>\n    }\n
    "},{"location":"features/0019-encryption-envelope/#authcrypt-pack-algorithm","title":"Authcrypt pack algorithm","text":"
    1. generate a content encryption key (symmetric encryption key)
    2. encrypt the CEK for each recipient's public key using Authcrypt (steps below)
      1. set encrypted_key value to base64URLencode(libsodium.crypto_box(my_key, their_vk, cek, cek_iv))
        • Note in this step we're encrypting the cek, so it can be decrypted by the recipient
      2. set sender value to base64URLencode(libsodium.crypto_box_seal(their_vk, sender_vk_string))
        • Note in this step we're encrypting the sender_verkey to protect sender anonymity
      3. base64URLencode(cek_iv) and set to iv value in the header
        • Note the cek_iv in the header is used for the encrypted_key, whereas iv is for the ciphertext
    3. base64URLencode the protected value
    4. encrypt the message using libsodium.crypto_aead_chacha20poly1305_ietf_encrypt_detached(message, protected_value_encoded, iv, cek); this is the ciphertext.
    5. base64URLencode the iv, ciphertext, and tag then serialize the format into the output format listed above.

    For a reference implementation, see https://github.com/hyperledger/indy-sdk/blob/master/libindy/src/commands/crypto.rs
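    As an illustrative sketch of steps 2–3 above, the protected header can be assembled and base64URL encoded with stdlib tools alone. The fixed byte strings below are stand-ins for the real libsodium outputs (crypto_box for encrypted_key, crypto_box_seal for sender); this is not a working encryptor, only the envelope-assembly portion under those assumptions.

```python
import base64
import json

def b64url(data: bytes) -> str:
    """Unpadded base64URL, as used throughout the envelope."""
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode("ascii")

# Placeholder bytes standing in for the real libsodium outputs:
#   encrypted_key = crypto_box(my_key, their_vk, cek, cek_iv)
#   sealed_sender = crypto_box_seal(their_vk, base58encode(sender_vk))
encrypted_key = b"\x00" * 48   # stand-in: CEK encrypted for this recipient
sealed_sender = b"\x01" * 80   # stand-in: sealed sender verkey
cek_iv = b"\x02" * 24          # stand-in: nonce used when encrypting the CEK

protected = {
    "enc": "xchacha20poly1305_ietf",
    "typ": "JWM/1.0",
    "alg": "Authcrypt",
    "recipients": [{
        "encrypted_key": b64url(encrypted_key),
        "header": {
            # base58-encoded recipient verkey (value taken from the example above)
            "kid": "GJ1SzoWzavQYfNL9XkaJdrQejfztN4XqdsiV4ct3LXKL",
            "sender": b64url(sealed_sender),
            "iv": b64url(cek_iv),
        },
    }],
}

# Step 3: base64URLencode the protected value; it then serves as the
# additional authenticated data for the detached AEAD encryption.
protected_value_encoded = b64url(json.dumps(protected).encode("utf-8"))
print(protected_value_encoded[:16])
```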

    "},{"location":"features/0019-encryption-envelope/#pack_message-return-value-anoncrypt-mode","title":"pack_message() return value (Anoncrypt mode)","text":"

    This is an example of an output message, encrypted for two verkeys using Anoncrypt.

    {\n    \"protected\": \"eyJlbmMiOiJ4Y2hhY2hhMjBwb2x5MTMwNV9pZXRmIiwidHlwIjoiSldNLzEuMCIsImFsZyI6IkFub25jcnlwdCIsInJlY2lwaWVudHMiOlt7ImVuY3J5cHRlZF9rZXkiOiJYQ044VjU3UTF0Z2F1TFcxemdqMVdRWlEwV0RWMFF3eUVaRk5Od0Y2RG1pSTQ5Q0s1czU4ZHNWMGRfTlpLLVNNTnFlMGlGWGdYRnZIcG9jOGt1VmlTTV9LNWxycGJNU3RqN0NSUHNrdmJTOD0iLCJoZWFkZXIiOnsia2lkIjoiR0oxU3pvV3phdlFZZk5MOVhrYUpkclFlamZ6dE40WHFkc2lWNGN0M0xYS0wifX0seyJlbmNyeXB0ZWRfa2V5IjoiaG5PZUwwWTl4T3ZjeTVvRmd0ZDFSVm05ZDczLTB1R1dOSkN0RzRsS3N3dlljV3pTbkRsaGJidmppSFVDWDVtTU5ZdWxpbGdDTUZRdmt2clJEbkpJM0U2WmpPMXFSWnVDUXY0eVQtdzZvaUE9IiwiaGVhZGVyIjp7ImtpZCI6IjJHWG11Q04ySkN4U3FNUlZmdEJITHhWSktTTDViWHl6TThEc1B6R3FRb05qIn19XX0=\",\n    \"iv\": \"M1GneQLepxfDbios\",\n    \"ciphertext\": \"iOLSKIxqn_kCZ7Xo7iKQ9rjM4DYqWIM16_vUeb1XDsmFTKjmvjR0u2mWFA48ovX5yVtUd9YKx86rDVDLs1xgz91Q4VLt9dHMOfzqv5DwmAFbbc9Q5wHhFwBvutUx5-lDZJFzoMQHlSAGFSBrvuApDXXt8fs96IJv3PsL145Qt27WLu05nxhkzUZz8lXfERHwAC8FYAjfvN8Fy2UwXTVdHqAOyI5fdKqfvykGs6fV\",\n    \"tag\": \"gL-lfmD-MnNj9Pr6TfzgLA==\"\n}\n

    The protected data decodes to this:

    {\n    \"enc\": \"xchacha20poly1305_ietf\",\n    \"typ\": \"JWM/1.0\",\n    \"alg\": \"Anoncrypt\",\n    \"recipients\": [\n        {\n            \"encrypted_key\": \"XCN8V57Q1tgauLW1zgj1WQZQ0WDV0QwyEZFNNwF6DmiI49CK5s58dsV0d_NZK-SMNqe0iFXgXFvHpoc8kuViSM_K5lrpbMStj7CRPskvbS8=\",\n            \"header\": {\n                \"kid\": \"GJ1SzoWzavQYfNL9XkaJdrQejfztN4XqdsiV4ct3LXKL\"\n            }\n        },\n        {\n            \"encrypted_key\": \"hnOeL0Y9xOvcy5oFgtd1RVm9d73-0uGWNJCtG4lKswvYcWzSnDlhbbvjiHUCX5mMNYulilgCMFQvkvrRDnJI3E6ZjO1qRZuCQv4yT-w6oiA=\",\n            \"header\": {\n                \"kid\": \"2GXmuCN2JCxSqMRVftBHLxVJKSL5bXyzM8DsPzGqQoNj\"\n            }\n        }\n    ]\n}\n
    "},{"location":"features/0019-encryption-envelope/#pack-output-format-anoncrypt-mode","title":"pack output format (Anoncrypt mode)","text":"
        {\n         \"protected\": \"b64URLencoded({\n            \"enc\": \"xchachapoly1305_ietf\",\n            \"typ\": \"JWM/1.0\",\n            \"alg\": \"Anoncrypt\",\n            \"recipients\": [\n                {\n                    \"encrypted_key\": base64URLencode(libsodium.crypto_box_seal(their_vk, cek)),\n                    \"header\": {\n                        \"kid\": base58encode(recipient_verkey),\n                    }\n                },\n            ],\n         })\",\n         \"iv\": b64URLencode(iv),\n         \"ciphertext\": b64URLencode(encrypt_detached({'@type'...}, protected_value_encoded, iv, cek),\n         \"tag\": b64URLencode(tag)\n    }\n
    "},{"location":"features/0019-encryption-envelope/#anoncrypt-pack-algorithm","title":"Anoncrypt pack algorithm","text":"
    1. generate a content encryption key (symmetric encryption key)
    2. encrypt the CEK for each recipient's public key using Anoncrypt (steps below)
      1. set encrypted_key value to base64URLencode(libsodium.crypto_box_seal(their_vk, cek))
        • Note in this step we're encrypting the cek, so it can be decrypted by the recipient
    3. base64URLencode the protected value
    4. encrypt the message using libsodium.crypto_aead_chacha20poly1305_ietf_encrypt_detached(message, protected_value_encoded, iv, cek); this is the ciphertext.
    5. base64URLencode the iv, ciphertext, and tag then serialize the format into the output format listed above.

    For a reference implementation, see https://github.com/hyperledger/indy-sdk/blob/master/libindy/src/commands/crypto.rs

    "},{"location":"features/0019-encryption-envelope/#unpack-message","title":"Unpack Message","text":""},{"location":"features/0019-encryption-envelope/#unpack_message-interface","title":"unpack_message() interface","text":"

    unpacked_message = unpack_message(wallet_handle, jwe)

    "},{"location":"features/0019-encryption-envelope/#unpack_message-params","title":"unpack_message() Params","text":""},{"location":"features/0019-encryption-envelope/#unpack-algorithm","title":"Unpack Algorithm","text":"
    1. deserialize the JWE data so it can be used
      • For example, in rust-lang this has to be deserialized as a struct.
    2. Lookup the kid for each recipient in the wallet to see if the wallet possesses a private key associated with the public key listed
    3. Check if a sender field is used.
      • If a sender is included use auth_decrypt to decrypt the encrypted_key by doing the following:
        1. decrypt sender verkey using libsodium.crypto_box_seal_open(my_private_key, base64URLdecode(sender))
        2. decrypt cek using libsodium.crypto_box_open(my_private_key, sender_verkey, encrypted_key, cek_iv)
        3. decrypt ciphertext using libsodium.crypto_aead_chacha20poly1305_ietf_open_detached(base64URLdecode(ciphertext_bytes), base64URLdecode(protected_data_as_bytes), base64URLdecode(nonce), cek)
        4. return message, recipient_verkey and sender_verkey following the authcrypt format listed below
      • If a sender is NOT included use anon_decrypt to decrypt the encrypted_key by doing the following:
        1. decrypt encrypted_key using libsodium.crypto_box_seal_open(my_private_key, encrypted_key)
        2. decrypt ciphertext using libsodium.crypto_aead_chacha20poly1305_ietf_open_detached(base64URLdecode(ciphertext_bytes), base64URLdecode(protected_data_as_bytes), base64URLdecode(nonce), cek)
        3. return message and recipient_verkey following the anoncrypt format listed below
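    The branch between the two paths above hinges on whether the matched recipient header carries a sealed sender field. The following stdlib-only sketch shows that dispatch; select_mode and my_kids are hypothetical names, the wallet lookup is assumed to have already produced my_kids, and the actual decryption calls are omitted.

```python
import base64
import json

def b64url_decode(data: str) -> bytes:
    # Tolerate both padded and unpadded base64URL input (see NOTE below).
    return base64.urlsafe_b64decode(data + "=" * (-len(data) % 4))

def select_mode(jwe: dict, my_kids: set):
    """Locate our recipient entry and pick authcrypt vs anoncrypt.

    my_kids: base58 verkeys for which the wallet holds private keys.
    """
    protected = json.loads(b64url_decode(jwe["protected"]))
    for recipient in protected["recipients"]:
        if recipient["header"]["kid"] in my_kids:
            # A sealed "sender" field in the header marks authcrypt;
            # its absence means anoncrypt.
            if "sender" in recipient["header"]:
                return "authcrypt", recipient
            return "anoncrypt", recipient
    raise ValueError("no recipient entry matches a key in this wallet")

# Tiny fabricated envelope (crypto fields elided) to exercise the dispatch;
# the kid value is taken from the Anoncrypt example above.
header = {"enc": "xchacha20poly1305_ietf", "typ": "JWM/1.0",
          "alg": "Anoncrypt",
          "recipients": [{"encrypted_key": "<elided>",
                          "header": {"kid": "2GXmuCN2JCxSqMRVftBHLxVJKSL5bXyzM8DsPzGqQoNj"}}]}
jwe = {"protected": base64.urlsafe_b64encode(
    json.dumps(header).encode()).decode().rstrip("=")}
mode, _ = select_mode(jwe, {"2GXmuCN2JCxSqMRVftBHLxVJKSL5bXyzM8DsPzGqQoNj"})
print(mode)  # anoncrypt
```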

    NOTE: In the unpack algorithm, the base64url decode implementation used MUST correctly decode padded and unpadded base64URL encoded data.
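    For example, in Python a decoder can meet this requirement by re-padding before delegating to a strict base64 implementation (an illustrative helper, not part of any Aries API):

```python
import base64

def b64url_decode(data: str) -> bytes:
    """Decode base64URL input whether or not '=' padding is present."""
    # Re-pad to a multiple of 4 characters; a no-op for correctly
    # padded input.
    return base64.urlsafe_b64decode(data + "=" * (-len(data) % 4))

# Both forms decode to the same bytes.
assert b64url_decode("aGVsbG8") == b"hello"   # unpadded
assert b64url_decode("aGVsbG8=") == b"hello"  # padded
```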

    For a reference unpack implementation, see https://github.com/hyperledger/indy-sdk/blob/master/libindy/src/commands/crypto.rs

    "},{"location":"features/0019-encryption-envelope/#unpack_message-return-values-authcrypt-mode","title":"unpack_message() return values (authcrypt mode)","text":"
    {\n    \"message\": \"{ \\\"@id\\\": \\\"123456780\\\",\\\"@type\\\":\\\"https://didcomm.org/basicmessage/1.0/message\\\",\\\"sent_time\\\": \\\"2019-01-15 18:42:01Z\\\",\\\"content\\\": \\\"Your hovercraft is full of eels.\\\"}\",\n    \"recipient_verkey\": \"HKTAiYM8cE2kKC9KaNMZLYj4GS8uWCYMBxP2i1Y92zum\",\n    \"sender_verkey\": \"DWwLsbKCRAbYtfYnQNmzfKV7ofVhMBi6T4o3d2SCxVuX\"\n}\n
    "},{"location":"features/0019-encryption-envelope/#unpack_message-return-values-anoncrypt-mode","title":"unpack_message() return values (anoncrypt mode)","text":"
    {\n    \"message\": \"{ \\\"@id\\\": \\\"123456780\\\",\\\"@type\\\":\\\"https://didcomm.org/basicmessage/1.0/message\\\",\\\"sent_time\\\": \\\"2019-01-15 18:42:01Z\\\",\\\"content\\\": \\\"Your hovercraft is full of eels.\\\"}\",\n    \"recipient_verkey\": \"2GXmuCN2JCxSqMRVftBHLxVJKSL5bXyzM8DsPzGqQoNj\"\n}\n
    "},{"location":"features/0019-encryption-envelope/#additional-notes","title":"Additional Notes","text":""},{"location":"features/0019-encryption-envelope/#drawbacks","title":"Drawbacks","text":"

    The current implementation of pack() is Hyperledger Aries specific. It is based on common crypto libraries (NaCl), but the wrappers are not commonly used outside of Aries. Work is currently underway to find alignment on a cross-ecosystem interoperable protocol, but this hasn't been achieved yet. That work will hopefully bridge this gap.

    "},{"location":"features/0019-encryption-envelope/#rationale-and-alternatives","title":"Rationale and alternatives","text":"

    As the JWE standard currently stands, it does not follow this format. We're actively working with the lead writer of the JWE spec to find alignment and are hopeful the changes needed can be added.

    We've also looked at using the Messaging Layer Security (MLS) specification, which shows promise for adoption later on as it matures. Additionally, because MLS does not hide metadata related to the sender (sender anonymity), we would need to see some changes made to the specification before we could adopt it.

    "},{"location":"features/0019-encryption-envelope/#prior-art","title":"Prior art","text":"

    The JWE family of encryption methods.

    "},{"location":"features/0019-encryption-envelope/#unresolved-questions","title":"Unresolved questions","text":""},{"location":"features/0019-encryption-envelope/#implementations","title":"Implementations","text":"

    The following lists the implementations (if any) of this RFC. Please do a pull request to add your implementation. If the implementation is open source, include a link to the repo or to the implementation within the repo. Please be consistent in the \"Name\" field so that a mechanical processing of the RFCs can generate a list of all RFCs supported by an Aries implementation.

    Name / Link Implementation Notes Indy Cloud Agent - Python Reference agent implementation contributed by Sovrin Foundation and Community Aries Framework - .NET .NET framework for building agents of all types Streetcred.id Commercial mobile and web app built using Aries Framework - .NET Aries Cloud Agent - Python Contributed by the government of British Columbia. Aries Static Agent - Python Useful for cron jobs and other simple, automated use cases. Aries Framework - Go For building agents, hubs and other DIDComm ../../features in GoLang. Aries Protocol Test Suite"},{"location":"features/0019-encryption-envelope/schema/","title":"Schema","text":"

    This schema follows JSON Schema draft-07.

    {\n    \"id\": \"https://github.com/hyperledger/indy-agent/wiremessage.json\",\n    \"$schema\": \"http://json-schema.org/draft-07/schema#\",\n    \"title\": \"Json Web Message format\",\n    \"type\": \"object\",\n    \"required\": [\"ciphertext\", \"iv\", \"protected\", \"tag\"],\n    \"properties\": {\n        \"protected\": {\n            \"type\": \"object\",\n            \"description\": \"Additional authenticated message data base64URL encoded, so it can be verified by the recipient using the tag\",\n            \"required\": [\"enc\", \"typ\", \"alg\", \"recipients\"],\n            \"properties\": {\n                \"enc\": {\n                    \"type\": \"string\",\n                    \"enum\": [\"xchacha20poly1305_ietf\"],\n                    \"description\": \"The authenticated encryption algorithm used to encrypt the ciphertext\"\n                },\n                \"typ\": { \n                    \"type\": \"string\",\n                    \"description\": \"The message type. Ex: JWM/1.0\"\n                },\n                \"alg\": {\n                    \"type\": \"string\",\n                    \"enum\": [ \"authcrypt\", \"anoncrypt\"]\n                },\n                \"recipients\": {\n                    \"type\": \"array\",\n                    \"description\": \"A list of the recipients who the message is encrypted for\",\n                    \"items\": {\n                        \"type\": \"object\",\n                        \"required\": [\"encrypted_key\", \"header\"],\n                        \"properties\": {\n                            \"encrypted_key\": {\n                                \"type\": \"string\",\n                                \"description\": \"The key used for encrypting the ciphertext. 
This is also referred to as a cek\"\n                            },\n                            \"header\": {\n                                \"type\": \"object\",\n                                \"required\": [\"kid\"],\n                                \"description\": \"The recipient to whom this message will be sent\",\n                                \"properties\": {\n                                    \"kid\": {\n                                        \"type\": \"string\",\n                                        \"description\": \"base58 encoded verkey of the recipient.\"\n                                    }\n                                }\n                            }\n                        }\n                    }\n                 },     \n            },\n        },\n        \"iv\": {\n            \"type\": \"string\",\n            \"description\": \"base64 URL encoded nonce used to encrypt ciphertext\"\n        },\n        \"ciphertext\": {\n            \"type\": \"string\",\n            \"description\": \"base64 URL encoded authenticated encrypted message\"\n        },\n        \"tag\": {\n            \"type\": \"string\",\n            \"description\": \"Integrity checksum/tag base64URL encoded to check ciphertext, protected, and iv\"\n        }\n    }\n}\n

    "},{"location":"features/0023-did-exchange/","title":"Aries RFC 0023: DID Exchange v1","text":""},{"location":"features/0023-did-exchange/#summary","title":"Summary","text":"

    This RFC describes the protocol to exchange DIDs between agents when establishing a DID based relationship.

    "},{"location":"features/0023-did-exchange/#motivation","title":"Motivation","text":"

    Aries agent developers want to create agents that are able to establish relationships with each other and exchange secure information using keys and endpoints in DID Documents. For this to happen there must be a clear protocol to exchange DIDs.

    "},{"location":"features/0023-did-exchange/#version-change-log","title":"Version Change Log","text":""},{"location":"features/0023-did-exchange/#version-11-signed-rotations-without-did-documents","title":"Version 1.1 - Signed Rotations without DID Documents","text":"

    Added the optional did_rotate~attach attachment for provenance of rotation without an attached DID Document.

    "},{"location":"features/0023-did-exchange/#tutorial","title":"Tutorial","text":"

    We will explain how DIDs are exchanged, with the roles, states, and messages required.

    "},{"location":"features/0023-did-exchange/#roles","title":"Roles","text":"

    The DID Exchange Protocol uses two roles: requester and responder.

    The requester is the party that initiates this protocol after receiving an invitation message (using RFC 0434 Out of Band) or by using an implied invitation from a public DID. For example, a verifier might get the DID of the issuer of a credential they are verifying, and use information in the DIDDoc for that DID as the basis for initiating an instance of this protocol.

    Since the requester receiving an explicit invitation may not have an Aries agent, it is desirable, but not strictly required, that the sender of the invitation (who has the responder role in this protocol) have the ability to help the requester with the process and/or costs associated with acquiring an agent capable of participating in the ecosystem. For example, the sender of an invitation is often a sponsoring institution.

    The responder, who is the sender of an explicit invitation or the publisher of a DID with an implicit invitation, must have an agent capable of interacting with other agents via DIDComm.

    In cases where both parties already possess SSI capabilities, deciding who plays the role of requester and responder might be a casual matter of whose phone is handier.

    "},{"location":"features/0023-did-exchange/#states","title":"States","text":""},{"location":"features/0023-did-exchange/#requester","title":"Requester","text":"

    The requester goes through the following states per the State Machine Tables below

    "},{"location":"features/0023-did-exchange/#responder","title":"Responder","text":"

    The responder goes through the following states per the State Machine Tables below

    "},{"location":"features/0023-did-exchange/#state-machine-tables","title":"State Machine Tables","text":"

    The following are the requester and responder state machines.

    The invitation-sent and invitation-received are technically outside this protocol, but are useful to show in the state machine as the invitation is the trigger to start the protocol and is referenced from the protocol as the parent thread (pthid). This is discussed in more detail below.

    The abandoned and completed states are terminal states and there is no expectation that the protocol can be continued (or even referenced) after reaching those states.

    "},{"location":"features/0023-did-exchange/#errors","title":"Errors","text":"

    After receiving an explicit invitation, the requester may send a problem-report to the responder using the information in the invitation to either restart the invitation process (returning to the start state) or to abandon the protocol. The problem-report may be an adopted Out of Band protocol message or an adopted DID Exchange protocol message, depending on where in the processing of the invitation the error was detected.

    During the request / response part of the protocol, there are two protocol-specific error messages possible: one for an active rejection and one for an unknown error. These errors are sent using a problem_report message type specific to the DID Exchange Protocol. These errors do not transition the protocol to the abandoned state. The following list details problem-codes that may be sent in these cases:

    request_not_accepted - The error indicates that the request message has been rejected for a reason listed in the error_report. Typical reasons include not accepting the method of the provided DID, unknown endpoint protocols, etc. The request can be resent after the appropriate corrections have been made.

    request_processing_error - This error is sent when the responder was processing the request with the intent to accept the request, but some processing error occurred. This error indicates that the request should be resent as-is.

    response_not_accepted - The error indicates that the response has been rejected for a reason listed in the error_report. Typical reasons include not accepting the method of the provided DID, unknown endpoint protocols, invalid signature, etc. The response can be resent after the appropriate corrections have been made.

    response_processing_error - This error is sent when the requester was processing the response with the intent to accept the response, but some processing error occurred. This error indicates that the response should be resent as-is.

    If other errors occur, the corresponding party may send a problem-report to inform the other party they are abandoning the protocol.

    No errors are sent in timeout situations. If the requester or responder wishes to retract the messages they sent, they record so locally and return a request_not_accepted or response_not_accepted error when the other party sends a request or response.

    "},{"location":"features/0023-did-exchange/#error-message-example","title":"Error Message Example","text":"
    {\n  \"@type\": \"https://didcomm.org/didexchange/1.1/problem_report\",\n  \"@id\": \"5678876542345\",\n  \"~thread\": { \"thid\": \"<@id of message related to problem>\" },\n  \"~l10n\": { \"locale\": \"en\"},\n  \"problem-code\": \"request_not_accepted\", // matches codes listed above\n  \"explain\": \"Unsupported DID method for provided DID.\"\n}\n
    "},{"location":"features/0023-did-exchange/#error-message-attributes","title":"Error Message Attributes","text":""},{"location":"features/0023-did-exchange/#flow-overview","title":"Flow Overview","text":""},{"location":"features/0023-did-exchange/#implicit-and-explicit-invitations","title":"Implicit and Explicit Invitations","text":"

    The DID Exchange Protocol is preceded either by knowledge of a resolvable DID (an implicit invitation) or by an out-of-band/%VER/invitation message from the Out Of Band Protocols RFC.

    The information needed to construct the request message to start the protocol is taken either from the resolved DID Document, or from the service element of the handshake_protocols attribute of the invitation.

    "},{"location":"features/0023-did-exchange/#1-exchange-request","title":"1. Exchange Request","text":"

    The request message is used to communicate the DID document of the requester to the responder using the provisional service information present in the (implicit or explicit) invitation.

    The requester may provision a new DID according to the DID method spec. For a Peer DID, this involves creating a matching peer DID and key. The newly provisioned DID and DID Doc are presented in the request message as follows:

    "},{"location":"features/0023-did-exchange/#request-message-example","title":"Request Message Example","text":"
    {\n  \"@id\": \"5678876542345\",\n  \"@type\": \"https://didcomm.org/didexchange/1.1/request\",\n  \"~thread\": { \n      \"thid\": \"5678876542345\",\n      \"pthid\": \"<id of invitation>\"\n  },\n  \"label\": \"Bob\",\n  \"goal_code\": \"aries.rel.build\",\n  \"goal\": \"To create a relationship\",\n  \"did\": \"B.did@B:A\",\n  \"did_doc~attach\": {\n      \"@id\": \"d2ab6f2b-5646-4de3-8c02-762f553ab804\",\n      \"mime-type\": \"application/json\",\n      \"data\": {\n         \"base64\": \"eyJ0eXAiOiJKV1Qi... (bytes omitted)\",\n         \"jws\": {\n            \"header\": {\n               \"kid\": \"did:key:z6MkmjY8GnV5i9YTDtPETC2uUAW6ejw3nk5mXF5yci5ab7th\"\n            },\n            \"protected\": \"eyJhbGciOiJFZERTQSIsImlhdCI6MTU4Mzg4... (bytes omitted)\",\n            \"signature\": \"3dZWsuru7QAVFUCtTd0s7uc1peYEijx4eyt5... (bytes omitted)\"\n            }\n      }\n   }\n}\n
    "},{"location":"features/0023-did-exchange/#request-message-attributes","title":"Request Message Attributes","text":"

    The label property was intended to be declared as an optional property, but was added to the RFC as a required property. If an agent does not wish to use a label in the request, an empty string (\"\") or the set value Unspecified may be used to indicate a non-value. This approach ensures existing AIP 2.0 implementations do not break.

    "},{"location":"features/0023-did-exchange/#correlating-requests-to-invitations","title":"Correlating requests to invitations","text":"

    An invitation is presented in one of two forms:

    When a request responds to an explicit invitation, its ~thread.pthid MUST be equal to the @id property of the invitation as described in the out-of-band RFC.

    When a request responds to an implicit invitation, its ~thread.pthid MUST contain a DID URL that resolves to the specific service on a DID document that contains the invitation.

    "},{"location":"features/0023-did-exchange/#example-referencing-an-explicit-invitation","title":"Example Referencing an Explicit Invitation","text":"
    {\n  \"@id\": \"a46cdd0f-a2ca-4d12-afbf-2e78a6f1f3ef\",\n  \"@type\": \"https://didcomm.org/didexchange/1.1/request\",\n  \"~thread\": { \n      \"thid\": \"a46cdd0f-a2ca-4d12-afbf-2e78a6f1f3ef\",\n      \"pthid\": \"032fbd19-f6fd-48c5-9197-ba9a47040470\" \n  },\n  \"label\": \"Bob\",\n  \"goal_code\": \"aries.rel.build\",\n  \"goal\": \"To create a relationship\",\n  \"did\": \"B.did@B:A\",\n  \"did_doc~attach\": {\n      \"@id\": \"d2ab6f2b-5646-4de3-8c02-762f553ab804\",\n      \"mime-type\": \"application/json\",\n      \"data\": {\n         \"base64\": \"eyJ0eXAiOiJKV1Qi... (bytes omitted)\",\n         \"jws\": {\n            \"header\": {\n               \"kid\": \"did:key:z6MkmjY8GnV5i9YTDtPETC2uUAW6ejw3nk5mXF5yci5ab7th\"\n            },\n            \"protected\": \"eyJhbGciOiJFZERTQSIsImlhdCI6MTU4Mzg4... (bytes omitted)\",\n            \"signature\": \"3dZWsuru7QAVFUCtTd0s7uc1peYEijx4eyt5... (bytes omitted)\"\n            }\n      }\n   }\n}\n
    "},{"location":"features/0023-did-exchange/#example-referencing-an-implicit-invitation","title":"Example Referencing an Implicit Invitation","text":"
    {\n  \"@id\": \"a46cdd0f-a2ca-4d12-afbf-2e78a6f1f3ef\",\n  \"@type\": \"https://didcomm.org/didexchange/1.1/request\",\n  \"~thread\": { \n      \"thid\": \"a46cdd0f-a2ca-4d12-afbf-2e78a6f1f3ef\",\n      \"pthid\": \"did:example:21tDAKCERh95uGgKbJNHYp#didcomm\" \n  },\n  \"label\": \"Bob\",\n  \"goal_code\": \"aries.rel.build\",\n  \"goal\": \"To create a relationship\",\n  \"did\": \"B.did@B:A\",\n  \"did_doc~attach\": {\n      \"@id\": \"d2ab6f2b-5646-4de3-8c02-762f553ab804\",\n      \"mime-type\": \"application/json\",\n      \"data\": {\n         \"base64\": \"eyJ0eXAiOiJKV1Qi... (bytes omitted)\",\n         \"jws\": {\n            \"header\": {\n               \"kid\": \"did:key:z6MkmjY8GnV5i9YTDtPETC2uUAW6ejw3nk5mXF5yci5ab7th\"\n            },\n            \"protected\": \"eyJhbGciOiJFZERTQSIsImlhdCI6MTU4Mzg4... (bytes omitted)\",\n            \"signature\": \"3dZWsuru7QAVFUCtTd0s7uc1peYEijx4eyt5... (bytes omitted)\"\n            }\n      }\n   }\n}\n
    "},{"location":"features/0023-did-exchange/#request-transmission","title":"Request Transmission","text":"

    The request message is encoded according to the standards of the Encryption Envelope, using the recipientKeys present in the invitation.

    If the routingKeys attribute was present and non-empty in the invitation, each key must be used to wrap the message in a forward request, which is then encoded in an Encryption Envelope. This processing is done in order of the keys in the list, with the last key in the list being the one for which the serviceEndpoint possesses the private key.

    The message is then transmitted to the serviceEndpoint.

    The requester is in the request-sent state. When received, the responder is in the request-received state.
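The wrapping order described above can be sketched as follows. This is a minimal illustration, not a framework API: `pack_encrypted` is a hypothetical stand-in for a real Encryption Envelope implementation, and here it simply records recipients and payload so the layering is visible.

```python
def pack_encrypted(message, recipient_keys):
    # Placeholder: a real implementation would produce an encrypted envelope (JWE).
    return {"recipients": list(recipient_keys), "payload": message}

def wrap_for_routing(request, recipient_keys, routing_keys):
    """Encrypt `request` for the final recipients, then wrap it once per
    routing key, in list order; the last key in `routing_keys` is the one
    held by the agent at the serviceEndpoint, so it forms the outermost layer."""
    envelope = pack_encrypted(request, recipient_keys)
    next_hop = recipient_keys[0]
    for key in routing_keys:
        envelope = pack_encrypted(
            {
                "@type": "https://didcomm.org/routing/1.0/forward",
                "to": next_hop,
                "msg": envelope,
            },
            [key],
        )
        next_hop = key  # each outer layer forwards to the layer inside it
    return envelope
```

With `routing_keys = ["rk1", "rk2"]`, the serviceEndpoint (holding rk2's private key) unwraps the outer envelope and forwards to rk1, which in turn forwards the innermost envelope to the recipient.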

    "},{"location":"features/0023-did-exchange/#request-processing","title":"Request processing","text":"

    After receiving the exchange request, the responder evaluates the provided DID and DID Doc according to the DID Method Spec.

    The responder should check the information presented with the keys used in the wire-level message transmission to ensure they match.

    The responder MAY look up the corresponding invitation identified in the request's ~thread.pthid to determine whether it should accept this exchange request.

    If the responder wishes to continue the exchange, they will persist the received information in their wallet. They will then either update the provisional service information to rotate the key, or provision a new DID entirely. The choice here will depend on the nature of the DID used in the invitation.

    The responder will then craft an exchange response using the newly updated or provisioned information.

    "},{"location":"features/0023-did-exchange/#request-errors","title":"Request Errors","text":"

    See Error Section above for message format details.

    "},{"location":"features/0023-did-exchange/#request-rejected","title":"Request Rejected","text":"

    Possible reasons:

    "},{"location":"features/0023-did-exchange/#request-processing-error","title":"Request Processing Error","text":""},{"location":"features/0023-did-exchange/#2-exchange-response","title":"2. Exchange Response","text":"

    The exchange response message is used to complete the exchange. This message is required in the flow, as it updates the provisional information presented in the invitation.

    "},{"location":"features/0023-did-exchange/#response-message-example","title":"Response Message Example","text":"
    {\n  \"@type\": \"https://didcomm.org/didexchange/1.1/response\",\n  \"@id\": \"12345678900987654321\",\n  \"~thread\": {\n    \"thid\": \"<The Thread ID is the Message ID (@id) of the first message in the thread>\"\n  },\n  \"did\": \"B.did@B:A\",\n  \"did_doc~attach\": {\n      \"@id\": \"d2ab6f2b-5646-4de3-8c02-762f553ab804\",\n      \"mime-type\": \"application/json\",\n      \"data\": {\n         \"base64\": \"eyJ0eXAiOiJKV1Qi... (bytes omitted)\",\n         \"jws\": {\n            \"header\": {\n               \"kid\": \"did:key:z6MkmjY8GnV5i9YTDtPETC2uUAW6ejw3nk5mXF5yci5ab7th\"\n            },\n            \"protected\": \"eyJhbGciOiJFZERTQSIsImlhdCI6MTU4Mzg4... (bytes omitted)\",\n            \"signature\": \"3dZWsuru7QAVFUCtTd0s7uc1peYEijx4eyt5... (bytes omitted)\"\n            }\n      }\n   },\n   \"did_rotate~attach\": {\n      \"mime-type\": \"text/string\",\n      \"data\": {\n         \"base64\": \"Qi5kaWRAQjpB\",\n         \"jws\": {\n         \"header\": {\n            \"kid\": \"did:key:z6MkmjY8GnV5i9YTDtPETC2uUAW6ejw3nk5mXF5yci5ab7th\"\n         },\n         \"protected\": \"eyJhbGciOiJFZERTQSIsImlhdCI6MTU4Mzg4... (bytes omitted)\",\n         \"signature\": \"3dZWsuru7QAVFUCtTd0s7uc1peYEijx4eyt5... (bytes omitted)\"\n         }\n      }\n   }\n}\n

    The invitation's recipientKeys should be dedicated to the envelope's authenticated encryption throughout the exchange. These keys are usually defined in the KeyAgreement DID verification relationship.

    "},{"location":"features/0023-did-exchange/#response-message-attributes","title":"Response Message Attributes","text":"

    In addition to a new DID, the associated DID Doc might contain a new endpoint. This new DID and endpoint are to be used going forward in the relationship.

    "},{"location":"features/0023-did-exchange/#response-transmission","title":"Response Transmission","text":"

    The message should be packaged in the encrypted envelope format, using the keys from the request, and the new keys presented in the internal did doc.

    When the message is sent, the responder is now in the response-sent state. On receipt, the requester is in the response-received state.

    "},{"location":"features/0023-did-exchange/#response-processing","title":"Response Processing","text":"

    When the requester receives the response message, they decrypt the authenticated envelope, which confirms the source's authenticity. After decryption, the signature on the did_doc~attach or did_rotate~attach MUST be validated, if present. The key used in the signature MUST match the key used in the invitation. After attachment signature validation, they update their wallet with the new information, and use that information in sending the complete message.
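The validation order above can be sketched as a small function. This is a hypothetical illustration; `verify_jws` and the function name are assumptions, not part of any Aries framework, and a real implementation would perform actual JWS verification.

```python
def validate_response_attachment(response, invitation_key, verify_jws):
    """Check the signed attachment on a decrypted response: prefer
    did_rotate~attach, fall back to did_doc~attach. The signing key MUST
    match the invitation key, and the signature MUST verify."""
    attach = response.get("did_rotate~attach") or response.get("did_doc~attach")
    if attach is None:
        return None  # nothing signed to validate
    jws = attach["data"]["jws"]
    if jws["header"]["kid"] != invitation_key:
        raise ValueError("attachment signed with a key other than the invitation key")
    if not verify_jws(attach["data"]["base64"], jws):
        raise ValueError("attachment signature did not verify")
    return attach
```

Only after this returns successfully would the requester update its wallet and send the complete message.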

    "},{"location":"features/0023-did-exchange/#response-errors","title":"Response Errors","text":"

    See Error Section above for message format details.

    "},{"location":"features/0023-did-exchange/#response-rejected","title":"Response Rejected","text":"

    Possible reasons:

    "},{"location":"features/0023-did-exchange/#response-processing-error","title":"Response Processing Error","text":""},{"location":"features/0023-did-exchange/#3-exchange-complete","title":"3. Exchange Complete","text":"

    The exchange complete message is used to confirm the exchange to the responder. This message is required in the flow, as it marks the exchange complete. The responder may then invoke any protocols desired based on the context expressed via the pthid in the DID Exchange protocol.

    "},{"location":"features/0023-did-exchange/#complete-message-example","title":"Complete Message Example","text":"
    {\n  \"@type\": \"https://didcomm.org/didexchange/1.1/complete\",\n  \"@id\": \"12345678900987654321\",\n  \"~thread\": {\n    \"thid\": \"<The Thread ID is the Message ID (@id) of the first message in the thread>\",\n    \"pthid\": \"<pthid used in request message>\"\n  }\n}\n

    The pthid is required in this message, and must be identical to the pthid used in the request message.

    After a complete message is sent, the requester is in the completed terminal state. Receipt of the message puts the responder into the completed state.
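The thread requirements on the complete message can be expressed as a simple check. This sketch is illustrative only; the function name is an assumption, not a spec-defined API.

```python
def validate_complete(complete_msg, request_msg):
    """A complete message MUST carry ~thread.pthid, and it MUST be
    identical to the pthid used in the request message."""
    thread = complete_msg.get("~thread", {})
    if "pthid" not in thread:
        raise ValueError("complete message requires ~thread.pthid")
    if thread["pthid"] != request_msg["~thread"]["pthid"]:
        raise ValueError("pthid does not match the request message")
```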

    "},{"location":"features/0023-did-exchange/#complete-errors","title":"Complete Errors","text":"

    See Error Section above for message format details.

    "},{"location":"features/0023-did-exchange/#complete-rejected","title":"Complete Rejected","text":"

    This is unlikely to occur for any reason other than an unknown processing error (covered below), so no possible reasons are listed. As experience is gained with the protocol, possible reasons may be added.

    "},{"location":"features/0023-did-exchange/#complete-processing-error","title":"Complete Processing Error","text":""},{"location":"features/0023-did-exchange/#next-steps","title":"Next Steps","text":"

    The exchange between the requester and the responder has been completed. This relationship has no trust associated with it. The next step should be to increase the trust to a sufficient level for the purpose of the relationship, such as through an exchange of proofs.

    "},{"location":"features/0023-did-exchange/#peer-did-maintenance","title":"Peer DID Maintenance","text":"

    When Peer DIDs are used in an exchange, it is likely that both the requester and responder will want to perform some relationship maintenance such as key rotations. Future RFC updates will add these maintenance features.

    "},{"location":"features/0023-did-exchange/#reference","title":"Reference","text":""},{"location":"features/0023-did-exchange/#drawbacks","title":"Drawbacks","text":"

    N/A at this time

    "},{"location":"features/0023-did-exchange/#prior-art","title":"Prior art","text":""},{"location":"features/0023-did-exchange/#unresolved-questions","title":"Unresolved questions","text":""},{"location":"features/0023-did-exchange/#implementations","title":"Implementations","text":"

    The following lists the implementations (if any) of this RFC. Please do a pull request to add your implementation. If the implementation is open source, include a link to the repo or to the implementation within the repo. Please be consistent in the \"Name\" field so that a mechanical processing of the RFCs can generate a list of all RFCs supported by an Aries implementation.

    Name / Link Implementation Notes Trinsic.id Commercial mobile and web app built using Aries Framework - .NET MISSING test results"},{"location":"features/0024-didcomm-over-xmpp/","title":"Aries RFC 0024: DIDComm over XMPP","text":""},{"location":"features/0024-didcomm-over-xmpp/#summary","title":"Summary","text":"

    While DIDComm leaves its users free to choose any underlying communication protocol, for peer-to-peer DID relationships with one or both parties behind a firewall, actually getting the messages to the other party is not straightforward.

    Fortunately this is a classical problem, encountered by all realtime communication protocols, and it is therefore natural to use one of these protocols to deal with the obstacles posed by firewalls. The DIDComm-over-XMPP feature provides an architecture to exchange DIDComm connection protocol messages over XMPP, using XMPP to solve any firewall issues.

    DIDComm-over-XMPP enables:

    and all of this in spite of the presence of firewalls.

    Editor's note: A reference should be added to Propose HIPE: Transports #94

    "},{"location":"features/0024-didcomm-over-xmpp/#motivation","title":"Motivation","text":"

    Currently, all examples of service endpoint in the W3C DID specification use HTTP. This assumes that the endpoint is running an HTTP server and firewalls have been opened to allow this traffic to pass through. This assumption typically fails for DIDComm agents behind LAN firewalls or using cellular networks. As a consequence, such DIDComm agents can be expected to be unavailable for incoming DIDComm messages, whereas several use cases require this. The following is an example of this.

    A consumer contacts a customer service agent of his health insurance company, and is subsequently asked for proof of identity before getting answers to his personal health related questions. DIDComm could be of use here, replacing the privacy sensitive and time consuming questions used to establish the consumer's identity with an exchange of verifiable credentials over DIDComm. In that case, the agent would just send a DIDComm message to the caller to link the ongoing human-to-human communication session to a DIDComm agent-to-agent communication session. The DIDComm connection protocol would then enable the setting up and maintenance of a trusted electronic relationship, to be used to exchange verifiable credentials. Replace the insurance company with any sizeable business-to-consumer company and one realizes that this use case is far from insignificant.

    Unfortunately, by themselves, the parties' DIDComm agents will be unable to bypass the firewalls involved and exchange DIDComm messages. Therefore XMPP is called to the rescue to serve as a transport protocol that can traverse firewalls. Once the firewall issue is solved, DIDComm can be put to use in all of these cases.

    The XMPP protocol is a popular protocol for chat and messaging. It has a client-server structure that bypasses any firewall issues.

    "},{"location":"features/0024-didcomm-over-xmpp/#tutorial","title":"Tutorial","text":"

    The DIDComm-over-XMPP feature provides an architecture for the transport of DIDComm messages over an XMPP network, using XMPP to bypass any firewalls at the receiving side.

    "},{"location":"features/0024-didcomm-over-xmpp/#didcomm","title":"DIDComm","text":"

    The DIDComm wire message format is specified in HIPE 0028-wire-message-format. It can carry, among others, the DIDComm connection protocol, as specified in Hyperledger Indy HIPE 0031. The purpose of the latter protocol is to set up a trusted electronic relationship between two parties (natural person, legal person, ...). Technically, the trust relationship involves the following

    W3C specifies Data Model and Syntaxes for Decentralized Identifiers (DIDs). This specification introduces Decentralized Identifiers, DIDs, for identification. A DID can be resolved into a DID Document that contains the associated keys and service endpoints, see also W3C's A Primer for Decentralized Identifiers. W3C provides a DID Method Registry for a complete list of all known DID Method specifications. Many of the DID methods use an unambiguous source of truth to resolve a DID Document, e.g. a well governed public blockchain. An exception is the Peer DID method that relies on the peers, i.e. parties in the trusted electronic relationship to maintain the DID Document.

    "},{"location":"features/0024-didcomm-over-xmpp/#xmpp","title":"XMPP","text":"

    Extensible Messaging and Presence Protocol (XMPP) is a communication protocol for message-oriented middleware based on XML (Extensible Markup Language). It enables the near-real-time exchange of structured yet extensible data between any two or more network entities. Designed to be extensible, the protocol has been used also for publish-subscribe systems, signalling for VoIP, video, file transfer, gaming, the Internet of Things applications such as the smart grid, and social networking services.

    Unlike most instant messaging protocols, XMPP is defined in an open standard and uses an open systems approach of development and application, by which anyone may implement an XMPP service and interoperate with other organizations' implementations. Because XMPP is an open protocol, implementations can be developed using any software license and many server, client, and library implementations are distributed as free and open-source software. Numerous freeware and commercial software implementations also exist.

    XMPP uses 3 types of messages:

    Message Type Description PRESENCE Informs listeners that the agent is online MESSAGE Sends a message to another agent IQ MESSAGE Asks for a response from another agent

    "},{"location":"features/0024-didcomm-over-xmpp/#didcomm-over-xmpp","title":"DIDComm over XMPP","text":""},{"location":"features/0024-didcomm-over-xmpp/#use-of-message-normative","title":"Use of MESSAGE (normative)","text":"

    A DIDComm wire message shall be sent as a plaintext XMPP MESSAGE, without any additional identifiers.

    "},{"location":"features/0024-didcomm-over-xmpp/#service-endpoint-normative","title":"Service endpoint (normative)","text":"

    A DIDComm-over-XMPP service shall comply with the following.

    1. The id shall have a DID fragment \"#xmpp\".
    2. The type shall be \"XmppService\".
    3. The serviceEndpoint
       - shall not have a resource part (i.e. \"/...resource...\")
       - shall comply with the following ABNF.
    xmpp-service-endpoint = \"xmpp:\" userpart \"@did.\" domainpart\n  userpart = 1*CHAR\n  domainpart = 1*CHAR 1*(\".\" 1*CHAR)\n  CHAR = %x01-7F\n

    The reason for not allowing a resources part is that DIDComm messages are addressed to the person/entity associated with the DID, and not to any particular device.

    A receiving XMPP client shall identify an incoming XMPP message as a DIDComm message if the serviceEndpoint complies with the above. It shall pass any DIDComm message to its DIDComm agent.
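One possible reading of the ABNF above as a Python check is sketched below. Note this is an interpretation, not normative: the ABNF's CHAR is very permissive (%x01-7F), so this sketch makes the common-sense assumptions that the userpart contains no \"@\" or \"/\" and that domain labels contain no \".\" or \"/\".

```python
import re

def is_didcomm_xmpp_endpoint(endpoint: str) -> bool:
    """True if `endpoint` looks like xmpp:userpart@did.domainpart with no
    resource part, per the ABNF sketch above."""
    if "/" in endpoint.partition(":")[2]:
        return False  # resource parts (e.g. "/phone") are not allowed
    return re.fullmatch(r"xmpp:[^@/]+@did\.[^./]+(\.[^./]+)+", endpoint) is not None
```

For example, xmpp:bob@did.bar.com passes, while xmpp:alice@foo.com/phone is rejected both for its resource part and its missing \"did.\" subdomain.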

    The following is an example of a compliant DIDComm-over-XMPP service endpoint.

    {\n  \"service\": [{\n    \"id\": \"did:example:123456789abcdefghi#xmpp\",\n    \"type\": \"XmppService\",\n    \"serviceEndpoint\": \"xmpp:bob@did.bar.com\"\n  }]\n}\n
    "},{"location":"features/0024-didcomm-over-xmpp/#userpart-generation-informative","title":"Userpart generation (informative)","text":"

    There are multiple methods by which the userpart of the DIDComm-over-XMPP serviceEndpoint may be generated.

    Editor's note: Should the description below be interpreted as informative, or should there be any signalling to indicate which userpart-generating method was used?

    Method 1: Same userpart as for human user

    In this method, the userpart is the same as used for human-to-human XMPP-based chat, and the resource part is removed. Here is an example.

    Human-to-human XMPP address: xmpp:alice@foo.com/phone\n-->\nDIDComm-over-XMPP serviceEndpoint: xmpp:alice@did.foo.com\n
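The transformation shown above can be sketched as a small helper. The function name is hypothetical, used here only for illustration; it assumes a well-formed bare or full XMPP address with the xmpp: scheme.

```python
def human_to_didcomm_endpoint(xmpp_address: str) -> str:
    """Method 1 sketch: drop the resource part and insert the 'did.' subdomain."""
    bare = xmpp_address.split("/", 1)[0]               # drop e.g. "/phone"
    user, _, domain = bare.removeprefix("xmpp:").partition("@")
    return f"xmpp:{user}@did.{domain}"
```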

    The advantage of this method is its simplicity. An XMPP server needs to be configured only once to support this convention. No further registration actions are needed by any of the users for their XMPP clients.

    The disadvantage of this method is that it creates a strong correlation point, which may conflict with privacy requirements.

    Editor's note: More advantages or disadvantages?

    A typical application of Method 1 is when there is an ongoing human-to-human (or human-to-bot) chat session that uses XMPP and the two parties want to set up a pairwise DID relationship. One can skip Step 0 \"Invitation to Connect\" (HIPE 0031) and immediately perform Step 1 \"Connection Request\".

    Method 2: Random userpart

    In this method, the userpart is randomly generated by either the XMPP client or the XMPP server, and it is rotated on a regular basis. Here is an example.

    DIDComm-over-XMPP serviceEndpoint: xmpp:RllH91rcFdE@did.foo.com\n
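Client-side generation of such a random userpart could look like the sketch below. The alphabet and the 11-character length merely mirror the example above and are assumptions, not requirements.

```python
import secrets
import string

def random_userpart(length: int = 11) -> str:
    """Generate an unguessable, low-correlation userpart such as 'RllH91rcFdE'."""
    alphabet = string.ascii_letters + string.digits
    return "".join(secrets.choice(alphabet) for _ in range(length))
```

The resulting serviceEndpoint would then be f"xmpp:{random_userpart()}@did.foo.com", rotated on whatever schedule the client chooses.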

    The advantage of this method is low correlation and hence high privacy. If the DIDComm-over-XMPP serviceEndpoint is rotated after each set of XMPP exchanges (\"session\"), then it cannot be correlated with subsequent XMPP exchanges.

    The disadvantage of this method is its high operational complexity. It requires a client to keep a reserve of random XMPP addresses with the XMPP server. It significantly increases the routing tables of the XMPP server. It also places a burden on both DIDComm agents, because of the rapid rotation of DID Documents.

    Editor's note: More advantages or disadvantages?

    "},{"location":"features/0024-didcomm-over-xmpp/#reference","title":"Reference","text":"

    For use of XMPP, it is recommended to use the Openfire Server open source project, including 2 plugins to enable server caching and message carbon copies. This will enable sending DIDComm messages to multiple endpoints of the same person.

    Editor's note: Add references to the 2 plugins

    XMPP servers handle messages sent to a user@host (or \"bare\") XMPP address with no resource by delivering that message only to the resource with the highest priority for the target user. Some server implementations, however, have chosen to send these messages to all of the online resources for the target user. If the target user is online with multiple resources when the original message is sent, a conversation ensues on one of the user's devices; if the user subsequently switches devices, parts of the conversation may end up on the alternate device, causing the user to be confused, misled, or annoyed.

    To solve this, it is recommended to use the plugin \"Message Carbons\". It will ensure that all of the target user's devices get both sides of all conversations in order to avoid user confusion. As a pleasant side-effect, information about the current state of a conversation is shared between all of a user's clients that implement this protocol.

    Editor's note: Add reference to \"Message Carbons\"

    "},{"location":"features/0024-didcomm-over-xmpp/#drawbacks","title":"Drawbacks","text":"

    Editor's note: Add drawbacks

    "},{"location":"features/0024-didcomm-over-xmpp/#rationale-and-alternatives","title":"Rationale and alternatives","text":"

    All service endpoint examples from W3C's Data Model and Syntaxes for Decentralized Identifiers (DIDs) are HTTP. So if a consumer wanted to be reachable for incoming DIDComm messages, it would have to run an HTTP service on its consumer device and take actions to open firewalls (and handle network-address translations) towards its device. Such a scenario is technically unrealistic, not to mention its security implications.

    XMPP was specifically designed for incoming messages to consumer devices. XMPP's client-server structure overcomes any firewall issues.

    "},{"location":"features/0024-didcomm-over-xmpp/#prior-art","title":"Prior art","text":"

    Editor's note: Add prior art

    "},{"location":"features/0024-didcomm-over-xmpp/#unresolved-questions","title":"Unresolved questions","text":"

    Editor's note: Any unresolved questions?

    "},{"location":"features/0024-didcomm-over-xmpp/#security-considerations","title":"Security considerations","text":"

    Editor's note: Add security considerations

    "},{"location":"features/0024-didcomm-over-xmpp/#implementations","title":"Implementations","text":"

    The following lists the implementations (if any) of this RFC. Please do a pull request to add your implementation. If the implementation is open source, include a link to the repo or to the implementation within the repo. Please be consistent in the \"Name\" field so that a mechanical processing of the RFCs can generate a list of all RFCs supported by an Aries implementation.

    Name / Link Implementation Notes"},{"location":"features/0025-didcomm-transports/","title":"Aries RFC 0025: DIDComm Transports","text":""},{"location":"features/0025-didcomm-transports/#summary","title":"Summary","text":"

    This RFC details how different transports are to be used for Agent Messaging.

    "},{"location":"features/0025-didcomm-transports/#motivation","title":"Motivation","text":"

    Agent Messaging is designed to be transport independent, including message encryption and agent message format. Each transport does have unique features, and we need to standardize how the transport features are (or are not) applied.

    "},{"location":"features/0025-didcomm-transports/#reference","title":"Reference","text":"

    Standardized transport methods are detailed here.

    "},{"location":"features/0025-didcomm-transports/#https","title":"HTTP(S)","text":"

    HTTP(S) is the first and most used transport for DID Communication, and has received heavy attention.

    While it is recognized that all DIDComm messages are secured through strong encryption, making HTTPS somewhat redundant, plain HTTP will likely cause issues with mobile clients because vendors (Apple and Google) are limiting application access to the HTTP protocol. For example, on iOS 9 or above, where ATS (https://developer.apple.com/documentation/bundleresources/information_property_list/nsapptransportsecurity) is in effect, any URLs using HTTP must have an exception hard coded in the application prior to uploading to the iTunes Store. This makes DIDComm unreliable, as the agent initiating the request provides an endpoint for communication that the mobile client must use. If the agent provides a URL using the HTTP protocol, it will likely be unusable due to low level operating system limitations.

    As a best practice, when HTTP is used in situations where a mobile client (iOS or Android) may be involved it is highly recommended to use the HTTPS protocol, specifically TLS 1.2 or above.

    Other important notes on the subject of using HTTP(S) include:

    "},{"location":"features/0025-didcomm-transports/#known-implementations","title":"Known Implementations","text":"

    Aries Cloud Agent - Python Aries Framework - .NET

    "},{"location":"features/0025-didcomm-transports/#websocket","title":"Websocket","text":"

    Websockets are an efficient way to transmit multiple messages without the overhead of individual requests.

    "},{"location":"features/0025-didcomm-transports/#known-implementations_1","title":"Known Implementations","text":"

    Aries Cloud Agent - Python Aries Framework - .NET

    "},{"location":"features/0025-didcomm-transports/#xmpp","title":"XMPP","text":"

    XMPP is an effective transport for incoming DID-Communication messages directly to mobile agents, like smartphones.

    "},{"location":"features/0025-didcomm-transports/#known-implementations_2","title":"Known Implementations","text":"

    XMPP is implemented in the Openfire Server open source project. Integration with DID Communication agents is work-in-progress.

    "},{"location":"features/0025-didcomm-transports/#other-transports","title":"Other Transports","text":"

    Other transports may be used for Agent messaging. As they are developed, this RFC should be updated with appropriate standards for the transport method. A PR should be raised against this doc to facilitate discussion of the proposed additions and/or updates. New transports should highlight the common elements of the transport (such as an HTTP response code for the HTTP transport) and how they should be applied.

    "},{"location":"features/0025-didcomm-transports/#message-routing","title":"Message Routing","text":"

    The transports described here are used between two agents. In the case of message routing, a message will travel across multiple agent connections. Each intermediate agent (see Mediators and Relays) may use a different transport. These transport details are not made known to the sender, who only knows the keys of Mediators and the first endpoint of the route.

    "},{"location":"features/0025-didcomm-transports/#message-context","title":"Message Context","text":"

    The transport used from a previous agent can be recorded in the message trust context. This is particularly true of controlled network environments, where the transport may have additional security considerations not applicable on the public internet. The transport recorded in the message context only records the last transport used, and not any previous routing steps as described in the Message Routing section of this document.

    "},{"location":"features/0025-didcomm-transports/#transport-testing","title":"Transport Testing","text":"

    Transports which operate on IP based networks can be tested by an Agent Test Suite through a transport adapter. Some transports may be more difficult to test in a general sense, and may need specialized testing frameworks. An agent with a transport not yet supported by any testing suites may have non-transport testing performed by use of a routing agent.

    "},{"location":"features/0025-didcomm-transports/#drawbacks","title":"Drawbacks","text":"

    Setting transport standards may prevent some uses of each transport method.

    "},{"location":"features/0025-didcomm-transports/#rationale-and-alternatives","title":"Rationale and alternatives","text":""},{"location":"features/0025-didcomm-transports/#prior-art","title":"Prior art","text":"

    Several agent implementations already exist that follow similar conventions.

    "},{"location":"features/0025-didcomm-transports/#unresolved-questions","title":"Unresolved questions","text":""},{"location":"features/0025-didcomm-transports/#implementations","title":"Implementations","text":"

    The following lists the implementations (if any) of this RFC. Please do a pull request to add your implementation. If the implementation is open source, include a link to the repo or to the implementation within the repo. Please be consistent in the \"Name\" field so that a mechanical processing of the RFCs can generate a list of all RFCs supported by an Aries implementation.

    Name / Link Implementation Notes"},{"location":"features/0028-introduce/","title":"Aries RFC 0028: Introduce Protocol 1.0","text":""},{"location":"features/0028-introduce/#summary","title":"Summary","text":"

    Describes how a go-between can introduce two parties that it already knows, but that do not know each other.

    "},{"location":"features/0028-introduce/#change-log","title":"Change Log","text":""},{"location":"features/0028-introduce/#motivation","title":"Motivation","text":"

    Introductions are a fundamental activity in human relationships. They allow us to bootstrap contact information and trust. They are also a source of virality. We need a standard way to do introductions in an SSI ecosystem, and it needs to be flexible, secure, privacy-respecting, and well documented.

    "},{"location":"features/0028-introduce/#tutorial","title":"Tutorial","text":""},{"location":"features/0028-introduce/#name-and-version","title":"Name and Version","text":"

    This is the Introduce 1.0 protocol. It is uniquely identified by the URI:

    \"https://didcomm.org/introduce/1.0\"\n
    "},{"location":"features/0028-introduce/#key-concepts","title":"Key Concepts","text":""},{"location":"features/0028-introduce/#basic-use-case","title":"Basic Use Case","text":"

    Introductions target scenarios like this:

    Alice knows Bob and Carol, and can talk to each of them. She wants to introduce them in a way that allows a relationship to form.

    This use case is worded carefully; it is far more adaptable than it may appear at first glance. The Advanced Use Cases section later in the doc explores many variations. But the early part of this document focuses on the simplest reading of the use case.

    "},{"location":"features/0028-introduce/#goal","title":"Goal","text":"

    When we introduce two friends, we may hope that a new friendship ensues. But technically, the introduction is complete when we provide the opportunity for a relationship--what the parties do with that opportunity is a separate question.

    Likewise, the goal of our formal introduction protocol should be crisply constrained. Alice wants to gather consent and contact information from Bob and Carol; then she wants to invite them to connect. What they do with her invitation after that is not under her control, and is outside the scope of the introduction.

    This suggests an important insight about the relationship between the introduce protocol and the Out-Of-Band protocols: they overlap. The invitation to form a relationship, which begins the Out-Of-Band protocols, is also the final step in an introduction.

    Said differently, the goal of the introduce protocol is to start the Out-Of-Band protocols.

    "},{"location":"features/0028-introduce/#transferring-trust","title":"Transferring Trust","text":"

    [TODO: talk about how humans do introductions instead of just introducing themselves to strangers because it raises trust. Example of Delta Airlines introducing you to Heathrow Airport; you trust that you're really talking to Heathrow based on Delta's assertion.]

    "},{"location":"features/0028-introduce/#roles","title":"Roles","text":"

    There are three [TODO:do we want to support introducing more than 2 at a time?] participants in the protocol, but only two roles.

    The introducer begins the process and must know the other two parties. Alice is the introducer in the diagram above. The other two participants are both introducees.

    "},{"location":"features/0028-introduce/#states","title":"States","text":"

    In a successful introduction, the introducer state progresses from [start] -> arranging -> delivering -> confirming (optional) -> [done]. Confirming is accomplished with an ACK to an introducee to let them know that their out-of-band message was forwarded.

    Meanwhile, each introducee progresses from [start] -> deciding -> waiting -> [done].

    Of course, errors and optional choices complicate the possibilities. The full state machines for each party are:

    The subtleties are explored in the Advanced Use Cases section.

    "},{"location":"features/0028-introduce/#messages","title":"Messages","text":""},{"location":"features/0028-introduce/#proposal","title":"proposal","text":"

    This message informs an introducee that an introducer wants to perform an introduction, and requests approval to do so. It works the same way that proposals do in double-opt-in introductions in the non-agent world:

    The DIDComm message looks like this:

    {\n  \"@type\": \"https://didcomm.org/introduce/1.0/proposal\",\n  \"@id\": \"df3b699d-3aa9-4fd0-bb67-49594da545bd\",\n  \"to\": {\n    \"name\": \"Bob\"\n  }\n}\n

    The to field contains an introducee descriptor that provides context about the introduction, helping the party receiving the proposal to evaluate whether they wish to accept it. Depending on how much context is available between introducer and introducee independent of the formal proposal message, this can be as simple as a name, or something fancier (see Advanced Use Cases below).

    "},{"location":"features/0028-introduce/#response","title":"response","text":"

    A standard example of the message that an introducee sends in response to an introduction proposal would be:

    {\n  \"@type\": \"https://didcomm.org/introduce/1.0/response\",\n  \"@id\": \"283e15b5-a3f7-43e7-bac8-b75e4e7a0a25\",\n  \"~thread\": {\"thid\": \"df3b699d-3aa9-4fd0-bb67-49594da545bd\"},\n  \"approve\": true,\n  \"oob-message\": {\n    \"@type\": \"https://didcomm.org/out-of-band/1.0/invitation\",\n    \"@id\": \"12345678900987654321\",\n    \"label\": \"Robert\",\n    \"goal\": \"To issue a Faber College Graduate credential\",\n    \"goal_code\": \"issue-vc\",\n    \"handshake_protocols\": [\n      \"https://didcomm.org/didexchange/1.0\",\n      \"https://didcomm.org/connections/1.0\"\n    ],\n    \"service\": [\"did:sov:LjgpST2rjsoxYegQDRm7EL\"]\n  }\n}\n

    A simpler response, also valid, might look like this:

    {\n  \"@type\": \"https://didcomm.org/introduce/1.0/response\",\n  \"@id\": \"283e15b5-a3f7-43e7-bac8-b75e4e7a0a25\",\n  \"~thread\": {\"thid\": \"df3b699d-3aa9-4fd0-bb67-49594da545bd\"},\n  \"approve\": true\n}\n

    The difference between the two forms is whether the response contains a valid out-of-band message (see RFC 0434). Normally, it should--but sometimes, an introducee may not be able to (or may not want to) share a DIDComm endpoint to facilitate the introduction. In such cases, the stripped-down variant may be the right choice. See the Advanced Use Cases section for more details.

    At least one of the more complete variants must be received by an introducer to successfully complete the introduction, because the final step in the protocol is to begin one of the Out-Of-Band protocols by forwarding the message from one introducee to the other.
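    An introducer deciding whether a received response is sufficient to complete the introduction might apply a check like the following. This is a minimal sketch; the function name is illustrative, and the field names (`approve`, `oob-message`) are taken from the response examples above:

    ```python
    def can_complete_introduction(response: dict) -> bool:
        """A response can be forwarded to the other introducee only if it
        approves the introduction AND carries an out-of-band message."""
        return response.get("approve") is True and "oob-message" in response
    ```

    The stripped-down variant (approval with no out-of-band message) fails this check, so the introducer must obtain a complete response from at least one introducee before forwarding.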

    "},{"location":"features/0028-introduce/#note-on-the-ouf-of-band-messages","title":"Note on the out-of-band messages","text":"

    These messages are not a member of the introductions/1.0 protocol; they are not even adopted. They belong to the out-of-band protocols, and are no different from the message that two parties would generate when one invites the other with no intermediary, except that:

    "},{"location":"features/0028-introduce/#request","title":"request","text":"

    This message asks for an introduction to be made. This message also uses the introducee descriptor block, to tell the potential introducer which introducee is the object of the sender's interest:

    {\n  \"@type\": \"https://didcomm.org/introduce/1.0/request\",\n  \"@id\": \"df3b699d-3aa9-4fd0-bb67-49594da545bd\",\n  \"please_introduce_to\": {\n    \"name\": \"Carol\",\n    \"description\": \"The woman who spoke after you at the PTA meeting last night.\",\n    \"expected\": true\n  },\n  \"nwise\": false,\n  \"~timing\": { \"expires_time\": \"2019-04-23 18:00Z\" }\n}\n

    The recipient can choose whether or not to honor it in their own way, on their own schedule. However, a problem_report could be returned if the recipient chooses not to honor it.

    "},{"location":"features/0028-introduce/#advanced-use-cases","title":"Advanced Use Cases","text":"

    Any of the parties can be an organization or thing instead of a person.

    Bob and Carol may actually know each other already, without Alice realizing it. The introduction may be rejected. It may create a new pairwise relationship between Bob and Carol that is entirely invisible to Alice. Or it may create an n-wise relationship in which Alice, Bob, and Carol know one another by the same identifiers.

    Some specific examples follow.

    "},{"location":"features/0028-introduce/#one-introducee-cant-do-didcomm","title":"One introducee can't do DIDComm","text":"

    The Out-Of-Band Protocols allow the invited party to be onboarded (acquire software and an agent) as part of the workflow.

    Introductions support this use case, too. In such a case, the introducer sends a standard proposal to the introducee that DOES have DIDComm capabilities, but conveys the equivalent of a proposal over a non-DIDComm channel to the other introducee. The response from the DIDComm-capable introducee must include an out-of-band message with a deep link for onboarding, and this is sent to the introducee that needs onboarding.

    "},{"location":"features/0028-introduce/#neither-introducee-can-do-didcomm","title":"Neither introducee can do DIDComm","text":"

    In this case, the introducer first goes through onboarding via one of the Out-Of-Band protocols with one introducee. Once that introducee can do DIDComm, the previous workflow is used.

    "},{"location":"features/0028-introduce/#introducer-doesnt-have-didcomm-capabilities","title":"Introducer doesn't have DIDComm capabilities","text":"

    This might happen if AliceCorp wants to connect two of its customers. AliceCorp may not be able to talk to either of its customers over DIDComm channels, but it doesn't know whether they can talk to each other that way.

    In this case, the introducer conveys the same information that a proposal would contain, using non-DIDComm channels. As long as one of the introducees sends back some kind of response that includes approval and an out-of-band message, the message can be delivered. The entire interaction is DIDComm-less.

    "},{"location":"features/0028-introduce/#one-introducee-has-a-public-did-with-a-standing-invitation","title":"One introducee has a public DID with a standing invitation","text":"

    This might happen if Alice wants to introduce Bob to CarolCorp, and CarolCorp has published a connection-invitation for general use.

    As introducer, Alice simply has to forward CarolCorp's connection-invitation to Bob. No proposal message needs to be sent to CarolCorp; this is the skip proposal event shown in the introducer's state machine.

    "},{"location":"features/0028-introduce/#introducee-requests-introduction","title":"Introducee requests introduction","text":"

    Alice still acts as the introducer, but Bob now asks Alice to introduce him to a candidate introducee discovered a priori with the help-me-discover protocol:

    {\n  \"@type\": \"https://didcomm.org/introduce/1.0/request\",\n  \"@id\": \"df3b699d-3aa9-4fd0-bb67-49594da545bd\",\n  \"please_introduce_to\": {\n      \"discovered\": \"didcomm:///5f2396b5-d84e-689e-78a1-2fa2248f03e4/.candidates%7B.id+%3D%3D%3D+%22Carol%22%7D\"\n  },\n  \"~timing\": { \"expires_time\": \"2019-04-23 18:00Z\" }\n}\n

    This request message includes a discovered property with a linkable message path that uniquely identifies the candidate introducee.

    "},{"location":"features/0028-introduce/#requesting-confirmation","title":"Requesting confirmation","text":"

    [TODO: A field in the response where an introducee asks to be notified that the introduction has been made?]

    "},{"location":"features/0028-introduce/#other-stuff","title":"Other stuff","text":"

    [TODO: What if Alice is introducing Bob, a public entity with no connection to her, to Carol, a private person? Can she just relay Bob's invitation that he published on his website? Are there security or privacy implications? What if she is introducing 2 public entities and has a connection to neither?]

    "},{"location":"features/0028-introduce/#reference","title":"Reference","text":""},{"location":"features/0028-introduce/#proposal_1","title":"proposal","text":"

    In the tutorial narrative, only a simple proposal was presented. A fancier version might be:

    {\n  \"@type\": \"https://didcomm.org/introduce/1.0/proposal\",\n  \"@id\": \"df3b699d-3aa9-4fd0-bb67-49594da545bd\",\n  \"to\": {\n    \"name\": \"Kaiser Hospital\",\n    \"description\": \"Where I want to schedule your MRI. NOTE: NOT the one downtown!\",\n    \"description~l10n\": { \"locale\": \"en\", \"es\": \"Donde se toma el MRI; no en el centro\"},\n    \"where\": \"@34.0291739,-118.3589892,12z\",\n    \"img~attach\": {\n      \"description\": \"view from Marina Blvd\",\n      \"mime-type\": \"image/png\",\n      \"filename\": \"kaiser_culver_google.jpg\",\n      \"content\": {\n        \"link\": \"http://bit.ly/2FKkby3\",\n        \"byte_count\": 47738,\n        \"sha256\": \"cd5f24949f453385c89180207ddb1523640ac8565a214d1d37c4014910a4593e\"\n      }\n    },\n    \"proposed\": false\n  },\n  \"nwise\": true,\n  \"~timing\": { \"expires_time\": \"2019-04-23 18:00Z\" }\n}\n

    This adds a number of fields to the introducee descriptor. Each is optional and may be appropriate in certain circumstances. Most should be self-explanatory, but the proposed field deserves special comment. This tells whether the described introducee has received a proposal of their own, or will be introduced without that step.

    This example also adds the nwise field to the proposal. When nwise is present and its value is true, the proposal is to establish an nwise relationship in which the introducer participates, as opposed to a pairwise relationship in which only the introducees participate.

    [TODO: do we care about having a response signed? Security? MITM?]

    "},{"location":"features/0028-introduce/#errors","title":"Errors","text":"

    [TODO: What can go wrong.]

    "},{"location":"features/0028-introduce/#localization","title":"Localization","text":"

    [TODO: the description field in an introducee descriptor. Error codes/catalog.]

    "},{"location":"features/0028-introduce/#drawbacks","title":"Drawbacks","text":"

    Why should we not do this?

    "},{"location":"features/0028-introduce/#rationale-and-alternatives","title":"Rationale and alternatives","text":""},{"location":"features/0028-introduce/#prior-art","title":"Prior art","text":"

    Discuss prior art, both the good and the bad, in relation to this proposal. A few examples of what this can include are:

    This section is intended to encourage you as an author to think about the lessons learned from other implementers and to provide readers of your proposal with a fuller picture. If there is no prior art, that is fine - your ideas are interesting to us whether they are brand new or an adaptation from other communities.

    Note that while precedent set by other communities is some motivation, it does not on its own motivate an enhancement proposal here. Please also take into consideration that Indy sometimes intentionally diverges from common identity features.

    "},{"location":"features/0028-introduce/#unresolved-questions","title":"Unresolved questions","text":""},{"location":"features/0028-introduce/#implementations","title":"Implementations","text":"

    The following lists the implementations (if any) of this RFC. Please do a pull request to add your implementation. If the implementation is open source, include a link to the repo or to the implementation within the repo. Please be consistent in the \"Name\" field so that a mechanical processing of the RFCs can generate a list of all RFCs supported by an Aries implementation.

    Name / Link Implementation Notes"},{"location":"features/0030-sync-connection/","title":"Aries RFC 0030: Sync Connection Protocol 1.0","text":""},{"location":"features/0030-sync-connection/#summary","title":"Summary","text":"

    Define a set of non-centralized protocols (that is, ones that do not involve a common store of state like a blockchain), whereby parties using peer DIDs can synchronize the state of their shared relationship by direct communication with one another.

    "},{"location":"features/0030-sync-connection/#change-log","title":"Change Log","text":""},{"location":"features/0030-sync-connection/#motivation","title":"Motivation","text":"

    For Alice and Bob to interact, they must establish and maintain state. This state includes all the information in a DID Document: endpoint, keys, and associated authorizations.

    The DID exchange protocol describes how these DID Docs are initially exchanged as a relationship is built. However, its mandate ends when a connection is established. This RFC focuses on how peers maintain their relationship thereafter, as DID docs evolve.

    "},{"location":"features/0030-sync-connection/#tutorial","title":"Tutorial","text":"

    Note 1: This RFC assumes you are thoroughly familiar with terminology and constructs from the peer DID method spec. Check there if you need background.

    Note 2: Most protocols between identity owners deal only with messages that cross a domain boundary--what Alice sends to Bob, or vice versa. What Alice does internally is generally none of Bob's business, since interoperability is a function of messages that are passed to external parties, not events that happen inside one's own domain. However, this protocol has some special requirements. Alice may have multiple agents, and Bob's behavior must account for the possibility that each of them has a different view of current relationship state. Alice has a responsibility to share and harmonize the view of state among her agents. Bob doesn't need to know exactly how she does it--but he does need to know that she's doing it, somehow--and he may need to cooperate with Alice to intelligently resolve divergences. For this reason, we describe the protocol as if it involved message passing within a domain in addition to message passing across domains. This is a simplification. The true, precise requirement for compliance is that implementers must pass messages across domains as described here, and they must appear to an outside observer as if they were passing messages within their domain as the protocol stipulates--but if they achieve the intra-domain results using some other mechanism besides DIDComm message passing, that is fine.

    "},{"location":"features/0030-sync-connection/#name-and-version","title":"Name and Version","text":"

    This RFC defines the sync_connection protocol, version 1.x, as identified by the following PIURI:

    https://didcomm.org/sync_connection/1.0\n

    Of course, subsequent evolutions of the protocol will replace 1.0 with an appropriate update per semver rules.

    A related, minor protocol is also defined in subdocs of this RFC:

    "},{"location":"features/0030-sync-connection/#roles","title":"Roles","text":"

    The only role defined in this protocol is peer. However, see this note in the peer DID method spec for some subtleties.

    "},{"location":"features/0030-sync-connection/#states","title":"States","text":"

    This is a steady-state protocol, meaning that the state of participants does not change. Instead, all participants are continuously in a syncing state.

    "},{"location":"features/0030-sync-connection/#messages","title":"Messages","text":""},{"location":"features/0030-sync-connection/#sync_state","title":"sync_state","text":"

    This message announces that the sender wants to synchronize state with the recipient. This could happen because the sender suspects they are out of sync, or because the sender wants to change the state by announcing new, never-before-seen information. The recipient can be another agent within the same sovereign domain, or it can be an agent on the other side of the relationship. A sample looks like this:

    {\n  \"@type\": \"https://didcomm.org/sync-connection/1.0/sync_state\",\n  \"@id\": \"e61586dd-f50e-4ed5-a389-716a49817207\",\n  \"for\": \"did:peer:11-479cbc07c3f991725836a3aa2a581ca2029198aa420b9d99bc0e131d9f3e2cbe\",\n  \"base_hash\": \"d48f058771956a305e12a3b062a3ac81bd8653d7b1a88dd07db8f663f37bf8e0\",\n  \"base_hash_time\": \"2019-07-23 18:05:06.123Z\",\n  \"deltas\": [\n    {\n      \"id\": \"040aaa5e-1a27-40d8-8d53-13a00b82d235\",\n      \"change\": \"ewogICJwdWJsaWNLZXkiOiBbCiAgICB...ozd1htcVBWcGZrY0pDd0R3biIKICAgIH0KICBdCn0=\",\n      \"by\": [ {\"key\": \"H3C2AVvL\", \"sig\": \"if8ooA+32YZc4SQBvIDDY9tgTa...i4VvND87PUqq5/0vsNFEGIIEDA==\"} ],\n      \"when\": \"2019-07-18T15:49:22.03Z\"\n    }\n  ]\n}\n

    Note that the values in the change and sig fields have been shortened for readability.

    The properties in this message include:

    * for: Identifies which state is being synchronized.
    * base_hash: Identifies a shared state against which deltas should be applied. See State Hashes for more details.
    * base_hash_time: An ISO 8601-formatted UTC timestamp, identifying when the sender believes that the base hash became the current state. This value need not be highly accurate, and different agents in Alice and Bob's ecosystem may have different opinions about an appropriate timestamp for the selected base hash. Like timestamps in email headers, it merely provides a rough approximation of timeframe.
    * deltas: Gives a list of deltas that should be applied to the DID doc, beginning at the specified state.

    When this message is received, the following processing happens:

    "},{"location":"features/0030-sync-connection/#state-hashes","title":"State Hashes","text":"

    To reliably describe the state of a DID doc at any given moment, we need a quick way to characterize its content. We could do this with a merkle tree, but the strong ordering of that structure is problematic--different participants may receive different deltas in different orders, and this is okay. What matters is whether they have applied the same set of deltas.

    To achieve this goal, the id properties of all received deltas are sorted and concatenated, and then the string undergoes a SHA256 hash. This produces a state hash.
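    The rule above can be sketched in a few lines (the helper name is illustrative):

    ```python
    import hashlib

    def state_hash(delta_ids):
        """Sort the ids of all received deltas, concatenate them, and
        SHA-256 the resulting string to produce the state hash."""
        concatenated = "".join(sorted(delta_ids))
        return hashlib.sha256(concatenated.encode("utf-8")).hexdigest()
    ```

    Because the ids are sorted before hashing, two participants that have applied the same set of deltas compute the same state hash regardless of the order in which the deltas arrived.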

    "},{"location":"features/0030-sync-connection/#best-practices","title":"Best Practices","text":"

    The following best practices will dramatically improve the robustness of state synchronization, both within and across domains. Software implementing this protocol is not required to do any of these things, but they are strongly recommended.

    "},{"location":"features/0030-sync-connection/#the-state-decorator","title":"The ~state decorator","text":"

    Agents using peer DIDs should attach the ~state decorator to messages to help each other discover when state synchronization is needed. This decorator has the following format:

    \"~state\": [\n  {\"did\": \"<my did>\", \"state_hash\": \"<my state hash>\"},\n  {\"did\": \"<your did>\", \"state_hash\": \"<your state hash>\"}\n]\n

    In n-wise relationships, there may be more than 2 entries in the list.

    The goal is to always describe the current known state hashes for each domain. It is also best practice for the recipient of the message to send a sync_state message back to the sender any time it detects a discrepancy.
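    A recipient's discrepancy check might look like the following sketch. The helper name and the `local_hashes` mapping are hypothetical; the entry fields (`did`, `state_hash`) follow the decorator format shown above:

    ```python
    def find_state_discrepancies(state_decorator, local_hashes):
        """Compare the hashes in a received ~state decorator against the
        locally known state hash for each DID. A non-empty result means a
        sync_state message should be sent back to the sender."""
        return [
            entry["did"]
            for entry in state_decorator
            if local_hashes.get(entry["did"]) != entry["state_hash"]
        ]
    ```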

    "},{"location":"features/0030-sync-connection/#pending-commits","title":"Pending Commits","text":"

    Agents should never commit to a change of state until they know that at least one other agent (on either side of the relationship) agrees to the change. This will significantly decrease the likelihood of merge conflicts. For example, an agent that wants to rotate a key should report the key rotation to someone, and receive an ACK, before it commits to use the new key. This guarantees that there will be gravitas and confirmation of the change, and is a reasonable requirement, since a change that nobody knows about is useless, anyway.

    "},{"location":"features/0030-sync-connection/#routing-cloud-agent-rules","title":"Routing (Cloud) Agent Rules","text":"

    It is best practice for routing agents (typically in the cloud) to enforce the following rules:

    "},{"location":"features/0030-sync-connection/#proactive-sync","title":"Proactive Sync","text":"

    Any time that an agent has reason to suspect that it may be out of sync, it should attempt to reconcile. For example, if a mobile device has been turned off for an extended period of time, it should check with other agents to see if state has evolved, once it is able to communicate again.

    "},{"location":"features/0030-sync-connection/#test-cases","title":"Test Cases","text":"

    Because this protocol encapsulates a lot of potential complexity, and many corner cases, it is particularly important that implementations exercise the full range of scenarios in the Test Cases doc. Community members are encouraged to submit new test cases if they find situations that are not covered.

    "},{"location":"features/0030-sync-connection/#reference","title":"Reference","text":""},{"location":"features/0030-sync-connection/#state-and-sequence-rules","title":"State and Sequence Rules","text":"

    [TODO: create state machine matrices that show which messages can be sent in which states, causing which transitions]

    "},{"location":"features/0030-sync-connection/#message-type-detail","title":"Message Type Detail","text":"

    [TODO: explain every possible field of every possible message type]

    "},{"location":"features/0030-sync-connection/#localized-message-catalog","title":"Localized Message Catalog","text":"

    [TODO: define some localized strings that could be used with these messages, in errors or at other generally useful points?]

    "},{"location":"features/0030-sync-connection/#implementations","title":"Implementations","text":"

    The following lists the implementations (if any) of this RFC. Please do a pull request to add your implementation. If the implementation is open source, include a link to the repo or to the implementation within the repo. Please be consistent in the \"Name\" field so that a mechanical processing of the RFCs can generate a list of all RFCs supported by an Aries implementation.

    Name / Link Implementation Notes"},{"location":"features/0030-sync-connection/test_cases/","title":"Test Cases for Sync Connection Protocol","text":""},{"location":"features/0030-sync-connection/test_cases/#given","title":"Given","text":"

    Let us assume that Alice and Bob each have 4 agents (A.1-A.4 and B.1-B.4, respectively), and that each of these agents possesses one key pair that's authorized to authenticate and do certain things in the DID Doc.

    A.1 and B.1 are routing (cloud) agents, while A.2-4 and B.2-4 run on edge devices that are imperfectly connected. A.1 and B.1 do not appear in the authentication section of their respective DID Docs, and thus cannot log in on Alice and Bob's behalf.

    Let us further assume that Alice and Bob each have two \"recovery keys\": A.5 and A.6; B.5 and B.6. These keys are not held by agents, but are printed on paper and held in a vault, or are sharded to friends. They are highly privileged but very difficult to use, since they would have to be digitized or unsharded and given to an agent before they would be useful.

    \"Admin\" operations like adding keys and granting privileges to them require either one of the privileged recovery keys, or 2 of the other agent keys to agree.

    Let us further assume that the initial state of Alice's domain, as described above, is known as A.state[0], and that Bob's state is B.state[0].

    These states may be represented by the following authorization section of each DID Doc:

    [TODO]

    "},{"location":"features/0030-sync-connection/test_cases/#scenarios-each-starts-over-at-the-initial-conditions","title":"Scenarios (each starts over at the initial conditions)","text":"
    1. A.1 attempts to rotate its key by sending a sync_state message to A.2. Expected outcome: Should receive ACK, and A.2's state should be updated. Once A.1 receives the ACK, it should commit the pending change in its own key. Until it receives the ACK, it should NOT commit the pending change.

    2. Like #1, except that message goes to B.1 and B.1's state is what should be updated.

    3. A.1 attempts to send a message to B.1, using the ~relstate decorator, claiming states with hash(A.state[0]) and hash(B.state[0]). Expected outcome: B.1 accepts the message.

    4. As #3, except that A.1 claims the current states are random hashes. Expected outcome: B.1 sends back a problem report, plus two sync_state messages (one with who = \"me\" and one with who = \"you\"). Each has an empty deltas array and base_state = the correct base state hash.

    5. A.1 attempts to rotate the key for A.2 by sending a sync_state message to any other agent. Expected outcome: change is rejected with a problem report that points out that A.1 is not authorized to rotate any key other than itself.

    "},{"location":"features/0030-sync-connection/abandon-connection-protocol/","title":"Abandon Connection Protocol 1.0","text":""},{"location":"features/0030-sync-connection/abandon-connection-protocol/#summary","title":"Summary","text":"

    Describes how parties using peer DIDs can notify one another that they are abandoning the connection.

    "},{"location":"features/0030-sync-connection/abandon-connection-protocol/#motivation","title":"Motivation","text":"

    We need a way to tell another party that we are abandoning the connection. This is not strictly required, but it is good hygiene.

    "},{"location":"features/0030-sync-connection/abandon-connection-protocol/#tutorial","title":"Tutorial","text":""},{"location":"features/0030-sync-connection/abandon-connection-protocol/#name-and-version","title":"Name and Version","text":"

    This RFC defines the abandon_connection protocol, version 1.x, as identified by the following PIURI:

    https://didcomm.org/abandon_connection/1.0\n

    Of course, subsequent evolutions of the protocol will replace 1.0 with an appropriate update per semver rules.

    "},{"location":"features/0030-sync-connection/abandon-connection-protocol/#roles","title":"Roles","text":"

    This is a classic one-step notification, so it uses the predefined roles of notifier and notified.

    "},{"location":"features/0030-sync-connection/abandon-connection-protocol/#state-machines","title":"State Machines","text":"

    No state changes during this protocol, although overarching state could change once it completes. Therefore no state machines are required.

    "},{"location":"features/0030-sync-connection/abandon-connection-protocol/#messages","title":"Messages","text":""},{"location":"features/0030-sync-connection/abandon-connection-protocol/#announce","title":"announce","text":"

    This message is used to announce that a party is abandoning the relationship. In a self-sovereign paradigm, abandoning a relationship can be done unilaterally, and does not require formal announcement. Indeed, sometimes a formal announcement is impossible, if one of the parties is offline. So while using this message is encouraged and best practice, it is not mandatory.

    An announce message from Alice to Bob looks like this:

    {\n  \"@type\": \"https://didcomm.org/abandon_connection/1.0/announce\",\n  \"@id\": \"c17147d2-ada6-4d3c-a489-dc1e1bf778ab\"\n}\n

    If Bob receives a message like this, he should assume that Alice no longer considers herself part of \"us\", and take appropriate action. This could include destroying data about Alice that he has accumulated over the course of their relationship, removing her peer DID and its public key(s) and endpoints from his wallet, and so forth. The nature of the relationship, the need for a historical audit trail, regulatory requirements, and many other factors may influence what's appropriate; the protocol simply requires that the message be understood to have permanent termination semantics.

    "},{"location":"features/0031-discover-features/","title":"Aries RFC 0031: Discover Features Protocol 1.0","text":""},{"location":"features/0031-discover-features/#summary","title":"Summary","text":"

Describes how agents can query one another to discover which features they support, and to what extent.

    "},{"location":"features/0031-discover-features/#motivation","title":"Motivation","text":"

Though some agents will support just one protocol and will be statically configured to interact with just one other party, many exciting uses of agents are more dynamic and unpredictable. When Alice and Bob meet, they won't know in advance which features are supported by one another's agents. They need a way to find out.

    "},{"location":"features/0031-discover-features/#tutorial","title":"Tutorial","text":"

    This RFC introduces a protocol for discussing the protocols an agent can handle. The identifier for the message family used by this protocol is discover-features, and the fully qualified URI for its definition is:

    https://didcomm.org/discover-features/1.0\n

    This protocol is now superseded by v2.0 in RFC 0557. Prefer the new version where practical.

    "},{"location":"features/0031-discover-features/#roles","title":"Roles","text":"

    There are two roles in the discover-features protocol: requester and responder. The requester asks the responder about the protocols it supports, and the responder answers. Each role uses a single message type.

    "},{"location":"features/0031-discover-features/#states","title":"States","text":"

    This is a classic two-step request~response interaction, so it uses the predefined state machines for any requester and responder:

    "},{"location":"features/0031-discover-features/#messages","title":"Messages","text":""},{"location":"features/0031-discover-features/#query-message-type","title":"query Message Type","text":"

    A discover-features/query message looks like this:

    {\n  \"@type\": \"https://didcomm.org/discover-features/1.0/query\",\n  \"@id\": \"yWd8wfYzhmuXX3hmLNaV5bVbAjbWaU\",\n  \"query\": \"https://didcomm.org/tictactoe/1.*\",\n  \"comment\": \"I'm wondering if we can play tic-tac-toe...\"\n}\n

    Query messages say, \"Please tell me what your capabilities are with respect to the protocols that match this string.\" This particular example asks if another agent knows any 1.x versions of the tictactoe protocol.

The query field may use the * wildcard. By itself, a query with just the wildcard says, \"I'm interested in anything you want to share with me.\" But usually, this wildcard will be used to match a prefix that's a little more specific, as in the example that matches any 1.x version.

Any agent may send another agent this message type at any time. Implementers of agents that intend to support dynamic relationships and rich features are strongly encouraged to implement support for this message, as it is likely to be among the first messages exchanged with a stranger.
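A responder's query matching can be sketched in a few lines of Python (a hypothetical helper, not part of any Aries framework; it treats the query as a literal PIURI in which `*` matches any substring):

```python
import re

def matches_query(query, pid):
    """Return True if a supported protocol id matches a discover-features
    query. Everything in the query is literal except the '*' wildcard."""
    # Escape regex metacharacters, then turn the escaped '*' back into '.*'.
    pattern = re.escape(query).replace(r"\*", ".*")
    return re.fullmatch(pattern, pid) is not None

supported = ["https://didcomm.org/tictactoe/1.0",
             "https://didcomm.org/tictactoe/2.0"]
# Only the 1.x version matches the query from the example above.
hits = [pid for pid in supported
        if matches_query("https://didcomm.org/tictactoe/1.*", pid)]
```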

    "},{"location":"features/0031-discover-features/#disclose-message-type","title":"disclose Message Type","text":"

    A discover-features/disclose message looks like this:

    {\n  \"@type\": \"https://didcomm.org/discover-features/1.0/disclose\",\n  \"~thread\": { \"thid\": \"yWd8wfYzhmuXX3hmLNaV5bVbAjbWaU\" },\n  \"protocols\": [\n    {\n      \"pid\": \"https://didcomm.org/tictactoe/1.0\",\n      \"roles\": [\"player\"]\n    }\n  ]\n}\n

    The protocols field is a JSON array of protocol support descriptor objects that match the query. Each descriptor has a pid that contains a protocol version (fully qualified message family identifier such as https://didcomm.org/tictactoe/1.0), plus a roles array that enumerates the roles the responding agent can play in the associated protocol.

    Response messages say, \"Here are some protocols I support that matched your query, and some things I can do with each one.\"

    "},{"location":"features/0031-discover-features/#sparse-responses","title":"Sparse Responses","text":"

    Responses do not have to contain exhaustive detail. For example, the following response is probably just as good:

    {\n  \"@type\": \"https://didcomm.org/discover-features/1.0/disclose\",\n  \"~thread\": { \"thid\": \"yWd8wfYzhmuXX3hmLNaV5bVbAjbWaU\" },\n  \"protocols\": [\n    {\"pid\": \"https://didcomm.org/tictactoe/1.0\"}\n  ]\n}\n

    The reason why less detail probably suffices is that agents do not need to know everything about one another's implementations in order to start an interaction--usually the flow will organically reveal what's needed. For example, the outcome message in the tictactoe protocol isn't needed until the end, and is optional anyway. Alice can start a tictactoe game with Bob and will eventually see whether he has the right idea about outcome messages.

The missing roles field in this response does not say, \"I support no roles in this protocol.\" It says, \"I support the protocol but I'm providing no detail about specific roles.\"

Even an empty protocols array does not say, \"I support no protocols that match your query.\" It says, \"I'm not telling you that I support any protocols that match your query.\" An agent might not tell another that it supports a protocol for various reasons, including: the trust that it imputes to the other party based on cumulative interactions so far, whether it's in the middle of upgrading a plugin, whether it's currently under high load, and so forth. And responses to a discover-features request are not guaranteed to be true forever; agents can be upgraded or downgraded, although they probably won't churn in their protocol support from moment to moment.

    "},{"location":"features/0031-discover-features/#privacy-considerations","title":"Privacy Considerations","text":"

Because the regex in a query message can be very inclusive, the discover-features protocol could be used to mine information suitable for agent fingerprinting, in much the same way that browser fingerprinting works. This is antithetical to the ethos of our ecosystem, and represents bad behavior. Agents should use discover-features to answer legitimate questions, and not to build detailed profiles of one another. However, fingerprinting may be attempted anyway.

    For agents that want to maintain privacy, several best practices are recommended:

    "},{"location":"features/0031-discover-features/#follow-selective-disclosure","title":"Follow selective disclosure.","text":"

Only reveal supported features based on trust in the relationship. Even if you support a protocol, you may not wish to use it in every relationship. Don't tell others about protocols you do not plan to use with them.

    Patterns are easier to see in larger data samples. However, a pattern of ultra-minimal data is also a problem, so use good judgment about how forthcoming to be.

    "},{"location":"features/0031-discover-features/#vary-the-format-of-responses","title":"Vary the format of responses.","text":"

    Sometimes, you might prettify your agent plaintext message one way, sometimes another.

    "},{"location":"features/0031-discover-features/#vary-the-order-of-items-in-the-protocols-array","title":"Vary the order of items in the protocols array.","text":"

    If more than one key matches a query, do not always return them in alphabetical order or version order. If you do return them in order, do not always return them in ascending order.

    "},{"location":"features/0031-discover-features/#consider-adding-some-spurious-details","title":"Consider adding some spurious details.","text":"

    If a query could match multiple message families, then occasionally you might add some made-up message family names as matches. If a regex allows multiple versions of a protocol, then sometimes you might use some made-up versions. And sometimes not. (Doing this too aggressively might reveal your agent implementation, so use sparingly.)

    "},{"location":"features/0031-discover-features/#vary-how-you-query-too","title":"Vary how you query, too.","text":"

    How you ask questions may also be fingerprintable.

    "},{"location":"features/0031-discover-features/#reference","title":"Reference","text":""},{"location":"features/0031-discover-features/#localization","title":"Localization","text":"

    The query message contains a comment field that is localizable. This field is optional and may not be often used, but when it is, it is to provide a human-friendly justification for the query. An agent that consults its master before answering a query could present the content of this field as an explanation of the request.

    All message types in this family thus have the following implicit decorator:

{\n\n  \"~l10n\": {\n    \"locales\": { \"en\": [\"comment\"] },\n    \"catalogs\": [\"https://github.com/hyperledger/aries-rfcs/blob/a9ad499/features/0031-discover-features/catalog.json\"]\n  }\n\n}\n
    "},{"location":"features/0031-discover-features/#message-catalog","title":"Message Catalog","text":"

    As shown in the above ~l10n decorator, all agents using this protocol have a simple message catalog in scope. This allows agents to send problem-reports to complain about something related to discover-features issues. The catalog looks like this (see catalog.json):

    {\n  \"query-too-intrusive\": {\n    \"en\": \"Protocol query asked me to reveal too much information.\"\n  }\n}\n

    For more information, see the localization RFC.
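Looking up localized text for a problem code from the catalog above might look like this (an illustrative sketch; the `localize` helper is hypothetical, not part of any Aries framework):

```python
# Catalog contents from catalog.json, as shown above.
catalog = {
    "query-too-intrusive": {
        "en": "Protocol query asked me to reveal too much information."
    }
}

def localize(code, locale, fallback="en"):
    """Resolve a locale-independent problem code to human-readable text,
    falling back to English, then to the bare code itself."""
    entry = catalog.get(code, {})
    return entry.get(locale) or entry.get(fallback) or code
```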

    "},{"location":"features/0031-discover-features/#unresolved-questions","title":"Unresolved questions","text":""},{"location":"features/0031-discover-features/#implementations","title":"Implementations","text":"

    The following lists the implementations (if any) of this RFC. Please do a pull request to add your implementation. If the implementation is open source, include a link to the repo or to the implementation within the repo. Please be consistent in the \"Name\" field so that a mechanical processing of the RFCs can generate a list of all RFCs supported by an Aries implementation.

    Name / Link Implementation Notes Streetcred.id Commercial mobile and web app built using Aries Framework - .NET MISSING test results Aries Protocol Test Suite"},{"location":"features/0032-message-timing/","title":"Aries RFC 0032: Message Timing","text":""},{"location":"features/0032-message-timing/#summary","title":"Summary","text":"

    Explain how timing of agent messages can be communicated and constrained.

    "},{"location":"features/0032-message-timing/#motivation","title":"Motivation","text":"

    Many timing considerations influence asynchronous messaging delivery. We need a standard way to talk about them.

    "},{"location":"features/0032-message-timing/#tutorial","title":"Tutorial","text":"

    This RFC introduces a decorator to communicate about timing of messages. It is compatible with, but independent from, conventions around date and time fields in messages.

    Timing attributes of messages can be described with the ~timing decorator. It offers a number of optional subfields:

    \"~timing\": {\n  \"in_time\":  \"2019-01-23 18:03:27.123Z\",\n  \"out_time\": \"2019-01-23 18:03:27.123Z\",\n  \"stale_time\": \"2019-01-24 18:25Z\",\n  \"expires_time\": \"2019-01-25 18:25Z\",\n  \"delay_milli\": 12345,\n  \"wait_until_time\": \"2019-01-24 00:00Z\"\n}\n

    The meaning of these fields is:

    All information in these fields should be considered best-effort. That is, the sender makes a best effort to communicate accurately, and the receiver makes a best effort to use the information intelligently. In this respect, these values are like timestamps in email headers--they are generally useful, but not expected to be perfect. Receivers are not required to honor them exactly.

    An agent may ignore the ~timing decorator entirely or implement the ~timing decorator and silently ignore any of the fields it chooses not to support.
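The field definitions are elided above; assuming the natural reading that expires_time marks the instant after which the sender considers the message invalid, a best-effort receiver-side check might look like this (hypothetical helper names; the timestamp formats are taken from the example):

```python
from datetime import datetime, timezone

def parse_utc(ts):
    """Parse the 'YYYY-MM-DD HH:MM[:SS[.fff]]Z' timestamps used above."""
    for fmt in ("%Y-%m-%d %H:%M:%S.%fZ", "%Y-%m-%d %H:%M:%SZ", "%Y-%m-%d %H:%MZ"):
        try:
            return datetime.strptime(ts, fmt).replace(tzinfo=timezone.utc)
        except ValueError:
            pass
    raise ValueError("unrecognized timestamp: " + ts)

def should_discard(timing, now):
    """Best-effort handling of ~timing: drop a message whose assumed
    expires_time (sender considers it invalid after this) has passed."""
    expires = timing.get("expires_time")
    return expires is not None and now > parse_utc(expires)

timing = {"expires_time": "2019-01-25 18:25Z"}
```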

    "},{"location":"features/0032-message-timing/#timing-in-routing","title":"Timing in Routing","text":"

    Most usage of the ~timing decorator is likely to focus on application-oriented messages processed at the edge. in_time and out_time, for example, are mainly useful so Bob can know how long Alice took to ponder her response to his love letter. In onion routing, where one edge agent prepares all layers of the forward wrapping, it makes no sense to apply them to forward messages. However, if a relay is composing new forward messages dynamically, these fields could be used to measure the delay imposed by that relay. All the other fields have meaning in routing.

    "},{"location":"features/0032-message-timing/#timing-and-threads","title":"Timing and Threads","text":"

    When a message is a reply, then in_time on an application-focused message is useful. However, out_time and all other fields are meaningful regardless of whether threading is active.

    "},{"location":"features/0032-message-timing/#reference","title":"Reference","text":""},{"location":"features/0032-message-timing/#implementations","title":"Implementations","text":"

    The following lists the implementations (if any) of this RFC. Please do a pull request to add your implementation. If the implementation is open source, include a link to the repo or to the implementation within the repo. Please be consistent in the \"Name\" field so that a mechanical processing of the RFCs can generate a list of all RFCs supported by an Aries implementation.

    Name / Link Implementation Notes RFC 0193: Coin Flip Protocol Uses ~timing.expires_time to time out each step of the coin flip."},{"location":"features/0034-message-tracing/","title":"Aries RFC 0034: Message Tracing","text":""},{"location":"features/0034-message-tracing/#summary","title":"Summary","text":"

Define a mechanism to track what happens in complex DIDComm interactions, to make troubleshooting and auditing easier.

    "},{"location":"features/0034-message-tracing/#motivation","title":"Motivation","text":"

    Anyone who has searched trash and spam folders for a missing email knows that when messages don't elicit the expected reaction, troubleshooting can be tricky. Aries-style agent-to-agent communication is likely to manifest many of the same challenges as email, in that it may be routed to multiple places, by multiple parties, with incomplete visibility into the meaning or state associated with individual messages. Aries's communication is even more opaque than ordinary email, in that it is transport agnostic and encrypted...

    In a future world where DIDComm technology is ubiquitous, people may send messages from one agent to another, and wonder why nothing happened, or why a particular error is reported. They will need answers.

    Also, developers and testers who are working with DIDComm-based protocols need a way to debug.

    "},{"location":"features/0034-message-tracing/#tutorial","title":"Tutorial","text":""},{"location":"features/0034-message-tracing/#basics","title":"Basics","text":"

Many systems that deliver physical packages offer a \"certified delivery\" or \"return receipt requested\" feature. To activate the feature, a sender affixes a special label to the package, announcing who should be notified, and how. Handlers of the package then cooperate to satisfy the request.

    DIDComm thread tracing works on a similar principle. When tracing is desired, a sender adds to the normal message metadata a special decorator that the message handler can see. If the handler notices the decorator and chooses to honor the request, it emits a notification to provide tracing.

    The main complication is that DIDComm message routing uses nested layers of encryption. What is visible to one message handler may not be visible to another. Therefore, the decorator must be repeated in every layer of nesting where tracing is required. Although this makes tracing somewhat verbose, it also provides precision; troubleshooting can focus only on one problematic section of an overall route, and can degrade privacy selectively.

    "},{"location":"features/0034-message-tracing/#decorator","title":"Decorator","text":"

Tracing is requested by decorating the JSON plaintext of a DIDComm message (which will often be a forward message, but could also be the terminal message unpacked and handled at its final destination) with the ~trace attribute. Here is the simplest possible example:

{\n  \"~trace\": \"http://example.com/tracer\"\n}\n

This example asks the handler of the message to perform an HTTP POST of a trace report about the message to the URI http://example.com/tracer.

The service listening for trace reports--called the trace sink--doesn't have to have any special characteristics, other than support for HTTP 1.1 or SMTP (for mailto: URIs) and the ability to receive small plaintext payloads rapidly. It may use TLS, but it is not required to. If TLS is used, the parties that submit reports should accept the certificate without strong checking, even if it is expired or invalid. The rationale for this choice is:

    1. It is the sender's trust in the tracing service, not the handler's trust, that matters.
    2. Tracing is inherently unsafe and non-privacy-preserving, in that it introduces an eavesdropper and a channel with uncertain security guarantees. Trying to secure the eavesdropper is a waste of effort.
    3. Introducing a strong dependency on PKI-based trust into a protocol that exists to improve PKI feels wrong-headed.
    4. When tracing is needed, the last thing we should do is create another fragility to troubleshoot.
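A handler submitting a report under these rules might look like the following sketch (stdlib urllib; the report body shown is illustrative, not a normative schema):

```python
import json
import ssl
import urllib.request

def build_trace_request(trace_uri, report):
    """Build (but do not send) the HTTP POST carrying a trace report."""
    return urllib.request.Request(
        trace_uri,
        data=json.dumps(report).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

# Per the rationale above, accept the sink's certificate without strong
# checking: an SSL context with hostname checks and verification disabled.
lax_tls = ssl.create_default_context()
lax_tls.check_hostname = False
lax_tls.verify_mode = ssl.CERT_NONE
# Sending would be: urllib.request.urlopen(req, context=lax_tls, timeout=5)

req = build_trace_request("http://example.com/tracer",
                          {"for_id": "98fd8d72-80f6-4419-abc2-c65ea39d0f38.1"})
```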
    "},{"location":"features/0034-message-tracing/#trace-reports","title":"Trace Reports","text":"

    The body of the HTTP request (the trace report) is a JSON document that looks like this:

    "},{"location":"features/0034-message-tracing/#subtleties","title":"Subtleties","text":""},{"location":"features/0034-message-tracing/#message-ids","title":"Message IDs","text":"

    If messages have a different @id attribute at each hop in a delivery chain, then a trace of the message at hop 1 and a trace of the message at hop 2 will not appear to have any connection when the reports are analyzed together.

    To solve this problem, traced messages use an ID convention that permits ordering. Assume that the inner application message has a base ID, X. Containing messages (e.g., forward messages) have IDs in the form X.1, X.2, X.3, and so forth -- where numbers represent the order in which the messages will be handled. Notice in the sample trace report above that the for_id of the trace report message is 98fd8d72-80f6-4419-abc2-c65ea39d0f38.1. This implies that it is tracing the first hop of inner, application message with id 98fd8d72-80f6-4419-abc2-c65ea39d0f38.
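This ID convention is easy to apply, and easy to invert when collating reports (a sketch with hypothetical helper names):

```python
def hop_id(base_id, hop):
    """ID of the containing message at a given hop: base ID X yields
    X.1, X.2, X.3, ... in handling order."""
    return f"{base_id}.{hop}"

def route_order(for_ids):
    """Sort trace-report for_id values into route order, even when the
    reports themselves arrived out of sequence."""
    return sorted(for_ids, key=lambda fid: int(fid.rsplit(".", 1)[1]))

base = "98fd8d72-80f6-4419-abc2-c65ea39d0f38"
arrived = [hop_id(base, 2), hop_id(base, 3), hop_id(base, 1)]
ordered = route_order(arrived)  # base.1, base.2, base.3
```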

    "},{"location":"features/0034-message-tracing/#delegation","title":"Delegation","text":"

    Sometimes, a message is sent before it is fully wrapped for all hops in its route. This can happen, for example, if Alice's edge agent delegates to Alice's cloud agent the message preparation for later stages of routing.

    In such cases, tracing for the delegated portion of the route should default to inherit the tracing choice of the portion of the route already seen. To override this, the ~trace decorator placed on the initial message from Alice's edge to Alice's cloud can include the optional full-route attribute, with its value set to true or false.

This tells handlers that are wrapping subsequent portions of a routed message to either propagate or truncate the tracing request in any new forward messages they compose.

    "},{"location":"features/0034-message-tracing/#timing-and-sequencing","title":"Timing and Sequencing","text":"

    Each trace report includes a UTC timestamp from the reporting handler. This timestamp should be computed at the instant a trace report is prepared--not when it is queued or delivered. Even so, it offers only a rough approximation of when something happened. Since system clocks from handlers may not be synchronized, there is no guarantee of precision or of agreement among timestamps.

    In addition, trace reports may be submitted asynchronously with respect to the message handling they document. Thus, a trace report could arrive out of sequence, even if the handling it describes occurred correctly. This makes it vital to order trace reports according to the ID sequencing convention described above.

    "},{"location":"features/0034-message-tracing/#tracing-the-original-sender","title":"Tracing the original sender","text":"

    The original sender may not run a message handling routine that triggers tracing. However, as a best practice, senders that enable tracing should send a trace report when they send, so the beginning of a routing sequence is documented. This report should reference X.0 in for_id, where X is the ID of the inner application message for the final recipient.

    "},{"location":"features/0034-message-tracing/#handling-a-message-more-than-once","title":"Handling a message more than once","text":"

    A particular handler may wish to document multiple phases of processing for a message. For example, it may choose to emit a trace report when the message is received, and again when the message is \"done.\" In such cases, the proper sequence of the two messages, both of which will have the same for_id attribute, is given by the relative sequence of the timestamps.

    Processing time for each handler--or for phases within a handler--is given by the elapsed_milli attribute.

    "},{"location":"features/0034-message-tracing/#privacy","title":"Privacy","text":"

    Tracing inherently compromises privacy. It is totally voluntary, and handlers should not honor trace requests if they have reason to believe they have been inserted for nefarious purposes. However, the fact that the trace reports can only be requested by the same entities that send the messages, and that they are encrypted in the same way as any other plaintext that a handler eventually sees, puts privacy controls in the hands of the ultimate sender and receiver.

    "},{"location":"features/0034-message-tracing/#tracing-entire-threads","title":"Tracing entire threads","text":"

If a sender wishes to enable tracing for an entire multi-step interaction between multiple parties, the full_thread attribute can be included on an inner application message, with its value set to true. This signals to recipients that the sender wishes to have tracing turned on until the interaction is complete. Recipients may or may not honor such requests. If they don't, they may choose to send an error to the sender explaining why they are not honoring the request.

    "},{"location":"features/0034-message-tracing/#reference","title":"Reference","text":""},{"location":"features/0034-message-tracing/#trace-decorator-trace","title":"Trace decorator (~trace)","text":"

    Value is any URI. At least http, https, and mailto should be supported. If mail is sent, the message subject should be \"trace report for ?\", where ? is the value of the for_id attribute in the report, and the email body should contain the plaintext of the report, as utf8.

    "},{"location":"features/0034-message-tracing/#trace-report-attributes","title":"Trace Report Attributes","text":""},{"location":"features/0034-message-tracing/#drawbacks","title":"Drawbacks","text":"

    Tracing makes network communication quite noisy. It imposes a burden on message handlers. It may also incur performance penalties.

    "},{"location":"features/0034-message-tracing/#rationale-and-alternatives","title":"Rationale and alternatives","text":"

    Wireshark and similar network monitoring tools could give some visibility into agent-to-agent interactions. However, it would be hard to make sense of bytes on the wire, due to encryption and the way individual messages may be divorced from routing or thread context.

    Proprietary tracing could be added to the agents built by particular vendors. However, this would have limited utility if an interaction involved software not made by that vendor.

    "},{"location":"features/0034-message-tracing/#prior-art","title":"Prior art","text":"

    The message threading RFC and the error reporting RFC touch on similar subjects, but are distinct.

    "},{"location":"features/0034-message-tracing/#unresolved-questions","title":"Unresolved questions","text":"

    None.

    "},{"location":"features/0034-message-tracing/#implementations","title":"Implementations","text":"

    The following lists the implementations (if any) of this RFC. Please do a pull request to add your implementation. If the implementation is open source, include a link to the repo or to the implementation within the repo. Please be consistent in the \"Name\" field so that a mechanical processing of the RFCs can generate a list of all RFCs supported by an Aries implementation.

    Name / Link Implementation Notes"},{"location":"features/0035-report-problem/","title":"Aries RFC 0035: Report Problem Protocol 1.0","text":""},{"location":"features/0035-report-problem/#summary","title":"Summary","text":"

    Describes how to report errors and warnings in a powerful, interoperable way. All implementations of SSI agent or hub technology SHOULD implement this RFC.

    "},{"location":"features/0035-report-problem/#change-log","title":"Change Log","text":""},{"location":"features/0035-report-problem/#motivation","title":"Motivation","text":"

    Effective reporting of errors and warnings is difficult in any system, and particularly so in decentralized systems such as remotely collaborating agents. We need to surface problems, and their supporting context, to people who want to know about them (and perhaps separately, to people who can actually fix them). This is especially challenging when a problem is detected well after and well away from its cause, and when multiple parties may need to cooperate on a solution.

    Interoperability is perhaps more crucial with problem reporting than with any other aspect of DIDComm, since an agent written by one developer MUST be able to understand an error reported by an entirely different team. Notice how different this is from normal enterprise software development, where developers only need to worry about understanding their own errors.

The goal of this RFC is to provide agents with tools and techniques that make it possible to address these challenges. It makes two key contributions:

    "},{"location":"features/0035-report-problem/#tutorial","title":"Tutorial","text":""},{"location":"features/0035-report-problem/#error-vs-warning-vs-problem","title":"\"Error\" vs. \"Warning\" vs. \"Problem\"","text":"

    The distinction between \"error\" and \"warning\" is often thought of as one of severity -- errors are really bad, and warnings are only somewhat bad. This is reinforced by the way logging platforms assign numeric constants to ERROR vs. WARN log events, and by the way compilers let warnings be suppressed but refuse to ignore errors.

    However, any cybersecurity professional will tell you that warnings sometimes signal deep and scary problems that should not be ignored, and most veteran programmers can tell war stories that reinforce this wisdom. A deeper analysis of warnings reveals that what truly differentiates them from errors is not their lesser severity, but rather their greater ambiguity. Warnings are problems that require human judgment to evaluate, whereas errors are unambiguously bad.

The mechanism for reporting problems in DIDComm cannot make a simplistic assumption that all agents are configured to run with a particular verbosity or debug level. Each agent must let other agents decide for themselves, based on policy or user preference, what to do about various issues. For this reason, we use the generic term \"problem\" instead of the more specific and semantically opinionated term \"error\" (or \"warning\") to describe the general situation we're addressing. \"Problem\" includes any deviation from the so-called \"happy path\" of an interaction. This could include situations where the severity is unknown and must be evaluated by a human, as well as surprising events (e.g., a decision by a human to alter the basis for in-flight messaging by moving from one device to another).

    "},{"location":"features/0035-report-problem/#specific-challenges","title":"Specific Challenges","text":"

    All of the following challenges need to be addressed.

    1. Report problems to external parties interacting with us. For example, AliceCorp has to be able to tell Bob that it can\u2019t issue the credential he requested because his payment didn\u2019t go through.
    2. Report problems to other entities inside our own domain. For example, AliceCorp\u2019s agent #1 has to be able to report to AliceCorp agent #2 that it is out of disk space.
    3. Report in a way that provides human beings with useful context and guidance to troubleshoot. Most developers know of cases where error reporting was technically correct but completely useless. Bad communication about problems is one of the most common causes of UX debacles. Humans using agents will speak different languages, have differing degrees of technical competence, and have different software and hardware resources. They may lack context about what their agents are doing, such as when a DIDComm interaction occurs as a result of scheduled or policy-driven actions. This makes context and guidance crucial.
    4. Map a problem backward in time, space, and circumstances, so when it is studied, its original context is available. This is particularly difficult in DIDComm, which is transport-agnostic and inherently asynchronous, and which takes place on an inconsistently connected digital landscape.
    5. Support localization using techniques in the l10n RFC.
    6. Provide consistent, locale-independent problem codes, not just localized text, so problems can be researched in knowledge bases, on Stack Overflow, and in other internet forums, regardless of the natural language in which a message displays. This also helps meaning remain stable as wording is tweaked.
7. Provide a registry of well known problem codes that are carefully defined and localized, to maximize shared understanding. Maintaining an exhaustive list of all possible things that can go wrong with all possible agents in all possible interactions is completely unrealistic. However, it may be possible to maintain a curated subset. While we can't enumerate everything that can go wrong in a financial transaction, a code for \"insufficient funds\" might have near-universal usefulness. Compare the posix error inventory in errno.h.
    8. Facilitate automated problem handling by agents, not just manual handling by humans. Perfect automation may be impossible, but high levels of automation should be doable.
    9. Clarify how the problem affects an in-progress interaction. Does a failure to process payment reset the interaction to the very beginning of the protocol, or just back to the previous step, where payment was requested? This requires problems to be matched in a formal way to the state machine of a protocol underway.
    "},{"location":"features/0035-report-problem/#the-report-problem-protocol","title":"The report-problem protocol","text":"

    Reporting problems uses a simple one-step notification protocol. Its official PIURI is:

    https://didcomm.org/report-problem/1.0\n

    The protocol includes the standard notifier and notified roles. It defines a single message type problem-report, introduced here.

    A problem-report communicates about a problem when an agent-to-agent message is possible and a recipient for the problem report is known. This covers, for example, cases where a Sender's message gets to an intended Recipient, but the Recipient is unable to process the message for some reason and wants to notify the Sender. It may also be relevant in cases where the recipient of the problem-report is not a message Sender. Of course, a reporting technique that depends on message delivery doesn't apply when the error reporter can't identify or communicate with the proper recipient.

    "},{"location":"features/0035-report-problem/#the-problem-report-message-type","title":"The problem-report message type","text":"

    Only description.code is required, but a maximally verbose problem-report could contain all of the following:

    {\n  \"@type\"            : \"https://didcomm.org/report-problem/1.0/problem-report\",\n  \"@id\"              : \"an identifier that can be used to discuss this error message\",\n  \"~thread\"          : \"info about the threading context in which the error occurred (if any)\",\n  \"description\"      : { \"en\": \"localized message\", \"code\": \"symbolic-name-for-error\" },\n  \"problem_items\"    : [ {\"<item descrip>\": \"value\"} ],\n  \"who_retries\"      : \"enum: you | me | both | none\",\n  \"fix_hint\"         : { \"en\": \"localized error-instance-specific hint of how to fix issue\"},\n  \"impact\"           : \"enum: message | thread | connection\",\n  \"where\"            : \"enum: you | me | other - enum: cloud | edge | wire | agency | ..\",\n  \"noticed_time\"     : \"<time>\",\n  \"tracking_uri\"     : \"\",\n  \"escalation_uri\"   : \"\"\n}\n
    "},{"location":"features/0035-report-problem/#field-reference","title":"Field Reference","text":"

    Some fields will be relevant and useful in many use cases, but not always. Including empty or null fields is discouraged; best practice is to include as many fields as you can fill with useful data, and to omit the others.

    @id: An identifier for this message, as described in the message threading RFC. This decorator is STRONGLY recommended, because it enables a dialog about the problem itself in a branched thread (e.g., suggest a retry, report a resolution, ask for more information).

    ~thread: A thread decorator that places the problem-report into a thread context. If the problem was triggered in the processing of a message, then the triggering message is the head of a new thread of which the problem report is the second member (~thread.sender_order = 0). In such cases, the ~thread.pthid (parent thread id) here would be the @id of the triggering message. If the problem-report is unrelated to a message, the thread decorator is mostly redundant, as ~thread.thid must equal @id.

    description: Contains human-readable, localized alternative string(s) that explain the problem. It is highly recommended that the message follow the guidance in the l10n RFC, allowing the error to be searched on the web and documented formally.

    description.code: Required. Contains the code that indicates the problem being communicated. Codes are described in protocol RFCs and other relevant places. New codes SHOULD follow the Problem Code naming convention detailed in the DIDComm v2 spec.

    problem_items: A list of one or more key/value pairs that are parameters about the problem. Some examples might be:

    All items should have in common the fact that they exemplify the problem described by the code (e.g., each is an invalid param, or each is an unresponsive URL, or each is an unrecognized crypto algorithm, etc).

    Each item in the list must be a tagged pair (a JSON {key:value}, where the key names the parameter or item, and the value is the actual problem text/number/value). For example, to report that two different endpoints listed in party B\u2019s DID Doc failed to respond when they were contacted, the code might contain \"endpoint-not-responding\", and the problem_items property might contain:

    [\n  {\"endpoint1\": \"http://agency.com/main/endpoint\"},\n  {\"endpoint2\": \"http://failover.agency.com/main/endpoint\"}\n]\n

    who_retries: value is the string \"you\", the string \"me\", the string \"both\", or the string \"none\". This property tells whether a problem is considered permanent and who the sender of the problem report believes should have the responsibility to resolve it by retrying. Rules about how many times to retry, and who does the retry, and under what circumstances, are not enforceable and not expressed in the message text. This property is thus not a strong commitment to retry--only a recommendation of who should retry, with the assumption that retries will often occur if they make sense.

    [TODO: figure out how to identify parties > 2 in n-wise interaction]

    fix_hint: Contains human-readable, localized suggestions about how to fix this instance of the problem. If present, this should be viewed as overriding general hints found in a message catalog.

    impact: A string describing the breadth of impact of the problem. An enumerated type:

    where: A string that describes where the error happened, from the perspective of the reporter, and that uses the \"you\" or \"me\" or \"other\" prefix, followed by a suffix like \"cloud\", \"edge\", \"wire\", \"agency\", etc.

    noticed_time: Standard time entry (ISO-8601 UTC with at least day precision and up to millisecond precision) of when the problem was detected.

    [TODO: should we refer to timestamps in a standard way (\"date\"? \"time\"? \"timestamp\"? \"when\"?)]

    tracking_uri: Provides a URI that allows the recipient to track the status of the error. For example, if the error is related to a service that is down, the URI could be used to monitor the status of the service, so its return to operational status could be automatically discovered.

    escalation_uri: Provides a URI where additional help on the issue can be received. For example, this might be a \"mailto\" and email address for the Help Desk associated with a currently down service.

    "},{"location":"features/0035-report-problem/#sample","title":"Sample","text":"
    {\n  \"@type\": \"https://didcomm.org/notification/1.0/problem-report\",\n  \"@id\": \"7c9de639-c51c-4d60-ab95-103fa613c805\",\n  \"~thread\": {\n    \"pthid\": \"1e513ad4-48c9-444e-9e7e-5b8b45c5e325\",\n    \"sender_order\": 1\n  },\n  \"~l10n\"            : {\"catalog\": \"https://didcomm.org/error-codes\"},\n  \"description\"      : \"Unable to find a route to the specified recipient.\",\n  \"description~l10n\" : {\"code\": \"cant-find-route\" },\n  \"problem_items\"    : [\n      { \"recipient\": \"did:sov:C805sNYhMrjHiqZDTUASHg\" }\n  ],\n  \"who_retries\"      : \"you\",\n  \"impact\"           : \"message\",\n  \"noticed_time\"     : \"2019-05-27 18:23:06Z\"\n}\n
    "},{"location":"features/0035-report-problem/#categorized-examples-of-errors-and-current-best-practice-handling","title":"Categorized Examples of Errors and (current) Best Practice Handling","text":"

    The following is a categorization of a number of examples of errors and (current) Best Practice handling for those types of errors. The new problem-report message type is used for some of these categories, but not all.

    "},{"location":"features/0035-report-problem/#unknown-error","title":"Unknown Error","text":"

    Errors of a known error code will be processed according to the understanding of what the code means. Support of a protocol includes support and proper processing of the error codes detailed within that protocol.

    Any unknown error code that starts with w. in the DIDComm v2 style may be considered a warning, and the flow of the active protocol SHOULD continue. All other unknown error codes SHOULD be considered to be an end to the active protocol.
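    The rule above can be sketched in a few lines (the helper names here are hypothetical, not part of the protocol):

```python
def is_warning(code: str) -> bool:
    # DIDComm v2-style problem codes use a "w." sorter prefix for warnings.
    return code.startswith("w.")

def handle_unknown_code(code: str) -> str:
    # Hypothetical policy: continue the active protocol on an unknown
    # warning code; treat any other unknown code as ending the protocol.
    return "continue" if is_warning(code) else "abandon"
```

    For example, `handle_unknown_code("w.msg.some-new-code")` yields `"continue"`, while any other unrecognized code yields `"abandon"`.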

    "},{"location":"features/0035-report-problem/#error-while-processing-a-received-message","title":"Error While Processing a Received Message","text":"

    An Agent Message sent by a Sender and received by its intended Recipient cannot be processed.

    "},{"location":"features/0035-report-problem/#examples","title":"Examples:","text":""},{"location":"features/0035-report-problem/#recommended-handling","title":"Recommended Handling","text":"

    The Recipient should send the Sender a problem-report Agent Message detailing the issue.

    The last example deserves an additional comment about whether there should be a response sent at all. Particularly in cases where trust in the message sender is low (e.g. when establishing the connection), an Agent may not want to send any response to a rejected message as even a negative response could reveal correlatable information. That said, if a response is provided, the problem-report message type should be used.

    "},{"location":"features/0035-report-problem/#error-while-routing-a-message","title":"Error While Routing A Message","text":"

    An Agent in the routing flow of getting a message from a Sender to the Agent Message Recipient cannot route the message.

    "},{"location":"features/0035-report-problem/#examples_1","title":"Examples:","text":""},{"location":"features/0035-report-problem/#recommended-handling_1","title":"Recommended Handling","text":"

    If the Sender is known to the Agent having the problem, send a problem-report Agent Message detailing at least that a blocking issue occurred, and if relevant (such as in the first example), some details about the issue. If the message is valid, and the problem is related to a lack of resources (e.g. the second issue), also send a problem-report message to an escalation point within the domain.

    Alternatively, the capabilities described in 0034: Message Tracing could be used to inform others of the fact that an issue occurred.

    "},{"location":"features/0035-report-problem/#messages-triggered-about-a-transaction","title":"Messages Triggered about a Transaction","text":""},{"location":"features/0035-report-problem/#examples_2","title":"Examples:","text":""},{"location":"features/0035-report-problem/#recommended-handling_2","title":"Recommended Handling","text":"

    These types of error scenarios represent a gray area between using the generic problem-report message format and a message type that is part of the current transaction's message family. For example, \"Your credential has been revoked\" might well be included as a part of the (TBD) standard Credentials Exchange message family. The \"more information\" example might be a generic error across a number of message families (and so should trigger a problem-report), or might be specific to the ongoing thread (e.g. Credential Exchange) and so be better handled by a defined message within that thread and that message family.

    The current advice on which to use in a given scenario is to consider how the recipient will handle the message. If the handler will need to process the response in a specific way for the transaction, then a message family-specific message type should be used. If the error is cross-cutting such that a common handler can be used across transaction contexts, then a generic problem-report should be used.

    \"Current advice\" implies that as we gain more experience with Agent To Agent messaging, the recommendations could get more precise.

    "},{"location":"features/0035-report-problem/#messaging-channel-settings","title":"Messaging Channel Settings","text":""},{"location":"features/0035-report-problem/#examples_3","title":"Examples","text":""},{"location":"features/0035-report-problem/#recommended-handling_3","title":"Recommended Handling","text":"

    These types of messages might or might not be triggered during the receipt and processing of a message, but either way, they are unrelated to the message and are really about the communication channel between the entities. In such cases, the recommended approach is to use a (TBD) standard message family to notify and rectify the issue (e.g. change the attributes of a connection). The definition of that message family is outside the scope of this RFC.

    "},{"location":"features/0035-report-problem/#timeouts","title":"Timeouts","text":"

    A special generic class of errors that deserves mention is the timeout, where a Sender sends out a message and does not receive back a response in a given time. In a distributed environment such as Agent to Agent messaging, these are particularly likely - and particularly difficult to handle gracefully. The potential reasons for timeouts are numerous:

    "},{"location":"features/0035-report-problem/#recommended-handling_4","title":"Recommended Handling","text":"

    Appropriate timeout handling is extremely contextual, with two key parameters driving the handling - the length of the waiting period before triggering the timeout and the response to a triggered timeout.

    The time to wait for a response should be dynamic by at least type of message, and ideally learned through experience. Messages requiring human interaction should have an inherently longer timeout period than a message expected to be handled automatically. Beyond that, it would be good for Agents to track response times by message type (and perhaps other parameters) and adjust timeouts to match observed patterns.

    When a timeout is received there are three possible responses, handled automatically or based on feedback from the user:

    An automated \"wait longer\" response might be used when first interacting with a particular message type or identity, as the response cadence is learned.

    If the decision is to retry, it would be good to have support in areas covered by other RFCs. First, it would be helpful (and perhaps necessary) for the threading decorator to support the concept of retries, so that a Recipient would know when a message is a retry of an already sent message. Next, on \"forward\" message types, Agents might want to know that a message was a retry such that they can consider refreshing DIDDoc/encryption key cache before sending the message along. It could also be helpful for a retry to interact with the Tracing facility so that more information could be gathered about why messages are not getting to their destination.

    Excessive retrying can exacerbate an existing system issue. If the reason for the timeout is because there is a \"too many messages to be processed\" situation, then sending retries simply makes the problem worse. As such, a reasonable backoff strategy should be used (e.g. exponentially increasing times between retries). As well, a strategy used at Uber is to flag and handle retries differently from regular messages. The analogy with Uber is not pure - that is a single-vendor system - but the notion of flagging retries such that retry messages can be handled differently is a good approach.
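    A common realization of this advice combines exponential backoff with jitter and explicit retry flagging. This is a sketch; in particular, the `~retry` field below is hypothetical, not a standard DIDComm decorator:

```python
import random

def backoff_delay(attempt: int, base: float = 1.0, cap: float = 60.0) -> float:
    # Exponential backoff with "full jitter": a random delay in
    # [0, min(cap, base * 2**attempt)] so retries do not synchronize.
    return random.uniform(0, min(cap, base * 2 ** attempt))

def mark_as_retry(message: dict, attempt: int) -> dict:
    # Hypothetical field so downstream handlers can treat retries
    # differently from first-time messages, in the spirit of the
    # Uber approach mentioned above.
    return {**message, "~retry": {"attempt": attempt}}
```

    The jitter matters: if many agents time out on the same outage simultaneously, fixed retry intervals would make them all retry in lockstep, worsening the overload.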

    "},{"location":"features/0035-report-problem/#caveat-problem-report-loops","title":"Caveat: Problem Report Loops","text":"

    Implementers should consider and mitigate the risk of an endless loop of error messages. For example:

    "},{"location":"features/0035-report-problem/#recommended-handling_5","title":"Recommended Handling","text":"

    How agents mitigate the risk of this problem is implementation specific, balancing loop-tracking overhead versus the likelihood of occurrence. For example, an agent implementation might have a counter on a connection object that is incremented when certain types of Problem Report messages are sent on that connection, and reset when any other message is sent. The agent could stop sending those types of Problem Report messages after the counter reaches a given value.
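    The counter scheme just described might look like the following sketch (class and method names are invented for illustration):

```python
class ProblemReportThrottle:
    """Per-connection counter that suppresses Problem Report messages
    once too many have been sent without intervening normal traffic."""

    def __init__(self, limit: int = 3):
        self.limit = limit
        self.counters = {}  # connection id -> consecutive problem reports

    def should_send(self, connection_id: str) -> bool:
        # Allow a problem report only while the counter is below the limit.
        return self.counters.get(connection_id, 0) < self.limit

    def record_problem_report(self, connection_id: str) -> None:
        self.counters[connection_id] = self.counters.get(connection_id, 0) + 1

    def record_other_message(self, connection_id: str) -> None:
        # Any non-problem-report traffic resets the counter.
        self.counters[connection_id] = 0
```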

    "},{"location":"features/0035-report-problem/#reference","title":"Reference","text":"

    TBD

    "},{"location":"features/0035-report-problem/#drawbacks","title":"Drawbacks","text":"

    In many cases, a specific problem-report message is necessary, so formalizing the format of the message is also preferred over leaving it to individual implementations. There is no drawback to specifying that format now.

    As experience is gained with handling distributed errors, the recommendations provided in this RFC will have to evolve.

    "},{"location":"features/0035-report-problem/#rationale-and-alternatives","title":"Rationale and alternatives","text":"

    The error type specification mechanism builds on the same approach used by the message type specifications. It's possible that additional capabilities could be gained by making runtime use of the error type specification - e.g. for the broader internationalization of the error messages.

    The main alternative to a formally defined error type format is leaving it to individual implementations to handle error notifications, which will not lead to an effective solution.

    "},{"location":"features/0035-report-problem/#prior-art","title":"Prior art","text":"

    A brief search was done for error handling in messaging systems with few useful results found. Perhaps the best was the Uber article referenced in the \"Timeout\" section above.

    "},{"location":"features/0035-report-problem/#unresolved-questions","title":"Unresolved questions","text":""},{"location":"features/0035-report-problem/#implementations","title":"Implementations","text":"

    The following lists the implementations (if any) of this RFC. Please do a pull request to add your implementation. If the implementation is open source, include a link to the repo or to the implementation within the repo. Please be consistent in the \"Name\" field so that a mechanical processing of the RFCs can generate a list of all RFCs supported by an Aries implementation.

    Name / Link Implementation Notes RFC 0036: Issue Credential Protocol The problem-report message is adopted by this protocol. MISSING test results RFC 0037: Present Proof Protocol The problem-report message is adopted by this protocol. MISSING test results Trinsic.id Commercial mobile and web app built using Aries Framework - .NET MISSING test results"},{"location":"features/0036-issue-credential/","title":"Aries RFC 0036: Issue Credential Protocol 1.0","text":""},{"location":"features/0036-issue-credential/#version-change-log","title":"Version Change Log","text":""},{"location":"features/0036-issue-credential/#11propose-credential","title":"1.1/propose-credential","text":"

    In version 1.1 of the propose-credential message, the following optional fields were added: schema_name, schema_version, and issuer_did.

    The previous version is 1.0/propose-credential.

    "},{"location":"features/0036-issue-credential/#summary","title":"Summary","text":"

    Formalizes messages used to issue a credential--whether the credential is JWT-oriented, JSON-LD-oriented, or ZKP-oriented. The general flow is similar, and this protocol intends to handle all of them. If you are using a credential type that doesn't fit this protocol, please raise a GitHub issue.

    "},{"location":"features/0036-issue-credential/#motivation","title":"Motivation","text":"

    We need a standard protocol for issuing credentials. This is the basis of interoperability between Issuers and Holders.

    "},{"location":"features/0036-issue-credential/#tutorial","title":"Tutorial","text":""},{"location":"features/0036-issue-credential/#roles","title":"Roles","text":"

    There are two roles in this protocol: Issuer and Holder. Technically, the latter role is only potential until the protocol completes; that is, the second party becomes a Holder of a credential by completing the protocol. However, we will use the term Holder throughout, to keep things simple.

    Note: When a holder of credentials turns around and uses those credentials to prove something, they become a Prover. In the sister RFC to this one, 0037: Present Proof, the Holder is therefore renamed to Prover. Sometimes in casual conversation, the Holder role here might be called \"Prover\" as well, but more formally, \"Holder\" is the right term at this phase of the credential lifecycle.

    "},{"location":"features/0036-issue-credential/#states","title":"States","text":"

    The choreography diagrams shown below detail how state evolves in this protocol, in a \"happy path.\" The states include:

    "},{"location":"features/0036-issue-credential/#states-for-issuer","title":"states for Issuer","text":""},{"location":"features/0036-issue-credential/#states-for-holder","title":"states for Holder","text":"

    Errors might occur in various places. For example, an Issuer might offer a credential for a price that the Holder is unwilling to pay. All errors are modeled with a problem-report message. Easy-to-anticipate errors reset the flow as shown in the diagrams, and use the code issuance-abandoned; more exotic errors (e.g., server crashed at Issuer headquarters in the middle of a workflow) may have different codes but still cause the flow to be abandoned in the same way. That is, in this version of the protocol, all errors cause the state of both parties (the sender and the receiver of the problem-report) to revert to null (meaning it is no longer engaged in the protocol at all). Future versions of the protocol may allow more granular choices (e.g., requesting and receiving a (re-)send of the issue-credential message if the Holder times out while waiting in the request-sent state).

    "},{"location":"features/0036-issue-credential/#messages","title":"Messages","text":"

    The Issue Credential protocol consists of these messages:

    In addition, the ack and problem-report messages are adopted into the protocol for confirmation and error handling.

    Note: This protocol is about the messages that must be exchanged to issue verifiable credentials, NOT about the specifics of particular verifiable credential schemes. DIDComm attachments are deliberately used in messages to isolate the protocol flow/semantics from the credential artifacts themselves as separate constructs. Attachments allow credential formats and this protocol to evolve through versioning milestones independently instead of in lockstep. Links are provided in the message descriptions below, to describe how the protocol adapts to specific verifiable credential implementations.

    "},{"location":"features/0036-issue-credential/#choreography-diagram","title":"Choreography Diagram","text":"Note: This diagram was made in draw.io. To make changes: - upload the drawing HTML from this folder to the [draw.io](https://draw.io) site (Import From...GitHub), - make changes, - export the picture and HTML to your local copy of this repo, and - submit a pull request.

    The protocol has 3 alternative beginnings:

    1. The Issuer can begin with an offer.
    2. The Holder can begin with a proposal.
    3. The Holder can begin with a request.

    The offer and proposal messages are part of an optional negotiation phase and may trigger back-and-forth counters. A request is not subject to negotiation; it can only be accepted or rejected.

    "},{"location":"features/0036-issue-credential/#propose-credential","title":"Propose Credential","text":"

    An optional message sent by the potential Holder to the Issuer to initiate the protocol or in response to an offer-credential message when the Holder wants some adjustments made to the credential data offered by the Issuer.

    Note: In Hyperledger Indy, where the request-credential message can only be sent in response to an offer-credential message, the propose-credential message is the only way for a potential Holder to initiate the workflow.

    Schema:

    {\n    \"@type\": \"https://didcomm.org/issue-credential/1.1/propose-credential\",\n    \"@id\": \"<uuid-of-propose-message>\",\n    \"comment\": \"some comment\",\n    \"credential_proposal\": <json-ld object>,\n    \"schema_issuer_did\": \"DID of the proposed schema issuer\",\n    \"schema_id\": \"Schema ID string\",\n    \"schema_name\": \"Schema name string\",\n    \"schema_version\": \"Schema version string\",\n    \"cred_def_id\": \"Credential Definition ID string\"\n    \"issuer_did\": \"DID of the proposed issuer\"\n}\n

    Description of attributes:

    "},{"location":"features/0036-issue-credential/#offer-credential","title":"Offer Credential","text":"

    A message sent by the Issuer to the potential Holder, describing the credential they intend to offer and possibly the price they expect to be paid. In Hyperledger Indy, this message is required, because it forces the Issuer to make a cryptographic commitment to the set of fields in the final credential and thus prevents Issuers from inserting spurious data. In credential implementations where this message is optional, an Issuer can use the message to negotiate issuance following receipt of a request-credential message.

    Schema:

    {\n    \"@type\": \"https://didcomm.org/issue-credential/1.0/offer-credential\",\n    \"@id\": \"<uuid-of-offer-message>\",\n    \"comment\": \"some comment\",\n    \"credential_preview\": <json-ld object>,\n    \"offers~attach\": [\n        {\n            \"@id\": \"libindy-cred-offer-0\",\n            \"mime-type\": \"application/json\",\n            \"data\": {\n                \"base64\": \"<bytes for base64>\"\n            }\n        }\n    ]\n}\n

    Description of fields:

    The Issuer may add a ~payment-request decorator to this message to convey the need for payment before issuance. See the payment section below for more details.

    It is possible for an Issuer to add a ~timing.expires_time decorator to this message to convey the idea that the offer will expire at a particular point in the future. Such behavior is not a special part of this protocol, and support for it is not a requirement of conforming implementations; the ~timing decorator is simply a general possibility for any DIDComm message. We mention it here just to note that the protocol can be enriched in composable ways.

    "},{"location":"features/0036-issue-credential/#request-credential","title":"Request Credential","text":"

    This is a message sent by the potential Holder to the Issuer, to request the issuance of a credential. Where circumstances do not require a preceding Offer Credential message (e.g., there is no cost to issuance that the Issuer needs to explain in advance, and there is no need for cryptographic negotiation), this message initiates the protocol. In Hyperledger Indy, this message can only be sent in response to an Offer Credential message.

    Schema:

    {\n    \"@type\": \"https://didcomm.org/issue-credential/1.0/request-credential\",\n    \"@id\": \"<uuid-of-request-message>\",\n    \"comment\": \"some comment\",\n    \"requests~attach\": [\n        {\n            \"@id\": \"attachment id\",\n            \"mime-type\": \"application/json\",\n            \"data\": {\n                \"base64\": \"<bytes for base64>\"\n            }\n        },\n    ]\n}\n

    Description of Fields:

    This message may have a ~payment-receipt decorator to prove to the Issuer that the potential Holder has satisfied a payment requirement. See the payment section below.

    "},{"location":"features/0036-issue-credential/#issue-credential","title":"Issue Credential","text":"

    This message contains as attached payload the credentials being issued and is sent in response to a valid Request Credential message.

    Schema:

    {\n    \"@type\": \"https://didcomm.org/issue-credential/1.0/issue-credential\",\n    \"@id\": \"<uuid-of-issue-message>\",\n    \"comment\": \"some comment\",\n    \"credentials~attach\": [\n        {\n            \"@id\": \"libindy-cred-0\",\n            \"mime-type\": \"application/json\",\n            \"data\": {\n                \"base64\": \"<bytes for base64>\"\n            }\n        }\n    ]\n}\n

    Description of fields:

    If the issuer wants an acknowledgement that the issued credential was accepted, this message must be decorated with ~please-ack, and it is then best practice for the new Holder to respond with an explicit ack message as described in 0317: Please ACK Decorator.

    "},{"location":"features/0036-issue-credential/#encoding-claims-for-indy-based-verifiable-credentials","title":"Encoding Claims for Indy-based Verifiable Credentials","text":"

    Claims in Hyperledger Indy-based verifiable credentials are put into the credential in two forms, raw and encoded. raw is the actual data value, and encoded is the (possibly derived) integer value that is used in presentations. At this time, Indy does not take an opinion on the method used for encoding the raw value. This will change with the Rich Schema work that is underway in the Indy/Aries community, where the encoding method will be part of the credential metadata available from the public ledger.

    Until the Rich Schema mechanism is deployed, Aries issuers and verifiers must agree on the encoding method so that the verifier can check that the raw value returned in a presentation corresponds to the proven encoded value. The following is the encoding algorithm that MUST be used by Issuers when creating credentials and SHOULD be verified by Verifiers receiving presentations:
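    The commonly used encoding keeps 32-bit integers (and strings holding them) as-is and maps every other value to the SHA-256 hash of its string form, read as a big-endian integer. The sketch below illustrates that rule; verify edge cases (e.g., booleans, null) against the linked reference implementation before relying on it:

```python
import hashlib

I32_BOUND = 2 ** 31  # values in [-2**31, 2**31) pass through unchanged

def encode(raw) -> str:
    # Sketch of the encoding rule described above: 32-bit integers and
    # their string forms are kept verbatim; anything else is converted
    # to a string, SHA-256 hashed, and the digest rendered as a
    # decimal big-endian integer.
    s = str(raw)
    try:
        i = int(s)
        if -I32_BOUND <= i < I32_BOUND:
            return str(i)
    except ValueError:
        pass
    digest = hashlib.sha256(s.encode("utf-8")).digest()
    return str(int.from_bytes(digest, "big"))
```

    Under this rule, `encode("87")` stays `"87"`, while `encode("Alice")` and `encode(2**31)` both become large hash-derived integers, which is what allows a Verifier to recompute and check the encoded value from the raw one.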

    An example implementation in Python can be found here.

    A gist of test value pairs can be found here.

    "},{"location":"features/0036-issue-credential/#preview-credential","title":"Preview Credential","text":"

    This is not a message but an inner object for other messages in this protocol. It is used to construct a preview of the data for the credential that is to be issued. Its schema follows:

    {\n    \"@type\": \"https://didcomm.org/issue-credential/1.0/credential-preview\",\n    \"attributes\": [\n        {\n            \"name\": \"<attribute name>\",\n            \"mime-type\": \"<type>\",\n            \"value\": \"<value>\"\n        },\n        // more attributes\n    ]\n}\n

    The main element is attributes. It is an array of (object) attribute specifications; the subsections below outline their semantics.

    "},{"location":"features/0036-issue-credential/#attribute-name","title":"Attribute Name","text":"

    The mandatory \"name\" key maps to the attribute name as a string.

    "},{"location":"features/0036-issue-credential/#mime-type-and-value","title":"MIME Type and Value","text":"

    The optional mime-type advises the issuer how to render a binary attribute, to judge its content for applicability before issuing a credential containing it. Its value parses case-insensitively in keeping with MIME type semantics of RFC 2045. If mime-type is missing, its value is null.

    The mandatory value holds the attribute value:

    "},{"location":"features/0036-issue-credential/#threading","title":"Threading","text":"

    Threading can be used to initiate a sub-protocol during an issue credential protocol instance. For example, during credential issuance, the Issuer may initiate a child message thread to execute the Present Proof sub-protocol to have the potential Holder (now acting as a Prover) prove attributes about themselves before issuing the credential. Depending on circumstances, this might be a best practice for preventing credential fraud at issuance time.

    If threading were added to all of the above messages, a ~thread decorator would be present, and later messages in the flow would reference the @id of earlier messages to stitch the flow into a single coherent sequence. Details about threading can be found in the 0008: Message ID and Threading RFC.
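    For instance (all identifiers below are illustrative), a Present Proof request spawned as a child of an issue-credential exchange would carry a ~thread decorator whose pthid points back at the parent thread:

```json
{
  "@type": "https://didcomm.org/present-proof/1.0/request-presentation",
  "@id": "8d7f8e9a-0b1c-4f2d-9e3a-5c6b7a8d9e0f",
  "~thread": {
    "thid": "8d7f8e9a-0b1c-4f2d-9e3a-5c6b7a8d9e0f",
    "pthid": "3b1c2d4e-5f6a-4b7c-8d9e-0a1b2c3d4e5f"
  }
}
```

    Here thid identifies the new child thread (matching the message's own @id, as for any first message of a thread), while pthid carries the thread id of the enclosing issue-credential exchange.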

    "},{"location":"features/0036-issue-credential/#payments-during-credential-exchange","title":"Payments during credential exchange","text":"

    Credentialing ecosystems may wish to associate credential issuance with payments by fiat currency or tokens. This is common with non-digital credentials today; we pay a fee when we apply for a passport or purchase a plane ticket. Instead or in addition, some circumstances may fit a mode where payment is made each time a credential is used, as when a Verifier pays a Prover for verifiable medical data to be used in research, or when a Prover pays a Verifier as part of a workflow that applies for admittance to a university. For maximum flexibility, we mention payment possibilities here as well as in the sister 0037: Present Proof RFC.

    "},{"location":"features/0036-issue-credential/#payment-decorators","title":"Payment decorators","text":"

    Wherever they happen and whoever they involve, payments are accomplished with optional payment decorators. See 0075: Payment Decorators.

    "},{"location":"features/0036-issue-credential/#payment-flow","title":"Payment flow","text":"

    A ~payment-request may decorate a Credential Offer from Issuer to Holder. When they do, a corresponding ~payment-receipt should be provided on the Credential Request returned to the Issuer.

During credential presentation, the Verifier may pay the Holder as compensation for disclosing data. This would require a ~payment-request in a Presentation Proposal message, and a corresponding ~payment-receipt in the subsequent Presentation Request. If such a workflow begins with the Presentation Request, the Prover may send back a Presentation (counter-)Proposal with the appropriate decorator inside it.

    "},{"location":"features/0036-issue-credential/#limitations","title":"Limitations","text":"

Smart contracts may be missing from the ecosystem, so the operation \"issue credential after payment received\" is not atomic. It is possible that a malicious issuer will charge first and then fail to issue the credential. However, this situation should be easy to detect, and an appropriate penalty should be applied in such networks.

    "},{"location":"features/0036-issue-credential/#negotiation-and-preview","title":"Negotiation and Preview","text":"

    Negotiation prior to issuing the credential can be done using the offer-credential and propose-credential messages. A common negotiation use case would be about the data to go into the credential. For that, the credential_preview element is used.

    "},{"location":"features/0036-issue-credential/#reference","title":"Reference","text":""},{"location":"features/0036-issue-credential/#drawbacks","title":"Drawbacks","text":"

    Why should we not do this?

    "},{"location":"features/0036-issue-credential/#rationale-and-alternatives","title":"Rationale and alternatives","text":""},{"location":"features/0036-issue-credential/#prior-art","title":"Prior art","text":"

Similar (but simplified) credential exchange was already implemented in von-anchor.

    "},{"location":"features/0036-issue-credential/#unresolved-questions","title":"Unresolved questions","text":""},{"location":"features/0036-issue-credential/#implementations","title":"Implementations","text":"

    The following lists the implementations (if any) of this RFC. Please do a pull request to add your implementation. If the implementation is open source, include a link to the repo or to the implementation within the repo. Please be consistent in the \"Name\" field so that a mechanical processing of the RFCs can generate a list of all RFCs supported by an Aries implementation.

    Name / Link Implementation Notes Streetcred.id Commercial mobile and web app built using Aries Framework - .NET MISSING test results"},{"location":"features/0037-present-proof/","title":"Aries RFC 0037: Present Proof Protocol 1.0","text":""},{"location":"features/0037-present-proof/#summary","title":"Summary","text":"

    Formalization and generalization of existing message formats used for presenting a proof according to existing RFCs about message formats.

    "},{"location":"features/0037-present-proof/#motivation","title":"Motivation","text":"

    We need to define a standard protocol for presenting a proof.

    "},{"location":"features/0037-present-proof/#tutorial","title":"Tutorial","text":"

    The present proof protocol consists of these messages:

    In addition, the ack and problem-report messages are adopted into the protocol for confirmation and error handling.

This protocol is about the messages to support the presentation of verifiable claims, not about the specifics of particular verifiable presentation mechanisms. This is challenging, since at the time of writing this version of the protocol there is only one supported verifiable presentation mechanism (Hyperledger Indy). DIDComm attachments are deliberately used in messages to try to make this protocol agnostic to the specific verifiable presentation mechanism payloads. Links are provided in the message data element descriptions to details of specific verifiable presentation implementation data structures.

    Diagrams in this protocol were made in draw.io. To make changes:

    "},{"location":"features/0037-present-proof/#states","title":"States","text":""},{"location":"features/0037-present-proof/#states-for-verifier","title":"states for Verifier","text":""},{"location":"features/0037-present-proof/#states-for-prover","title":"states for Prover","text":"

    For the most part, these states map onto the transitions shown in the choreography diagram in obvious ways. However, a few subtleties are worth highlighting:

    Errors might occur in various places. For example, a Verifier might time out waiting for the Prover to supply a presentation. Errors trigger a problem-report. In this version of the protocol, all errors cause the state of both parties (the sender and the receiver of the problem-report) to revert to null (meaning it is no longer engaged in the protocol at all). Future versions of the protocol may allow more granular choices.

    "},{"location":"features/0037-present-proof/#choreography-diagram","title":"Choreography Diagram:","text":""},{"location":"features/0037-present-proof/#propose-presentation","title":"Propose Presentation","text":"

An optional message sent by the Prover to the Verifier to initiate a proof presentation process, or in response to a request-presentation message when the Prover wants to propose using a different presentation format. Schema:

    {\n    \"@type\": \"https://didcomm.org/present-proof/1.0/propose-presentation\",\n    \"@id\": \"<uuid-propose-presentation>\",\n    \"comment\": \"some comment\",\n    \"presentation_proposal\": <json-ld object>\n}\n

    Description of attributes:

    "},{"location":"features/0037-present-proof/#request-presentation","title":"Request Presentation","text":"

    From a verifier to a prover, the request-presentation message describes values that need to be revealed and predicates that need to be fulfilled. Schema:

    {\n    \"@type\": \"https://didcomm.org/present-proof/1.0/request-presentation\",\n    \"@id\": \"<uuid-request>\",\n    \"comment\": \"some comment\",\n    \"request_presentations~attach\": [\n        {\n            \"@id\": \"libindy-request-presentation-0\",\n            \"mime-type\": \"application/json\",\n            \"data\":  {\n                \"base64\": \"<bytes for base64>\"\n            }\n        }\n    ]\n}\n

    Description of fields:
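The attachment payload is base64-encoded JSON. A minimal sketch of packing and unpacking it follows; the proof-request fields shown are assumptions for illustration, not normative:

```python
import base64
import json

# Hypothetical Indy-style proof-request payload (fields are illustrative).
proof_request = {"name": "proof-of-age", "version": "1.0", "requested_attributes": {}}

attachment = {
    "@id": "libindy-request-presentation-0",
    "mime-type": "application/json",
    "data": {
        # The bytes of the JSON payload, base64-encoded.
        "base64": base64.b64encode(json.dumps(proof_request).encode()).decode()
    },
}

# The receiver reverses the encoding to recover the payload.
decoded = json.loads(base64.b64decode(attachment["data"]["base64"]))
```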

    "},{"location":"features/0037-present-proof/#presentation","title":"Presentation","text":"

    This message is a response to a Presentation Request message and contains signed presentations. Schema:

    {\n    \"@type\": \"https://didcomm.org/present-proof/1.0/presentation\",\n    \"@id\": \"<uuid-presentation>\",\n    \"comment\": \"some comment\",\n    \"presentations~attach\": [\n        {\n            \"@id\": \"libindy-presentation-0\",\n            \"mime-type\": \"application/json\",\n            \"data\": {\n                \"base64\": \"<bytes for base64>\"\n            }\n        }\n    ]\n}\n

    Description of fields:

    "},{"location":"features/0037-present-proof/#verifying-claims-of-indy-based-verifiable-credentials","title":"Verifying Claims of Indy-based Verifiable Credentials","text":"

    Claims in Hyperledger Indy-based verifiable credentials are put into the credential in two forms, raw and encoded. raw is the actual data value, and encoded is the (possibly derived) integer value that is used in presentations. At this time, Indy does not take an opinion on the method used for encoding the raw value. This will change with the Rich Schema work that is underway in the Indy/Aries community, where the encoding method will be part of the credential metadata available from the public ledger.

    Until the Rich Schema mechanism is deployed, the Aries issuers and verifiers must agree on an encoding method so that the verifier can check that the raw value returned in a presentation corresponds to the proven encoded value. The following is the encoding algorithm that MUST be used by Issuers when creating credentials and SHOULD be verified by Verifiers receiving presentations:

    An example implementation in Python can be found here.

    A gist of test value pairs can be found here.
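As a hedged sketch, the encoding widely used by Aries implementations (e.g. aries-cloudagent-python) passes 32-bit integers through unchanged and hashes everything else with SHA-256, interpreting the digest as a big-endian integer. Treat the exact rules as an assumption and verify against your framework's implementation and the test-pair gist:

```python
import hashlib

I32_BOUND = 2 ** 31  # raw 32-bit ints are their own encoding

def encode(raw) -> str:
    """Sketch of the commonly used raw -> encoded mapping (verify before use)."""
    if isinstance(raw, int) and -I32_BOUND <= raw < I32_BOUND:
        return str(raw)
    if isinstance(raw, str) and raw.isdigit() and int(raw) < I32_BOUND:
        return raw  # stringified small non-negative ints also pass through
    # Everything else: SHA-256 digest as a big-endian integer string.
    digest = hashlib.sha256(str(raw).encode()).digest()
    return str(int.from_bytes(digest, "big"))
```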

    "},{"location":"features/0037-present-proof/#presentation-preview","title":"Presentation Preview","text":"

    This is not a message but an inner object for other messages in this protocol. It is used to construct a preview of the data for the presentation. Its schema follows:

    {\n    \"@type\": \"https://didcomm.org/present-proof/1.0/presentation-preview\",\n    \"attributes\": [\n        {\n            \"name\": \"<attribute_name>\",\n            \"cred_def_id\": \"<cred_def_id>\",\n            \"mime-type\": \"<type>\",\n            \"value\": \"<value>\",\n            \"referent\": \"<referent>\"\n        },\n        // more attributes\n    ],\n    \"predicates\": [\n        {\n            \"name\": \"<attribute_name>\",\n            \"cred_def_id\": \"<cred_def_id>\",\n            \"predicate\": \"<predicate>\",\n            \"threshold\": <threshold>\n        },\n        // more predicates\n    ]\n}\n

    The preview identifies attributes and predicates to present.

    "},{"location":"features/0037-present-proof/#attributes","title":"Attributes","text":"

    The mandatory \"attributes\" key maps to a list (possibly empty to propose a presentation with no attributes) of specifications, one per attribute. Each such specification proposes its attribute's characteristics for creation within a presentation.

    "},{"location":"features/0037-present-proof/#attribute-name","title":"Attribute Name","text":"

    The mandatory \"name\" key maps to the name of the attribute.

    "},{"location":"features/0037-present-proof/#credential-definition-identifier","title":"Credential Definition Identifier","text":"

    The optional \"cred_def_id\" key maps to the credential definition identifier of the credential with the current attribute. Note that since it is the holder who creates the preview and the holder possesses the corresponding credential, the holder must know its credential definition identifier.

If the key is absent, the preview specifies the attribute's posture in the presentation as a self-attested attribute. A self-attested attribute does not come from a credential, and hence any attribute specification without the \"cred_def_id\" key cannot use a \"referent\" key as per Referent below.

    "},{"location":"features/0037-present-proof/#mime-type-and-value","title":"MIME Type and Value","text":"

    The optional mime-type advises the verifier how to render a binary attribute, to judge its content for applicability before accepting a presentation containing it. Its value parses case-insensitively in keeping with MIME type semantics of RFC 2045. If mime-type is missing, its value is null.

    The optional value, when present, holds the value of the attribute to reveal in presentation:

    An attribute specification must specify a value, a cred_def_id, or both:

    "},{"location":"features/0037-present-proof/#referent","title":"Referent","text":"

The optional referent can be useful in specifying multiple-credential presentations. Its value indicates which credential will supply the attribute in the presentation. Sharing a referent value among multiple attribute specifications indicates that the same holder credential supplies those attributes.

    Any attribute specification using a referent must also have a cred_def_id; any attribute specifications sharing a common referent value must all have the same cred_def_id value (see Credential Definition Identifier above).
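These constraints can be captured in a small validator; the function name and error messages below are illustrative, not a published API:

```python
def check_attribute_specs(attributes):
    """Check the preview's constraints on attribute specifications."""
    seen = {}  # referent -> cred_def_id that first claimed it
    for spec in attributes:
        cred_def_id = spec.get("cred_def_id")
        # Each spec must carry a value, a cred_def_id, or both.
        if "value" not in spec and cred_def_id is None:
            raise ValueError("attribute needs a value, a cred_def_id, or both")
        referent = spec.get("referent")
        if referent is not None:
            # A referent implies a credential, so cred_def_id is required.
            if cred_def_id is None:
                raise ValueError("referent requires a cred_def_id")
            # Specs sharing a referent must share a cred_def_id.
            if seen.setdefault(referent, cred_def_id) != cred_def_id:
                raise ValueError("shared referent must share cred_def_id")
```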

    For example, a holder with multiple account credentials could use a presentation preview such as

    {\n    \"@type\": \"https://didcomm.org/present-proof/1.0/presentation-preview\",\n    \"attributes\": [\n        {\n            \"name\": \"account\",\n            \"cred_def_id\": \"BzCbsNYhMrjHiqZDTUASHg:3:CL:1234:tag\",\n            \"value\": \"12345678\",\n            \"referent\": \"0\"\n        },\n        {\n            \"name\": \"streetAddress\",\n            \"cred_def_id\": \"BzCbsNYhMrjHiqZDTUASHg:3:CL:1234:tag\",\n            \"value\": \"123 Main Street\",\n            \"referent\": \"0\"\n        },\n    ],\n    \"predicates\": [\n    ]\n}\n

    to prompt a verifier to request proof of account number and street address from the same account, rather than potentially an account number and street address from distinct accounts.

    "},{"location":"features/0037-present-proof/#predicates","title":"Predicates","text":"

    The mandatory \"predicates\" key maps to a list (possibly empty to propose a presentation with no predicates) of predicate specifications, one per predicate. Each such specification proposes its predicate's characteristics for creation within a presentation.

    "},{"location":"features/0037-present-proof/#attribute-name_1","title":"Attribute Name","text":"

    The mandatory \"name\" key maps to the name of the attribute.

    "},{"location":"features/0037-present-proof/#credential-definition-identifier_1","title":"Credential Definition Identifier","text":"

    The mandatory \"cred_def_id\" key maps to the credential definition identifier of the credential with the current attribute. Note that since it is the holder who creates the preview and the holder possesses the corresponding credential, the holder must know its credential definition identifier.

    "},{"location":"features/0037-present-proof/#predicate","title":"Predicate","text":"

    The mandatory \"predicate\" key maps to the predicate operator: \"<\", \"<=\", \">=\", \">\".

    "},{"location":"features/0037-present-proof/#threshold-value","title":"Threshold Value","text":"

    The mandatory \"threshold\" key maps to the threshold value for the predicate.
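The four predicate operators map directly onto ordinary comparisons; a minimal sketch (assuming integer semantics, as the Drawbacks section notes for non-integer values):

```python
import operator

# The four predicate strings allowed by the preview.
PREDICATES = {"<": operator.lt, "<=": operator.le, ">=": operator.ge, ">": operator.gt}

def satisfies(value, predicate, threshold):
    """True if value stands in the named relation to the threshold."""
    return PREDICATES[predicate](value, threshold)
```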

    "},{"location":"features/0037-present-proof/#negotiation-and-preview","title":"Negotiation and Preview","text":"

    Negotiation prior to the presentation can be done using the propose-presentation and request-presentation messages. A common negotiation use case would be about the data to go into the presentation. For that, the presentation-preview element is used.

    "},{"location":"features/0037-present-proof/#reference","title":"Reference","text":""},{"location":"features/0037-present-proof/#drawbacks","title":"Drawbacks","text":"

    The presentation preview as proposed above does not allow nesting of predicate logic along the lines of \"A and either B or C if D, otherwise A and B\", nor cross-credential-definition predicates such as proposing a legal name from either a financial institution or selected government entity.

    The presentation preview may be indy-centric, as it assumes the inclusion of at most one credential per credential definition. In addition, it prescribes exactly four predicates and assumes mutual understanding of their semantics (e.g., could \">=\" imply a lexicographic order for non-integer values, and if so, where to specify character collation algorithm?).

    Finally, the inclusion of non-revocation timestamps may become desirable at the preview stage; the standard as proposed does not accommodate such.

    "},{"location":"features/0037-present-proof/#rationale-and-alternatives","title":"Rationale and alternatives","text":""},{"location":"features/0037-present-proof/#prior-art","title":"Prior art","text":"

    Similar (but simplified) credential exchange was already implemented in von-anchor.

    "},{"location":"features/0037-present-proof/#unresolved-questions","title":"Unresolved questions","text":""},{"location":"features/0037-present-proof/#implementations","title":"Implementations","text":"

    The following lists the implementations (if any) of this RFC. Please do a pull request to add your implementation. If the implementation is open source, include a link to the repo or to the implementation within the repo. Please be consistent in the \"Name\" field so that a mechanical processing of the RFCs can generate a list of all RFCs supported by an Aries implementation.

    Name / Link Implementation Notes Streetcred.id Commercial mobile and web app built using Aries Framework - .NET MISSING test results"},{"location":"features/0042-lox/","title":"Aries RFC 0042: LOX -- A more secure pluggable framework for protecting wallet keys","text":""},{"location":"features/0042-lox/#summary","title":"Summary","text":"

    Wallets are protected by secrets that must live outside of the wallet. This document proposes the Lox framework for managing the wallet access key(s).

    "},{"location":"features/0042-lox/#motivation","title":"Motivation","text":"

Wallets currently use a single key to access the wallet. The key is provided directly or derived from a password. However, this is prone to misuse, as most developers have little experience in key management. Right now, there are no recommendations for protecting a key provided by Aries, forcing implementors to choose methods based on their company's or organization's policies or practices.

    Here Millenial Mike demonstrates this process.

Some implementors have no policy or practice in place at all, leaving them to make bad decisions about managing wallet key storage and protection. For example, when creating an API token for Amazon's AWS, Amazon generates a secret key on the user's behalf, which is downloaded to a CSV file. Many programmers do not know how best to protect these downloaded credentials, which must be used in a program to make API calls. They don't know which of the following is the best option. They typically:

    The less commonly used or known solution involves keyrings, hardware security modules (HSM), trusted execution environments (TEE), and secure enclaves.

    "},{"location":"features/0042-lox/#keyrings","title":"Keyrings","text":"

Keyrings come preinstalled with modern operating systems, so no additional software is required, though other keyring software packages that function in a similar way can be installed. Operating systems protect keyring contents in encrypted files, with access controls based on the logged-in user and the process accessing them. The keyring can only be unlocked by the same user, process, and keyring credentials that were used when the keyring was created. Keyring credentials can be any combination of passwords, pins, keys, cyber tokens, and biometrics. In principle, a system's keyring should be able to keep credentials away from root (as in, an attacker can use a credential as long as they have access, but they can't extract it for persistence, assuming no other attacks like Foreshadow). Mac OS X, Windows, Linux (Gnome-Keyring and KWallet), Android, and iOS have built-in keyrings that are protected by the operating system.

    Some systems back keyrings with hardware to increase security. The following flow chart illustrates how a keyring functions.
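The access check a keyring performs can be modeled roughly as follows; this is a toy in-memory model for illustration, not a real OS API:

```python
class Keyring:
    """Toy model: an entry unlocks only for the user/process/credentials
    that created it, mirroring how OS keyrings scope access."""

    def __init__(self):
        self._entries = {}

    def store(self, user, process, credentials, name, secret):
        self._entries[name] = (user, process, credentials, secret)

    def unlock(self, user, process, credentials, name):
        owner_user, owner_proc, owner_creds, secret = self._entries[name]
        if (user, process, credentials) != (owner_user, owner_proc, owner_creds):
            # Wrong user, process, or credentials: access denied.
            raise PermissionError("keyring access denied")
        return secret

ring = Keyring()
ring.store("alice", "agent", ("pin-1234",), "wallet-key", b"\x01\x02")
```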

    "},{"location":"features/0042-lox/#secure-enclaves","title":"Secure Enclaves","text":"

Secure enclaves are used to describe HSMs, TPMs, and TEEs. An explanation of how secure enclaves work is detailed here.

    "},{"location":"features/0042-lox/#details","title":"Details","text":"

To avoid repeating each of these terms when describing a highly secure environment, the term enclave will be used to refer to all of them. Enclaves are specially designed to safeguard secrets, but they can be complex to use, with varying APIs and libraries, and they are accessed using various combinations of credentials. These complexities cause many to avoid using them.

Where to put the wallet access credentials that are directly used by applications or people is called the top-level credential problem. Lox aims to provide guidance and aid in adopting best practices and developing code to address the top-level credential problem: the credential used to protect all others (the keys to the kingdom), a secret used directly by Aries that, if compromised, would yield disastrous consequences and give access to the wallet.

    "},{"location":"features/0042-lox/#tutorial","title":"Tutorial","text":"

Lox is a layer designed as an API for storing secrets, with pluggable backends that implement reasonable defaults while remaining flexible enough to support various others.

    The default enclave will be the operating system keychain. Lox will also allow for many different enclaves that are optimal for storing keys like YubiKey, Hashicorp Vault, Intel SGX, or other methods supported by Hyperledger Ursa. Other hardware security modules can be plugged into the system via USB or accessed via the cloud. Trusted Platform Modules (TPMs) now come standard with many laptops and higher end tablets. Communication to enclaves can be done using drivers or over Unix or TCP sockets, or the Windows Communication Framework.

    The goal of Lox is to remove the complexity of the various enclaves by choosing the best secure defaults and hiding details that are prone to be misused or misunderstood, making it easier to secure the wallet.

    "},{"location":"features/0042-lox/#reference","title":"Reference","text":"

Currently, there are two methods used to open a wallet: provide the wallet encryption key, or use a password-based key derivation function to derive the key. Neither of these methods is terrible in itself, but there are concerns. Where should the symmetric encryption key be stored? What settings should be chosen for Argon2id? Argon2id also does not scale horizontally, because settings that are secure for a desktop can be very slow to execute on a mobile device, on the order of tens of seconds to minutes. To make it usable, the settings must be dialed down for the mobile device, but this allows faster attacks from a hacker. Also, passwords are often weak: short, easily guessed, and low in entropy.

    Lox, on the other hand, allows a wallet user to access a wallet providing the ID of the credential that will be used to open the wallet, then letting the secure enclave handle authenticating the owner and securing access control. The enclave will restrict access to the currently logged in user so even an administrator cannot read the enclave contents or even access the hardware or TEE.

    For example, a user creates a new wallet and instead of specifying a key, can specify an ID like youthful_burnell. The user is prompted to provide the enclave credentials like biometrics, pins, authenticator tokens, etc. If successful, Lox creates the wallet access key, stores it in the enclave, opens the wallet, and securely wipes the memory holding the key. The calling program can even store the ID value since this alone is not enough to access the enclave. The program must also be running as the same owner of the secret. This allows static agents to store the ID in a config file without having to store the key.

Enclaves usually remain unlocked until certain events occur, such as the system going to sleep, a set time interval passing, or the user logging out. When such an event occurs, the enclave reverts to its locked state, which requires providing the credentials again. These settings can be modified if needed, such as only reverting to the locked state after system boot.

The benefits provided by Lox are these:

    1. Avoid having Aries users reinvent the wheel when they manage secrets just like an enclave but in less secure ways.
    2. Securely create keys with sufficient entropy from trusted cryptographic sources.
    3. Safely use keys and wipe them from memory when finished to limit side-channel attacks.
4. Support for pluggable enclave backends by providing a single API. This flexible architecture allows the wallet key to be protected by a different enclave on each system where it is stored.
    5. Hide various enclave implementations and complexity to increase misuse-resistance.

    The first API iteration proposal includes the following functions

    function lox_create_wallet(wallet_name: String, config: Map<String, ...>)\nfunction lox_open_wallet(wallet_name: String, config: Map<String, ...>)\n

wallet_name can be any human readable string. This value will vary based on the enclave's requirements, as some allow different characters than others. config can include anything needed to specify the enclave and access it, like a service name, remote IP address, and other miscellaneous settings. If nothing is specified, the operating system's default enclave will be used.
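A rough sketch of how this API shape could sit over a pluggable backend; the InMemoryEnclave stand-in and its methods are hypothetical, standing in for an OS keychain:

```python
import secrets

class InMemoryEnclave:
    """Hypothetical stand-in for an OS keychain backend."""

    def __init__(self):
        self._secrets = {}

    def create_key(self, name):
        # Key material is generated from a trusted cryptographic source
        # and never leaves the enclave layer.
        self._secrets[name] = secrets.token_bytes(32)

    def get_key(self, name):
        return self._secrets[name]

def lox_create_wallet(wallet_name, config):
    # Fall back to the default enclave when none is configured.
    enclave = config.get("enclave") or InMemoryEnclave()
    enclave.create_key(wallet_name)
    return enclave

def lox_open_wallet(wallet_name, config):
    enclave = config["enclave"]
    key = enclave.get_key(wallet_name)  # used internally, then wiped
    return len(key) == 32  # stand-in for "wallet opened"
```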

Lox will be used to access the wallet, while still allowing a raw key to be provided for those who do not want to use Lox and prefer to continue managing their own keys in their own way. Essentially, providing a raw key to the wallet sets the enclave backend to a null provider. The function for deriving the wallet key from a password should be deprecated, for the reasons described earlier, in favor of Lox.

    "},{"location":"features/0042-lox/#drawbacks","title":"Drawbacks","text":"

This adds another layer to wallet security. The APIs must be thought through to accommodate as many enclaves as possible. Hardware enclave vendor APIs are similar, but they all differ and have not yet unified behind a common standard. Trying to account for all of them will be difficult and may require changes to the API.

    "},{"location":"features/0042-lox/#prior-art","title":"Prior art","text":"

A brief overview of enclaves and their services is given in the Indy wallet HIPE.

    "},{"location":"features/0042-lox/#unresolved-questions","title":"Unresolved questions","text":""},{"location":"features/0042-lox/#implementations","title":"Implementations","text":"

    The following lists the implementations (if any) of this RFC. Please do a pull request to add your implementation. If the implementation is open source, include a link to the repo or to the implementation within the repo. Please be consistent in the \"Name\" field so that a mechanical processing of the RFCs can generate a list of all RFCs supported by an Aries implementation.

    Name / Link Implementation Notes Reference Code Example rust code that implements Lox using OS keychains"},{"location":"features/0042-lox/reference_code/","title":"Lox","text":"

    A command line tool for accessing various keychains or secure enclaves.

    "},{"location":"features/0042-lox/reference_code/#the-problem","title":"The problem","text":"

Applications use several credentials today to secure data locally and in transit. However, bad habits creep in when safeguarding these credentials. For example, when creating an API token for Amazon's AWS, Amazon generates a secret key on the user's behalf, which is downloaded to a CSV file. Programmers do not know how best to secure these downloaded credentials, which must be used in a program to make API calls. They don't know which of the following is the best option. They can:

    Where to put the credential that is directly used by applications or people is called the top level credential problem.

There are services like LeakLooker that browse the internet looking for credentials that can be scraped, and unfortunately they often succeed. Some projects have documented how to test credentials to see if they have been revealed. See keyhacks.

This document aims to provide guidance and aid in adopting best practices and developing code to address the top-level credential problem: the credential used to protect all others (the keys to the kingdom), a secret used directly by a program that, if compromised, would yield disastrous consequences.

    "},{"location":"features/0042-lox/reference_code/#the-solution","title":"The solution","text":"

Lox is a layer designed as a command line tool or API library for storing secrets. The default is to use the operating system keychain. The goal is to extend Lox to allow for many different enclaves that are optimal for storing the keys to the kingdom, like YubiKey, Intel SGX, or Arm TrustZone. In principle, a system's secure enclave should be able to keep some credentials away from root (as in, an attacker can use the credential as long as they have access, but they can't extract the credential for persistence, assuming no other attacks like Foreshadow).

    Mac OS X, Linux, and Android have built-in keychains that are guarded by the operating system. iOS and Android come with hardware secure enclaves or trusted execution environments for managing the secrets stored in the keychain.

This first iteration uses the OS keychain or an equivalent, and offers a command line interface or a C-callable API. Future work could allow for communication over Unix or TCP sockets, with Lox running as a daemon process.

Currently, Mac OS X offers a CLI tool and libraries, but they are complex to understand and prone to misuse. Lox removes the complexity by choosing secure defaults so developers can focus on their job.

    Lox is written in Rust and has no external dependencies to do its job except DBus on linux.

    The program can be compiled from any OS to run on any OS. Lox-CLI is the command line tool while Lox is the library.

    "},{"location":"features/0042-lox/reference_code/#run-the-program","title":"Run the program","text":"

    Basic Usage

    Requires dbus library on linux.

On Ubuntu, this is libdbus-1-3 at runtime; on Red Hat, it is dbus.

    Gnome-keyring or KWallet must also be installed on Linux.

    Lox can be run either using cargo run -- \\<args> or if it is already built from source using ./lox.

    Lox tries to determine if input is a file or text. If a file exists that matches the entered text, Lox will read the contents. Otherwise, it will prompt the user for either the id of the secret or to enter a secret.

    Lox stores secrets based on a service name and an ID. The service name is the name of the program or process that only is allowed to access the secret with ID. Secrets can be retrieved, stored, or deleted.

    When secrets are stored, care should be given to not pass the value over the command line as it could be stored in the command line history. For this reason, either put the value in a file or Lox will read it from STDIN. After Lox stores the secret, Lox will securely wipe it from memory.

    "},{"location":"features/0042-lox/reference_code/#caveat","title":"Caveat","text":"

One remaining problem is how to determine the service name provided to Lox. Ideally, Lox could compute it instead of having it supplied by the calling endpoint, which can lie about the name. Imagine an attacker who wants access to the AWS credentials in the keychain: they just need to know the service name and the ID of the secret to request it. Access is still blocked by the operating system if the attacker doesn't know the keychain credentials, similar to a password vault. If Lox could compute the service name, it would be harder for an attacker to retrieve targeted secrets. Even so, this is better than the secrets existing in plaintext in code, config files, or environment variables.

    "},{"location":"features/0042-lox/reference_code/#examples","title":"Examples","text":"

    Lox takes at least two arguments: service_name and ID. When storing a secret, an additional parameter is needed; if it is omitted (the preferred method), the value is read from STDIN.

    "},{"location":"features/0042-lox/reference_code/#storing-a-secret","title":"Storing a secret","text":"
    lox set aws 1qwasdrtyuhjnjyt987yh\nprompt> ...<Return>\nSuccess\n
    "},{"location":"features/0042-lox/reference_code/#retrieve-a-secret","title":"Retrieve a secret","text":"
    lox get aws 1qwasdrtyuhjnjyt987yh\n<Secret Value>\n
    "},{"location":"features/0042-lox/reference_code/#delete-a-secret","title":"Delete a secret","text":"
    lox delete aws 1qwasdrtyuhjnjyt987yh\n
    "},{"location":"features/0042-lox/reference_code/#list-all-secrets","title":"List all secrets","text":"

    Lox can read all values stored in the keyring. list prints the names of all entries in the keyring without retrieving their actual values.

    lox list\n

    {\"application\": \"lox\", \"id\": \"apikey\", \"service\": \"aws\", \"username\": \"mike\", \"xdg:schema\": \"org.freedesktop.Secret.Generic\"}\n{\"application\": \"lox\", \"id\": \"walletkey\", \"service\": \"indy\", \"username\": \"mike\", \"xdg:schema\": \"org.freedesktop.Secret.Generic\"}\n
    "},{"location":"features/0042-lox/reference_code/#peek-secrets","title":"Peek secrets","text":"

    Lox can retrieve all or a subset of secrets in the keyring. Peek without any arguments will pull out all keyring names and their values. Because Lox encrypts values before storing them in the keyring when it can, such values will be returned as hex instead of their associated plaintext. Peek filtering differs by operating system.

    For OSX, filtering is based on the kind of password to read: generic or internet. generic only requires the service and account labels; internet requires the server, account, protocol, and authentication_type values. Filters are supplied as name-value pairs joined by =, with multiple pairs separated by commas.

    lox peek service=aws,account=apikey\n

    For Linux, filtering is based on name-value pairs that match a subset of the stored attributes. For example, if the attributes in the keyring were like this

    {\"application\": \"lox\", \"id\": \"apikey\", \"service\": \"aws\", \"username\": \"mike\", \"xdg:schema\": \"org.freedesktop.Secret.Generic\"}\n{\"application\": \"lox\", \"id\": \"walletkey\", \"service\": \"indy\", \"username\": \"mike\", \"xdg:schema\": \"org.freedesktop.Secret.Generic\"}\n
    To filter based on id, run
    lox peek id=apikey\n
    To filter based on username AND service, run
    lox peek username=mike,service=aws\n

    For Windows, filtering is based on the credential's targetname and globbing. For example, if list returned

    {\"targetname\": \"MicrosoftAccount:target=SSO_POP_Device\"}\n{\"targetname\": \"WindowsLive:target=virtualapp/didlogical\"}\n{\"targetname\": \"LegacyGeneric:target=IEUser:aws:apikey\"}\n
    then filtering searches everything after \":target=\". In this case, if the value to be peeked is IEUser:aws:apikey, the following will return just that result
    lox.exe peek IE*\nlox.exe peek IE*apikey\nlox.exe peek IEUser:aws:apikey\n

    "},{"location":"features/0042-lox/reference_code/#build-from-source","title":"Build from source","text":"


    To make a distributable executable, run the following commands:

    1. On Linux, install the dbus library. On a Debian-based OS this is libdbus-1-dev; on a Red Hat-based OS it is dbus-devel.
    2. curl https://sh.rustup.rs -sSf | sh -s -- -y - installs the Rust compiler
    3. cd reference_code/
    4. cargo build --release - when this is finished the executable is target/release/lox.
    5. For *nix users, cp target/release/lox /usr/local/lib and chmod +x /usr/local/lib/lox
    6. For Windows users, copy target/release/lox.exe to a folder and add that folder to your %PATH% variable.

    Liblox is the library that can be linked into programs to manage secrets. Use the library matching the underlying operating system:

    1. liblox.dll - Windows
    2. liblox.so - Linux
    3. liblox.dylib - Mac OS X
    "},{"location":"features/0042-lox/reference_code/#future-work","title":"FUTURE WORK","text":"

    Allow for other enclaves like Hashicorp vault, LastPass, 1Password. Allow for steganography methods like using images or Microsoft Office files for storing the secrets.

    "},{"location":"features/0043-l10n/","title":"Aries RFC 0043: l10n (Locali[s|z]ation)","text":""},{"location":"features/0043-l10n/#summary","title":"Summary","text":"

    Defines how to send a DIDComm message in a way that facilitates interoperable localization, so humans communicating through agents can interact without natural language barriers.

    "},{"location":"features/0043-l10n/#motivation","title":"Motivation","text":"

    The primary use case for DIDComm is to support automated processing, as with messages that lead to credential issuance, proof exchange, and so forth. Automated processing may be the only way that certain agents can process messages, if they are devices or pieces of software run by organizations with no human intervention.

    However, humans are also a crucial component of the DIDComm ecosystem, and many interactions have them as either a primary or a secondary audience. In credential issuance, a human may need to accept terms and conditions from the issuer, even if their agent navigates the protocol. Some protocols, like a chat between friends, may be entirely human-centric. And in any protocol between agents, a human may have to interpret errors.

    When humans are involved, locale and potential translation into various natural languages becomes important. Normally, localization is the concern of individual software packages. However, in DIDComm, the participants may be using different software, and the localization may be a cross-cutting concern--Alice's software may need to send a localized message to Bob, who's running different software. It therefore becomes useful to explore a way to facilitate localization that allows interoperability without imposing undue burdens on any implementer or participant.

    NOTE: JSON-LD also describes a localization mechanism. We have chosen not to use it, for reasons enumerated in the RFC about JSON-LD compatibility.

    "},{"location":"features/0043-l10n/#tutorial","title":"Tutorial","text":"

    Here we introduce some flexible and easy-to-use conventions. Software that uses these conventions should be able to add localization value in several ways, depending on needs.

    "},{"location":"features/0043-l10n/#introducing-l10n","title":"Introducing ~l10n","text":"

    The default assumption about locale with respect to all DIDComm messages is that they are locale-independent, because they are going to be processed entirely by automation. Dates should be in ISO 8601 format, typically in UTC. Numbers should use JSON formatting rules (or, if embedded in strings, the \"C\" locale). Booleans and null values use JSON keywords.

    Strings tend to be somewhat more interesting. An agent message may contain many strings. Some will be keys; others may be values. Usually, keys do not need to be localized, as they will be interpreted by software (though see Advanced Use Case for an example that does). Among string values, some may be locale-sensitive, while others may not. For example, consider the following fictional message that proposes a meeting between Alice and Bob:
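
    The original JSON example did not survive indexing; a hypothetical reconstruction (values invented, field names taken from the discussion below) might be:

    {\n  \"proposed_location\": \"Rusty's Cafe, 12th and Main\",\n  \"note\": \"Let's meet at my favorite diner!\"\n}\n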

    Here, the string value named proposed_location need not be changed, no matter what language Bob speaks. But note might be worth localizing, in case Bob speaks French instead of English.

    We can't assume all text is localizable, though; that would result in silly processing, such as trying to translate the first_name field in a driver's license.

    The ~l10n decorator (so-named because \"localization\" has 10 letters between \"l\" and \"n\") may be added to the note field to meet this need:
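
    Sketched concretely (the RFC's exact JSON was dropped from this index, so the values here are illustrative), the decorated message might look like:

    {\n  \"proposed_location\": \"Rusty's Cafe, 12th and Main\",\n  \"note\": \"Let's meet at my favorite diner!\",\n  \"note~l10n\": {\n    \"locale\": \"en\",\n    \"fr\": \"Retrouvons-nous \u00e0 mon restaurant pr\u00e9f\u00e9r\u00e9 !\"\n  }\n}\n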

    If you are not familiar with this notion of field decorators, please review the section about scope in the RFC on decorators.

    "},{"location":"features/0043-l10n/#decorator-at-message-scope","title":"Decorator at Message Scope","text":"

    The example above is minimal. It shows a French localized alternative for the string value of note in the note~l10n.fr field. Any number of these alternatives may be provided, for any set of locales. Deciding whether to use one depends on knowing the locale of the content that's already in note, so note~l10n.locale is also provided.

    But suppose we evolved our message type, and it ended up having 2 fields that were localization-worthy. Both would likely use the same locale in their values, but we don't really want to repeat that locale twice. The preferred way to handle this is to decorate the message with semantics that apply message-wide, and to decorate fields with semantics that apply just to field instances or to fields in the abstracts. Following this pattern puts our example message into a more canonical form:
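
    A sketch of that canonical form (field values illustrative, field names from the discussion below):

    {\n  \"proposed_location\": \"Rusty's Cafe, 12th and Main\",\n  \"note\": \"Let's meet at my favorite diner!\",\n  \"fallback_plan\": \"If the diner is closed, meet at the park across the street.\",\n  \"~l10n\": {\n    \"locale\": \"en\",\n    \"localizable\": [\"note\", \"fallback_plan\"]\n  }\n}\n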

    "},{"location":"features/0043-l10n/#decorator-at-message-type-scope","title":"Decorator at Message Type Scope","text":"

    Now we are declaring, at message scope, that note and fallback_plan are localizable and that their locale is en.

    It is worth noting that this information is probably true of all instances of messages of this type--not just this particular message. This raises the possibility of declaring the localization data at an even higher level of abstraction. We do this by moving the decorator from a message instance to a message type. Decorators on a message type are declared in a section of the associated RFC named Localization (or \"Localisation\", for folks that like a different locale's spelling rules :-). In our example, the relevant section of the RFC might look like this:
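
    A sketch of such a section's ~l10n fragment (the catalog URL is a placeholder in the style of this RFC's examples, not an official location):

    \"~l10n\": {\n  \"locale\": \"en\",\n  \"localizable\": [\"note\", \"fallback_plan\"],\n  \"catalogs\": [\"https://github.com/x/y/blob/<commit>/text/myfamily/catalog.json\"]\n}\n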

    This snippet contains one unfamiliar construct, catalogs, which will be discussed below. Ignore that for a moment and focus on the rest of the content. As this snippet mentions, the JSON fragment for ~l10n that's displayed in the running text of the RFC should also be checked in to github with the RFC's markdown as <message type name>~l10n.json, so automated tools can consume the content without parsing markdown.

    Notice that the markdown section is hyperlinked back to this RFC so developers unfamiliar with the mechanism will end up reading this RFC for more details.

    With this decorator on the message type, we can now send our original message, with no message or field decorators, and localization is still fully defined:

    Despite the terse message, its locale is known to be English, and the note field is known to be localizable, with current content also in English.

    One benefit of defining a ~l10n decorator for a message family is that developers can add localization support to their messages without changing field names or schema, and with only a minor semver revision to a message's version.

    We expect most message types to use localization features in more or less this form. In fact, if localization settings have much in common across a message family, the Localization section of a RFC may be defined not just for a message type, but for a whole message family.

    "},{"location":"features/0043-l10n/#message-codes-and-catalogs","title":"Message Codes and Catalogs","text":"

    When the same text values are used over and over again (as opposed to the sort of unpredictable, human-provided text that we've seen in the note field thus far), it may be desirable to identify a piece of text by a code that describes its meaning, and to publish an inventory of these codes and their localized alternatives. By doing this, a message can avoid having to include a huge inventory of localized alternatives every time it is sent.

    We call this inventory of message codes and their localized alternatives a message catalog. Catalogs may be helpful to track a list of common errors (think of symbolic constants like EBADF and EBUSY, and the short explanatory strings associated with them, in Posix's <errno.h>). Catalogs let translation be done once, and reused globally. Also, the code for a message can be searched on the web, even when no localized alternative exists for a particular language. And the message text in a default language can undergo minor variation without invalidating translations or searches.

    If this usage is desired, a special subfield named code may be included inside the map of localized alternatives:
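
    For example (the code name here is invented for illustration):

    \"note~l10n\": {\n  \"locale\": \"en\",\n  \"code\": \"meeting-proposal-note\",\n  \"fr\": \"Retrouvons-nous \u00e0 mon restaurant pr\u00e9f\u00e9r\u00e9 !\"\n}\n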

    Note, however, that a code for a localized message is not useful unless we know what that code means. To do that, we need to know where the code is defined. In other words, codes need a namespace or context. Usually, this namespace or context comes from the message family where the code is used, and codes are defined in the same RFC where the message family is defined.

    Message families that support localized text with predictable values should thus include or reference an official catalog of codes for those messages. A catalog is a dictionary of code \u2192 localized alternatives mappings. For example:
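
    A sketch of such a catalog (the warn-suspicious-key-in-use code is mentioned later in this RFC; the other code and all localized text are invented for illustration):

    {\n  \"warn-suspicious-key-in-use\": {\n    \"en\": \"The key used in this message has been reported as suspicious.\",\n    \"fr\": \"La cl\u00e9 utilis\u00e9e dans ce message a \u00e9t\u00e9 signal\u00e9e comme suspecte.\"\n  },\n  \"error-cant-route\": {\n    \"en\": \"The message could not be routed to its destination.\"\n  }\n}\n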

    To associate this catalog with a message type, the RFC defining the message type should contain a \"Message Catalog\" section that looks like this:

    Note the verbiage about an official, immutable URL. This is important because localized alternatives for a message code could be an attack vector if the message catalog isn't handled correctly. If a hacker is able to change the content of a catalog, they may be able to change how a message is interpreted by a human that's using localization support. For example, they could suggest that the en localized alternative for code warn-suspicious-key-in-use is \"Key has been properly verified and is trustworthy.\" By having a tamper-evident version of the catalog (e.g., in github or published on a blockchain), developers can write software that only deals with canonical text or dynamically translated text, never with something the hacker can manipulate.

    In addition, the following best practices are recommended to maximize catalog usefulness:

    1. Especially when displaying localized error text, software should also display the underlying code. (This is desirable anyway, as it allows searching the web for hints and discussion about the code.)

    2. Software that regularly deals with localizable fields of key messages should download a catalog of localizable alternatives in advance, rather than fetching it just in time.

    "},{"location":"features/0043-l10n/#connecting-code-with-its-catalog","title":"Connecting code with its catalog","text":"

    We've described a catalog's structure and definition, but we haven't yet explained how it's referenced. This is done through the catalogs field inside a ~l10n decorator. There was an example above, in the example of a \"Localization\" section for a RFC. The field name, catalogs, is plural; its value is an array of URIs that reference specific catalog versions. The catalogs listed in this array are searched, in the order given, to find the definition and corresponding localized alternatives for a given code.

    A catalogs field can be placed in a ~l10n decorator at various scopes. If it appears at the message or field level, the catalogs it lists are searched before the more general catalogs.

    "},{"location":"features/0043-l10n/#advanced-use-case","title":"Advanced Use Case","text":"

    This section is not normative in this version of the RFC. It is considered experimental for now.

    Let's consider a scenario that pushes the localization features to their limit. Suppose we have a family of DIDComm messages that's designed to exchange genealogical records. The main message type, record, has a fairly simple schema: it just contains record_type, record_date, and content. But content is designed to hold arbitrary sub-records from various archives: probate paperwork from France, military discharge records from Japan, christening certificates from Germany.

    Imagine that the UX we want to build on top of these messages is similar to the one at Ancestry.com.

    Notice that the names of fields in this UX are all given in English. But how likely is it that a christening certificate from Germany will have English field names like \"Birth Data\" and \"Marriage Date\" in its JSON?

    The record message behind data like this might be:
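
    The original example was lost in indexing; a hypothetical reconstruction (names and dates invented, German field names matching the ~l10n fragment that follows) might be:

    {\n  \"record_type\": \"christening certificate\",\n  \"record_date\": \"1877-11-09\",\n  \"content\": {\n    \"Name\": \"Maria Schmidt\",\n    \"Geburtstag\": \"04.11.1877\",\n    \"Heiratsdatum\": \"21.06.1898\"\n  }\n}\n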

    In order to translate this data, not just values but also keys need to have associated ~l10n data. We do this with a locales array. This allows us to specify very complex locale settings--including multiple locales in the same message, and locales on keys. We may still have the ~l10n.locale array and similar fields to establish defaults that are overridden in ~l10n.locales:

    \"~l10n\": {\n  \"locales\": {\n    \"de\": [\"content.key@*\", \"content.Geburtstag\", \"content.Heiratsdatum\"]\n  }\n}\n

    This says that all fields under content have names that are German, and that the content.Geburtstag and content.Heiratsdatum field values (which are of type date) are also represented in a German locale rather than the default ISO 8601.

    Besides supporting key localization, having a ~l10n.locales array on a message, message type, or message family scope is an elegant, concise way to cope with messages that have mixed field locales (fields in a variety of locales).

    "},{"location":"features/0043-l10n/#drawbacks","title":"Drawbacks","text":"

    The major problem with this feature is that it introduces complexity. However, it is complexity that most developers can ignore unless or until they care about localization. Once that becomes a concern, the complexity provides important features--and it remains nicely encapsulated.

    "},{"location":"features/0043-l10n/#rationale-and-alternatives","title":"Rationale and alternatives","text":"

    We could choose not to support this feature.

    We could also use JSON-LD's @language feature. However, this feature has a number of limitations, as documented in the RFC about JSON-LD compatibility.

    "},{"location":"features/0043-l10n/#prior-art","title":"Prior art","text":"

    Java's property bundle mechanism, Posix's gettext() function, and many other localization techniques are well known. They are not directly applicable, mostly because they don't address the need to communicate with software that may or may not be using the same underlying mapping/localization mechanism.

    "},{"location":"features/0043-l10n/#unresolved-questions","title":"Unresolved questions","text":""},{"location":"features/0043-l10n/#implementations","title":"Implementations","text":"

    The following lists the implementations (if any) of this RFC. Please do a pull request to add your implementation. If the implementation is open source, include a link to the repo or to the implementation within the repo. Please be consistent in the \"Name\" field so that a mechanical processing of the RFCs can generate a list of all RFCs supported by an Aries implementation.

    Name / Link Implementation Notes RFC 0035: Report Problem Protocol Depends on this mechanism to localize the description field of an error. RFC 0036: Issue Credential Protocol Depends on this mechanism to localize the comment field of a propose-credential, offer-credential, request-credential, or issue-credential message. RFC 0037: Present Proof Protocol Depends on this mechanism to localize the comment field of a propose-presentation, offer-presentation, or presentation message. RFC 0193: Coin Flip Protocol Uses this mechanism to localize the comment field, when human interaction around coin tosses is a goal."},{"location":"features/0043-l10n/localization-section/","title":"Localization section","text":""},{"location":"features/0043-l10n/localization-section/#localization","title":"Localization","text":"

    By default, all instances of this message type carry localization metadata in the form of an implicit ~l10n decorator that looks like this:
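
    The exact fragment was lost in indexing; a representative sketch (the localizable field name is a placeholder; the catalog URL matches the placeholder used in the Message Catalog section of this template):

    \"~l10n\": {\n  \"locale\": \"en\",\n  \"localizable\": [\"comment\"],\n  \"catalogs\": [\"https://github.com/x/y/blob/dc525a27d3b75/text/myfamily/catalog.json\"]\n}\n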

    This ~l10n JSON fragment is checked in next to the narrative content of this RFC as l10n.json.

    Individual messages can use the ~l10n decorator to supplement or override these settings.

    "},{"location":"features/0043-l10n/message-catalog-section/","title":"Message catalog section","text":""},{"location":"features/0043-l10n/message-catalog-section/#message-catalog","title":"Message Catalog","text":"

    By default, all instances of this message type assume the following catalog in their ~l10n data:

    When referencing this catalog, please be sure you have the correct version. The official, immutable URL to this version of the catalog file is:

    https://github.com/x/y/blob/dc525a27d3b75/text/myfamily/catalog.json\n

    For more information, see the Message Catalog section of the localization RFC.

    "},{"location":"features/0044-didcomm-file-and-mime-types/","title":"Aries RFC 0044: DIDComm File and MIME Types","text":""},{"location":"features/0044-didcomm-file-and-mime-types/#summary","title":"Summary","text":"

    Defines the media (MIME) types and file types that hold DIDComm messages in encrypted, signed, and plaintext forms. Covers DIDComm V1, plus a little of V2 to clarify how DIDComm versions are detected.

    "},{"location":"features/0044-didcomm-file-and-mime-types/#motivation","title":"Motivation","text":"

    Most work on DIDComm so far has assumed HTTP as a transport. However, we know that DID communication is transport-agnostic. We should be able to say the same thing no matter which channel we use.

    An incredibly important channel or transport for messages is digital files. Files can be attached to messages in email or chat, can be carried around on a thumb drive, can be backed up, can be distributed via CDN, can be replicated on distributed file systems like IPFS, can be inserted in an object store or in content-addressable storage, can be viewed and modified in editors, and support a million other uses.

    We need to define how files and attachments can contain DIDComm messages, and what the semantics of processing such files will be.

    "},{"location":"features/0044-didcomm-file-and-mime-types/#tutorial","title":"Tutorial","text":""},{"location":"features/0044-didcomm-file-and-mime-types/#media-types","title":"Media Types","text":"

    Media types are based on the conventions of RFC6838. Similar to RFC7515, the application/ prefix MAY be omitted and the recipient MUST treat media types not containing / as having the application/ prefix present.

    "},{"location":"features/0044-didcomm-file-and-mime-types/#didcomm-v1-encrypted-envelope-dee","title":"DIDComm v1 Encrypted Envelope (*.dee)","text":"

    The raw bytes of an encrypted envelope may be persisted to a file without any modifications whatsoever. In such a case, the data will be encrypted and packaged such that only specific receiver(s) can process it. However, the file will contain a JOSE-style header that can be used by magic bytes algorithms to detect its type reliably.

    The file extension associated with this filetype is dee, giving a globbing pattern of *.dee; this should be read as \"STAR DOT D E E\" or as \"D E E\" files.

    The name of this file format is \"DIDComm V1 Encrypted Envelope.\" We expect people to say, \"I am looking at a DIDComm V1 Encrypted Envelope\", or \"This file is in DIDComm V1 Encrypted Envelope format\", or \"Does my editor have a DIDComm V1 Encrypted Envelope plugin?\"

    Although the format of encrypted envelopes is derived from JSON and the JWT/JWE family of specs, no useful processing of these files will take place by viewing them as JSON, and viewing them as generic JWEs will greatly constrain which semantics are applied. Therefore, the recommended MIME type for *.dee files is application/didcomm-envelope-enc, with application/jwe as a fallback, and application/json as an even less desirable fallback. (In this, we are making a choice similar to the one that views *.docx files primarily as application/msword instead of application/xml.) If format evolution takes place, the version could become a parameter as described in RFC 1341: application/didcomm-envelope-enc;v=2.

    A recipient using the media type value MUST treat it as if \u201capplication/\u201d were prepended to any \"typ\" or \"cty\" value not containing a \u2018/\u2019 in compliance with the JWE/JWS family of specs.

    The default action for DIDComm V1 Encrypted Envelopes (what happens when a user double-clicks one) should be Handle (that is, process the message as if it had just arrived by some other transport), if the software handling the message is an agent. In other types of software, the default action might be to view the file. Other useful actions might include Send, Attach (to email, chat, etc), Open with agent, and Decrypt to *.dm.

    NOTE: The analog to this content type in DIDComm v2 is called a \"DIDComm Encrypted Message.\" Its format is slightly different. For more info, see Detecting DIDComm Versions below.

    "},{"location":"features/0044-didcomm-file-and-mime-types/#didcomm-v1-signed-envelopes-dse","title":"DIDComm V1 Signed Envelopes (*.dse)","text":"

    When DIDComm messages are signed, the signing uses a JWS signing envelope. Often signing is unnecessary, since authenticated encryption proves the sender of the message to the recipient(s), but sometimes when non-repudiation is required, this envelope is used. It is also required when the recipient of a message is unknown, but tamper-evidence is still required, as in the case of a public invitation.

    By convention, DIDComm Signed Envelopes contain plaintext; if encryption is used in combination with signing, the DSE goes inside the DEE.

    The file extension associated with this filetype is dse, giving a globbing pattern of *.dse; this should be read as \"STAR DOT D S E\" or as \"D S E\" files.

    The name of this file format is \"DIDComm V1 Signed Envelope.\" We expect people to say, \"I am looking at a DIDComm V1 Signed Envelope\", or \"This file is in DIDComm V1 Signed Envelope format\", or \"Does my editor have a DIDComm V1 Signed Envelope plugin?\"

    As with *.dee files, the best way to handle *.dse files is to map them to a custom MIME type. The recommendation is application/didcomm-sig-env, with application/jws as a fallback, and application/json as an even less desirable fallback.

    A recipient using the media type value MUST treat it as if \u201capplication/\u201d were prepended to any \"typ\" or \"cty\" value not containing a \u2018/\u2019 in compliance with the JWE/JWS family of specs.

    The default action for DIDComm V1 Signed Envelopes (what happens when a user double-clicks one) should be Validate (that is, process the signature to see if it is valid).

    NOTE: The analog to this content type in DIDComm v2 is called a \"DIDComm Signed Message.\" Its format is slightly different. For more info, see Detecting DIDComm Versions below.

    "},{"location":"features/0044-didcomm-file-and-mime-types/#didcomm-v1-messages-dm","title":"DIDComm V1 Messages (*.dm)","text":"

    The plaintext representation of a DIDComm message--something like a credential offer, a proof request, a connection invitation, or anything else worthy of a DIDComm protocol--is JSON. As such, it should be editable by anything that expects JSON.

    However, all such files have some additional conventions, over and above the simple requirements of JSON. For example, key decorators have special meaning (@id, ~thread, @trace, etc). Nonces may be especially significant. The format of particular values such as DID and DID+key references is important. Therefore, we refer to these messages generically as JSON, but we also define a file format for tools that are aware of the additional semantics.

    The file extension associated with this filetype is *.dm, and should be read as \"STAR DOT D M\" or \"D M\" files. If a format evolution takes place, a subsequent version could be noted by appending a digit, as in *.dm2 for second-generation dm files.

    The name of this file format is \"DIDComm V1 Message.\" We expect people to say, \"I am looking at a DIDComm V1 Message\", or \"This file is in DIDComm V1 Message format\", or \"Does my editor have a DIDComm V1 Message plugin?\" For extra clarity, it is acceptable to add the adjective \"plaintext\", as in \"DIDComm V1 Plaintext Message.\"

    The most specific MIME type of *.dm files is application/json;flavor=didcomm-msg--or, if more generic handling is appropriate, just application/json.

    A recipient using the media type value MUST treat it as if \u201capplication/\u201d were prepended to any \"typ\" or \"cty\" value not containing a \u2018/\u2019 in compliance with the JWE/JWS family of specs.

    The default action for DIDComm V1 Messages should be to View or Validate them. Other interesting actions might be Encrypt to *.dee, Sign to *.dse, and Find definition of protocol.

    NOTE: The analog to this content type in DIDComm v2 is called a \"DIDComm Plaintext Message.\" Its format is slightly different. For more info, see Detecting DIDComm Versions below.

    As a general rule, DIDComm messages that are being sent in production use cases of DID communication should be stored in encrypted form (*.dee) at rest. There are cases where this might not be preferred, e.g., providing documentation of the format of message or during a debugging scenario using message tracing. However, these are exceptional cases. Storing meaningful *.dm files decrypted is not a security best practice, since it replaces all the privacy and security guarantees provided by the DID communication mechanism with only the ACLs and other security barriers that are offered by the container.

    "},{"location":"features/0044-didcomm-file-and-mime-types/#native-object-representation","title":"Native Object representation","text":"

    This is not a file format, but rather an in-memory form of a DIDComm Message using whatever object hierarchy is natural for a programming language to map to and from JSON. For example, in python, the natural Native Object format is a dict that contains properties indexed by strings. This is the representation that python's json library expects when converting to JSON, and the format it produces when converting from JSON. In Java, Native Object format might be a bean. In C++, it might be a std::map<std::string, variant>...

    There can be more than one Native Object representation for a given programming language.

    Native Object forms are never rendered directly to files; rather, they are serialized to DIDComm Plaintext Format and then persisted (likely after also encrypting to DIDComm V1 Encrypted Envelope).

    "},{"location":"features/0044-didcomm-file-and-mime-types/#negotiating-compatibility","title":"Negotiating Compatibility","text":"

    When parties want to communicate via DIDComm, a number of mechanisms must align. These include:

    1. The type of service endpoint used by each party
    2. The key types used for encryption and/or signing
    3. The format of the encryption and/or signing envelopes
    4. The encoding of plaintext messages
    5. The protocol used to forward and route
    6. The protocol embodied in the plaintext messages

    Although DIDComm allows flexibility in each of these choices, it is not expected that a given DIDComm implementation will support many permutations. Rather, we expect a few sets of choices that commonly go together. We call a set of choices that work well together a profile. Profiles are identified by a string that matches the conventions of IANA media types, but they express choices about plaintext, encryption, signing, and routing in a single value. The following profile identifiers are defined in this version of the RFC:

    "},{"location":"features/0044-didcomm-file-and-mime-types/#defined-profiles","title":"Defined Profiles","text":"

    Profiles are named in the accept section of a DIDComm service endpoint and in an out-of-band message. When Alice declares that she accepts didcomm/aip2;env=rfc19, she is making a declaration about more than her own endpoint. She is saying that all publicly visible steps in an inbound route to her will use the didcomm/aip2;env=rfc19 profile, such that a sender only has to use didcomm/aip2;env=rfc19 choices to get the message from Alice's outermost mediator to Alice's edge. It is up to Alice to select and configure mediators and internal routing in such a way that this is true for the sender.

    "},{"location":"features/0044-didcomm-file-and-mime-types/#detecting-didcomm-versions","title":"Detecting DIDComm Versions","text":"

    Because media types differ from DIDComm V1 to V2, and because media types are easy to communicate in headers and message fields, they are a convenient way to detect which version of DIDComm applies in a given context:

    Nature of Content | V1 | V2
    encrypted | application/didcomm-envelope-enc (DIDComm V1 Encrypted Envelope, *.dee) | application/didcomm-encrypted+json (DIDComm Encrypted Message, *.dcem)
    signed | application/didcomm-sig-env (DIDComm V1 Signed Envelope, *.dse) | application/didcomm-signed+json (DIDComm Signed Message, *.dcsm)
    plaintext | application/json;flavor=didcomm-msg (DIDComm V1 Message, *.dm) | application/didcomm-plain+json (DIDComm Plaintext Message, *.dcpm)
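As a sketch, the version-detection rule can be a direct lookup over the media types listed here (the function name and error handling are illustrative, not normative):

```python
# Map each DIDComm media type to the protocol version it implies.
MEDIA_TYPE_VERSIONS = {
    "application/didcomm-envelope-enc": "1",     # V1 Encrypted Envelope
    "application/didcomm-sig-env": "1",          # V1 Signed Envelope
    "application/json;flavor=didcomm-msg": "1",  # V1 plaintext message
    "application/didcomm-encrypted+json": "2",   # V2 Encrypted Message
    "application/didcomm-signed+json": "2",      # V2 Signed Message
    "application/didcomm-plain+json": "2",       # V2 Plaintext Message
}

def didcomm_version(media_type: str) -> str:
    """Return '1' or '2' for a known DIDComm media type; raise otherwise."""
    try:
        return MEDIA_TYPE_VERSIONS[media_type.strip().lower()]
    except KeyError:
        raise ValueError(f"not a DIDComm media type: {media_type}")
```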

    It is also recommended that agents implementing Discover Features Protocol v2 respond to queries about supported DIDComm versions using the didcomm-version feature name. This allows queries about what an agent is willing to support, whereas the media type mechanism describes what is in active use. The values that should be returned from such a query are URIs that tell where DIDComm versions are developed:

    Version | URI
    V1 | https://github.com/hyperledger/aries-rfcs
    V2 | https://github.com/decentralized-identity/didcomm-messaging
    "},{"location":"features/0044-didcomm-file-and-mime-types/#what-it-means-to-implement-this-rfc","title":"What it means to \"implement\" this RFC","text":"

    For the purposes of Aries Interop Profiles, an agent \"implements\" this RFC when:

    "},{"location":"features/0044-didcomm-file-and-mime-types/#reference","title":"Reference","text":"

    The file extensions and MIME types described here are also accompanied by suggested graphics. Vector forms of these graphics are available.

    "},{"location":"features/0044-didcomm-file-and-mime-types/#implementations","title":"Implementations","text":"

    The following lists the implementations (if any) of this RFC. Please do a pull request to add your implementation. If the implementation is open source, include a link to the repo or to the implementation within the repo. Please be consistent in the \"Name\" field so that a mechanical processing of the RFCs can generate a list of all RFCs supported by an Aries implementation.

    Name / Link Implementation Notes"},{"location":"features/0048-trust-ping/","title":"Aries RFC 0048: Trust Ping Protocol 1.0","text":""},{"location":"features/0048-trust-ping/#summary","title":"Summary","text":"

    Describe a standard way for agents to test connectivity, responsiveness, and security of a pairwise channel.

    "},{"location":"features/0048-trust-ping/#motivation","title":"Motivation","text":"

    Agents are distributed. They are not guaranteed to be connected or running all the time. They support a variety of transports, speak a variety of protocols, and run software from many different vendors.

    This can make it very difficult to prove that two agents have a functional pairwise channel. Troubleshooting connectivity, responsiveness, and security is vital.

    "},{"location":"features/0048-trust-ping/#tutorial","title":"Tutorial","text":"

    This protocol is analogous to the familiar ping command in networking--but because it operates over agent-to-agent channels, it is transport agnostic and asynchronous, and it can produce insights into privacy and security that a regular ping cannot.

    "},{"location":"features/0048-trust-ping/#roles","title":"Roles","text":"

    There are two parties in a trust ping: the sender and the receiver. The sender initiates the trust ping. The receiver responds. If the receiver wants to do a ping of their own, they can, but this is a new interaction in which they become the sender.

    "},{"location":"features/0048-trust-ping/#messages","title":"Messages","text":"

    The trust ping interaction begins when the sender creates a ping message like this:

    {\n  \"@type\": \"https://didcomm.org/trust_ping/1.0/ping\",\n  \"@id\": \"518be002-de8e-456e-b3d5-8fe472477a86\",\n  \"~timing\": {\n    \"out_time\": \"2018-12-15 04:29:23Z\",\n    \"expires_time\": \"2018-12-15 05:29:23Z\",\n    \"delay_milli\": 0\n  },\n  \"comment\": \"Hi. Are you listening?\",\n  \"response_requested\": true\n}\n

    Only @type and @id are required; ~timing.out_time, ~timing.expires_time, and ~timing.delay_milli are optional message timing decorators, and comment follows the conventions of localizable message fields. If present, it may be used to display a human-friendly description of the ping to a user that gives approval to respond. (Whether an agent responds to a trust ping is a decision for each agent owner to make, per policy and/or interaction with their agent.)

    The response_requested field deserves special mention. The normal expectation of a trust ping is that it elicits a response. However, it may be desirable to do a unilateral trust ping at times--communicate information without any expectation of a reaction. In this case, \"response_requested\": false may be used. This might be useful, for example, to defeat correlation between request and response (to generate noise). Or agents A and B might agree that periodically A will ping B without a response, as a way of evidencing that A is up and functional. If response_requested is false, then the receiver MUST NOT respond.

    When the message arrives at the receiver, assuming that response_requested is not false, the receiver should reply as quickly as possible with a ping_response message that looks like this:

    {\n  \"@type\": \"https://didcomm.org/trust_ping/1.0/ping_response\",\n  \"@id\": \"e002518b-456e-b3d5-de8e-7a86fe472847\",\n  \"~thread\": { \"thid\": \"518be002-de8e-456e-b3d5-8fe472477a86\" },\n  \"~timing\": { \"in_time\": \"2018-12-15 04:29:28Z\", \"out_time\": \"2018-12-15 04:31:00Z\"},\n  \"comment\": \"Hi yourself. I'm here.\"\n}\n

    Here, @type and ~thread are required, and the rest is optional.
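A receiver's handling of a ping can be sketched as follows. The helper below is hypothetical, not a normative API; it honors the rule that the receiver MUST NOT respond when response_requested is false:

```python
import uuid
from datetime import datetime, timezone
from typing import Optional

def make_ping_response(ping: dict) -> Optional[dict]:
    """Build a ping_response for a received ping, or None if no response is allowed."""
    # If the sender set response_requested to false, the receiver MUST NOT respond.
    if ping.get("response_requested") is False:
        return None
    now = datetime.now(timezone.utc).strftime("%Y-%m-%d %H:%M:%SZ")
    return {
        "@type": "https://didcomm.org/trust_ping/1.0/ping_response",
        "@id": str(uuid.uuid4()),
        # ~thread.thid ties this response back to the ping's @id.
        "~thread": {"thid": ping["@id"]},
        # ~timing is optional; included here for completeness.
        "~timing": {"in_time": now, "out_time": now},
    }
```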

    "},{"location":"features/0048-trust-ping/#trust","title":"Trust","text":"

    This is the \"trust ping protocol\", not just the \"ping protocol.\" The \"trust\" in its name comes from several features that the interaction gains by virtue of its use of standard agent-to-agent conventions:

    1. Messages should be associated with a message trust context that allows sender and receiver to evaluate how much trust can be placed in the channel. For example, both sender and receiver can check whether messages are encrypted with suitable algorithms and keys.

    2. Messages may be targeted at any known agent in the other party's sovereign domain, using cross-domain routing conventions, and may be encrypted and packaged to expose exactly and only the information desired, at each hop along the way. This allows two parties to evaluate the completeness of a channel and the alignment of all agents that maintain it.

    3. This interaction may be traced using the general message tracing mechanism.

    "},{"location":"features/0048-trust-ping/#implementations","title":"Implementations","text":"

    The following lists the implementations (if any) of this RFC. Please do a pull request to add your implementation. If the implementation is open source, include a link to the repo or to the implementation within the repo. Please be consistent in the \"Name\" field so that a mechanical processing of the RFCs can generate a list of all RFCs supported by an Aries implementation.

    Name / Link Implementation Notes Indy Cloud Agent - Python Reference agent implementation contributed by Sovrin Foundation and Community; MISSING test results Aries Framework - .NET .NET framework for building agents of all types; MISSING test results Streetcred.id Commercial mobile and web app built using Aries Framework - .NET; MISSING test results Aries Cloud Agent - Python Contributed by the government of British Columbia.; MISSING test results Aries Static Agent - Python Useful for cron jobs and other simple, automated use cases.; MISSING test results Aries Protocol Test Suite MISSING test results"},{"location":"features/0056-service-decorator/","title":"Aries RFC 0056: Service Decorator","text":""},{"location":"features/0056-service-decorator/#summary","title":"Summary","text":"

    The ~service decorator describes a DID service endpoint inline to a message.

    "},{"location":"features/0056-service-decorator/#motivation","title":"Motivation","text":"

    This allows messages to self-contain the endpoint and routing information normally found in a DID Document. This comes in handy when DIDs or DID Documents have not been exchanged.

    Examples include the Connect Protocol and Challenge Protocols.

    The ~service decorator on a message contains the service definition that you might expect to find in a DID Document. These values function the same way.

    "},{"location":"features/0056-service-decorator/#tutorial","title":"Tutorial","text":"

    Usage looks like this, with the contents defined in the Service Endpoint section of the DID Spec:

    { \"@type\": \"somemessagetype\", \"~service\": { \"recipientKeys\": [\"8HH5gYEeNc3z7PYXmd54d4x6qAfCNrqQqEB3nS7Zfu7K\"], \"routingKeys\": [\"8HH5gYEeNc3z7PYXmd54d4x6qAfCNrqQqEB3nS7Zfu7K\"], \"serviceEndpoint\": \"https://example.com/endpoint\" } }

    "},{"location":"features/0056-service-decorator/#reference","title":"Reference","text":"

    The contents of the ~service decorator are defined by the Service Endpoint section of the DID Spec.

    The decorator should not be used when the message recipient already has a service endpoint.
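A minimal sketch of applying the decorator (the helper name and signature are hypothetical, not part of this RFC):

```python
from typing import List, Optional

def add_service_decorator(message: dict, recipient_keys: List[str],
                          endpoint: str,
                          routing_keys: Optional[List[str]] = None) -> dict:
    """Return a copy of `message` carrying an inline ~service definition."""
    decorated = dict(message)
    decorated["~service"] = {
        "recipientKeys": recipient_keys,
        "routingKeys": routing_keys or [],
        "serviceEndpoint": endpoint,
    }
    return decorated
```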

    "},{"location":"features/0056-service-decorator/#drawbacks","title":"Drawbacks","text":"

    The current service block definition is not very compact, and could cause problems when attempting to transfer a message via QR code.

    "},{"location":"features/0056-service-decorator/#rationale-and-alternatives","title":"Rationale and alternatives","text":""},{"location":"features/0056-service-decorator/#prior-art","title":"Prior art","text":"

    The Connect Protocol had previously included this same information as an attribute of the messages themselves.

    "},{"location":"features/0056-service-decorator/#unresolved-questions","title":"Unresolved questions","text":""},{"location":"features/0056-service-decorator/#implementations","title":"Implementations","text":"

    The following lists the implementations (if any) of this RFC. Please do a pull request to add your implementation. If the implementation is open source, include a link to the repo or to the implementation within the repo. Please be consistent in the \"Name\" field so that a mechanical processing of the RFCs can generate a list of all RFCs supported by an Aries implementation.

    Name / Link Implementation Notes"},{"location":"features/0066-non-repudiable-cryptographic-envelope/","title":"Aries RFC 0066: Non-Repudiable Signature for Cryptographic Envelope","text":""},{"location":"features/0066-non-repudiable-cryptographic-envelope/#summary","title":"Summary","text":"

    This RFC highlights the ways that a non-repudiable signature could be added to a message field or message family through the use of the JSON Web Signature (JWS) format.

    "},{"location":"features/0066-non-repudiable-cryptographic-envelope/#motivation","title":"Motivation","text":"

    Non-repudiable digital signatures serve as a beneficial method to provide proof of provenance of a message. There are many use cases where non-repudiable signatures are necessary and provide value; one example is a bank keeping a signed record of a mortgage agreement. Some of the early use cases where this will be of value are the connection initiate protocol and the ephemeral challenge protocol. The expected outcome of this RFC is to define a method for using non-repudiable digital signatures in the cryptographic envelope layer of DID Communications.

    "},{"location":"features/0066-non-repudiable-cryptographic-envelope/#tutorial","title":"Tutorial","text":""},{"location":"features/0066-non-repudiable-cryptographic-envelope/#json-web-signatures","title":"JSON Web Signatures","text":"

    The JSON Web Signatures specification is written to define how to represent content secured with digital signatures or Message Authentication Codes (MACs) using JavaScript Object Notation (JSON) based data structures.

    Our particular interest is in the use of non-repudiable digital signatures on the Ed25519 curve with EdDSA signatures to sign invitation messages as well as full content layer messages.

    "},{"location":"features/0066-non-repudiable-cryptographic-envelope/#when-should-non-repudiable-signatures-be-used","title":"When should non-repudiable signatures be used?","text":"

    As highlighted in the repudiation RFC #0049, non-repudiable signatures are not always necessary and SHOULD NOT be used by default. The primary instances where a non-repudiable digital signature should be used is when a signer expects and considers it acceptable that a receiver can prove the sender sent the message.

    If Alice is entering into a borrower:lender relationship with Carol, Carol needs to prove to third parties that Alice, and only Alice, incurred the legal obligation.

    A good rule of thumb for a developer to decide when to use a non-repudiable signature is:

    \"Does the Receiver need to be able to prove who created the message to another person?\"

    In most cases, the answer to this is likely no. The few cases where it does make sense is when a message is establishing some burden of legal liability.

    "},{"location":"features/0066-non-repudiable-cryptographic-envelope/#reference","title":"Reference","text":"


    At a high level, the usage of a digital signature should occur before a message is encrypted. There are some cases where this may not make sense. This RFC will highlight a few different examples of how non-repudiable digital signatures could be used.

    "},{"location":"features/0066-non-repudiable-cryptographic-envelope/#connect-protocol-example","title":"Connect protocol example","text":"

    Starting with an initial connections/1.0/invitation message like this:

    {\n    \"@type\": \"https://didcomm.org/connections/1.0/invitation\",\n    \"@id\": \"12345678900987654321\",\n    \"label\": \"Alice\",\n    \"recipientKeys\": [\"8HH5gYEeNc3z7PYXmd54d4x6qAfCNrqQqEB3nS7Zfu7K\"],\n    \"serviceEndpoint\": \"https://example.com/endpoint\",\n    \"routingKeys\": [\"8HH5gYEeNc3z7PYXmd54d4x6qAfCNrqQqEB3nS7Zfu7K\"]\n}\n

    We would then base64URL encode this message like this:

    eyJAdHlwZSI6ImRpZDpzb3Y6QnpDYnNOWWhNcmpIaXFaRFRVQVNIZztzcGVjL2Nvbm5lY3Rpb25zLzEuMC9pbnZpdGF0aW9uIiwiQGlkIjoiMTIzNDU2Nzg5MDA5ODc2NTQzMjEiLCJsYWJlbCI6IkFsaWNlIiwicmVjaXBpZW50S2V5cyI6WyI4SEg1Z1lFZU5jM3o3UFlYbWQ1NGQ0eDZxQWZDTnJxUXFFQjNuUzdaZnU3SyJdLCJzZXJ2aWNlRW5kcG9pbnQiOiJodHRwczovL2V4YW1wbGUuY29tL2VuZHBvaW50Iiwicm91dGluZ0tleXMiOlsiOEhINWdZRWVOYzN6N1BZWG1kNTRkNHg2cUFmQ05ycVFxRUIzblM3WmZ1N0siXX0=\n

    This base64URL encoded string would then become the payload in the JWS.

    Using the compact serialization format, our JOSE Header would look like this:

    {\n    \"alg\":\"EdDSA\",\n    \"kid\":\"FYmoFw55GeQH7SRFa37dkx1d2dZ3zUF8ckg7wmL7ofN4\"\n}\n

    alg: specifies the signature algorithm used.
    kid: specifies the key identifier. In the case of DIDComm, this will be a base58 encoded ed25519 key.

    To sign, we would combine the JOSE Header with the payload and separate it using a period. This would be the resulting data that would be signed:

    ewogICAgImFsZyI6IkVkRFNBIiwKICAgICJraWQiOiJGWW1vRnc1NUdlUUg3U1JGYTM3ZGt4MWQyZFozelVGOGNrZzd3bUw3b2ZONCIKfQ==.eyJAdHlwZSI6ImRpZDpzb3Y6QnpDYnNOWWhNcmpIaXFaRFRVQVNIZztzcGVjL2Nvbm5lY3Rpb25zLzEuMC9pbnZpdGF0aW9uIiwiQGlkIjoiMTIzNDU2Nzg5MDA5ODc2NTQzMjEiLCJsYWJlbCI6IkFsaWNlIiwicmVjaXBpZW50S2V5cyI6WyI4SEg1Z1lFZU5jM3o3UFlYbWQ1NGQ0eDZxQWZDTnJxUXFFQjNuUzdaZnU3SyJdLCJzZXJ2aWNlRW5kcG9pbnQiOiJodHRwczovL2V4YW1wbGUuY29tL2VuZHBvaW50Iiwicm91dGluZ0tleXMiOlsiOEhINWdZRWVOYzN6N1BZWG1kNTRkNHg2cUFmQ05ycVFxRUIzblM3WmZ1N0siXX0=\n

    and the resulting signature would be:

    cwKY4Qhz0IFG9rGqNjcR-6K1NJqgyoGhso28ZGYkOPNI3C8rO6lmjwYstY0Fa2ew8jaFB-jWQN55kOTL5oHVDQ==\n

    The final output would then produce this:

    ewogICAgImFsZyI6IkVkRFNBIiwKICAgICJraWQiOiJGWW1vRnc1NUdlUUg3U1JGYTM3ZGt4MWQyZFozelVGOGNrZzd3bUw3b2ZONCIKfQ==.eyJAdHlwZSI6ImRpZDpzb3Y6QnpDYnNOWWhNcmpIaXFaRFRVQVNIZztzcGVjL2Nvbm5lY3Rpb25zLzEuMC9pbnZpdGF0aW9uIiwiQGlkIjoiMTIzNDU2Nzg5MDA5ODc2NTQzMjEiLCJsYWJlbCI6IkFsaWNlIiwicmVjaXBpZW50S2V5cyI6WyI4SEg1Z1lFZU5jM3o3UFlYbWQ1NGQ0eDZxQWZDTnJxUXFFQjNuUzdaZnU3SyJdLCJzZXJ2aWNlRW5kcG9pbnQiOiJodHRwczovL2V4YW1wbGUuY29tL2VuZHBvaW50Iiwicm91dGluZ0tleXMiOlsiOEhINWdZRWVOYzN6N1BZWG1kNTRkNHg2cUFmQ05ycVFxRUIzblM3WmZ1N0siXX0=.cwKY4Qhz0IFG9rGqNjcR-6K1NJqgyoGhso28ZGYkOPNI3C8rO6lmjwYstY0Fa2ew8jaFB-jWQN55kOTL5oHVDQ==\n
    "},{"location":"features/0066-non-repudiable-cryptographic-envelope/#basic-message-protocol-example","title":"Basic Message protocol example","text":""},{"location":"features/0066-non-repudiable-cryptographic-envelope/#sign-and-encrypt-process","title":"Sign and encrypt process","text":"

    Next is an example that showcases what a basic message would look like. Since this message would utilize a connection to encrypt the message, we will produce a JWS first, and then encrypt the resulting compact JWS.

    We would first encode our JOSE Header which looks like this:

    {\n    \"alg\": \"edDSA\",\n    \"kid\": \"7XVZJUuKtfYeN1W4Dq2Tw2ameG6gC1amxL7xZSsZxQCK\"\n}\n

    and when base64url encoded it would be converted to this:

    eyJhbGciOiAiZWREU0EiLCAia2lkIjogIjdYVlpKVXVLdGZZZU4xVzREcTJUdzJhbWVHNmdDMWFteEw3eFpTc1p4UUNLIn0=\n

    Next we'll take our content layer message, which for this example is the following JSON:

    {\n    \"@id\": \"123456780\",\n    \"@type\": \"https://didcomm.org/basicmessage/1.0/message\",\n    \"~l10n\": { \"locale\": \"en\" },\n    \"sent_time\": \"2019-01-15 18:42:01Z\",\n    \"content\": \"Your hovercraft is full of eels.\"\n}\n

    and now we'll base64url encode this message which results in this output:

    eyJjb250ZW50IjogIllvdXIgaG92ZXJjcmFmdCBpcyBmdWxsIG9mIGVlbHMuIiwgInNlbnRfdGltZSI6ICIyMDE5LTAxLTE1IDE4OjQyOjAxWiIsICJAdHlwZSI6ICJkaWQ6c292OkJ6Q2JzTlloTXJqSGlxWkRUVUFTSGc7c3BlYy9iYXNpY21lc3NhZ2UvMS4wL21lc3NhZ2UiLCAiQGlkIjogIjEyMzQ1Njc4MCIsICJ-bDEwbiI6IHsibG9jYWxlIjogImVuIn19\n

    Next, they should be concatenated using a period (.) as a delimiter character which would produce this output:

    eyJhbGciOiAiZWREU0EiLCAia2lkIjogIjdYVlpKVXVLdGZZZU4xVzREcTJUdzJhbWVHNmdDMWFteEw3eFpTc1p4UUNLIn0=.eyJjb250ZW50IjogIllvdXIgaG92ZXJjcmFmdCBpcyBmdWxsIG9mIGVlbHMuIiwgInNlbnRfdGltZSI6ICIyMDE5LTAxLTE1IDE4OjQyOjAxWiIsICJAdHlwZSI6ICJkaWQ6c292OkJ6Q2JzTlloTXJqSGlxWkRUVUFTSGc7c3BlYy9iYXNpY21lc3NhZ2UvMS4wL21lc3NhZ2UiLCAiQGlkIjogIjEyMzQ1Njc4MCIsICJ-bDEwbiI6IHsibG9jYWxlIjogImVuIn19\n

    The signature over this concatenated signing data is:

    FV7Yyz7i31EKoqS_cycQRr2pN59Q5Ojoxnr7uf6yZBqylnUZW2jCk_LesgWy5ZEux2K6dkrZh7q9pUs9dEsJBQ==\n

    The signature should be concatenated to the signed data above resulting in this final string:

    eyJhbGciOiAiZWREU0EiLCAia2lkIjogIjdYVlpKVXVLdGZZZU4xVzREcTJUdzJhbWVHNmdDMWFteEw3eFpTc1p4UUNLIn0=.eyJjb250ZW50IjogIllvdXIgaG92ZXJjcmFmdCBpcyBmdWxsIG9mIGVlbHMuIiwgInNlbnRfdGltZSI6ICIyMDE5LTAxLTE1IDE4OjQyOjAxWiIsICJAdHlwZSI6ICJkaWQ6c292OkJ6Q2JzTlloTXJqSGlxWkRUVUFTSGc7c3BlYy9iYXNpY21lc3NhZ2UvMS4wL21lc3NhZ2UiLCAiQGlkIjogIjEyMzQ1Njc4MCIsICJ-bDEwbiI6IHsibG9jYWxlIjogImVuIn19.FV7Yyz7i31EKoqS_cycQRr2pN59Q5Ojoxnr7uf6yZBqylnUZW2jCk_LesgWy5ZEux2K6dkrZh7q9pUs9dEsJBQ==\n

    The last step is to encrypt this base64URL encoded string as the message in pack(), which completes the cryptographic envelope.

    The output of this message then becomes:

    {\"protected\":\"eyJlbmMiOiJ4Y2hhY2hhMjBwb2x5MTMwNV9pZXRmIiwidHlwIjoiSldNLzEuMCIsImFsZyI6IkF1dGhjcnlwdCIsInJlY2lwaWVudHMiOlt7ImVuY3J5cHRlZF9rZXkiOiJac2dYVWdNVGowUk9lbFBTT09lRGxtaE9sbngwMkVVYjZCbml4QjBESGtEZFRLaGc3ZlE1Tk1zcjU3bzA5WDZxIiwiaGVhZGVyIjp7ImtpZCI6IjRXenZOWjJjQUt6TXM4Nmo2S1c5WGZjMmhLdTNoaFd4V1RydkRNbWFSTEFiIiwiaXYiOiJsOWJHVnlyUnRseUNMX244UmNEakJVb1I3eU5sdEZqMCIsInNlbmRlciI6Imh4alZMRWpXcmY0RFplUGFsRGJnYzVfNmFMN2ltOGs1WElQWnBqTURlUzZaUS1jcEFUaGNzNVdiT25uaVFBM2Z0ZnlYWDJkVUc0dVZ3WHhOTHdMTXRqV3lxNkNKeDdUWEdBQW9ZY0RMMW1aaTJxd2xZMGlDQ2N0dHdNVT0ifX1dfQ==\",\"iv\":\"puCgKCfsOb5gRG81\",\"ciphertext\":\"EpHaC0ZMXQakM4n8Fxbedq_3UhiJHq6vd_I4NNz3N7aDbq7-0F6OXi--VaR7xoTqAyJjrOTYmy1SqivSkGmKaCcpFwC9Shdo_vcMFzIxu90_m3MG1xKNsvDmQBFnD0qgjPPXxmxTlmmYLSdA3JaHpEx1K9gYgGqv4X5bgWZqzFCoevyOlD5a2bDZBY5Mn__IT1pVzjbMbDeSgM2nOztWyF0baXwrqczBW-Msx-uP5HNlLdz02FPbMnRP6MYyw6q0wI0EqwzzwH81bZzHKrTVHT2-M_aIEQp9lKGLhnSW3-aIOpSzonGOriyDukfTpvsCUZEd_X1u0G3iZKxYCbIKaj_ARLbb6idlRngVGW9LYYaw7Xay83exp22gflvLmmN25Xzo1vLlaDaFr9h-J_QAvFebCHgWjl1kcodBRc2jhoMVSpEXJHoI5qMrlVvh45PLTEjxy7y5FHQ1L8klwWZN5EIwui3ExIOA8RwYDlp8-HLib_uqB7hNzVUYC0iPd1KTiNIcidYVdAoPpdtLDOh-KCmPB9RkjVUqSlwNYUAAnfY8OJXuBLHP2nWiYUDA6VDbvrv4npW88VMdsFDk_QzvDRvg7gkW8x8jNd8=\",\"tag\":\"B4UilbBNSUr3QcALtVxTEw==\"}\n
    "},{"location":"features/0066-non-repudiable-cryptographic-envelope/#decrypt-and-verify-process","title":"Decrypt and Verify process","text":"

    To decrypt and verify the JWS, first unpack the message, which provides this result:

    {\n    \"message\":\"eyJhbGciOiAiZWREU0EiLCAia2lkIjogIjdYVlpKVXVLdGZZZU4xVzREcTJUdzJhbWVHNmdDMWFteEw3eFpTc1p4UUNLIn0=.eyJjb250ZW50IjogIllvdXIgaG92ZXJjcmFmdCBpcyBmdWxsIG9mIGVlbHMuIiwgInNlbnRfdGltZSI6ICIyMDE5LTAxLTE1IDE4OjQyOjAxWiIsICJAdHlwZSI6ICJkaWQ6c292OkJ6Q2JzTlloTXJqSGlxWkRUVUFTSGc7c3BlYy9iYXNpY21lc3NhZ2UvMS4wL21lc3NhZ2UiLCAiQGlkIjogIjEyMzQ1Njc4MCIsICJ-bDEwbiI6IHsibG9jYWxlIjogImVuIn19.FV7Yyz7i31EKoqS_cycQRr2pN59Q5Ojoxnr7uf6yZBqylnUZW2jCk_LesgWy5ZEux2K6dkrZh7q9pUs9dEsJBQ==\",\n    \"recipient_verkey\":\"4WzvNZ2cAKzMs86j6KW9Xfc2hKu3hhWxWTrvDMmaRLAb\",\n    \"sender_verkey\":\"7XVZJUuKtfYeN1W4Dq2Tw2ameG6gC1amxL7xZSsZxQCK\"\n}\n

    Parse the message field, splitting on the second period (.). You should then have this as the signing input:

    eyJhbGciOiAiZWREU0EiLCAia2lkIjogIjdYVlpKVXVLdGZZZU4xVzREcTJUdzJhbWVHNmdDMWFteEw3eFpTc1p4UUNLIn0=.eyJjb250ZW50IjogIllvdXIgaG92ZXJjcmFmdCBpcyBmdWxsIG9mIGVlbHMuIiwgInNlbnRfdGltZSI6ICIyMDE5LTAxLTE1IDE4OjQyOjAxWiIsICJAdHlwZSI6ICJkaWQ6c292OkJ6Q2JzTlloTXJqSGlxWkRUVUFTSGc7c3BlYy9iYXNpY21lc3NhZ2UvMS4wL21lc3NhZ2UiLCAiQGlkIjogIjEyMzQ1Njc4MCIsICJ-bDEwbiI6IHsibG9jYWxlIjogImVuIn19\n

    and the signature will be base64URL encoded and look like this:

    FV7Yyz7i31EKoqS_cycQRr2pN59Q5Ojoxnr7uf6yZBqylnUZW2jCk_LesgWy5ZEux2K6dkrZh7q9pUs9dEsJBQ==\n

    Now base64URL-decode the signature, convert the signature and signing input to bytes, and verify them using the crypto.crypto_verify() API in the Indy SDK.

    Your message has now been verified.

    To get the original message, you'll again parse the JWS this time taking the second section only which looks like this:

    eyJjb250ZW50IjogIllvdXIgaG92ZXJjcmFmdCBpcyBmdWxsIG9mIGVlbHMuIiwgInNlbnRfdGltZSI6ICIyMDE5LTAxLTE1IDE4OjQyOjAxWiIsICJAdHlwZSI6ICJkaWQ6c292OkJ6Q2JzTlloTXJqSGlxWkRUVUFTSGc7c3BlYy9iYXNpY21lc3NhZ2UvMS4wL21lc3NhZ2UiLCAiQGlkIjogIjEyMzQ1Njc4MCIsICJ-bDEwbiI6IHsibG9jYWxlIjogImVuIn19\n

    Now Base64URL decode that section and you'll get the original message:

    {\n    \"content\": \"Your hovercraft is full of eels.\",\n    \"sent_time\": \"2019-01-15 18:42:01Z\",\n    \"@type\": \"https://didcomm.org/basicmessage/1.0/message\",\n    \"@id\": \"123456780\",\n    \"~l10n\": {\"locale\": \"en\"}\n}\n
    "},{"location":"features/0066-non-repudiable-cryptographic-envelope/#modifications-to-packunpack-api","title":"Modifications to pack()/unpack() API","text":""},{"location":"features/0066-non-repudiable-cryptographic-envelope/#drawbacks","title":"Drawbacks","text":"

    Through the choice of a JWS-formatted structure we imply that an off-the-shelf library will support this structure. However, it's uncommon for libraries to support the EdDSA signature algorithm even though it's a valid algorithm in the IANA registry. This means that most implementers will either need to add this signature algorithm to an existing JWS library or implement the signing and verification themselves.

    "},{"location":"features/0066-non-repudiable-cryptographic-envelope/#rationale-and-alternatives","title":"Rationale and alternatives","text":""},{"location":"features/0066-non-repudiable-cryptographic-envelope/#prior-art","title":"Prior art","text":"

    The majority of prior art discussions are mentioned above in the rationale and alternatives section. Some prior art that was considered when selecting this system is how closely it aligns with OpenID Connect systems. This has the possibility to converge with Self Issued OpenID Connect systems when running over HTTP, but doesn't specifically constrain to a particular transport mechanism. This is a distinct advantage for backward compatibility.

    "},{"location":"features/0066-non-repudiable-cryptographic-envelope/#unresolved-questions","title":"Unresolved questions","text":""},{"location":"features/0066-non-repudiable-cryptographic-envelope/#is-limiting-to-only-ed25519-key-types-an-unnecessary-restraint-given-the-broad-support-needed-for-didcomm","title":"Is limiting to only ed25519 key types an unnecessary restraint given the broad support needed for DIDComm?","text":""},{"location":"features/0066-non-repudiable-cryptographic-envelope/#implementations","title":"Implementations","text":"

    The following lists the implementations (if any) of this RFC. Please do a pull request to add your implementation. If the implementation is open source, include a link to the repo or to the implementation within the repo. Please be consistent in the \"Name\" field so that a mechanical processing of the RFCs can generate a list of all RFCs supported by an Aries implementation.

    Name / Link Implementation Notes"},{"location":"features/0067-didcomm-diddoc-conventions/","title":"Aries RFC 0067: DIDComm DID document conventions","text":""},{"location":"features/0067-didcomm-diddoc-conventions/#summary","title":"Summary","text":"

    Explain the DID document conventions required to enable DID communications.

    "},{"location":"features/0067-didcomm-diddoc-conventions/#motivation","title":"Motivation","text":"

    Standardization of these conventions is essential to promoting interoperability of DID communications.

    "},{"location":"features/0067-didcomm-diddoc-conventions/#tutorial","title":"Tutorial","text":""},{"location":"features/0067-didcomm-diddoc-conventions/#did-documents","title":"DID documents","text":"

    A DID document is the data model associated with a DID; it contains important cryptographic information and a declaration of the capabilities the DID supports.

    Of particular interest to this RFC is the definition of service endpoints. The primary objective of this RFC is to document the DID communication service type and describe the associated conventions.

    "},{"location":"features/0067-didcomm-diddoc-conventions/#service-conventions","title":"Service Conventions","text":"

    As referenced above, the DID specification defines a section called service endpoints. This section of the DID document is reserved for any type of service the entity wishes to advertise, including decentralized identity management services for further discovery, authentication, authorization, or interaction.

    When a DID document wishes to express support for DID communications, the following service definition is used.

    {\n  \"service\": [{\n    \"id\": \"did:example:123456789abcdefghi#did-communication\",\n    \"type\": \"did-communication\",\n    \"priority\" : 0,\n    \"recipientKeys\" : [ \"did:example:123456789abcdefghi#1\" ],\n    \"routingKeys\" : [ \"did:example:123456789abcdefghi#1\" ],\n    \"accept\": [\n      \"didcomm/aip2;env=rfc587\",\n      \"didcomm/aip2;env=rfc19\"\n    ],\n    \"serviceEndpoint\": \"https://agent.example.com/\"\n  }]\n}\n

    Notes 1. The keys featured in this array must resolve to keys of the same type; for example, a mix of Ed25519VerificationKey2018 and RsaVerificationKey2018 keys in the same array is invalid.
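Given a resolved DID document, picking the did-communication service can be sketched as follows (the helper name is hypothetical, and the sketch assumes lower priority values are preferred, which this RFC does not spell out):

```python
def select_didcomm_service(did_doc: dict) -> dict:
    """Return the preferred did-communication service from a DID document."""
    services = [s for s in did_doc.get("service", [])
                if s.get("type") == "did-communication"]
    if not services:
        raise ValueError("no did-communication service declared")
    # Assumption: the lowest `priority` value wins; 0 is the default.
    return min(services, key=lambda s: s.get("priority", 0))
```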

    "},{"location":"features/0067-didcomm-diddoc-conventions/#message-preparation-conventions","title":"Message Preparation Conventions","text":"

    Below describes the process under which a DID communication message is prepared and sent to a DID based on the conventions declared in the associated DID document. The scenario on which the below is predicated has the following conditions:
    - The sender possesses the DID document for the intended recipient(s) of a DID communication message.
    - The sender has created a content level message that is now ready to be prepared for sending to the intended recipient(s).

    1. The sender resolves the relevant did-communication service of the intended recipient(s) DID document.
    2. The sender resolves the recipient keys present in the recipientKeys array of the service declaration.
    3. Using the resolved keys, the sender takes the content level message and packs it inside an encrypted envelope for the recipient keys. (note-2)
    4. The sender then inspects the routingKeys array. If it is empty, the process skips to step 5. Otherwise, the sender prepares a content level message of type forward. The resolved keys from the recipientKeys array are set as the contents of the to field in the forward message, and the encrypted envelope from the previous step is set as the contents of the msg field in the forward message. Following this, for each element in the routingKeys array the following sub-process is repeated:
      1. The sender resolves the current key in the routing array and packs the encrypted envelope output by the previous step inside a new encrypted envelope for the current key.
      2. The sender prepares a content level message of type forward. The current key in the routing array is set as the contents of the to field in the forward message, and the encrypted envelope from the previous step is set as the contents of the msg field in the forward message.
    5. Resolve the service endpoint:
      • If the endpoint is a valid DID URL, check that it resolves to another DID service definition. If the resolution is successful, the process from step 2 is repeated using the message output by this process as the input message.
      • If the service endpoint is not a DID URL, send the message using the transport protocol declared by the URL's scheme.

    Notes 1. There are two main situations that an agent will be in prior to preparing a new message.

    1. When preparing this envelope the sender has two main choices to make about properties to include in the envelope:
      • Whether to include sender information
      • Whether to include a non-repudiable signature
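Steps 3 and 4 of the preparation process can be sketched as follows, with pack standing in for the encrypted-envelope operation (all names here, including the forward type URI, are illustrative rather than a normative API):

```python
FORWARD_TYPE = "https://didcomm.org/routing/1.0/forward"  # assumed forward type

def prepare_message(msg: dict, recipient_keys: list, routing_keys: list, pack) -> dict:
    """Wrap `msg` for the recipient, then add one forward layer per routing key."""
    # Step 3: pack the content-level message for the recipient keys.
    envelope = pack(msg, to=recipient_keys)
    to = recipient_keys
    # Step 4: each routing key adds a forward message plus an envelope layer.
    for routing_key in routing_keys:
        forward = {"@type": FORWARD_TYPE, "to": to, "msg": envelope}
        envelope = pack(forward, to=[routing_key])
        to = [routing_key]
    return envelope
```

With an empty routingKeys array the loop body never runs, so the function degenerates to step 3 alone, matching the skip to step 5 described above.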
    "},{"location":"features/0067-didcomm-diddoc-conventions/#example-domain-and-did-document","title":"Example: Domain and DID document","text":"

    The following is an example of an arbitrary pair of domains that will be helpful in providing context to the conventions defined above.

    In the diagram above:

    "},{"location":"features/0067-didcomm-diddoc-conventions/#bobs-did-document-for-his-relationship-with-alice","title":"Bob's DID document for his Relationship with Alice","text":"

    Bob\u2019s domain has 3 devices he uses for processing messages - two phones (4 and 5) and a cloud-based agent (6). As well, Bob has one agent that he uses as a mediator (3) that can hold messages for the two phones when they are offline. However, in Bob's relationship with Alice, he ONLY uses one phone (4) and the cloud-based agent (6). Thus the key for device 5 is left out of the DID document (see below). For further privacy preservation, Bob also elects to use a shared domain endpoint (agents-r-us), giving him an extra layer of isolation from correlation. This is represented by the serviceEndpoint in the service definition not resolving directly to an endpoint URI but rather to another did-communication service definition, which is owned and controlled by the endpoint owner (agents-r-us).

    Bob's DID document given to Alice

    {\n  \"@context\": \"https://w3id.org/did/v1\",\n  \"id\": \"did:example:1234abcd\",\n  \"publicKey\": [\n    {\"id\": \"3\", \"type\": \"RsaVerificationKey2018\",  \"controller\": \"did:example:1234abcd\",\"publicKeyPem\": \"-----BEGIN PUBLIC X\u2026\"},\n    {\"id\": \"4\", \"type\": \"RsaVerificationKey2018\",  \"controller\": \"did:example:1234abcd\",\"publicKeyPem\": \"-----BEGIN PUBLIC 9\u2026\"},\n    {\"id\": \"6\", \"type\": \"RsaVerificationKey2018\",  \"controller\": \"did:example:1234abcd\",\"publicKeyPem\": \"-----BEGIN PUBLIC A\u2026\"}\n  ],\n  \"authentication\": [\n    {\"type\": \"RsaSignatureAuthentication2018\", \"publicKey\": \"did:example:1234abcd#4\"}\n  ],\n  \"service\": [\n    {\n      \"id\": \"did:example:123456789abcdefghi;did-communication\",\n      \"type\": \"did-communication\",\n      \"priority\" : 0,\n      \"recipientKeys\" : [ \"did:example:1234abcd#4\" ],\n      \"routingKeys\" : [ \"did:example:1234abcd#3\" ],\n      \"serviceEndpoint\" : \"did:example:xd45fr567794lrzti67;did-communication\"\n    }\n  ]\n}\n

    Agents r Us DID document - resolvable by Alice

    {\n  \"@context\": \"https://w3id.org/did/v1\",\n  \"id\": \"did:example:xd45fr567794lrzti67\",\n  \"publicKey\": [\n    {\"id\": \"1\", \"type\": \"RsaVerificationKey2018\",  \"controller\": \"did:example:xd45fr567794lrzti67\",\"publicKeyPem\": \"-----BEGIN PUBLIC X\u2026\"}\n  ],\n  \"authentication\": [\n    {\"type\": \"RsaSignatureAuthentication2018\", \"publicKey\": \"did:example:xd45fr567794lrzti67#1\"}\n  ],\n  \"service\": [\n    {\n      \"id\": \"did:example:xd45fr567794lrzti67;did-communication\",\n      \"type\": \"did-communication\",\n      \"priority\" : 0,\n      \"recipientKeys\" : [ \"did:example:xd45fr567794lrzti67#1\" ],\n      \"routingKeys\" : [ ],\n      \"serviceEndpoint\" : \"http://agents-r-us.com\"\n    }\n  ]\n}\n
    "},{"location":"features/0067-didcomm-diddoc-conventions/#message-preparation-example","title":"Message Preparation Example","text":"

    Alice's agent prepares a message desired_msg for Bob.

    1. Alice's agent resolves the above DID document did:example:1234abcd for Bob and locates the did-communication service definition.
    2. Alice's agent then packs desired_msg in an encrypted envelope message to the resolved keys defined in the recipientKeys array.
    3. Because the routingKeys array is not empty, a content level message of type forward is prepared, where the to field of the forward message is set to the resolved keys and the msg field of the forward message is set to the encrypted envelope from the previous step.
    4. The resulting forward message from the previous step is then packed inside another encrypted envelope for the first and only key in the routingKeys array.
    5. Inspection of the service endpoint reveals it is a DID URL, leading to the resolution of another did-communication service definition, this time owned and controlled by agents-r-us.
    6. Because the agents-r-us service definition contains a recipient key, a content level message of type forward is prepared, where the to field of the forward message is set to the recipient key and the msg field of the forward message is set to the encrypted envelope from the previous step.
    7. This content message is then packed in an encrypted envelope for the recipient key in the agents-r-us service definition.
    8. Finally, as the endpoint listed in the serviceEndpoint field for the agents-r-us did-communication service definition is a valid endpoint URL, the message is transmitted in accordance with the URL's protocol.
    "},{"location":"features/0067-didcomm-diddoc-conventions/#reference","title":"Reference","text":""},{"location":"features/0067-didcomm-diddoc-conventions/#rationale-and-alternatives","title":"Rationale and alternatives","text":""},{"location":"features/0067-didcomm-diddoc-conventions/#prior-art","title":"Prior art","text":""},{"location":"features/0067-didcomm-diddoc-conventions/#unresolved-questions","title":"Unresolved questions","text":"

    The following remain unresolved:

    "},{"location":"features/0067-didcomm-diddoc-conventions/#implementations","title":"Implementations","text":"

    The following lists the implementations (if any) of this RFC. Please do a pull request to add your implementation. If the implementation is open source, include a link to the repo or to the implementation within the repo. Please be consistent in the \"Name\" field so that a mechanical processing of the RFCs can generate a list of all RFCs supported by an Aries implementation.

    Name / Link Implementation Notes"},{"location":"features/0075-payment-decorators/","title":"Aries RFC 0075: Payment Decorators","text":""},{"location":"features/0075-payment-decorators/#summary","title":"Summary","text":"

    Defines the ~payment_request, ~payment_internal_response, and ~payment_receipt decorators. These offer standard payment features in all DIDComm interactions, and let DIDComm take advantage of the W3C's Payment Request API in an interoperable way.

    "},{"location":"features/0075-payment-decorators/#motivation","title":"Motivation","text":"

    Instead of inventing custom messages for payments in each protocol, arbitrary messages can express payment semantics with payment decorators. Individual protocol specs should clarify on which messages and under which conditions the decorators are used.

    "},{"location":"features/0075-payment-decorators/#tutorial","title":"Tutorial","text":"

    The W3C's Payment Request API governs interactions between three parties:

    1. payer
    2. payee
    3. payment method

    The payer is usually imagined to be a person operating a web browser, the payee is imagined to be an online store, and the payment method might be something like a credit card processing service. The payee emits a PaymentRequest JSON structure (step 1 below); this causes the payer to be prompted (step 2, \"Render\"). The payer decides whether to pay, and if so, which payment method and options she prefers (step 3, \"Configure\"). The payer's choices are embodied in a PaymentResponse JSON structure (step 4). This is then used to select the appropriate codepath and inputs to invoke the desired payment method (step 5).

    Notice that this flow does not include anything coming back to the payer. In this API, the PaymentResponse structure embodies a response from the payer to the payer's own agent, expressing choices about which credit card to use and which shipping options are desired; it's not a response that crosses identity boundaries. That's reasonable because this is the Payment Request API, not a Payment Roundtrip API. It's only about requesting payments, not completing payments or reporting results. Also, each payment method will have unique APIs for fulfillment and receipts; the W3C Payment Request spec does not attempt to harmonize them, though some work in that direction is underway in the separate Payment Handler API spec.

    In DIDComm, the normal emphasis is on interactions between parties with different identities. This makes PaymentResponse and the communication that elicits it (steps 2-4) a bit unusual from a DIDComm perspective; normally DIDComm would use the word \"response\" for something that comes back from Bob, after Alice asks Bob a question. It also makes the scope of the W3C API feel incomplete, because we'd like to be able to model the entire flow, not just part of it.

    The DIDComm payment decorators map to the W3C API as follows:

    "},{"location":"features/0075-payment-decorators/#reference","title":"Reference","text":""},{"location":"features/0075-payment-decorators/#payment_request","title":"~payment_request","text":"

    Please see the PaymentRequest interface docs in the W3C spec for a full reference, or Section 2, Examples of Usage in the W3C spec for a narration that builds a PaymentRequest from first principles.

    The following is a sample ~payment_request decorator with some interesting details to suggest what's possible:

    {\n  \"~payment_request\": {\n    \"methodData\": [\n      {\n        \"supportedMethods\": \"basic-card\",\n        \"data\": {\n          \"supportedNetworks\": [\"visa\", \"mastercard\"],\n          \"payeeId\": \"12345\"\n        },\n      },\n      {\n        \"supportedMethods\": \"sovrin\",\n        \"data\": {\n          \"supportedNetworks\": [\"sov\", \"sov:test\", \"ibm-indy\"],\n          \"payeeId\": \"XXXX\"\n        },\n      }\n    ],\n    \"details\": {\n      \"id\": \"super-store-order-123-12312\",\n      \"displayItems\": [\n        {\n          \"label\": \"Sub-total\",\n          \"amount\": { \"currency\": \"USD\", \"value\": \"55.00\" },\n        },\n        {\n          \"label\": \"Sales Tax\",\n          \"amount\": { \"currency\": \"USD\", \"value\": \"5.00\" },\n          \"type\": \"tax\"\n        },\n      ],\n      \"total\": {\n        \"label\": \"Total due\",\n        // The total is USD$65.00 here because we need to\n        // add shipping (below). 
The selected shipping\n        // costs USD$5.00.\n        \"amount\": { \"currency\": \"USD\", \"value\": \"65.00\" }\n      },\n      \"shippingOptions\": [\n        {\n          \"id\": \"standard\",\n          \"label\": \"Ground Shipping (2 days)\",\n          \"amount\": { \"currency\": \"USD\", \"value\": \"5.00\" },\n          \"selected\": true,\n        },\n        {\n          \"id\": \"drone\",\n          \"label\": \"Drone Express (2 hours)\",\n          \"amount\": { \"currency\": \"USD\", \"value\": \"25.00\" }\n        }\n      ],\n      \"modifiers\": [\n        {\n          \"additionalDisplayItems\": [{\n            \"label\": \"Card processing fee\",\n            \"amount\": { \"currency\": \"USD\", \"value\": \"3.00\" },\n          }],\n          \"supportedMethods\": \"basic-card\",\n          \"total\": {\n            \"label\": \"Total due\",\n            \"amount\": { \"currency\": \"USD\", \"value\": \"68.00\" },\n          },\n          \"data\": {\n            \"supportedNetworks\": [\"visa\"],\n          },\n        },\n        {\n          \"supportedMethods\": \"sovrin\",\n          \"total\": {\n            \"label\": \"Total due\",\n            \"amount\": { \"currency\": \"SOV\", \"value\": \"2254\" },\n          },\n        },\n      ]\n    },\n    \"options\": {\n      \"requestPayerEmail\": false,\n      \"requestPayerName\": true,\n      \"requestPayerPhone\": false,\n      \"requestShipping\": true\n    }\n  }\n}\n

    The details.id field contains an invoice number, shopping cart ID, or similar identifier that unambiguously identifies the goods and services for which payment is requested. The payeeId field would contain a payment address for cryptocurrency payment methods, or a merchant ID for credit cards. The modifiers section shows how the requested payment amount should be modified if the basic-card method is selected. That specific example is discussed in greater detail in the W3C spec. It also shows how the currency could be changed if a token-based method is selected instead of a fiat-based method. See the separate W3C spec on Payment Method IDs.

    Note that standard DIDComm localization can be used to provide localized alternatives to the label fields; this is a DIDComm-specific extension.

    This example shows options where the payee is requesting self-attested data from the payer. DIDComm offers the option of replacing this simple approach with a sophisticated presentation request based on verifiable credentials. The simple approach is fine where self-attested data is enough; the VC approach is useful when assurance of the data must be higher (e.g., a verified email address), or where fancy logic about what's required (Name plus either Email or Phone) is needed.

    The DIDComm ~payment_request decorator may be combined with the ~timing.expires_time decorator to express the idea that the payment must be made within a certain time period or else the price or availability of merchandise is not guaranteed.

    "},{"location":"features/0075-payment-decorators/#payment_internal_response","title":"~payment_internal_response","text":"

    This decorator exactly matches PaymentResponse from the W3C API and will not be further described here. A useful example of a response is given in the related Basic Card Response doc.

    "},{"location":"features/0075-payment-decorators/#payment_receipt","title":"~payment_receipt","text":"

    This decorator on a message indicates that a payment has been made. It looks like this (note the snake_case since we are not matching a W3C spec):

    {\n  \"~payment_receipt\": {\n      \"request_id\": \"super-store-order-123-12312\",\n      \"selected_method\": \"sovrin\",\n      \"selected_shippingOption\": \"standard\",\n      \"transaction_id\": \"abc123\",\n      \"proof\": \"directly verifiable proof of payment\",\n      \"payeeId\": \"XXXX\",\n      \"amount\": { \"currency\": \"SOV\", \"value\": \"2254\" }\n  }\n}\n

    request_id: This contains the details.id of the ~payment_request that this payment receipt satisfies.

    selected_method: Which payment method was chosen to pay.

    selected_shippingOption: Which shipping option was chosen.

    transaction_id: A transaction identifier that can be checked by the payee to verify that funds were transferred, and that the transfer relates to this payment request instead of another. This might be a ledger's transaction ID, for example.

    proof: Optional. A base64url-encoded blob that contains directly verifiable proof that the transaction took place. This might be useful for payments enacted by a triple-signed receipt mechanism, for example. When this is present, transaction_id becomes optional. For ledgers that support state proofs, the state proof could be offered here.
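
    As an illustration of how a payee might correlate these fields with the original request, here is a hypothetical check. The helper name is an assumption, and for brevity it ignores method-specific modifier totals; real verification must also confirm the transaction (or proof) on the selected payment network.

    ```python
    def receipt_matches_request(receipt, request):
        """Hypothetical sketch: check a ~payment_receipt against the
        ~payment_request it claims to satisfy."""
        details = request["details"]
        # request_id must echo the details.id of the original request.
        if receipt["request_id"] != details["id"]:
            return False
        # The receipt must carry a transaction_id or a verifiable proof.
        if not (receipt.get("transaction_id") or receipt.get("proof")):
            return False
        # The paid amount should match the requested total (modifier
        # totals for specific methods are omitted in this sketch).
        return receipt["amount"] == details["total"]["amount"]
    ```

    A payee would run a check like this before treating the associated message (e.g., a credential request) as paid for.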

    "},{"location":"features/0075-payment-decorators/#example","title":"Example","text":"

    Here is a rough description of how these decorators might be used in a protocol to issue credentials. We are not guaranteeing that the message details will remain up-to-date as that protocol evolves; this is only for purposes of general illustration.

    "},{"location":"features/0075-payment-decorators/#credential-offer","title":"Credential Offer","text":"

    This message is sent by the issuer; it indicates that payment is requested for the credential under discussion.

    {\n    \"@type\": \"https://didcomm.org/issue_credential/1.0/offer_credential\",\n    \"@id\": \"5bc1989d-f5c1-4eb1-89dd-21fd47093d96\",\n    \"cred_def_id\": \"KTwaKJkvyjKKf55uc6U8ZB:3:CL:59:tag1\",\n    \"~payment_request\": {\n        \"methodData\": [\n          {\n            \"supportedMethods\": \"ETH\",\n            \"data\": {\n              \"payeeId\": \"0xD15239C7e7dDd46575DaD9134a1bae81068AB2A4\"\n            },\n          }\n        ],\n        \"details\": {\n          \"id\": \"0a2bc4a6-1f45-4ff0-a046-703c71ab845d\",\n          \"displayItems\": [\n            {\n              \"label\": \"commercial driver's license\",\n              \"amount\": { \"currency\": \"ETH\", \"value\": \"0.0023\" },\n            }\n          ],\n          \"total\": {\n            \"label\": \"Total due\",\n            \"amount\": { \"currency\": \"ETH\", \"value\": \"0.0023\" }\n          }\n        }\n      },\n    \"credential_preview\": <json-ld object>,\n    ///...\n}\n
    "},{"location":"features/0075-payment-decorators/#example-credential-request","title":"Example Credential Request","text":"

    This Credential Request is sent to the issuer, indicating that they have paid the requested amount.

    {\n    \"@type\": \"https://didcomm.org/issue_credential/1.0/request_credential\",\n    \"@id\": \"94af9be9-5248-4a65-ad14-3e7a6c3489b6\",\n    \"~thread\": { \"thid\": \"5bc1989d-f5c1-4eb1-89dd-21fd47093d96\" },\n    \"cred_def_id\": \"KTwaKJkvyjKKf55uc6U8ZB:3:CL:59:tag1\",\n    \"~payment_receipt\": {\n      \"request_id\": \"0a2bc4a6-1f45-4ff0-a046-703c71ab845d\",\n      \"selected_method\": \"ETH\",\n      \"transaction_id\": \"0x5674bfea99c480e110ea61c3e52783506e2c467f108b3068d642712aca4ea479\",\n      \"payeeId\": \"0xD15239C7e7dDd46575DaD9134a1bae81068AB2A4\",\n      \"amount\": { \"currency\": \"ETH\", \"value\": \"0.0023\" }\n    }\n\n    ///...\n}\n
    "},{"location":"features/0075-payment-decorators/#drawbacks","title":"Drawbacks","text":"

    TBD

    "},{"location":"features/0075-payment-decorators/#rationale-and-alternatives","title":"Rationale and alternatives","text":""},{"location":"features/0075-payment-decorators/#prior-art","title":"Prior art","text":""},{"location":"features/0075-payment-decorators/#unresolved-questions","title":"Unresolved questions","text":""},{"location":"features/0075-payment-decorators/#implementations","title":"Implementations","text":"

    The following lists the implementations (if any) of this RFC. Please do a pull request to add your implementation. If the implementation is open source, include a link to the repo or to the implementation within the repo. Please be consistent in the \"Name\" field so that a mechanical processing of the RFCs can generate a list of all RFCs supported by an Aries implementation.

    Name / Link Implementation Notes"},{"location":"features/0092-transport-return-route/","title":"Aries RFC 0092: Transports Return Route","text":""},{"location":"features/0092-transport-return-route/#summary","title":"Summary","text":"

    Agents can indicate that an inbound message transmission may also be used as a return route for messages. This allows for transports of increased efficiency as well as agents without an inbound route.

    "},{"location":"features/0092-transport-return-route/#motivation","title":"Motivation","text":"

    Inbound HTTP and Websockets are used only for receiving messages by default. Return messages are sent using their own outbound connections. Including a decorator allows the receiving agent to know that using the inbound connection as a return route is acceptable. This allows two way communication with agents that may not have an inbound route available. Agents without an inbound route include mobile agents, and agents that use a client (and not a server) for communication.

    This decorator is intended to facilitate message communication between a client based agent (an agent that can only operate as a client, not a server) and the server based agents they communicate directly with. Use on messages that will be forwarded is not allowed.

    "},{"location":"features/0092-transport-return-route/#tutorial","title":"Tutorial","text":"

    When you send a message through a connection, you can use the ~transport decorator on the message and specify return_route. The value of return_route is discussed in the Reference section of this document.

    {\n    \"~transport\": {\n        \"return_route\": \"all\"\n    }\n}\n
    "},{"location":"features/0092-transport-return-route/#reference","title":"Reference","text":"

    The ~transport decorator should be processed after unpacking and prior to routing the message to a message handler.

    For HTTP transports, the presence of this message decorator indicates that the receiving agent MAY hold onto the connection and use it to return messages as designated. HTTP transports will only be able to receive at most one message at a time. Websocket transports are capable of receiving multiple messages.

    Compliance with this indicator is optional for agents generally, but required for agents wishing to connect with client based agents.
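
    A minimal sketch of the processing order described above: inspect ~transport after unpacking, before routing to a handler. The function name is hypothetical, and only the "all" value is checked here; the full set of return_route values is defined in this RFC's Reference section.

    ```python
    def should_hold_connection(message):
        """Hypothetical check run after unpacking an inbound message:
        decide whether the inbound connection may be held open and used
        as a return route for outbound messages."""
        transport = message.get("~transport", {})
        # Assumption: "all" signals the connection may carry any return
        # messages; other values are out of scope for this sketch.
        return transport.get("return_route") == "all"
    ```

    A server-based agent could use such a check to queue replies onto the held HTTP connection or open websocket instead of dialing out.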

    "},{"location":"features/0092-transport-return-route/#drawbacks","title":"Drawbacks","text":""},{"location":"features/0092-transport-return-route/#rationale-and-alternatives","title":"Rationale and alternatives","text":""},{"location":"features/0092-transport-return-route/#prior-art","title":"Prior art","text":"

    The Decorators RFC describes the scope of decorators. Transport isn't one of the scopes listed.

    "},{"location":"features/0092-transport-return-route/#implementations","title":"Implementations","text":"

    The following lists the implementations (if any) of this RFC. Please do a pull request to add your implementation. If the implementation is open source, include a link to the repo or to the implementation within the repo. Please be consistent in the \"Name\" field so that a mechanical processing of the RFCs can generate a list of all RFCs supported by an Aries implementation.

    Name / Link Implementation Notes Aries Cloud Agent - Python Contributed by the government of British Columbia. Aries Protocol Test Suite Used in Tests"},{"location":"features/0095-basic-message/","title":"Aries RFC 0095: Basic Message Protocol 1.0","text":""},{"location":"features/0095-basic-message/#summary","title":"Summary","text":"

    The BasicMessage protocol describes a stateless, easy-to-support user message protocol. It has a single message type used to communicate.

    "},{"location":"features/0095-basic-message/#motivation","title":"Motivation","text":"

    It is a useful feature to be able to communicate human written messages. BasicMessage is the most basic form of this written message communication, explicitly excluding advanced features to make implementation easier.

    "},{"location":"features/0095-basic-message/#tutorial","title":"Tutorial","text":""},{"location":"features/0095-basic-message/#roles","title":"Roles","text":"

    There are two roles in this protocol: sender and receiver. It is anticipated that both roles are supported by agents that provide an interface for humans, but it is possible for an agent to act only as a sender (not processing received messages) or only as a receiver (never sending messages).

    "},{"location":"features/0095-basic-message/#states","title":"States","text":"

    There are not really states in this protocol, as sending a message leaves both parties in the same state they were before.

    "},{"location":"features/0095-basic-message/#out-of-scope","title":"Out of Scope","text":"

    There are many useful features of user messaging systems that we will not be adding to this protocol. We anticipate the development of more advanced and full-featured message protocols to fill these needs. Features that are considered out of scope for this protocol include:

    "},{"location":"features/0095-basic-message/#reference","title":"Reference","text":"

    Protocol: https://didcomm.org/basicmessage/1.0/

    message

    Example:

    {\n    \"@id\": \"123456780\",\n    \"@type\": \"https://didcomm.org/basicmessage/1.0/message\",\n    \"~l10n\": { \"locale\": \"en\" },\n    \"sent_time\": \"2019-01-15 18:42:01Z\",\n    \"content\": \"Your hovercraft is full of eels.\"\n}\n
    "},{"location":"features/0095-basic-message/#drawbacks","title":"Drawbacks","text":""},{"location":"features/0095-basic-message/#rationale-and-alternatives","title":"Rationale and alternatives","text":""},{"location":"features/0095-basic-message/#prior-art","title":"Prior art","text":"

    BasicMessage has parallels to SMS, which led to the later creation of MMS and even the still-under-development RCS.

    "},{"location":"features/0095-basic-message/#unresolved-questions","title":"Unresolved questions","text":""},{"location":"features/0095-basic-message/#implementations","title":"Implementations","text":"

    The following lists the implementations (if any) of this RFC. Please do a pull request to add your implementation. If the implementation is open source, include a link to the repo or to the implementation within the repo. Please be consistent in the \"Name\" field so that a mechanical processing of the RFCs can generate a list of all RFCs supported by an Aries implementation.

    Name / Link Implementation Notes Indy Cloud Agent - Python Reference agent implementation contributed by Sovrin Foundation and Community; MISSING test results Aries Framework - .NET .NET framework for building agents of all types; MISSING test results Streetcred.id Commercial mobile and web app built using Aries Framework - .NET; MISSING test results Aries Cloud Agent - Python Contributed by the government of British Columbia.; MISSING test results Aries Static Agent - Python Useful for cron jobs and other simple, automated use cases.; MISSING test results Aries Protocol Test Suite ; MISSING test results"},{"location":"features/0113-question-answer/","title":"Aries RFC 0113: Question Answer Protocol 0.9","text":""},{"location":"features/0113-question-answer/#summary","title":"Summary","text":"

    A simple protocol where a questioner asks a responder a question with at least one valid answer. The responder then replies with an answer or ignores the question.

    Note: While there is a need in the future for a robust negotiation protocol\nthis is not it. This is for simple question/answer exchanges.\n
    "},{"location":"features/0113-question-answer/#motivation","title":"Motivation","text":"

    There are many instances where one party needs an answer to a specific question from another party. These can be related to consent, proof of identity, authentication, or choosing from a list of options. For example, when receiving a phone call a customer service representative can ask a question to the customer\u2019s phone to authenticate the caller, \u201cAre you on the phone with our representative?\u201d. The same could be done to authorize transactions, validate logins (2FA), accept terms and conditions, and any other simple, non-negotiable exchanges.

    "},{"location":"features/0113-question-answer/#tutorial","title":"Tutorial","text":"

    We'll describe this protocol in terms of a Challenge/Response (https://en.wikipedia.org/wiki/Challenge%E2%80%93response_authentication) scenario where a customer service representative for Faber Bank questions its customer Alice, who is speaking with them on the phone, to answer whether it is really her.

    "},{"location":"features/0113-question-answer/#interaction","title":"Interaction","text":"

    Using an already established pairwise connection and agent-to-agent communication Faber will send a question to Alice with one or more valid responses with an optional deadline and Alice can select one of the valid responses or ignore the question. If she selects one of the valid responses she will respond with her answer.

    "},{"location":"features/0113-question-answer/#roles","title":"Roles","text":"

    There are two parties in a typical question/answer interaction. The first party (Questioner) issues the question with its valid answers and the second party (Responder) responds with the selected answer. The parties must have already exchanged pairwise keys and created a connection. These pairwise keys can be used to encrypt and verify the response. When the answer has been sent, the questioner can know with a high level of certainty that it was sent by the responder.

    In this tutorial Faber (the questioner) initiates the interaction and creates and sends the question to Alice. The question includes the valid responses, which can optionally be signed for non-repudiability.

    In this tutorial Alice (the responder) receives the packet and must respond to the question (or ignore it, which is not an answer) by encrypting either the positive or the negative response_code (signing both is invalid).

    "},{"location":"features/0113-question-answer/#messages","title":"Messages","text":"

    All messages in this protocol are part of the \"Question/Answer 1.0\" message family uniquely identified by this DID reference:

    https://didcomm.org/questionanswer/1.0\n

    The protocol begins when the questioner sends a question message to the responder:

    {\n  \"@type\": \"https://didcomm.org/questionanswer/1.0/question\",\n  \"@id\": \"518be002-de8e-456e-b3d5-8fe472477a86\",\n  \"question_text\": \"Alice, are you on the phone with Bob from Faber Bank right now?\",\n  \"question_detail\": \"This is optional fine-print giving context to the question and its various answers.\",\n  \"nonce\": \"<valid_nonce>\",\n  \"signature_required\": true,\n  \"valid_responses\" : [\n    {\"text\": \"Yes, it's me\"},\n    {\"text\": \"No, that's not me!\"}],\n  \"~timing\": {\n    \"expires_time\": \"2018-12-13T17:29:06+0000\"\n  }\n}\n

    The responder receives this message and chooses the answer. If the signature is required then she uses her private pairwise key to sign her response.

    Note: Alice should sign the following: the question, the chosen answer,\nand the nonce: HASH(<question_text>+<answer_text>+<nonce>), this keeps a\nrecord of each part of the transaction.\n

    The response message is then sent using the ~sig message decorator:

    {\n  \"@type\": \"https://didcomm.org/questionanswer/1.0/answer\",\n  \"~thread\": { \"thid\": \"518be002-de8e-456e-b3d5-8fe472477a86\", \"seqnum\": 0 },\n  \"response\": \"Yes, it's me\",\n  \"response~sig\": {\n    \"@type\": \"https://didcomm.org/signature/1.0/ed25519Sha512_single\",\n    \"signature\": \"<digital signature function output>\",\n    \"sig_data\": \"<base64url(HASH(\"Alice, are you on the phone with Bob?\"+\"Yes, it's me\"+\"<nonce>\"))>\",\n    \"signers\": [\"<responder_key>\"]\n  },\n  \"~timing\": {\n    \"out_time\": \"2018-12-13T17:29:34+0000\"\n  }\n}\n

    The questioner then checks the signature against the sig_data.
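
    Following the earlier note, the sig_data value can be sketched as below. This is an assumption-laden illustration: the helper name is hypothetical, and SHA-512 is assumed as the hash to match the ed25519Sha512_single signature type.

    ```python
    import base64
    import hashlib

    def build_sig_data(question_text, answer_text, nonce):
        # sig_data = base64url(HASH(question_text + answer_text + nonce));
        # SHA-512 is assumed here (ed25519Sha512_single).
        payload = (question_text + answer_text + nonce).encode()
        return base64.urlsafe_b64encode(hashlib.sha512(payload).digest()).decode()
    ```

    The questioner recomputes this value from the question it sent and the answer it received, compares it with sig_data in response~sig, and only then verifies the signature against the responder's key.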

    "},{"location":"features/0113-question-answer/#optional-elements","title":"Optional Elements","text":"

    The \"question_detail\" field is optional. It can be used to give \"fine print\"-like context around the question and all of its valid responses. While this could always be displayed, some UIs may choose to only make it available on-demand, in a \"More info...\" kind of way.

    ~timing.expires_time is optional. response~sig is optional when \"signature_required\" is false.

    "},{"location":"features/0113-question-answer/#business-cases-and-auditing","title":"Business cases and auditing","text":"

    In the above scenario, Faber bank can audit the reply and prove that only Alice's pairwise key signed the response (a cryptographic API like Indy-SDK can be used to guarantee the responder's signature). Conversely, Alice can also use her key to prove or dispute the validity of the signature. The cryptographic guarantees central to agent-to-agent communication and digital signatures create a trustworthy protocol for obtaining a committed answer from a pairwise connection. This protocol can be used for approving wire transfers, accepting EULAs, or even selecting an item from a food menu. Of course, as with a real world signature, Alice should be careful about what she signs.

    "},{"location":"features/0113-question-answer/#invalid-replies","title":"Invalid replies","text":"

    The responder may send an invalid, incomplete, or unsigned response. In this case the questioner must decide what to do. As with normal verbal communication, if the response is not understood the question can be asked again, maybe with increased emphasis. Or the questioner may determine the lack of a valid response is a response in and of itself. This depends on the parties involved and the question being asked. For example, in the exchange above, if the question times out or the answer is not \"Yes, it's me\" then Faber would probably choose to discontinue the phone call.

    "},{"location":"features/0113-question-answer/#trust-and-constraints","title":"Trust and Constraints","text":"

    Using already established pairwise relationships allows each side to trust each other. The responder can know who sent the message and the questioner knows that only the responder could have encrypted the response. This response gives a high level of trust to the exchange.

    "},{"location":"features/0113-question-answer/#implementations","title":"Implementations","text":"

    The following lists the implementations (if any) of this RFC. Please do a pull request to add your implementation. If the implementation is open source, include a link to the repo or to the implementation within the repo. Please be consistent in the \"Name\" field so that a mechanical processing of the RFCs can generate a list of all RFCs supported by an Aries implementation.

    Name / Link Implementation Notes Connect.Me Free mobile app from Evernym. Installed via app store on iOS and Android. Verity Commercially licensed enterprise agent, SaaS or on-prem."},{"location":"features/0114-predefined-identities/","title":"Aries RFC 0114: Predefined Identities","text":""},{"location":"features/0114-predefined-identities/#summary","title":"Summary","text":"

Documents some keys, DIDs, and DID Docs that may be useful for testing, simulation, and spec writing. The fake ones are the DIDComm / identity analogs to the reserved domain \"example.com\" that was allocated for testing purposes with DNS and other internet systems -- or to Microsoft's example Contoso database and website used to explore and document web development concepts.

    "},{"location":"features/0114-predefined-identities/#real-identities","title":"Real Identities","text":"

    The following real--NOT fake--identities are worth publicly documenting.

    "},{"location":"features/0114-predefined-identities/#aries-community","title":"Aries community","text":"

    The collective Aries developer community is represented by:

    did:sov:BzCbsNYhMrjHiqZDTUASHg -- verkey = 6zJ9dboyug451A8dtLgsjmjyguQcmq823y7vHP6vT2Eu\n

    This DID is currently allocated, but not actually registered on Sovrin's mainnet. You will see this DID in a number of RFCs, as the basis of a PIURI that identifies a community-defined protocol. You DO NOT have to actually resolve this DID or relate to a Sovrin identity to use Aries or its RFCs; think of this more like the opaque URNs that are sometimes used in XML namespacing. At some point it may be registered, but nothing else in the preceding summary will change.

    The community controls a second DID that is useful for defining message families that are not canonical (e.g., in the sample tic-tac-toe protocol). It is:

    did:sov:SLfEi9esrjzybysFxQZbfq -- verkey = Ep1puxjTDREwEyz91RYzn7arKL2iKQaDEB5kYDUUUwh5\n

    This community may create DIDs for itself from other DID methods, too. If so, we will publish them here.

    "},{"location":"features/0114-predefined-identities/#subgroups","title":"Subgroups","text":"

    The Aries community may create subgroups with their own DIDs. If so, we may publish such information here.

    "},{"location":"features/0114-predefined-identities/#allied-communities","title":"Allied communities","text":"

Other groups such as DIF, the W3C Credentials Community Group, and so forth may wish to define identities and announce their associated DIDs here.

    "},{"location":"features/0114-predefined-identities/#fake-identities","title":"Fake Identities","text":"

    The identity material shown below is not registered anywhere. This is because sometimes our tests or demos are about registering or connecting, and because the identity material is intended to be somewhat independent of a specific blockchain instance. Instead, we define values and give them names, permalinks, and semantics in this RFC. This lets us have a shared understanding of how we expect them to behave in various contexts.

    WARNING: Below you will see some published secrets. By disclosing private keys and/or their seeds, we are compromising the keypairs. This fake identity material is thus NOT trustworthy for anything; the world knows the secrets, and now you do, too. :-) You can test or simulate workflows with these keys. You might use them in debugging and development. But you should never use them as the basis of real trust.

    "},{"location":"features/0114-predefined-identities/#dids","title":"DIDs","text":""},{"location":"features/0114-predefined-identities/#alice-sov-1","title":"alice-sov-1","text":"

    This DID, the alice-sov-1 DID with value did:sov:UrDaZsMUpa91DqU3vrGmoJ, is associated with a very simplistic Indy/Sovrin identity. It has a single keypair (Key 1 below) that it uses for everything. In demos or tests, its genesis DID Doc looks like this:

    {\n    \"@context\": \"https://w3id.org/did/v0.11\",\n    \"id\": \"did:sov:UrDaZsMUpa91DqU3vrGmoJ\",\n    \"service\": [{\n        \"type\": \"did-communication\",\n        \"serviceEndpoint\": \"https://localhost:23456\"\n    }],\n    \"publicKey\": [{\n        \"id\": \"#key-1\",\n        \"type\": \"Ed25519VerificationKey2018\",\n        \"publicKeyBase58\": \"GBMBzuhw7XgSdbNffh8HpoKWEdEN6hU2Q5WqL1KQTG5Z\"\n    }],\n    \"authentication\": [\"#key-1\"]\n}\n
    "},{"location":"features/0114-predefined-identities/#bob-many-1","title":"bob-many-1","text":"

    This DID, the bob-many-1 DID with value did:sov:T9nQQ8CjAhk2oGAgAw1ToF, is associated with a much more flexible, complex identity than alice-sov-1. It places every test keypair except Key 1 in the authentication section of its DID Doc. This means you should be able to talk to Bob using the types of crypto common in many communities, not just Indy/Sovrin. Its genesis DID doc looks like this:

    {\n    \"@context\": \"https://w3id.org/did/v0.11\",\n    \"id\": \"did:sov:T9nQQ8CjAhk2oGAgAw1ToF\",\n    \"service\": [{\n        \"type\": \"did-communication\",\n        \"serviceEndpoint\": \"https://localhost:23457\"\n    }],\n    \"publicKey\": [{\n        \"id\": \"#key-2\",\n        \"type\": \"Ed25519VerificationKey2018\",\n        \"controller\": \"#id\",\n        \"publicKeyBase58\": \"FFhViHkJwqA15ruKmHQUoZYtc5ZkddozN3tSjETrUH9z\"\n      },\n      {\n        \"id\": \"#key-3\",\n        \"type\": \"Secp256k1VerificationKey2018\",\n        \"controller\": \"#id\",\n        \"publicKeyHex\": \"3056301006072a8648ce3d020106052b8104000a03420004a34521c8191d625ff811c82a24a60ff9f174c8b17a7550c11bba35dbf97f3f04392e6a9c6353fd07987e016122157bf56c487865036722e4a978bb6cd8843fa8\"\n      },\n      {\n        \"id\": \"#key-4\",\n        \"type\": \"RsaVerificationKey2018\",\n        \"controller\": \"#id\",\n        \"publicKeyPem\": \"-----BEGIN PUBLIC KEY-----\\r\\nMIGfMA0GCSqGSIb3DQEBAQUAA4GNADCBiQKBgQDlOJu6TyygqxfWT7eLtGDwajtN\\r\\nFOb9I5XRb6khyfD1Yt3YiCgQWMNW649887VGJiGr/L5i2osbl8C9+WJTeucF+S76\\r\\nxFxdU6jE0NQ+Z+zEdhUTooNRaY5nZiu5PgDB0ED/ZKBUSLKL7eibMxZtMlUDHjm4\\r\\ngwQco1KRMDSmXSMkDwIDAQAB\\r\\n-----END PUBLIC KEY-----\"\n      },\n      {\n        \"id\": \"#key-5\",\n        \"type\": \"RsaVerificationKey2018\",\n        \"controller\": \"#id\",\n        \"publicKeyPem\": \"-----BEGIN PUBLIC 
KEY-----\\r\\nMIICIjANBgkqhkiG9w0BAQEFAAOCAg8AMIICCgKCAgEAoZp7md4nkmmFvkoHhQMw\\r\\nN0lcpYeKfeinKir7zYWFLmpClZHawZKLkB52+nnY4w9ZlKhc4Yosrw/N0h1sZlVZ\\r\\nfOQBnzFUQCea6uK/4BKHPhiHpN73uOwu5TAY4BHS7fsXRLPgQFB6o6iy127o2Jfb\\r\\nUVpbNU/rJGxVI2K1BIzkfrXAJ0pkjkdP7OFE6yRLU4ZcATWSIPwGvlF6a0/QPC3B\\r\\nbTvp2+DYPDC4pKWxNF/qOwOnMWqxGq6ookn12N/GufA/Ugv3BTVoy7I7Q9SXty4u\\r\\nUat19OBJVIqBOMgXsyDz0x/C6lhBR2uQ1K06XRa8N4hbfcgkSs+yNBkLfBl7N80Q\\r\\n0Wkq2PHetzQU12dPnz64vvr6s0rpYIo20VtLzhYA8ZxseGc3s7zmY5QWYx3ek7Vu\\r\\nwPv9QQzcmtIQQsUbekPoLnKLt6wJhPIGEr4tPXy8bmbaThRMx4tjyEQYy6d+uD0h\\r\\nXTLSjZ1SccMRqLxoPtTWVNXKY1E84EcS/QkqlY4AthLFBL6r+lnm+DlNaG8LMwCm\\r\\ncz5NMag9ooM9IqgdDYhUpWYDSdOvDubtz1YZ4hjQhaofdC2AkPXRiQvMy/Nx9WjQ\\r\\nn4z387kz5PK5YbadoZYkwtFttmxJ/EQkkhGEDTXoSRTufv+qjXDsmhEsdaNkvcDP\\r\\n1uiCSY19UWe5LQhIMbR0u/0CAwEAAQ==\\r\\n-----END PUBLIC KEY-----\"\n      },\n    ],\n    \"authentication\": [\"#key-2\", \"#key-3\", \"#key-4\", \"#key-5\", \"#key-6\"]\n}\n

    [TODO: define DIDs from other ecosystems that put the same set of keys in their DID docs -- maybe bob-many-2 is a did:eth using these same keys, and bob-many-3 is a did:btc using them...]

    "},{"location":"features/0114-predefined-identities/#keys","title":"Keys","text":""},{"location":"features/0114-predefined-identities/#key-1-ed25519","title":"Key 1 (Ed25519)","text":"

    This key is used by the alice-sov-1 DID, but could also be used with other DIDs defined elsewhere.

    signing key (private)\nGa3v3SyNsvv1QhSCrEAQfJiyxQYUdZzQARkCosSWrXbT\n\nhex seed (private; in a form usable by Indy CLI)\ne756c41c1b5c48d3be0f7b5c7aa576d2709f13b67c9078c7ded047fe87c8a79e\n\nverkey (public)\nGBMBzuhw7XgSdbNffh8HpoKWEdEN6hU2Q5WqL1KQTG5Z\n\nas a Sovrin DID\ndid:sov:UrDaZsMUpa91DqU3vrGmoJ\n
    "},{"location":"features/0114-predefined-identities/#key-2-ed25519","title":"Key 2 (Ed25519)","text":"

    This key is used by the bob-many-1 DID, but could also be used with other DIDs defined elsewhere.

    signing key (private)\nFE2EYN25vcQmCU52MkiHuXHKqR46TwjFU4D4TGaYDRyd\n\nhex seed (private)\nd3598fea152e6a480faa676a76e545de7db9ac1093b9cee90b031d9625f3ce64\n\nverkey (public)\nFFhViHkJwqA15ruKmHQUoZYtc5ZkddozN3tSjETrUH9z\n\nas a Sovrin DID\ndid:sov:T9nQQ8CjAhk2oGAgAw1ToF\n
    "},{"location":"features/0114-predefined-identities/#key-3-secp256k1","title":"Key 3 (Secp256k1)","text":"

    This key is used by the bob-many-1 DID, but could also be used with other DIDs defined elsewhere.

    -----BEGIN EC PRIVATE KEY-----\nMHQCAQEEIMFcUvDujXt0/C48vm1Wfj8ADlrGsHCHzp//2mUARw79oAcGBSuBBAAK\noUQDQgAEo0UhyBkdYl/4EcgqJKYP+fF0yLF6dVDBG7o12/l/PwQ5LmqcY1P9B5h+\nAWEiFXv1bEh4ZQNnIuSpeLts2IQ/qA==\n-----END EC PRIVATE KEY-----\n\n-----BEGIN PUBLIC KEY-----\nMFYwEAYHKoZIzj0CAQYFK4EEAAoDQgAEo0UhyBkdYl/4EcgqJKYP+fF0yLF6dVDB\nG7o12/l/PwQ5LmqcY1P9B5h+AWEiFXv1bEh4ZQNnIuSpeLts2IQ/qA==\n-----END PUBLIC KEY-----\n\npublic key as hex\n3056301006072a8648ce3d020106052b8104000a03420004a34521c8191d625ff811c82a24a60ff9f174c8b17a7550c11bba35dbf97f3f04392e6a9c6353fd07987e016122157bf56c487865036722e4a978bb6cd8843fa8\n
    "},{"location":"features/0114-predefined-identities/#key-4-1024-bit-rsa","title":"Key 4 (1024-bit RSA)","text":"

    This key is used by the bob-many-1 DID, but could also be used with other DIDs defined elsewhere.

    -----BEGIN RSA PRIVATE KEY-----\nMIICXQIBAAKBgQDlOJu6TyygqxfWT7eLtGDwajtNFOb9I5XRb6khyfD1Yt3YiCgQ\nWMNW649887VGJiGr/L5i2osbl8C9+WJTeucF+S76xFxdU6jE0NQ+Z+zEdhUTooNR\naY5nZiu5PgDB0ED/ZKBUSLKL7eibMxZtMlUDHjm4gwQco1KRMDSmXSMkDwIDAQAB\nAoGAfY9LpnuWK5Bs50UVep5c93SJdUi82u7yMx4iHFMc/Z2hfenfYEzu+57fI4fv\nxTQ//5DbzRR/XKb8ulNv6+CHyPF31xk7YOBfkGI8qjLoq06V+FyBfDSwL8KbLyeH\nm7KUZnLNQbk8yGLzB3iYKkRHlmUanQGaNMIJziWOkN+N9dECQQD0ONYRNZeuM8zd\n8XJTSdcIX4a3gy3GGCJxOzv16XHxD03GW6UNLmfPwenKu+cdrQeaqEixrCejXdAF\nz/7+BSMpAkEA8EaSOeP5Xr3ZrbiKzi6TGMwHMvC7HdJxaBJbVRfApFrE0/mPwmP5\nrN7QwjrMY+0+AbXcm8mRQyQ1+IGEembsdwJBAN6az8Rv7QnD/YBvi52POIlRSSIM\nV7SwWvSK4WSMnGb1ZBbhgdg57DXaspcwHsFV7hByQ5BvMtIduHcT14ECfcECQATe\naTgjFnqE/lQ22Rk0eGaYO80cc643BXVGafNfd9fcvwBMnk0iGX0XRsOozVt5Azil\npsLBYuApa66NcVHJpCECQQDTjI2AQhFc1yRnCU/YgDnSpJVm1nASoRUnU8Jfm3Oz\nuku7JUXcVpt08DFSceCEX9unCuMcT72rAQlLpdZir876\n-----END RSA PRIVATE KEY-----\n\n-----BEGIN PUBLIC KEY-----\nMIGfMA0GCSqGSIb3DQEBAQUAA4GNADCBiQKBgQDlOJu6TyygqxfWT7eLtGDwajtN\nFOb9I5XRb6khyfD1Yt3YiCgQWMNW649887VGJiGr/L5i2osbl8C9+WJTeucF+S76\nxFxdU6jE0NQ+Z+zEdhUTooNRaY5nZiu5PgDB0ED/ZKBUSLKL7eibMxZtMlUDHjm4\ngwQco1KRMDSmXSMkDwIDAQAB\n-----END PUBLIC KEY-----\n
    "},{"location":"features/0114-predefined-identities/#key-5-4096-bit-rsa","title":"Key 5 (4096-bit RSA)","text":"

    This key is used by the bob-many-1 DID, but could also be used with other DIDs defined elsewhere.

    -----BEGIN RSA PRIVATE KEY-----\nMIIJKAIBAAKCAgEAoZp7md4nkmmFvkoHhQMwN0lcpYeKfeinKir7zYWFLmpClZHa\nwZKLkB52+nnY4w9ZlKhc4Yosrw/N0h1sZlVZfOQBnzFUQCea6uK/4BKHPhiHpN73\nuOwu5TAY4BHS7fsXRLPgQFB6o6iy127o2JfbUVpbNU/rJGxVI2K1BIzkfrXAJ0pk\njkdP7OFE6yRLU4ZcATWSIPwGvlF6a0/QPC3BbTvp2+DYPDC4pKWxNF/qOwOnMWqx\nGq6ookn12N/GufA/Ugv3BTVoy7I7Q9SXty4uUat19OBJVIqBOMgXsyDz0x/C6lhB\nR2uQ1K06XRa8N4hbfcgkSs+yNBkLfBl7N80Q0Wkq2PHetzQU12dPnz64vvr6s0rp\nYIo20VtLzhYA8ZxseGc3s7zmY5QWYx3ek7VuwPv9QQzcmtIQQsUbekPoLnKLt6wJ\nhPIGEr4tPXy8bmbaThRMx4tjyEQYy6d+uD0hXTLSjZ1SccMRqLxoPtTWVNXKY1E8\n4EcS/QkqlY4AthLFBL6r+lnm+DlNaG8LMwCmcz5NMag9ooM9IqgdDYhUpWYDSdOv\nDubtz1YZ4hjQhaofdC2AkPXRiQvMy/Nx9WjQn4z387kz5PK5YbadoZYkwtFttmxJ\n/EQkkhGEDTXoSRTufv+qjXDsmhEsdaNkvcDP1uiCSY19UWe5LQhIMbR0u/0CAwEA\nAQKCAgBWzqj+ajtPhqd1JEcNyDyqNhoyQLDAGa1SFWzVZZe46xOBTKv5t0KI1BSN\nT86VibVRCW97J8IA97hT2cJU5hv/3mqQnOro2114Nv1i3BER5hNXGP5ws04thryW\nAH0RoQNKwGUBpzl5mDEZUFZ7oncJKEQ+SwPAuQCy1V7vZs+G0RK7CFcjpmLkl81x\nkjl0UIQzkhdA6KCmsxXTdzggW2O/zaM9nXYKPxGwP+EEhVFJChlRjkI8Vv32z0vk\nh7A0ST16UTsL7Tix0rfLI/OrTn9LF5NxStmZNB1d5v30FwtiqXkGcQn/12QhGjxz\nrLbGDdU3p773AMJ1Ac8NhpKN0vXo7NOh9qKEq0KfLy+AD6CIDB9pjZIolajqFOmO\nRENAP9eY/dP7EJNTSU84GJn8csQ4imOIYqp0FkRhigshMbr7bToUos+/OlHYbMry\nr/I8VdMt4xazMK5PtGn9oBzfv/ovNyrQxv562rtx3G996HFF6+kCVC3mBtTHe0p2\nVKNJaXlQSkEyrYAOqhnMvIfIMuuG2+hIuv5LBBdCyv6YC4ER2RsaXHt4ZBfsbPfO\nTEP4YCJTuLc+Fyg1f01EsuboB0JmvzNyiK+lBp8FsxiqwpIExriBCPJgaxoWJMFh\nxrRzTXwBWkJaDhYVbc2bn8TtJE6uEC9m4B7IUQOrXXKyOTqUgQKCAQEAzJl16J3Y\nYjkeJORmvi2J1UbaaBJAeCB7jwXlarwAq8sdxEqdDoRB6cZhWX0VMH46oaUA+Ldx\nCoO2iMgOrs0p6dJOj1ybtIhiX9PJTzstd5WEltU/mov+DzlBiKg78dFi/B5HfE/F\nKIDx4gTcD//sahooMqbg78QfOO+JjLrvT7TljL/puOAM8LTytZqOaDIDwnblpSgZ\nJcCqochmz9b7f7NHbgVrBkXZTsgbH6Dw4H7T0WC4K4P4dJW8Js18r+xN3W8/ZhmY\nlxTDZy40LlUy7++Ha+v8vZ4cRJKq2sdTtt9Z/ZYDfpCDT5ZmGS/gDloGean9mivG\nlt/zgDswEUji9QKCAQEAyjPKsBitJ39S26H/hp5oZRad1MOXgXPpbu8ARmHooKP3\nQ0dtnreBLzIQxxIitp3GjzJFU9r/tqy/ylOhIGAt+340KoSye3gGpvxZImMAIIR9\ns03GE5AHJ4J5NIxQKX+g9o0fV44bVNrLzAnHaZh+Bi4xbLatB
JABgN2TnjA8lx7x\nlrqb99VpKLZP7DGxK7o0Ji4qerMPeIVoJ9RaUkTYguJaXG22nPeKfDiI13xlm1RU\nptulJG3CkRYp48Udmqb1b+67KMOxKL1ISGhuzqitOY+Ua1sM5SEFyukEhMuK6/uM\nSCAVl9aNHU5vx95D/T7onPAnxNqDObWeZi2HWoif6QKCAQEAxC4BmOKBMO2Dsewv\nd/tCRnaBxXh6yLScxS7qI8XQ/ujryeOhZOH8MaQ+hAgj4TOoFIaav+FlSqewxsbN\nDV876S/2lBBAXILJkQkJ5ibgGeIMGHSxYAcLvJ0x8U8e62fSedyuvsveSFAbnpT6\nTX0fuz0Jfkf1NvHe3kEQqxgzj0HtOWBrQxHSVpuqfeeM1OvgHv7Sg+JG+qQa+LWn\nn3KMBI5q11vqm0EudRP6rgEr9pallAYhkdggy+knWC2AeU8j+kdJiyTP403Nb4om\nDqczCE2slBbbaRXKFRZtLQojgx32s+i7wQfgYNfdXhlBxYEc5FvTB5kh+lkSqsoV\n9PzmYQKCAQBrQHGAWnZt/uEqUpFBDID/LbHmCyEvrxXgm7EfpAtKOe6LpzWD/H3v\nVLUFgp8bEjEh/146jm0YriTE4vsSOzHothZhfyVUzGNq62s0DCMjHGO4WcZ41eqV\nkGVN9CcI/AObA1veiygAKFX1EjLN1e7yxEm/Cl5XjzLc8aq9O4TH+8fVVYIpQO+Y\ngqt98xWwxgGnRtGNZ7ELEmgeyEpoXNAjDIE1iZRVShAQt8QN2JPkgiSspNDBs96C\nKqlpgUKkp26EQrLPeo1buJrAnXQ49ct8PqZRE2iRmKSD7nlRHs2/Qhw0naAWe905\n8ELmVwTlLRshM1lE10rHr4gnVnr3EIURAoIBAFXLQXV9CuLoV9nosprVYbhSWLMj\nO9ChjgGfCmqi3gQecJxctwNlo3l8f5W2ZBrIqgWFsrxzHd2Ll4k2k/IcFa4jtz9+\nPrSGZz8TEkM5ERSwDd1QXNE/P7AV6EDs/W/V0T5G1RE82YGkf0PNM+drJ/r/I4HS\nN0DDlZb8YwjkP1tT8x3I+vx9bLWczbsMhrwIEUPQJZxMSdZ+DMM45TwAXyp9aLzU\npa9CdL1gAtSLA7AmcafGeUIA7N1evRYuUVWhhSRjPX55hGBoO0u9fxZIPRTf0dcK\nHHK05KthUPh7W5TXSPbni/GyuNg3H7kavT7ANHOwI77CfaKFgxLrZan+sAk=\n-----END RSA PRIVATE KEY-----\n\n-----BEGIN PUBLIC 
KEY-----\nMIICIjANBgkqhkiG9w0BAQEFAAOCAg8AMIICCgKCAgEAoZp7md4nkmmFvkoHhQMw\nN0lcpYeKfeinKir7zYWFLmpClZHawZKLkB52+nnY4w9ZlKhc4Yosrw/N0h1sZlVZ\nfOQBnzFUQCea6uK/4BKHPhiHpN73uOwu5TAY4BHS7fsXRLPgQFB6o6iy127o2Jfb\nUVpbNU/rJGxVI2K1BIzkfrXAJ0pkjkdP7OFE6yRLU4ZcATWSIPwGvlF6a0/QPC3B\nbTvp2+DYPDC4pKWxNF/qOwOnMWqxGq6ookn12N/GufA/Ugv3BTVoy7I7Q9SXty4u\nUat19OBJVIqBOMgXsyDz0x/C6lhBR2uQ1K06XRa8N4hbfcgkSs+yNBkLfBl7N80Q\n0Wkq2PHetzQU12dPnz64vvr6s0rpYIo20VtLzhYA8ZxseGc3s7zmY5QWYx3ek7Vu\nwPv9QQzcmtIQQsUbekPoLnKLt6wJhPIGEr4tPXy8bmbaThRMx4tjyEQYy6d+uD0h\nXTLSjZ1SccMRqLxoPtTWVNXKY1E84EcS/QkqlY4AthLFBL6r+lnm+DlNaG8LMwCm\ncz5NMag9ooM9IqgdDYhUpWYDSdOvDubtz1YZ4hjQhaofdC2AkPXRiQvMy/Nx9WjQ\nn4z387kz5PK5YbadoZYkwtFttmxJ/EQkkhGEDTXoSRTufv+qjXDsmhEsdaNkvcDP\n1uiCSY19UWe5LQhIMbR0u/0CAwEAAQ==\n-----END PUBLIC KEY-----\n
    "},{"location":"features/0114-predefined-identities/#key-6-ed25519","title":"Key 6 (Ed25519)","text":"

    This key is used by the bob-many-1 DID, but could also be used with other DIDs defined elsewhere.

    signing key (private)\n9dTU6xawVQJprz7zYGCiTJCGjHdW5EcZduzRU4z69p64\n\nhex seed (private; in a form usable by Indy CLI)\n803454c9429467530b17e8e571df5442b6620ac06ab0172d943ab9e01f6d4e31\n\nverkey (public)\n4zZJaPg26FYcLZmqm99K2dz99agHd5rkhuYGCcKntAZ4\n\nas a Sovrin DID\ndid:sov:8KrDpiKkHsFyDm3ZM36Rwm\n
    "},{"location":"features/0114-predefined-identities/#tools-to-generate-your-own-identity-material","title":"Tools to generate your own identity material","text":""},{"location":"features/0114-predefined-identities/#implementations","title":"Implementations","text":"

    The following lists the implementations (if any) of this RFC. Please do a pull request to add your implementation. If the implementation is open source, include a link to the repo or to the implementation within the repo. Please be consistent in the \"Name\" field so that a mechanical processing of the RFCs can generate a list of all RFCs supported by an Aries implementation.

    Name / Link Implementation Notes"},{"location":"features/0116-evidence-exchange/","title":"Aries RFC 0116: Evidence Exchange Protocol 0.9","text":""},{"location":"features/0116-evidence-exchange/#summary","title":"Summary","text":"

The goal of this protocol is to allow Holders to provide an inquiring Verifier with a secure and trusted mechanism for obtaining access to the foundational evidence that gave the Issuer the assurances necessary to create the Verifiable Credential(s) that the Holder has presented to the Verifier. To this end, a P2P evidence exchange protocol is required that will allow parties using Pair-wise Peer DIDs to exchange evidence in support of the issuance of Verified Credentials without any dependencies on a centralized storage facility.

    "},{"location":"features/0116-evidence-exchange/#motivation","title":"Motivation","text":"

During the identity verification process, an entity may require access to the genesis documents used to establish digital credentials issued by a credential issuing entity or Credential Service Provider (CSP). In support of the transition from existing business verification processes to emerging business processes that rely on digitally verified credentials using protocols such as 0036-issue-credential and 0037-present-proof, we need to establish a protocol that allows entities to make this transition while remaining compliant with business and regulatory requirements. Therefore, we need a mechanism for Verifiers to obtain access to vetted evidence (physical or digital information or documentation) without requiring a relationship or interaction with the Issuer.

    While this protocol should be supported by all persona, its relevance to decentralized identity ecosystems is highly dependent on the business policies of a market segment of Verifiers. For more details see the Persona section.

While technology advancements around identity verification are improving, business policies (most often grounded in risk mitigation) will not change at the same rate of speed. For example, just because a financial institution in Singapore is willing to rely on the KYC due-diligence processing of another institution, we should not assume that the banks in another geolocation (e.g., Hong Kong) can embrace the same level of trust. For this reason, we must give Verifiers the option to obtain evidence that backs any assertions made by digital credential issuers.

    Based on a web-of-trust and cryptographic processing techniques, Verifiers of digital credentials can fulfill their identity proofing workflow requirements. However, business policies and regulatory compliance may require them to have evidence for oversight activities such as but not limited to government mandated Anti-Money Laundering (AML) Compliance audits.

    Verifiers or relying parties (RPs) of digital credentials need to make informed decisions about the risk of accepting a digital identity before trusting the digital credential and granting associated privileges. To mitigate such risk, the Verifier may need to understand the strength of the identity proofing process. According to a December 2015 - NIST Information Technology Laboratory Workshop Report, Measuring Strength of Identity Proofing, there are two (2) identity proofing methods that can be leveraged by a CSP:

    Proofing Method Description In-Person Identity Proofing Holder is required to present themselves and their documentation directly to a trained representative of an identity proofing agency. Remote Identity Proofing Holder is not expected to present themselves or their documents at a physical location. Validation and verification of presented data (including digital documents) is performed programmatically against one or more corroborating authoritative sources of data.

    If the In-Person Identity Proofing method is used, the strength can easily be determined by allowing the Verifier to gain access to any Original Documents used by the Issuer of a Derived Credential. In the situation where a Remote Identity Proofing method is used, confidence in the strength of the identity proofing process can be determined by allowing the Verifier to gain access to Digital Assertions used by the Issuer of a Derived Credential.

    "},{"location":"features/0116-evidence-exchange/#problem-scope","title":"Problem Scope","text":"

    This protocol is intended to address the following challenging questions:

    1. What evidence (information or documentation) was used to establish the level of certitude necessary to allow an Issuer to issue a Verifiable Credential?

2. For each Identity Proofing Inquiry (challenge) such as Address, Identity, Photo and Achievement, which forms of evidence were used by the Issuer of the Verifiable Credential?

3. When the Issuer's Examiner relies on an Identity Proofing Service Provider (IPSP) as part of its Remote Identity Proofing process:

    4. Can the IPSP provide a Digital Assertion in association with the Identity Instrument they have vetted as part of their service to the Examiner?

    5. Can the Issuer provide a Digital Assertion in association with its certitude in the reliability of its due-diligence activity that is dependent on 3rd parties?

    6. When the Issuer relies on trained examiners for its In-Person Identity Proofing process, can the Issuer provide access to the digitally scanned documents either by-value or by-reference?

    "},{"location":"features/0116-evidence-exchange/#assurance-levels","title":"Assurance Levels","text":"

    Organizations that implement Identity Proofing generally seek to balance cost, convenience, and security for both the Issuer and the Holder. Examples of these tradeoffs include:

    To mitigate the risk associated with such tradeoffs, the NIST 800-63A Digital Identity Guidelines outline three (3) levels of identity proofing assurance. These levels describe the degree of due-diligence performed during an Identity Proofing Process. See Section 5.2 Identity Assurance Levels Table 5-1.

    Users of this protocol will need to understand the type of evidence that was collected and how it was confirmed so that they can adhere to any business processes that require IAL2 or IAL3 assurance levels supported by Strong or Superior forms of evidence.

    "},{"location":"features/0116-evidence-exchange/#dematerialization-of-physical-documents","title":"Dematerialization of physical documents","text":"

    Today, entities (businesses, organizations, government agencies) maintain existing processes for the gathering, examination and archiving of physical documents. These entities may retain a copy of a physical document, a scanned digital copy or both. Using manual or automated procedures, the information encapsulated within these documents is extracted and stored as personal data attestations about the document presenter within a system of record (SOR).

    As decentralized identity technologies begin to be adopted, these entities can transform these attestations into Verifiable Credentials.

    "},{"location":"features/0116-evidence-exchange/#understanding-kyc","title":"Understanding KYC","text":"

    Know Your Customer (KYC) is a process by which entities (business, governments, organizations) obtain information about the identity and address of their customers. This process helps to ensure that the services that the entity provides are not misused. KYC procedures vary based on geolocation and industry. For example, the KYC documents required to open a bank account in India versus the USA may differ but the basic intent of demonstrating proof of identity and address are similar. Additionally, the KYC documents necessary to meet business processing requirements for enrollment in a university may differ from that of onboarding a new employee.

    Regardless of the type of KYC processing performed by an entity, there may be regulatory or business best practice requirements that mandate access to any Original Documents presented as evidence during the KYC process. As entities transition from paper/plastic based identity proofing practices to Verifiable Credentials there may exist (albeit only for a transitional period) the need to gain access to the Identity Evidence that an Issuer examined before issuing credentials.

This process is time-consuming and costly for the credential Issuer and often redundant and inconvenient for the Holder. Some industry attempts have been made to establish centrally controlled B2B sharing schemas to help reduce such impediments to the Issuer. These approaches are typically viewed as vital for the betterment of the Issuers and Verifiers and are not designed for or motivated by the data privacy concerns of the Holder. The purpose of this protocol is to place the Holder at the center of the P2P C2B exchange of Identity Evidence while allowing Verifiers to gain confidence in identity proofing assurance levels.

    "},{"location":"features/0116-evidence-exchange/#evidence-vetting-workflow","title":"Evidence Vetting Workflow","text":"

The Verifiable Credentials Specification describes three key stakeholders in an ecosystem that manages digital credentials: Issuers, Holders and Verifiers. However, before an Issuer can attest to claims about a Holder, an Examiner must perform the required vetting, due diligence, regulatory compliance and other tasks needed to establish confidence in making a claim about an identity trait associated with a Holder. The actions of the Examiner may include physical validation of information (e.g., comparing a real person to a photo) as well as reliance on third-party services as part of its vetting process. Depending on the situational context of a credential request or the type of privileges to be granted, the complexity of the vetting process taken by an examiner to confirm the truth about a specific trait may vary.

An identity Holder may present Identity Evidence in the form of a physical document or other forms of Identity Instruments to an Examiner to resolve Identity Proofing Inquiries. The presentment of these types of evidence may come in a variety of formats:

    "},{"location":"features/0116-evidence-exchange/#evidence-access-matrix","title":"Evidence Access Matrix","text":"

    Note: Assumption herein is that original documents are never forfeited by an individual.

    Original Source Format Issuer Archived Format Verifier Business Process Format Protocol Requirement Paper/Plastic Paper-Copy n/a n/a Paper/Plastic Digital Copy Digital Copy Access by Value Paper/Plastic Digital Copy URL Access by Reference Digital Copy Digital Copy Digital Copy Access by Value Digital Copy Digital Copy URL Access by Reference Digital Scan Digital Copy Digital Copy Digital Assertion URL Digital Copy Digital Copy Access by Value URL Digital Copy URL Access by Reference"},{"location":"features/0116-evidence-exchange/#why-a-peer-did-protocol","title":"Why a Peer DID Protocol?","text":"

    In a decentralized identity ecosystem where peer relationships no longer depend on a centralized authority for the source of truth, why should a Verifier refer to some 3rd party or back to the issuing institution for capturing Identity Evidence?

    "},{"location":"features/0116-evidence-exchange/#solution-concepts","title":"Solution Concepts","text":""},{"location":"features/0116-evidence-exchange/#protocol-assumptions","title":"Protocol Assumptions","text":"
1. Holder must present access to Identity Evidence to the Verifier such that the Verifier can be assured that the Issuer vetted the evidence.
2. Some business processes and/or regulatory compliance requirements may demand that a Verifier gains access to the Original Documents vetted by a credential Issuer.
3. Some Issuers may accept digital access links to documents as input into the vetting process. This is often associated with Issuers who will accept copies of the Original Documents.
4. Some Issuers may accept Digital Assertions from IPSPs as evidence of their due-diligence process. Examples of such IPSPs are: Acuant, Au10tix, IWS, Onfido and 1Kosmos.
    "},{"location":"features/0116-evidence-exchange/#protocol-objectives","title":"Protocol Objectives","text":"

    In order for a Verifier to avoid or reduce evidence vetting expenses it must be able to:

    This implies that the protocol must address the following evidence concerns:

    Interaction Type Challenge Protocol Approach Examiner-to-Holder How does Issuer provide Holder with proof that it has vetted Identity Evidence? Issuer signs hash of the evidence and presents signature to Holder. Holder-to-Verifier How does Holder present Verifier with evidence that the Issuer of a Credential vetted Identity Evidence? Holder presents verifier with digitally signed hash of evidence, public DID of Issuer and access to a copy of the digital evidence. Verifier-to-FileStorageProvider How does Verifier access the evidence in digital format (base64)? Issuer or Holder must provide secure access to a digital copy of the document. Verifier-to-Verifier How does Verifier validate that Issuer attests to the vetting of the Identity Evidence for personal data claims encapsulated in issued credentials? Verifier gains access to the digital evidence, fetches the public key associated with the Issuer's DID and validates Issuer's signature of document hash."},{"location":"features/0116-evidence-exchange/#protocol-outcome","title":"Protocol Outcome","text":"
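The Holder-to-Verifier and Verifier-to-Verifier rows above reduce to a hash-then-verify check over the digital evidence. A minimal stdlib sketch (function names are illustrative, and the signature validation is stubbed with a boolean; a real agent would verify an Ed25519 signature over the digest against the verkey resolved from the Issuer's public DID):

```python
import hashlib

def attest_evidence(evidence):
    # Issuer side: hash the digital evidence; the Issuer then signs this
    # digest and hands the signature and digest to the Holder.
    return hashlib.sha256(evidence).hexdigest()

def verifier_accepts(evidence_copy, attested_digest, issuer_signature_valid):
    # Verifier side: recompute the digest of the copy obtained by value or
    # by reference, and require both an untampered copy and a valid Issuer
    # signature over the digest. `issuer_signature_valid` stands in for the
    # Ed25519 check, which needs a crypto library and is out of scope here.
    digest_matches = hashlib.sha256(evidence_copy).hexdigest() == attested_digest
    return issuer_signature_valid and digest_matches

doc = b'scanned copy of a utility bill'
digest = attest_evidence(doc)
assert verifier_accepts(doc, digest, issuer_signature_valid=True)
assert not verifier_accepts(doc + b'!', digest, issuer_signature_valid=True)  # tampered copy
assert not verifier_accepts(doc, digest, issuer_signature_valid=False)        # bad signature
```

The digest comparison covers Access by Value and Access by Reference alike: however the Verifier obtains the bytes, any alteration of the copy breaks the match against the Issuer-attested digest.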

This protocol is intended to be a complement to the foundational (issuance, verification) protocols for credential lifecycle management in support of the Verifiable Credentials Specification. Over time, it is assumed that the exchange of Identity Evidence will no longer be necessary as digital credentials become ubiquitous. In the meantime, the trust in and access to Identity Evidence can be achieved in private peer-to-peer relationships using the Peer DID Spec.

    "},{"location":"features/0116-evidence-exchange/#persona","title":"Persona","text":"

    This protocol addresses the business policy needs of a market segment of Verifiers. Agent Software used by the following persona is required to support this market segment.

Persona Applicability Examiner Entities that perform In-Person and/or Remote Identity Proofing processes and need to support potential requests for evidence in support of the issuance of Verifiable Credentials based on the results of such processes. Issuer Entities with the certitude to share with a Holder supporting evidence for the due-diligence performed in association with attestations backing an issued Verifiable Credential. Holder A recipient of a Verifiable Credential, that desires to proactively gather supporting evidence of such a credential in case a Verifier should inquire. Verifier Entities that require access to Original Documents or Digital Assertions because they cannot (for business policy reasons) rely on the identity proofing due-diligence of others. These entities may refer to a Trust Score based on their own business heuristics associated with the type of evidence supplied: Original Documents, Digital Assertions."},{"location":"features/0116-evidence-exchange/#user-stories","title":"User Stories","text":"

    An example of the applicability of this protocol to real world user scenarios is discussed in the context of a decentralized digital notary where the credential issuing institution is not the issuer of the original source document(s) or digital assertions.

    "},{"location":"features/0116-evidence-exchange/#evidence-types","title":"Evidence Types","text":"

    In the context of this protocol, Identity Evidence represents physical or digital information-based artifacts that support a response to common Identity Proofing Inquiries (challenges):

    The following non-exhaustive list of physical information-based artifacts (documents) is used as evidence when confronted with common identity-related inquiries. They are often accompanied by a recent photograph. Since this protocol is intended to be agnostic of business and regulatory processes, the types of acceptable documents will vary.

    Proof Type Sample Documents Address Passport, Voter\u2019s Identity Card, Utility Bill (Gas, Electric, Telephone/Mobile), Bank Account Statement, Letter from any recognized public authority or public servant, Credit Card Statement, House Purchase deed, Lease agreement along with last 3 months rent receipt, Employer\u2019s certificate for residence proof Identity Passport, PAN Card, Voter\u2019s Identity Card, Driving License, Photo identity proof of Central or State government, Ration card with photograph, Letter from a recognized public authority or public servant, Bank Pass Book bearing photograph, Employee identity card of a listed company or public sector company, Identity card of University or board of education Photo Passport, Pistol Permit, Photo identity proof of Central or State government Achievement Diploma, Certificate Privilege Membership/Loyalty Card, Health Insurance Card

    These forms of Identity Evidence are examples of trusted credentials that an Examiner relies on during their vetting process.

    "},{"location":"features/0116-evidence-exchange/#tutorial","title":"Tutorial","text":"

    The evidence exchange protocol builds on the attachment decorator within DIDComm using the Inlining Method for Digital Assertions and the Appending Method for Original Documents.

    The protocol is comprised of the following messages and associated actions:

    Interaction Type Message Process Actions Holder to Issuer Request Evidence Holder reviews the list of credentials it has received from the Issuer and sends an evidence_request message to Issuer's agent. Issuer to Holder Evidence Response Issuer collects Identity Evidence associated with each requested credential ID and sends an evidence_response message to Holder's agent. Upon receipt, the Holder stores evidence data in Wallet. Verifier to Holder Evidence Access Request Verifier builds and sends an evidence_access_request message to Holder's agent. Holder to Verifier Evidence Access Response Holder builds and sends an evidence_access_response message to the Verifier's agent. Verifier fetches requested Identity Evidence and performs digital signature validation on each. Verifier stores evidence in system of record.

    "},{"location":"features/0116-evidence-exchange/#request-evidence-message","title":"Request Evidence Message","text":"

    This message should be used as an accompaniment to an issue credential message. Upon receipt and storage of a credential the Holder should compose an evidence_request for each credential received from the Issuer. The Holder may use this message to get an update for new and existing credentials from the Issuer.

    {\n  \"@type\": \"https://didcomm.org/evidence_exchange/1.0/evidence_request\",\n  \"@id\": \"6a4986dd-f50e-4ed5-a389-718e61517207\",\n  \"for\": \"did:peer:1-F1220479cbc07c3f991725836a3aa2a581ca2029198aa420b9d99bc0e131d9f3e2cbe\",\n  \"as_of_time\": \"2019-07-23 18:05:06.123Z\",\n  \"credentials\": [\"cred-001\", \"cred-002\"],\n  \"request-type\": \"by-value\"\n}\n

    Description of attributes:
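As an illustration, the request above can be assembled programmatically. This is a minimal Python sketch; the helper name build_evidence_request and its parameters are hypothetical, not part of the RFC.

```python
import json
import uuid
from datetime import datetime, timezone

def build_evidence_request(holder_did, credential_ids, request_type="by-value"):
    # Assemble the evidence_request fields shown in the example above.
    return {
        "@type": "https://didcomm.org/evidence_exchange/1.0/evidence_request",
        "@id": str(uuid.uuid4()),
        "for": holder_did,
        # Timestamp format mirrors the example; trimmed to milliseconds.
        "as_of_time": datetime.now(timezone.utc).strftime("%Y-%m-%d %H:%M:%S.%f")[:-3] + "Z",
        "credentials": list(credential_ids),
        "request-type": request_type,
    }

msg = build_evidence_request(
    "did:peer:1-F1220479cbc07c3f991725836a3aa2a581ca2029198aa420b9d99bc0e131d9f3e2cbe",
    ["cred-001", "cred-002"],
)
print(json.dumps(msg, indent=2))
```

The Holder would then wrap this payload in a DIDComm envelope and send it over the existing pairwise connection.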

    "},{"location":"features/0116-evidence-exchange/#evidence-response-message","title":"Evidence Response Message","text":"

    This message is required for an Issuer Agent in response to an evidence_request message. The format of the ~attach attribute will be determined by the value of the request_type attribute in the associated request message from the Holder. If the Issuer relied on one or more IPSPs during the Identity Proofing Process, then this message will also include an inline attachment using the examiner_assertions attribute.

    {\n  \"@type\": \"https://didcomm.org/evidence_exchange/1.0/evidence_response\",\n  \"@id\": \"1517207d-f50e-4ed5-a389-6a4986d718e6\",\n  \"~thread\": { \"thid\": \"6a4986dd-f50e-4ed5-a389-718e61517207\" },\n  \"for\": \"did:peer:1-F1220479cbc07c3f991725836a3aa2a581ca2029198aa420b9d99bc0e131d9f3e2cbe\",\n  \"as_of_time\": \"2019-07-23 18:05:06.123Z\",\n  \"credentials\": [\n    { \"@id\": \"cred-001\",\n      \"evidence\": [\n        {\"evidence_type\": \"Address\", \"evidence_ref\": [\"#kycdoc1\", \"#kycdoc4\"]},\n        {\"evidence_type\": \"Identity\", \"evidence_ref\": [\"#kycdoc2\"]},\n        {\"evidence_type\": \"Photo\", \"evidence_ref\": null}\n      ]\n    },\n    { \"@id\": \"cred-002\",\n      \"evidence\": [\n        {\"evidence_type\": \"Address\", \"evidence_ref\": [\"#kycdoc1\",\"#kycdoc3\"]},\n        {\"evidence_type\": \"Identity\", \"evidence_ref\": [\"#kycdoc3\"]},\n        {\"evidence_type\": \"Photo\", \"evidence_ref\": [\"#kycdoc1\"]}\n      ]\n    }\n  ],\n  \"examiner_assertions\": [ ... ],\n  \"~attach\": [ ... ]\n}\n

    Description of attributes:
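To illustrate the Issuer's side, the sketch below shows how the credentials array of an evidence_response might be assembled from a stored evidence index. EVIDENCE_INDEX and build_credentials_block are hypothetical names; the field values match the example above.

```python
# Hypothetical issuer-side store mapping credential IDs to vetted evidence
# references, grouped by evidence type. A null reference mirrors the
# "evidence_ref": null case in the example message.
EVIDENCE_INDEX = {
    "cred-001": [
        ("Address", ["#kycdoc1", "#kycdoc4"]),
        ("Identity", ["#kycdoc2"]),
        ("Photo", None),
    ],
}

def build_credentials_block(requested_ids):
    # Produce the "credentials" array of an evidence_response message.
    return [
        {
            "@id": cred_id,
            "evidence": [
                {"evidence_type": etype, "evidence_ref": refs}
                for etype, refs in EVIDENCE_INDEX.get(cred_id, [])
            ],
        }
        for cred_id in requested_ids
    ]

block = build_credentials_block(["cred-001"])
```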

    "},{"location":"features/0116-evidence-exchange/#examiner-assertions","title":"Examiner Assertions","text":"

    {\n  \"examiner_assertions\": [\n    {\n      \"@id\": \"kycdoc4\",\n      \"approval_timestamp\": \"2017-06-21 09:04:088\",\n      \"description\": \"driver's license\",\n      \"vetting_process\": {\n        \"method\": \"remote\",\n        \"technology\": \"api\"\n      },\n      \"ipsp_did\": \"~3d5nh7900fn4\",\n      \"ipsp_claim\": <base64url(file)>,\n      \"ipsp_claim_sig\": \"3vvvb68b53d5nh7900fn499040cd9e89fg3kkh0f099c0021233728cf67945faf\",\n      \"examinerSignature\": \"f67945faf9e89fg3kkh3vvvb68b53d5nh7900fn499040cd3728c0f099c002123\"\n    }\n  ]\n}\n
    Description of attributes:
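The RFC does not fix a signature suite for examinerSignature or ipsp_claim_sig, so the sketch below shows only the digest step a verifying party would need: decoding the base64url-encoded ipsp_claim and recomputing its SHA-256 hash. claim_digest is a hypothetical helper.

```python
import base64
import hashlib

def claim_digest(ipsp_claim_b64url: str) -> str:
    # Re-add any stripped base64url padding, decode the claim bytes,
    # and recompute the SHA-256 digest over them.
    padded = ipsp_claim_b64url + "=" * (-len(ipsp_claim_b64url) % 4)
    return hashlib.sha256(base64.urlsafe_b64decode(padded)).hexdigest()

digest = claim_digest(base64.urlsafe_b64encode(b"example claim bytes").decode())
```

Actual signature verification would check ipsp_claim_sig against this digest using the IPSP's public key, resolved via ipsp_did.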

    "},{"location":"features/0116-evidence-exchange/#by-value-attachments","title":"By-value Attachments","text":"
    {\n  \"~attach\": [\n    {\n      \"@id\": \"kycdoc1\",\n      \"mime-type\": \"image/png\",\n      \"filename\": \"nys_dl.png\",\n      \"lastmod_time\": \"2017-06-21 09:04:088\",\n      \"description\": \"driver's license\",\n      \"vetting_process\": {\n        \"method\": \"in-person\",\n        \"technology\": \"barcode\"\n      },\n      \"data\": {\n        \"base64\": <base64url(file)>\n      },\n      \"examinerSignature\": \"f67945faf9e89fg3kkh3vvvb68b53d5nh7900fn499040cd3728c0f099c002123\"\n    },\n    {\n      \"@id\": \"kycdoc2\",\n      \"mime-type\": \"application/pdf\",\n      \"filename\": \"con_ed.pdf\",\n      \"lastmod_time\": \"2017-11-18 10:44:068\",\n      \"description\": \"ACME Electric Utility Bill\",\n      \"vetting_process\": {\n        \"method\": \"in-person\",\n        \"technology\": \"human-visual\"\n      },\n      \"data\": {\n        \"base64\": <base64url(file)>\n      },\n      \"examinerSignature\": \"945faf9e8999040cd3728c0f099c002123f67fg3kkh3vvvb68b53d5nh7900fn4\"\n    },\n    {\n      \"@id\": \"kycdoc3\",\n      \"mime-type\": \"image/jpg\",\n      \"filename\": \"nysccp.jpg\",\n      \"lastmod_time\": \"2015-03-19 14:35:062\",\n      \"description\": \"State Concealed Carry Permit\",\n      \"vetting_process\": {\n        \"method\": \"in-person\",\n        \"technology\": \"barcode\"\n      },\n      \"data\": {\n        \"sha256\": \"1d9eb668b53d99c002123f1ffa4db0cd3728c0f0945faf525c5ee4a2d4289904\",\n        \"base64\": <base64url(file)>\n      },\n      \"examinerSignature\": \"5nh7900fn499040cd3728c0f0945faf9e89kkh3vvvb68b53d99c002123f67fg3\"\n    }\n  ]\n}\n

    This message adheres to the attachment content formats outlined in the Aries Attachments RFC with the following additions:
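A by-value attachment embeds the document bytes directly in data.base64. A minimal Python sketch, with by_value_attachment as a hypothetical helper and a placeholder signature value:

```python
import base64

def by_value_attachment(doc_id, mime_type, filename, content, description,
                        vetting_process, examiner_signature):
    # Embed the raw document bytes directly in data.base64, mirroring the
    # ~attach entries in the example above.
    return {
        "@id": doc_id,
        "mime-type": mime_type,
        "filename": filename,
        "description": description,
        "vetting_process": vetting_process,
        "data": {"base64": base64.urlsafe_b64encode(content).decode()},
        "examinerSignature": examiner_signature,
    }

attachment = by_value_attachment(
    "kycdoc1", "image/png", "nys_dl.png", b"<png bytes>",
    "driver's license", {"method": "in-person", "technology": "barcode"},
    "placeholder-signature",  # hypothetical value; scheme not fixed by the RFC
)
```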

    "},{"location":"features/0116-evidence-exchange/#by-reference-attachments","title":"By-reference Attachments","text":"
    {\n  \"~attach\": [\n    {\n      \"@id\": \"kycdoc1\",\n      \"mime-type\": \"image/png\",\n      \"filename\": \"nys_dl.png\",\n      \"lastmod_time\": \"2017-06-21 09:04:088\",\n      \"description\": \"driver's license\",\n      \"vetting_process\": {\n        \"method\": \"in-person\",\n        \"technology\": \"barcode\"\n      },\n      \"data\": {\n        \"sha256\": \"1d9eb668b53d99c002123f1ffa4db0cd3728c0f0945faf525c5ee4a2d4289904\",\n        \"links\": [\n          { \"url\": \"https://www.dropbox.com/s/r8rjizriaHw8T79hlidyAfe4DbWFcJYocef5/myDL.png\",\n            \"accesscode\": \"some_secret\"\n          }\n        ]\n      },\n      \"examinerSignature\": \"f67945faf9e89fg3kkh3vvvb68b53d5nh7900fn499040cd3728c0f099c002123\"\n    },\n    {\n      \"@id\": \"kycdoc2\",\n      \"mime-type\": \"application/pdf\",\n      \"filename\": \"con_ed.pdf\",\n      \"lastmod_time\": \"2017-11-18 10:44:068\",\n      \"description\": \"ACME Electric Utility Bill\",\n      \"vetting_process\": {\n        \"method\": \"remote\",\n        \"technology\": \"api\"\n      },\n      \"data\": {\n        \"sha256\": \"1d4db525c5ee4a2d42899040cd3728c0f0945faf9eb668b53d99c002123f1ffa\",\n        \"links\": [\n          { \"url\": \"https://mySSIAgent.com/w8T7AfkeyJYo4DbWFcmyocef5eyH\",\n            \"accesscode\": \"some_secret\"\n          }\n        ]\n      },\n      \"examinerSignature\": \"945faf9e8999040cd3728c0f099c002123f67fg3kkh3vvvb68b53d5nh7900fn4\"\n    },\n    {\n      \"@id\": \"kycdoc3\",\n      \"mime-type\": \"image/jpg\",\n      \"filename\": \"nysccp.jpg\",\n      \"lastmod_time\": \"2015-03-19 14:35:062\",\n      \"description\": \"State Concealed Carry Permit\",\n      \"vetting_process\": {\n        \"method\": \"in-person\",\n        \"technology\": \"barcode\"\n      },\n      \"data\": {\n        \"sha256\": \"b53d99c002123f1ffa2d42899040cd3728c0f0945fa1d4db525c5ee4af9eb668\",\n        \"links\": [\n          { \"url\": \"https://myssiAgent.com/mykeyoyHw8T7Afe4DbWFcJYocef5\",\n            \"accesscode\": null\n          }\n        ]\n      },\n      \"examinerSignature\": \"5nh7900fn499040cd3728c0f0945faf9e89kkh3vvvb68b53d99c002123f67fg3\"\n    }\n  ]\n}\n

    This message adheres to the attribute content formats outlined in the Aries Attachments RFC and builds on the By Value Attachments with the following additions:
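With by-reference delivery, a party that fetches a linked document should confirm its digest matches the declared data.sha256 before trusting it. A minimal sketch; matches_declared_hash is a hypothetical helper.

```python
import hashlib

def matches_declared_hash(fetched_bytes: bytes, declared_sha256: str) -> bool:
    # Compare the digest of the fetched document against the attachment's
    # declared data.sha256 value.
    return hashlib.sha256(fetched_bytes).hexdigest() == declared_sha256

doc = b"example document bytes"
declared = hashlib.sha256(doc).hexdigest()
ok = matches_declared_hash(doc, declared)            # original bytes pass
tampered = matches_declared_hash(b"tampered bytes", declared)  # altered bytes fail
```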

    Upon completion of the Evidence Request and Response exchange, the Holder's Agent is now able to present any Verifier that has accepted a specific Issuer credential with the supporting evidence from the Issuer. This evidence, depending on the Holder's preferences, may be delivered directly or via a link to an external resource. For example, regardless of the delivery method used between the Issuer and Holder, the Holder's Agent may decide to fetch all documents and store them itself and then provide Verifiers with by-reference access upon request.

    "},{"location":"features/0116-evidence-exchange/#evidence-access-request-message","title":"Evidence Access Request Message","text":"

    Upon the successful processing of a credential proof presentation message, a Verifier may desire to request supporting evidence for the processed credential. This evidence_access_request message is built by the Verifier and sent to the Holder's agent. Similar to the evidence_request message, the Verifier may use this message to get an update for new and existing credentials associated with the Holder. The intent of this message is for the Verifier to establish trust by obtaining a copy of the available evidence and performing the necessary content validation.

    {\n  \"@type\": \"https://didcomm.org/evidence_exchange/1.0/evidence_access_request\",\n  \"@id\": \"7c3f991836-4ed5-f50e-7207-718e6151a389\",\n  \"for\": \"did:peer:1-F1220479cbc07c3f991725836a3aa2a581ca2029198aa420b9d99bc0e131d9f3e2cbe\",\n  \"as_of_time\": \"2019-07-23 18:05:06.123Z\",\n  \"credentials\": [\n      { \"@id\": \"cred-001\", \"issuerDID\": \"~BzCbsNYhMrjHiqZD\" },\n      { \"@id\": \"cred-002\", \"issuerDID\": \"~BzCbsNYhMrjHiqZD\" }\n  ]\n}\n

    Description of attributes:
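A Verifier could assemble the message above along these lines; build_evidence_access_request is a hypothetical helper, and as_of_time is passed in explicitly for clarity.

```python
import uuid

def build_evidence_access_request(holder_did, as_of_time, cred_issuer_pairs):
    # Assemble the evidence_access_request fields shown in the example above.
    return {
        "@type": "https://didcomm.org/evidence_exchange/1.0/evidence_access_request",
        "@id": str(uuid.uuid4()),
        "for": holder_did,
        "as_of_time": as_of_time,
        "credentials": [
            {"@id": cred_id, "issuerDID": issuer_did}
            for cred_id, issuer_did in cred_issuer_pairs
        ],
    }

request = build_evidence_access_request(
    "did:peer:1-F1220479cbc07c3f991725836a3aa2a581ca2029198aa420b9d99bc0e131d9f3e2cbe",
    "2019-07-23 18:05:06.123Z",
    [("cred-001", "~BzCbsNYhMrjHiqZD"), ("cred-002", "~BzCbsNYhMrjHiqZD")],
)
```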

    This protocol is intended to be flexible and applicable to a variety of use cases. While our discussion has centered on the use of the protocol as a follow-up to the processing of a credential proof presentment flow, the protocol can in fact be used at any point after a Pair-wise DID Exchange has been successfully established and is therefore in the complete state as defined by the DID Exchange Protocol. An IssuerDID (or the DID of an entity that is one of the two parties in a private pair-wise relationship) is assumed to be known under all possible conditions once the relationship is in the complete state.

    "},{"location":"features/0116-evidence-exchange/#evidence-access-response-message","title":"Evidence Access Response Message","text":"

    This message is required for a Holder Agent in response to an evidence_access_request message. The format of the ~attach attribute will be determined by the storage management preferences of the Holder's Agent. As such, the Holder can respond by-value or by-reference. To build the response, the Holder will validate that the supplied Issuer DID corresponds to the credential represented by the supplied ID. If the Issuer relied on one or more IPSPs during the Identity Proofing Process, then this message will also include an inline attachment using the examiner_assertions attribute. Upon successful processing of an evidence_access_response message, the Verifier will store evidence details in its system of record.

    {\n  \"@type\": \"https://didcomm.org/evidence_exchange/1.0/evidence_access_response\",\n  \"@id\": \"1517207d-f50e-4ed5-a389-6a4986d718e6\",\n  \"~thread\": { \"thid\": \"7c3f991836-4ed5-f50e-7207-718e6151a389\" },\n  \"for\": \"did:peer:1-F1220479cbc07c3f991725836a3aa2a581ca2029198aa420b9d99bc0e131d9f3e2cbe\",\n  \"as_of_time\": \"2019-07-23 18:05:06.123Z\",\n  \"credentials\": [\n    { \"@id\": \"cred-001\",\n      \"evidence\": [\n        {\"evidence_type\": \"Address\", \"evidence_ref\": [\"#kycdoc1\", \"#kycdoc4\"]},\n        {\"evidence_type\": \"Identity\", \"evidence_ref\": [\"#kycdoc2\"]},\n        {\"evidence_type\": \"Photo\", \"evidence_ref\": null}\n      ]\n    },\n    { \"@id\": \"cred-002\",\n      \"evidence\": [\n        {\"evidence_type\": \"Address\", \"evidence_ref\": [\"#kycdoc1\",\"#kycdoc3\"]},\n        {\"evidence_type\": \"Identity\", \"evidence_ref\": [\"#kycdoc3\"]},\n        {\"evidence_type\": \"Photo\", \"evidence_ref\": [\"#kycdoc1\"]}\n      ]\n    }\n  ],\n  \"examiner_assertions\": [ ... ],\n  \"~attach\": [ ...\n  ]\n}\n

    This message adheres to the attribute content formats outlined in the Aries Attachments RFC and leverages the same Evidence Response Message attribute descriptions.
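The Holder-side validation step described above (checking that each supplied Issuer DID corresponds to the credential on record) can be sketched as follows; CREDENTIAL_ISSUERS and mismatched_credentials are hypothetical names.

```python
# Hypothetical holder-side record of which DID issued each held credential.
CREDENTIAL_ISSUERS = {
    "cred-001": "~BzCbsNYhMrjHiqZD",
    "cred-002": "~BzCbsNYhMrjHiqZD",
}

def mismatched_credentials(requested):
    # Return the IDs from an evidence_access_request whose supplied issuer
    # DID does not match the Holder's record; these should not be answered.
    return [
        entry["@id"]
        for entry in requested
        if CREDENTIAL_ISSUERS.get(entry["@id"]) != entry["issuerDID"]
    ]

bad = mismatched_credentials([
    {"@id": "cred-001", "issuerDID": "~BzCbsNYhMrjHiqZD"},
    {"@id": "cred-002", "issuerDID": "~WrongIssuerDID00"},
])
```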

    "},{"location":"features/0116-evidence-exchange/#reference","title":"Reference","text":""},{"location":"features/0116-evidence-exchange/#drawbacks","title":"Drawbacks","text":"

    This protocol does not vary much from a generic document exchange protocol. It can be argued that a special KYC Document exchange protocol is not needed. However, given the emphasis placed on KYC compliance during the early days of DIDComm adoption, we want to make sure that any special cases are addressed upfront so that we avoid adoption derailment factors.

    "},{"location":"features/0116-evidence-exchange/#rationale-and-alternatives","title":"Rationale and alternatives","text":"

    As noted in the references section, there are a number of trending KYC Document proofing options that are being considered. Many leverage the notion of a centralized blockchain ledger for sharing documents. This effectively places control outside of the Holder and enables the sharing of documents in a B2B manner. Such approaches do not capitalize on the advantages of Pair-wise Peer DIDs.

    "},{"location":"features/0116-evidence-exchange/#prior-art","title":"Prior art","text":"

    This protocol builds on the foundational capabilities of DIDComm messages, most notably the attachment decorator within DIDComm.

    "},{"location":"features/0116-evidence-exchange/#unresolved-questions","title":"Unresolved questions","text":"
    1. Should this be a separate protocol or an update to issuer-credential?
    2. What is the best way to handle access control for by-reference attachments?
    3. Are there best practices to be considered for when/why/how a Holder's Agent should store and manage attachments?
    4. Can this protocol help bootstrap a prototype for a Digital Notary and thereby demonstrate to the broader ecosystem the unnecessary attention being placed on alternative domain specific credential solutions like ISO-18013-5(mDL)?
    "},{"location":"features/0116-evidence-exchange/#implementations","title":"Implementations","text":"

    The following lists the implementations (if any) of this RFC. Please do a pull request to add your implementation. If the implementation is open source, include a link to the repo or to the implementation within the repo. Please be consistent in the \"Name\" field so that a mechanical processing of the RFCs can generate a list of all RFCs supported by an Aries implementation.

    Name / Link Implementation Notes"},{"location":"features/0116-evidence-exchange/digital_notary_usecase/","title":"Decentralized Digital Notary","text":""},{"location":"features/0116-evidence-exchange/digital_notary_usecase/#preface","title":"Preface","text":"

    The intent of this document is to describe the concepts of a Decentralized Digital Notary with respect to the bootstrapping of the decentralized identity ecosystem and to demonstrate using example user stories1 the applicability of the Evidence Exchange Protocol.

    "},{"location":"features/0116-evidence-exchange/digital_notary_usecase/#overview","title":"Overview","text":""},{"location":"features/0116-evidence-exchange/digital_notary_usecase/#problem-statement","title":"Problem Statement","text":"

    How do we bootstrap the digital credential ecosystem when many of the issuing institutions responsible for foundational credentials (e.g., birth certificate, driver's license) tend to be laggards2 when it comes to the adoption of emerging technology? What if we did not need to rely on these issuing institutions and instead leveraged the attestations of trusted third parties?

    "},{"location":"features/0116-evidence-exchange/digital_notary_usecase/#concept","title":"Concept","text":"

    During the identity verification process, an entity may require access to the genesis documents from the Issuers of Origination before issuing credentials. We see such requirements in some of the routine identity instrument interactions of our daily lives such as obtaining a Driver's License or opening a Bank Account.

    We assume that government agencies such as the DMV (driver's license) and Vital Records (birth certificate) will not be early adopters of digital credentials, yet their associated Tier 1 Proofs are critical to the creation of a network effect for the digital credential ecosystem.

    We therefore need a forcing function that will disrupt behavior. Imagine a trusted business entity, a Decentralized Digital Notary (DDN), that would take the responsibility of vouching for the existence of Original Documents (or Digital Assertions) and have the certitude to issue verifiable credentials attesting to personal data claims made by the Issuer of Origination.

    Today (blue shaded activity), an individual receives Original Documents from issuing institutions and presents these as evidence to each Verifier. Moving forward (beige shaded activity), as a wide range of businesses consider acting as DDNs, our reliance on Issuers of Origination to be the on-ramps for an individual's digital identity experience diminishes. Over time, our dependency on the proactive nature of such institutions becomes moot. Furthermore, the more successful DDNs become, the more reactive the laggards will need to be to protect their value in the ecosystem.

    "},{"location":"features/0116-evidence-exchange/digital_notary_usecase/#applicable-businesses","title":"Applicable Businesses","text":"

    Any entity that has the breadth and reach to connect with consumers at scale would be an ideal candidate for the role of a DDN. Some examples include:

    The monetization opportunities for such businesses will also vary. The linkages between proof-of-identity and proof-of-value can be achieved in several manners:

    1. Individual pays for issuance of certificates
    2. Verifier pays the underwriter with a payment instrument (i.e.: fiat or cryptocurrency). The payment is for the service of underwriting the screening of an individual so that the Verifier does not have to do it.
    "},{"location":"features/0116-evidence-exchange/digital_notary_usecase/#stories","title":"Stories","text":"

    Presented herein are a series of user stories that incorporate the concepts of a DDN and the ability of a verifier to gain access to Issuer vetted Identity Evidence using the Evidence Exchange Protocol.

    The stories focus on the daily lifecycle activities of two individuals who need to open a brokerage account and/or update a Life Insurance Policy.

    "},{"location":"features/0116-evidence-exchange/digital_notary_usecase/#persona","title":"Persona","text":"Name Role Eric An individual that desires to open a brokerage account. Stacey An individual that desires to open a brokerage account and also apply for a Life Insurance Policy. Retail Bank DDN (Issuer) Thomas Notary at the Retail Bank familiar with the DDN Process. Brokerage Firm Verifier Dropbox Document Management Service iCertitude A hypothetical IPSP that provides a convenient mobile identity verification service that is fast, trusted and reliable. Financial Cooperative A small local financial institution that is owned and operated by its members. It has positioned itself as a DDN (Issuer) by OEMing the iCertitude platform."},{"location":"features/0116-evidence-exchange/digital_notary_usecase/#identity-proofing-examination-process","title":"Identity Proofing (Examination) Process","text":""},{"location":"features/0116-evidence-exchange/digital_notary_usecase/#financial-cooperative-ddn-awareness","title":"Financial Cooperative (DDN Awareness)","text":"

    Eric is a member of his neighborhood Financial Cooperative. He received an email notification that, as a new member benefit, the bank is now offering members the ability to begin their digital identity journey. Eric is given access to literature describing the extent of the bank's offering and a video of the process for how to get started. Eric watches the video, reads the online material and decides to take advantage of the bank's offer.

    "},{"location":"features/0116-evidence-exchange/digital_notary_usecase/#financial-cooperative-3rd-party-remote-vetting-process","title":"Financial Cooperative (3rd Party Remote Vetting Process)","text":"

    Following his bank's instructions, Eric downloads, installs and configures a Wallet App on his smartphone from the list of apps recommended by the bank. He also downloads the bank's iCertitude app. He uses the iCertitude Mobile app to step through a series of ID Proofing activities that allow the bank to establish a NIST IAL3 assurance rating. These steps include the scanning of some biometrics as well as his plastic driver's license. Upon completion of these activities, which are all performed using his smartphone without any human interaction with the bank, Eric receives an invite in his Wallet App to accept a new verifiable credential which is referred to as a Basic Assurance Credential. Eric opens the Wallet App, accepts the new credential and inspects it.

    "},{"location":"features/0116-evidence-exchange/digital_notary_usecase/#retail-bank-ddn-awareness","title":"Retail Bank (DDN Awareness)","text":"

    Stacey is a member of her neighborhood Retail Bank. She received an email notification that, as a new member benefit, the bank is now offering members the ability to begin their digital identity journey. Stacey is given access to literature describing the extent of the bank's offering and a video of the process for how to get started. Stacey watches the video, reads the online material and decides to make an appointment with her local bank notary and fill out the preliminary online forms.

    "},{"location":"features/0116-evidence-exchange/digital_notary_usecase/#retail-bank-paper-vetting-process","title":"Retail Bank (Paper Vetting Process)","text":"

    Stacey attends her appointment with Thomas. She came prepared to request digital credentials for the following Official Documents: SSN, Birth Certificate, proof of employment (paystub) and proof of address (utility bill). Thomas explains to Stacey that given the types of KYC Documents she desires to be digitally notarized, bank policy is to issue a single digital credential that attests to all the personal data she is prepared to present. The bank refers to this verifiable credential as the Basic KYC Credential and uses a common schema shared by many DDNs in the Sovrin ecosystem.

    Note: This story depicts one approach. Clearly, the bank's policy could be to have a schema and credential for each Original Document.

    Stacey supplied Thomas with the paper-based credentials for each of the aforementioned documents. Thomas scans each document and performs the necessary vetting process according to business policies. Thomas explains that while the bank can issue Stacey her new digital credential for a fee of $10 USD, renewable annually, access to her scanned documents would only be possible if she opts in to the digital document management service on her online banking account. Through this service she is able to provide digital access to the scanned copies of the paper credentials that were vetted by the bank. Stacey agrees to opt in.

    While Stacey is waiting for her documents to be digitally notarized, she downloads, installs and configures a Wallet App on her smartphone from the list of apps recommended by the bank. Upon completion of the vetting process, Thomas returns all Original Documents back to Stacey and explains to her where she can now request the delivery of her new digital credential in her online account. Stacey leaves the bank with her first digital credential on her device.

    "},{"location":"features/0116-evidence-exchange/digital_notary_usecase/#retail-bank-hybrid-vetting-process","title":"Retail Bank (Hybrid Vetting Process)","text":"

    During Stacey's preparation activity, when she was filling out the preliminary online forms before her appointment with Thomas, she remembered that she had scanned her recent proof of employment (paystub) and proof of address (utility bill) at home and stored them on her Dropbox account. She decides to use the section of the form to grant the bank access (URL and password) to these files. When she attends her appointment with Thomas, the meeting differs only in that she has fewer physical documents to present. However, Thomas does explain to her that bank policy prohibits remote links in the digital document management service. Instead, the bank uses the Dropbox link to obtain a copy, performs the vetting process, stores the copy in-house and allows Stacey to access a link to the document stored at the bank.

    "},{"location":"features/0116-evidence-exchange/digital_notary_usecase/#credential-management","title":"Credential Management","text":"

    Later that evening, Stacey decides to explore her new Digital Credential features within her online bank account. She sees that she has the ability to request access to the vetted resources the bank has used to vouch for her digital identity. She opens her Wallet App and sends an evidence_request message to the bank. Within a few seconds she receives and processes the bank's evidence_response message. Her Wallet App allows her to view the evidence available to her:

    Issuer Credential Evidence Type Original Document Retail Bank Basic KYC Credential Address Utility Bill Retail Bank Basic KYC Credential Address Employment PayStub Retail Bank Basic KYC Credential Identity SSN Retail Bank Basic KYC Credential Identity Birth Certificate Retail Bank Basic KYC Credential Photo Bank Member Photo

    Recalling his review of the bank's new digital identity journey benefits, Eric decides to use his Wallet App to request access to the vetted resources the bank used to vouch for his new Basic Assurance Credential. He uses the Wallet App to initiate an evidence_request and evidence_response message flow.

    "},{"location":"features/0116-evidence-exchange/digital_notary_usecase/#verification-process","title":"Verification Process","text":""},{"location":"features/0116-evidence-exchange/digital_notary_usecase/#brokerage-account-digital-assertion-evidence","title":"Brokerage Account (Digital Assertion Evidence)","text":"

    Eric decides to open a new brokerage account with a local Brokerage Firm. He opens the firm's account registration page using his laptop web browser. The firm allows him to establish a new account and obtain a brokerage member credential if he can provide digitally verifiable proof of identity and address. Eric clicks to begin the onboarding process. He scans a QRCode using his Wallet App and accepts a connection request from the firm. He then receives a proof request from the firm, and his Wallet App parses the request and suggests he can respond using attributes from his Basic Assurance Credential. He responds to the proof request. Upon verification of his proof response, the firm sends Eric an offer for a Brokerage Membership Credential which he accepts. The firm also sends him an evidence_access_request and an explanation that the firm's policy, for regulatory reasons, is to obtain access to proof that the proper due-diligence was performed for Address, Identity and Photo. Eric uses his Wallet App to instruct his Cloud Agent to send an evidence_access_response. Upon processing of Eric's response, the firm establishes a Trust Score based on their policy for evidence based only on Digital Assertions and Remote Proofing processes.

    "},{"location":"features/0116-evidence-exchange/digital_notary_usecase/#brokerage-account-document-evidence","title":"Brokerage Account (Document Evidence)","text":"

    Stacey decides she will open a new brokerage account with a local Brokerage Firm. She opens the firm's account registration page using her laptop web browser. The firm allows her to establish a new account and obtain a brokerage member credential if she can provide digitally verifiable proof of identity, address and employment. Stacey clicks to begin the onboarding process. She scans a QRCode using her Wallet App and accepts a connection request from the firm. Using her Wallet App she responds to the proof request using digital credentials from her employer and her Retail Bank. Upon verification of her proof response, the firm sends Stacey an offer for a Brokerage Membership Credential which she accepts. The firm also sends her an evidence_access_request and an explanation that the firm's policy, for regulatory reasons, is to obtain access to proof that the proper due-diligence was performed for Address, Identity, Photo and Employment. Stacey uses her Wallet App to instruct her Cloud Agent to send an evidence_access_response. Upon processing of Stacey's response, the firm establishes a Trust Score based on their policy for evidence based on Original Documents and In-person Proofing processes.

    "},{"location":"features/0116-evidence-exchange/digital_notary_usecase/#life-insurance-policy-didcomm-doc-sharing","title":"Life Insurance Policy (DIDComm Doc Sharing)","text":"

    Stacey receives notification from her Insurance Company that they require an update to her life insurance policy account. The firm has undertaken a digital transformation strategy that impacts her 15-year-old account. She has been given access to a new online portal and choices for how to supply digital copies of her SSN and Birth Certificate. Stacey is too busy to take time to visit the Insurance Company to provide Original Documents for their vetting and digitization. She decides to submit her notarized digital copies. She opens the company's account portal page using her laptop web browser. Stacey registers, signs in and scans a QRCode using her Wallet App. She accepts a connection request from the firm. She then responds to an evidence_access_request for proof that KYC due-diligence was performed for Identity and Photo. Stacey uses her Wallet App to instruct her Cloud Agent to send an evidence_access_response.

    "},{"location":"features/0116-evidence-exchange/digital_notary_usecase/#commentary","title":"Commentary","text":"
    1. The concepts of a digital notary can be applied today in application domains such as (but not limited to) indirect auto lending and title management (auto, recreational vehicle, etc).
    2. Since 2015, AAMVA in conjunction with the ISO JTC1/SC27/WG10 18013-5 mDL Team has been working on a single credential solution for cross jurisdictional use amongst DMVs. This public sector activity is a key source of IAM industry motivation for alternative solutions to Credential Lifecycle Management. Government agencies will eventually need to address discussions around technical debt investments and de facto open source standards.
    "},{"location":"features/0116-evidence-exchange/eep_glossary/","title":"Evidence Exchange Protocol Glossary","text":"

    The following terms are either derived from terminology found in the NIST 800-63A Digital Identity Guidelines or introduced to help reinforce protocol concepts and associated use cases.

    Term Definition Credential Service Provider (CSP) A trusted entity that issues verifiable credentials. Either an Issuer of Origination for an Original Document or an Issuer of a Derived Credential. A DDN is an example of a CSP that issues Derived Credentials. Decentralized Digital Notary (DDN) A trusted third party that enables digital interactions between Holders and Verifiers. As an issuer of digitally verifiable credentials, it creates permanent evidence that an Original Document existed in a certain form at a particular point in time. This role will be especially important to address scalability and the bootstrapping of the decentralized identity ecosystem since many Issuers of Origination may be laggards. DDN Insurer An entity (party) in an insurance contract that underwrites insurance risks associated with the activities of a DDN. This includes a willingness to pay compensation for any negligence on the part of the DDN for failure to perform the necessary due-diligence associated with the examination and vetting of Original Documents. Derived Credential An issued verifiable credential based on an identity proofing process over Original Documents or other Derived Credentials. Digital Assertion A non-physical (digital) form of evidence. Often in the form of a Digital Signature. A CSP may leverage the services of an IPSP and may then require the IPSP to digitally sign the content that is the subject of the assertion. Original Document Any issued artifact that satisfies the Original-Document Rule in accordance with the principle of evidence law. The original artifact may be in writing or in a mechanical or electronic form of publication. Such a document may also be referred to as a Foundational Document. Identity Evidence Information or documentation provided by the Holder to support the issuance of an Original Document. Identity evidence may be physical (e.g. 
    a driver license) or a Digital Assertion. Identity Instrument Digital or physical, paper or plastic renderings of some subset of our personal data as defined by the providers of the instruments. The traditional physical object is an identification card. Many physical identity instruments contain public and encoded information about an entity. The encoded information, which is often stored using machine readable technologies like magnetic strips or barcodes, represents another rendering format of an individual\u2019s personal data. Digital identity instruments pertain to an individual\u2019s personal data in a form that can be processed by a software program. Identity Proofing The process by which a CSP collects, validates, and verifies Identity Evidence. This process yields the attestations (claims of confidence) that a CSP is then able to use to issue a Verifiable Credential. The sole objective of this process is to ensure the Holder of Identity Evidence is who they claim to be with a stated level of certitude. Identity Proofing Inquiry The objectives of an identity verification exercise. Some inquiries are focused on topics such as address verification while others may focus on achievement. Identity Proofing Service Provider (IPSP) A remote 3rd party service provider that carries out one or more aspects of an Identity Proofing process. Issuer of Origination The entity (business, organization, individual or government) that is the original publisher of an Original Document. Mobile Field Agents Location-based service providers that allow agencies to bring their services to remote (rural) customers. Tier 1 Proofs A category of foundational credentials (Original Documents) that are often required to prove identity and address during KYC or onboarding processes. 
Trust Framework Certification Authority An entity that adheres to a governance framework for a specific ecosystem and is responsible for overseeing and auditing the Level of Assurance a DDN (Relying Party) has within the ecosystem. Verifiable Credential A digital credential that is compliant with the W3C Verifiable Credential Specification."},{"location":"features/0124-did-resolution-protocol/","title":"Aries RFC 0124: DID Resolution Protocol 0.9","text":""},{"location":"features/0124-did-resolution-protocol/#summary","title":"Summary","text":"

    Describes a DIDComm request-response protocol that can send a request to a remote DID Resolver to resolve DIDs and dereference DID URLs.

    "},{"location":"features/0124-did-resolution-protocol/#motivation","title":"Motivation","text":"

    DID Resolution is an important feature of Aries. It is a prerequisite for the unpack() function in DIDComm, especially in Cross-Domain Messaging, since cryptographic keys must be discovered from DIDs in order to enable trusted communication between the agents associated with DIDs. DID Resolution is also required for other operations, e.g. for verifying credentials or for discovering DIDComm service endpoints.

    Ideally, DID Resolution should be implemented as a local API (TODO: link to other RFC?). In some cases however, the DID Resolution function may be provided by a remote service. This RFC describes a DIDComm request-response protocol for such a remote DID Resolver.

    "},{"location":"features/0124-did-resolution-protocol/#tutorial","title":"Tutorial","text":"

    DID Resolution is a function that returns a DID Document for a DID. This function can be accessed via \"local\" bindings (e.g. SDK calls, command line tools) or \"remote\" bindings (e.g. HTTP(S), DIDComm).

    A DID Resolver MAY invoke another DID Resolver in order to delegate (part of) the DID Resolution and DID URL Dereferencing algorithms. For example, a DID Resolver may be invoked via a \"local\" binding (such as an Aries library call), which in turn invokes another DID Resolver via a \"remote\" binding (such as HTTP(S) or DIDComm).

    "},{"location":"features/0124-did-resolution-protocol/#name-and-version","title":"Name and Version","text":"

    This defines the did_resolution protocol, version 0.1, as identified by the following PIURI:

    https://didcomm.org/did_resolution/0.1\n
    "},{"location":"features/0124-did-resolution-protocol/#key-concepts","title":"Key Concepts","text":"

    DID Resolution is the process of obtaining a DID Document for a given DID. This is one of four required operations that can be performed on any DID (\"Read\"; the other ones being \"Create\", \"Update\", and \"Deactivate\"). The details of these operations differ depending on the DID method. Building on top of DID Resolution, DID URL Dereferencing is the process of obtaining a resource for a given DID URL. Software and/or hardware that is able to execute these processes is called a DID Resolver.

    "},{"location":"features/0124-did-resolution-protocol/#roles","title":"Roles","text":"

    There are two parties and two roles (one for each party) in the did_resolution protocol: a requester and a resolver.

    The requester wishes to resolve DIDs or dereference DID URLs.

    The resolver conforms with the DID Resolution Specification. It is capable of resolving DIDs for at least one DID method.

    "},{"location":"features/0124-did-resolution-protocol/#states","title":"States","text":""},{"location":"features/0124-did-resolution-protocol/#states-for-requester-role","title":"States for requester role","text":"EVENTS: send resolve receive resolve_result STATES preparing-request transition to \"awaiting-response\" different interaction awaiting-response impossible transition to \"done\" done"},{"location":"features/0124-did-resolution-protocol/#states-for-resolver-role","title":"States for resolver role","text":"EVENTS: receive resolve send resolve_result STATES awaiting-request transition to \"resolving\" impossible resolving new interaction transition to \"done\" done"},{"location":"features/0124-did-resolution-protocol/#states-for-requester-role-in-a-failure-scenario","title":"States for requester role in a failure scenario","text":"EVENTS: send resolve receive resolve_result STATES preparing-request transition to \"awaiting-response\" different interaction awaiting-response impossible error reporting problem reported"},{"location":"features/0124-did-resolution-protocol/#states-for-resolver-role-in-a-failure-scenario","title":"States for resolver role in a failure scenario","text":"EVENTS: receive resolve send resolve_result STATES awaiting-request transition to \"resolving\" impossible resolving new interaction error reporting problem reported"},{"location":"features/0124-did-resolution-protocol/#messages","title":"Messages","text":"

    All messages in this protocol are part of the \"did_resolution 0.1\" message family uniquely identified by this DID reference: https://didcomm.org/did_resolution/0.1

    "},{"location":"features/0124-did-resolution-protocol/#resolve-message","title":"resolve message","text":"

    The protocol begins when the requester sends a resolve message to the resolver. It looks like this:

    {\n    \"@type\": \"https://didcomm.org/did_resolution/0.1/resolve\",\n    \"@id\": \"xhqMoTXfqhvAgtYxUSfaxbSiqWke9t\",\n    \"did\": \"did:sov:WRfXPg8dantKVubE3HX8pw\",\n    \"input_options\": {\n        \"result_type\": \"did-document\",\n        \"no_cache\": false\n    }\n}\n

    @id is required here, as it establishes a message thread that makes it possible to connect a subsequent response to this request.

    did is required.

    input_options is optional.

    For further details on the did and input_options fields, see Resolving a DID in the DID Resolution Spec.
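    As an illustrative sketch, a requester might assemble the resolve message like this. The @type, @id, did, and input_options fields come from this protocol; the helper name and default argument values are assumptions for the example.

```python
import json
import uuid

# Hypothetical helper for building a did_resolution/0.1 resolve message.
# Only the message fields are defined by the protocol; the function name
# and defaults are illustrative.
def build_resolve_message(did, result_type="did-document", no_cache=False):
    return {
        "@type": "https://didcomm.org/did_resolution/0.1/resolve",
        "@id": str(uuid.uuid4()),  # required: threads the response back to this request
        "did": did,                # required
        "input_options": {         # optional
            "result_type": result_type,
            "no_cache": no_cache,
        },
    }

msg = build_resolve_message("did:sov:WRfXPg8dantKVubE3HX8pw")
print(json.dumps(msg, indent=4))
```

    Generating a fresh @id per request is what allows the requester to correlate each resolve_result (via ~thread.thid) when several resolutions are in flight.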

    "},{"location":"features/0124-did-resolution-protocol/#resolve_result-message","title":"resolve_result message","text":"

    The resolve_result is the only allowed direct response to the resolve message. It represents the result of the DID Resolution function and contains a DID Document.

    It looks like this:

    {\n    \"@type\": \"https://didcomm.org/did_resolution/0.1/resolve_result\",\n    \"~thread\": { \"thid\": \"xhqMoTXfqhvAgtYxUSfaxbSiqWke9t\" },\n    \"did_document\": {\n        \"@context\": \"https://w3id.org/did/v0.11\",\n        \"id\": \"did:sov:WRfXPg8dantKVubE3HX8pw\",\n        \"service\": [{\n            \"type\": \"did-communication\",\n            \"serviceEndpoint\": \"https://agent.example.com/\"\n        }],\n        \"publicKey\": [{\n            \"id\": \"did:sov:WRfXPg8dantKVubE3HX8pw#key-1\",\n            \"type\": \"Ed25519VerificationKey2018\",\n            \"publicKeyBase58\": \"~P7F3BNs5VmQ6eVpwkNKJ5D\"\n        }],\n        \"authentication\": [\"did:sov:WRfXPg8dantKVubE3HX8pw#key-1\"]\n    }\n}\n

    If the input_options field of the resolve message contains an entry result_type with value resolution-result, then the resolve_result message contains a more extensive DID Resolution Result, which includes a DID Document plus additional metadata:

    {\n    \"@type\": \"https://didcomm.org/did_resolution/0.1/resolve_result\",\n    \"~thread\": { \"thid\": \"xhqMoTXfqhvAgtYxUSfaxbSiqWke9t\" },\n    \"did_document\": {\n        \"@context\": \"https://w3id.org/did/v0.11\",\n        \"id\": \"did:sov:WRfXPg8dantKVubE3HX8pw\",\n        \"service\": [{\n            \"type\": \"did-communication\",\n            \"serviceEndpoint\": \"https://agent.example.com/\"\n        }],\n        \"publicKey\": [{\n            \"id\": \"did:sov:WRfXPg8dantKVubE3HX8pw#key-1\",\n            \"type\": \"Ed25519VerificationKey2018\",\n            \"publicKeyBase58\": \"~P7F3BNs5VmQ6eVpwkNKJ5D\"\n        }],\n        \"authentication\": [\"did:sov:WRfXPg8dantKVubE3HX8pw#key-1\"]\n    },\n    \"resolver_metadata\": {\n        \"driverId\": \"did:sov\",\n        \"driver\": \"HttpDriver\",\n        \"retrieved\": \"2019-07-09T19:73:24Z\",\n        \"duration\": 1015\n    },\n    \"method_metadata\": {\n        \"nymResponse\": { ... },\n        \"attrResponse\": { ... }\n    }\n}\n
    "},{"location":"features/0124-did-resolution-protocol/#problem-report-failure-message","title":"problem-report failure message","text":"

    If a DID cannot be resolved, the resolve_result message is also used to report the failure. In that case it acts as a problem report, indicating that the resolver could not resolve the DID and the reason for the failure. It looks like this:

    {\n    \"@type\": \"https://didcomm.org/did_resolution/0.1/resolve_result\",\n    \"~thread\": { \"thid\": \"xhqMoTXfqhvAgtYxUSfaxbSiqWke9t\" },\n    \"explain_ltxt\": \"Could not resolve DID did:sov:WRfXPg8dantKVubE3HX8pw not found by resolver xxx\",\n        ...\n}\n
    "},{"location":"features/0124-did-resolution-protocol/#reference","title":"Reference","text":""},{"location":"features/0124-did-resolution-protocol/#messages_1","title":"Messages","text":"

    In the future, additional messages dereference and dereference_result may be defined in addition to resolve and resolve_result (see Unresolved questions).

    "},{"location":"features/0124-did-resolution-protocol/#message-catalog","title":"Message Catalog","text":"

    Status and error codes will be inherited from the DID Resolution Spec.

    "},{"location":"features/0124-did-resolution-protocol/#drawbacks","title":"Drawbacks","text":"

    Using a remote DID Resolver should only be considered a fallback when a local DID Resolver cannot be used. Relying on a remote DID Resolver raises several questions: who operates it, can its responses be trusted, and can MITM and other attacks occur? There is essentially a chicken-and-egg problem insofar as the purpose of DID Resolution is to discover metadata needed for trustable interaction with an entity, but the precondition is that interaction with a DID Resolver must itself be trustable.

    Furthermore, the use of remote DID Resolvers may introduce central bottlenecks and undermine important design principles such as decentralization.

    See Binding Architectures and w3c-ccg/did-resolution#28 for additional thoughts.

    The security and trust issues may outweigh the benefits. Defining and implementing this RFC may lead developers to underestimate or ignore these issues associated with remote DID Resolvers.

    "},{"location":"features/0124-did-resolution-protocol/#rationale-and-alternatives","title":"Rationale and alternatives","text":"

    Despite the drawbacks of remote DID Resolvers, in some situations they can be useful, for example to support DID methods that are hard to implement in local agents with limited hardware and software capabilities.

    A special case of remote DID Resolvers occurs in the case of the Peer DID Method, where each party of a relationship essentially acts as a remote DID Resolver for other parties, i.e. each party fulfills both the requester and resolver roles defined in this RFC.

    An alternative to the DIDComm binding defined by this RFC is an HTTP(S) binding, which is defined by the DID Resolution Spec.

    "},{"location":"features/0124-did-resolution-protocol/#prior-art","title":"Prior art","text":"

    Resolution and dereferencing of identifiers have always played a key role in digital identity infrastructure.

    "},{"location":"features/0124-did-resolution-protocol/#unresolved-questions","title":"Unresolved questions","text":"

    This RFC inherits a long list of unresolved questions and issues that currently exist in the DID Resolution Spec.

    We need to decide whether the DID Resolution and DID URL Dereferencing functions (resolve() and dereference()) should be exposed as the same message type, or as two different message types (including two different responses).

    "},{"location":"features/0124-did-resolution-protocol/#implementations","title":"Implementations","text":"

    The following lists the implementations (if any) of this RFC. Please do a pull request to add your implementation. If the implementation is open source, include a link to the repo or to the implementation within the repo. Please be consistent in the \"Name\" field so that a mechanical processing of the RFCs can generate a list of all RFCs supported by an Aries implementation.

    Name / Link Implementation Notes"},{"location":"features/0160-connection-protocol/","title":"0160: Connection Protocol","text":""},{"location":"features/0160-connection-protocol/#summary","title":"Summary","text":"

    This RFC describes the protocol to establish connections between agents.

    "},{"location":"features/0160-connection-protocol/#motivation","title":"Motivation","text":"

    Indy agent developers want to create agents that are able to establish connections with each other and exchange secure information over those connections. For this to happen there must be a clear connection protocol.

    "},{"location":"features/0160-connection-protocol/#tutorial","title":"Tutorial","text":"

    We will explain how a connection is established, with the roles, states, and messages required.

    "},{"location":"features/0160-connection-protocol/#roles","title":"Roles","text":"

    Connection uses two roles: inviter and invitee.

    The inviter is the party that initiates the protocol with an invitation message. This party must already have an agent and be capable of creating DIDs and endpoints at which they are prepared to interact. It is desirable but not strictly required that inviters have the ability to help the invitee with the process and/or costs associated with acquiring an agent capable of participating in the ecosystem. For example, inviters may often be sponsoring institutions. The inviter sends a connection-response message at the end of the share phase.

    The invitee has fewer preconditions; the only requirement is that this party be capable of receiving invitations over traditional communication channels of some type, and acting on them in a way that leads to successful interaction. The invitee sends a connection-request message at the beginning of the share phase.

    In cases where both parties already possess SSI capabilities, deciding who plays the role of inviter and invitee might be a casual matter of whose phone is handier.

    "},{"location":"features/0160-connection-protocol/#states","title":"States","text":""},{"location":"features/0160-connection-protocol/#null","title":"null","text":"

    No connection exists or is in progress

    "},{"location":"features/0160-connection-protocol/#invited","title":"invited","text":"

    The invitation has been shared with the intended invitee(s), and they have not yet sent a connection_request.

    "},{"location":"features/0160-connection-protocol/#requested","title":"requested","text":"

    A connection_request has been sent by the invitee to the inviter based on the information in the invitation.

    "},{"location":"features/0160-connection-protocol/#responded","title":"responded","text":"

    A connection_response has been sent by the inviter to the invitee based on the information in the connection_request.

    "},{"location":"features/0160-connection-protocol/#complete","title":"complete","text":"

    The connection is valid and ready for use.

    "},{"location":"features/0160-connection-protocol/#errors","title":"Errors","text":"

    There are no errors in this protocol during the invitation phase. For the request and response, there are two error messages possible for each phase: one for an active rejection and one for an unknown error. These errors are sent using a problem_report message type specific to the connection message family. The following list details problem-codes that may be sent:

    request_not_accepted - The error indicates that the request has been rejected for a reason listed in the error_report. Typical reasons include not accepting the method of the provided DID, unknown endpoint protocols, etc. The request can be resent after the appropriate corrections have been made.

    request_processing_error - This error is sent when the inviter was processing the request with the intent to accept the request, but some processing error occurred. This error indicates that the request should be resent as-is.

    response_not_accepted - The error indicates that the response has been rejected for a reason listed in the error_report. Typical reasons include not accepting the method of the provided DID, unknown endpoint protocols, invalid signature, etc. The response can be resent after the appropriate corrections have been made.

    response_processing_error - This error is sent when the invitee was processing the response with the intent to accept the response, but some processing error occurred. This error indicates that the response should be resent as-is.

    No errors are sent in timeout situations. If the inviter or invitee wishes to retract the messages they sent, they record this locally and return a request_not_accepted or response_not_accepted error when the other party sends a request or response.

    "},{"location":"features/0160-connection-protocol/#error-message-example","title":"Error Message Example","text":"
    {\n  \"@type\": \"https://didcomm.org/connections/1.0/problem_report\",\n  \"@id\": \"5678876542345\",\n  \"~thread\": { \"thid\": \"<@id of message related to problem>\" },\n  \"~i10n\": { \"locale\": \"en\"},\n  \"problem-code\": \"request_not_accepted\", // matches codes listed above\n  \"explain\": \"Unsupported DID method for provided DID.\"\n}\n
    "},{"location":"features/0160-connection-protocol/#error-message-attributes","title":"Error Message Attributes","text":""},{"location":"features/0160-connection-protocol/#flow-overview","title":"Flow Overview","text":"

    The inviter gives provisional connection information to the invitee. The invitee uses that provisional information to send a DID and DID document to the inviter. The inviter uses the received DID document information to send a DID and DID document to the invitee. The invitee sends the inviter an ack or any other message that confirms the response was received.

    "},{"location":"features/0160-connection-protocol/#0-invitation-to-connect","title":"0. Invitation to Connect","text":"

    An invitation to connect may be transferred using any method that can reliably transmit text. The result must be the essential data necessary to initiate a Connection Request message. A connection invitation is an agent message in agent plaintext format, but it is an out-of-band communication and therefore is not communicated using wire level encoding or encryption. The necessary data that an invitation to connect must convey is:

    OR

    This information is used to create a provisional connection to the inviter. That connection will be made complete in the connection_response message.

    These attributes were chosen to parallel the attributes of a DID document for increased meaning. It is worth noting that recipientKeys and routingKeys must be inline keys, not DID key references when contained in an invitation. As in the DID document with Ed25519VerificationKey2018 key types, the key must be base58 encoded.

    When considering routing and options for invitations, keep in mind that the more detail is in the connection invitation, the longer the URL will be and (if used) the more dense the QR code will be. Dense QR codes can be harder to scan.

    The inviter will either use an existing invitation DID, or provision a new one according to the DID method spec. They will then create the invitation message in one of the following forms.

    Invitation Message with Public Invitation DID:

    {\n    \"@type\": \"https://didcomm.org/connections/1.0/invitation\",\n    \"@id\": \"12345678900987654321\",\n    \"label\": \"Alice\",\n    \"did\": \"did:sov:QmWbsNYhMrjHiqZDTUTEJs\"\n}\n

    Invitation Message with Keys and URL endpoint:

    {\n    \"@type\": \"https://didcomm.org/connections/1.0/invitation\",\n    \"@id\": \"12345678900987654321\",\n    \"label\": \"Alice\",\n    \"recipientKeys\": [\"8HH5gYEeNc3z7PYXmd54d4x6qAfCNrqQqEB3nS7Zfu7K\"],\n    \"serviceEndpoint\": \"https://example.com/endpoint\",\n    \"routingKeys\": [\"8HH5gYEeNc3z7PYXmd54d4x6qAfCNrqQqEB3nS7Zfu7K\"]\n}\n

    Invitation Message with Keys and DID Service Endpoint Reference:

    {\n    \"@type\": \"https://didcomm.org/connections/1.0/invitation\",\n    \"label\": \"Alice\",\n    \"recipientKeys\": [\"8HH5gYEeNc3z7PYXmd54d4x6qAfCNrqQqEB3nS7Zfu7K\"],\n    \"serviceEndpoint\": \"did:sov:A2wBhNYhMrjHiqZDTUYH7u;routeid\",\n    \"routingKeys\": [\"8HH5gYEeNc3z7PYXmd54d4x6qAfCNrqQqEB3nS7Zfu7K\"]\n}\n
    "},{"location":"features/0160-connection-protocol/#implicit-invitation","title":"Implicit Invitation","text":"

    Any Public DID serves as an implicit invitation. If an invitee wishes to connect to any Public DID, they designate their own label and skip to the end of the Invitation Processing step. There is no need to encode or transmit the invitation.

    "},{"location":"features/0160-connection-protocol/#routing-keys","title":"Routing Keys","text":"

    If routingKeys is present and non-empty, additional forwarding wrapping will be necessary for the request message. See the explanation in the Request section.

    "},{"location":"features/0160-connection-protocol/#agency-endpoint","title":"Agency Endpoint","text":"

    The endpoint for the connection is either present in the invitation or available in the DID document of a presented DID. If the endpoint is not a URI but a DID itself, that DID refers to an Agency.

    In that case, the serviceEndpoint of the DID must be a URI, and the recipientKeys must contain a single key. That key is appended to the end of the list of routingKeys for processing. For more information about message forwarding and routing, see RFC 0094.
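    The key-appending rule above can be sketched as follows. Here resolve_did_document is a hypothetical lookup function; only the append-to-routingKeys behavior comes from this RFC.

```python
# If the endpoint is itself a DID, it refers to an Agency: use that DID's URI
# serviceEndpoint and append its single recipient key to the routing keys.
def effective_routing(service_endpoint, routing_keys, resolve_did_document):
    if service_endpoint.startswith("did:"):
        agency_service = resolve_did_document(service_endpoint)["service"][0]
        recipient_keys = agency_service["recipientKeys"]
        assert len(recipient_keys) == 1, "agency must expose a single recipient key"
        return agency_service["serviceEndpoint"], list(routing_keys) + recipient_keys
    return service_endpoint, list(routing_keys)
```

    The sender then wraps the message in one forward layer per key in the resulting routing list, as described in RFC 0094.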

    "},{"location":"features/0160-connection-protocol/#standard-invitation-encoding","title":"Standard Invitation Encoding","text":"

    Using a standard invitation encoding allows for easier interoperability between multiple projects and software platforms. Using a URL for that standard encoding provides a built-in fallback flow for users who are unable to automatically process the invitation. Those new users will load the URL in a browser as a default behavior, and will be presented with instructions on how to install software capable of processing the invitation. Already onboarded users will be able to process the invitation without loading it in a browser via mobile app URL capture, or via capability detection after it has been loaded in a browser.

    The standard invitation format is a URL with a Base64URLEncoded json object as a query parameter.

    The Invitation URL format is as follows, with some elements described below:

    https://<domain>/<path>?c_i=<invitationstring>\n

    <domain> and <path> should be kept as short as possible, and the full URL should return human readable instructions when loaded in a browser. This is intended to aid new users. The c_i query parameter is required and is reserved to contain the invitation string. Additional path elements or query parameters are allowed, and can be leveraged to provide coupons or other promise of payment for new users.

    The <invitationstring> is an agent plaintext message (not a wire level message) that has been base64 url encoded. For brevity, the json encoding should minimize unnecessary white space.

    invitation_string = b64urlencode(<invitation_message>)\n

    During encoding, whitespace from the json string should be eliminated to keep the resulting invitation string as short as possible.
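    A minimal encoder sketch for the steps above. The base URL and function name are assumptions; the compact-JSON-then-Base64URL step and the c_i parameter come from this RFC.

```python
import base64
import json

# Serialize with whitespace removed, Base64URL-encode, and attach the result
# as the reserved c_i query parameter.
def encode_invitation_url(invitation, base_url="https://example.com/ssi"):
    compact = json.dumps(invitation, separators=(",", ":"))
    encoded = base64.urlsafe_b64encode(compact.encode("utf-8")).decode("ascii")
    return f"{base_url}?c_i={encoded}"
```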

    "},{"location":"features/0160-connection-protocol/#example-invitation-encoding","title":"Example Invitation Encoding","text":"

    Invitation:

    {\n    \"@type\": \"https://didcomm.org/connections/1.0/invitation\",\n    \"@id\": \"12345678900987654321\",\n    \"label\": \"Alice\",\n    \"recipientKeys\": [\"8HH5gYEeNc3z7PYXmd54d4x6qAfCNrqQqEB3nS7Zfu7K\"],\n    \"serviceEndpoint\": \"https://example.com/endpoint\",\n    \"routingKeys\": [\"8HH5gYEeNc3z7PYXmd54d4x6qAfCNrqQqEB3nS7Zfu7K\"]\n}\n

    Base 64 URL Encoded, with whitespace removed:

    eyJAdHlwZSI6ImRpZDpzb3Y6QnpDYnNOWWhNcmpIaXFaRFRVQVNIZztzcGVjL2Nvbm5lY3Rpb25zLzEuMC9pbnZpdGF0aW9uIiwiQGlkIjoiMTIzNDU2Nzg5MDA5ODc2NTQzMjEiLCJsYWJlbCI6IkFsaWNlIiwicmVjaXBpZW50S2V5cyI6WyI4SEg1Z1lFZU5jM3o3UFlYbWQ1NGQ0eDZxQWZDTnJxUXFFQjNuUzdaZnU3SyJdLCJzZXJ2aWNlRW5kcG9pbnQiOiJodHRwczovL2V4YW1wbGUuY29tL2VuZHBvaW50Iiwicm91dGluZ0tleXMiOlsiOEhINWdZRWVOYzN6N1BZWG1kNTRkNHg2cUFmQ05ycVFxRUIzblM3WmZ1N0siXX0=\n

    Example URL:

    http://example.com/ssi?c_i=eyJAdHlwZSI6ImRpZDpzb3Y6QnpDYnNOWWhNcmpIaXFaRFRVQVNIZztzcGVjL2Nvbm5lY3Rpb25zLzEuMC9pbnZpdGF0aW9uIiwiQGlkIjoiMTIzNDU2Nzg5MDA5ODc2NTQzMjEiLCJsYWJlbCI6IkFsaWNlIiwicmVjaXBpZW50S2V5cyI6WyI4SEg1Z1lFZU5jM3o3UFlYbWQ1NGQ0eDZxQWZDTnJxUXFFQjNuUzdaZnU3SyJdLCJzZXJ2aWNlRW5kcG9pbnQiOiJodHRwczovL2V4YW1wbGUuY29tL2VuZHBvaW50Iiwicm91dGluZ0tleXMiOlsiOEhINWdZRWVOYzN6N1BZWG1kNTRkNHg2cUFmQ05ycVFxRUIzblM3WmZ1N0siXX0=\n

    Invitation URLs can be transferred via any method that can send text, including an email, SMS, posting on a website, or via a QR Code.

    Example URL encoded as a QR Code:

    "},{"location":"features/0160-connection-protocol/#invitation-publishing","title":"Invitation Publishing","text":"

    The inviter will then publish or transmit the invitation URL in a manner available to the intended invitee. After publishing, we have entered the invited state.

    "},{"location":"features/0160-connection-protocol/#invitation-processing","title":"Invitation Processing","text":"

    When the invitee receives the invitation URL, there are two possible user flows, depending on the SSI preparedness of the individual. If the individual is new to the SSI universe, they will likely load the URL in a browser. The resulting page will contain instructions on how to get started by installing software or a mobile app. That install flow will transfer the invitation message to the newly installed software. A user that has already accomplished those steps will have the URL received by software directly. That software will base64URL decode the string and can read the invitation message directly out of the c_i query parameter, without loading the URL.

    NOTE: In receiving the invitation, the base64url decode implementation used MUST correctly decode padded and unpadded base64URL encoded data.
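
A minimal sketch (in Python; the function names are illustrative, not from any Aries codebase) of decoding an invitation URL while tolerating both padded and unpadded base64URL input:

```python
import base64
import json
from urllib.parse import parse_qs, urlparse

def b64url_decode(s):
    # Re-add padding up to a multiple of 4 so both padded and
    # unpadded base64URL input decode correctly, per the note above.
    return base64.urlsafe_b64decode(s + '=' * (-len(s) % 4))

def invitation_from_url(url):
    # Pull the c_i query parameter out of an invitation URL and
    # decode it into the invitation message.
    c_i = parse_qs(urlparse(url).query)['c_i'][0]
    return json.loads(b64url_decode(c_i))
```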

    If the invitee wants to accept the connection invitation, they will use the information present in the invitation message to prepare the request.

    "},{"location":"features/0160-connection-protocol/#1-connection-request","title":"1. Connection Request","text":"

    The connection request message is used to communicate the DID document of the invitee to the inviter using the provisional connection information present in the connection_invitation message.

    The invitee will provision a new DID according to the DID method spec. For a Peer DID, this involves creating a matching peer DID and key. The newly provisioned DID and DID document are presented in the connection_request message as follows:

    "},{"location":"features/0160-connection-protocol/#example","title":"Example","text":"
    {\n  \"@id\": \"5678876542345\",\n  \"@type\": \"https://didcomm.org/connections/1.0/request\",\n  \"label\": \"Bob\",\n  \"connection\": {\n    \"DID\": \"B.did@B:A\",\n    \"DIDDoc\": {\n        \"@context\": \"https://w3id.org/did/v1\"\n        // DID document contents here.\n    }\n  }\n}\n
    "},{"location":"features/0160-connection-protocol/#attributes","title":"Attributes","text":""},{"location":"features/0160-connection-protocol/#diddoc-example","title":"DIDDoc Example","text":"

    An example of the DID document contents is the following JSON. This format was implemented in some early agents as the DIDComm DIDDoc Conventions RFC was being formalized and so does not match that RFC exactly. For example, the use of the IndyAgent service endpoint. Future versions of this protocol will align precisely with that RFC.

    {\n  \"@context\": \"https://w3id.org/did/v1\",\n  \"id\": \"did:sov:QUmsj7xwB82QAuuzfmvhAi\",\n  \"publicKey\": [\n    {\n      \"id\": \"did:sov:QUmsj7xwB82QAuuzfmvhAi#1\",\n      \"type\": \"Ed25519VerificationKey2018\",\n      \"controller\": \"did:sov:QUmsj7xwB82QAuuzfmvhAi\",\n      \"publicKeyBase58\": \"DoDMNYwMrSN8ygGKabgz5fLA9aWV4Vi8SLX6CiyN2H4a\"\n    }\n  ],\n  \"authentication\": [\n    {\n      \"type\": \"Ed25519SignatureAuthentication2018\",\n      \"publicKey\": \"did:sov:QUmsj7xwB82QAuuzfmvhAi#1\"\n    }\n  ],\n  \"service\": [\n    {\n      \"id\": \"did:sov:QUmsj7xwB82QAuuzfmvhAi;indy\",\n      \"type\": \"IndyAgent\",\n      \"priority\": 0,\n      \"recipientKeys\": [\n        \"DoDMNYwMrSN8ygGKabgz5fLA9aWV4Vi8SLX6CiyN2H4a\"\n      ],\n      \"serviceEndpoint\": \"http://192.168.65.3:8030\"\n    }\n  ]\n}\n
    "},{"location":"features/0160-connection-protocol/#request-transmission","title":"Request Transmission","text":"

    The Request message is encoded according to the standards of the Agent Wire Level Protocol, using the recipientKeys present in the invitation.

    If the routingKeys attribute was present and non-empty in the invitation, each key must be used to wrap the message in a forward request, then encoded according to the Agent Wire Level Protocol. This processing is in order of the keys in the list, with the last key in the list being the one for which the serviceEndpoint possesses the private key.
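
As a rough illustration of that wrapping order, the sketch below builds the nested forward structure with plain dicts. Here pack() is a stand-in for real wire-level encryption (it only records which keys a layer targets), and all names are illustrative:

```python
def pack(message, to_keys):
    # Stand-in for wire-level packing; a real agent would encrypt
    # the message to to_keys at this layer.
    return {'packed_for': list(to_keys), 'payload': message}

def wrap_in_forwards(message, routing_keys, recipient_keys):
    # First encode the message for the ultimate recipient(s)...
    packed = pack(message, recipient_keys)
    next_target = recipient_keys[0]
    # ...then wrap once per routing key, in list order, so the LAST
    # key's holder (who owns the serviceEndpoint) peels the
    # outermost layer first.
    for key in routing_keys:
        packed = pack({'@type': 'https://didcomm.org/routing/1.0/forward',
                       'to': next_target,
                       'msg': packed}, [key])
        next_target = key
    return packed
```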

    The message is then transmitted to the serviceEndpoint.

    We are now in the requested state.

    "},{"location":"features/0160-connection-protocol/#request-processing","title":"Request processing","text":"

    After receiving the connection request, the inviter evaluates the provided DID and DID document according to the DID Method Spec.

    The inviter should check the information presented with the keys used in the wire-level message transmission to ensure they match.

    If the inviter wishes to accept the connection, they will persist the received information in their wallet. They will then either update the provisional connection information to rotate the key, or provision a new DID entirely. The choice here will depend on the nature of the DID used in the invitation.

    The inviter will then craft a connection response using the newly updated or provisioned information.

    "},{"location":"features/0160-connection-protocol/#request-errors","title":"Request Errors","text":"

    See Error Section above for message format details.

    request_rejected

    Possible reasons:

    request_processing_error

    "},{"location":"features/0160-connection-protocol/#2-connection-response","title":"2. Connection Response","text":"

    The connection response message is used to complete the connection. This message is required in the flow, as it updates the provisional information presented in the invitation.

    "},{"location":"features/0160-connection-protocol/#example_1","title":"Example","text":"
    {\n  \"@type\": \"https://didcomm.org/connections/1.0/response\",\n  \"@id\": \"12345678900987654321\",\n  \"~thread\": {\n    \"thid\": \"<@id of request message>\"\n  },\n  \"connection\": {\n    \"DID\": \"A.did@B:A\",\n    \"DIDDoc\": {\n      \"@context\": \"https://w3id.org/did/v1\"\n      // DID document contents here.\n    }\n  }\n}\n

    The above message is required to be signed as described in RFC 0234 Signature Decorator. The connection attribute above will be base64URL encoded and included as part of the sig_data attribute of the signed field. The result looks like this:

    {\n  \"@type\": \"https://didcomm.org/connections/1.0/response\",\n  \"@id\": \"12345678900987654321\",\n  \"~thread\": {\n    \"thid\": \"<@id of request message>\"\n  },\n  \"connection~sig\": {\n    \"@type\": \"https://didcomm.org/signature/1.0/ed25519Sha512_single\",\n    \"signature\": \"<digital signature function output>\",\n    \"sig_data\": \"<base64URL(64bit_integer_from_unix_epoch||connection_attribute)>\",\n    \"signer\": \"<signing_verkey>\"\n  }\n}\n

    The connection attribute has been removed; its contents, combined with the timestamp, are encoded into the sig_data field of the new connection~sig attribute.
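
A sketch of assembling and reading sig_data under that description (assuming an 8-byte big-endian timestamp, which is how implementations commonly encode the 64-bit integer; the signing step itself is out of scope here):

```python
import base64
import json
import struct
import time

def build_sig_data(connection):
    # sig_data = base64URL(64-bit UNIX timestamp || connection attribute)
    timestamp = struct.pack('>Q', int(time.time()))  # 8 bytes, big-endian
    return base64.urlsafe_b64encode(timestamp + json.dumps(connection).encode())

def read_sig_data(sig_data):
    # Reverse the encoding: split off the 8-byte timestamp, then
    # parse the remaining bytes as the original connection attribute.
    raw = base64.urlsafe_b64decode(sig_data)
    (timestamp,) = struct.unpack('>Q', raw[:8])
    return timestamp, json.loads(raw[8:])
```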

    Upon receipt, the signed attribute will be automatically unpacked and the signature verified. Signature information will be stored as message context, and the connection attribute will be restored to its original format before processing continues.

    The signature data must be used to verify against the invitation's recipientKeys for continuity.

    "},{"location":"features/0160-connection-protocol/#attributes_1","title":"Attributes","text":"

    In addition to a new DID, the associated DID document might contain a new endpoint. This new DID and endpoint are to be used going forward in the connection.

    "},{"location":"features/0160-connection-protocol/#response-transmission","title":"Response Transmission","text":"

    The message should be packaged in the wire level format, using the keys from the request, and the new keys presented in the internal DID document.

    When the message is transmitted, we are now in the responded state.

    "},{"location":"features/0160-connection-protocol/#response-processing","title":"Response Processing","text":"

    When the invitee receives the response message, they will verify the sig_data provided. After validation, they will update their wallet with the new connection information. If the endpoint was changed, they may wish to execute a Trust Ping to verify that new endpoint.

    "},{"location":"features/0160-connection-protocol/#response-errors","title":"Response Errors","text":"

    See Error Section above for message format details.

    response_rejected

    Possible reasons:

    response_processing_error

    "},{"location":"features/0160-connection-protocol/#3-connection-acknowledgement","title":"3. Connection Acknowledgement","text":"

    After the Response is received, the connection is technically complete, but this remains unconfirmed to the inviter. The invitee SHOULD send a message to the inviter; since any message will confirm the connection, any message will do.

    Frequently, the parties of the connection will want to trade credentials to establish trust. In such a flow, those messages will serve the function of acknowledging the connection without an extra confirmation message.

    If no message is needed immediately, a trust ping can be used to allow both parties to confirm the connection.

    After a message is sent, the invitee is in the complete state. Receipt of a message puts the inviter into the complete state.

    "},{"location":"features/0160-connection-protocol/#next-steps","title":"Next Steps","text":"

    The connection between the inviter and the invitee is now established. This connection has no trust associated with it. The next step should be the exchange of proofs to build trust sufficient for the purpose of the relationship.

    "},{"location":"features/0160-connection-protocol/#connection-maintenance","title":"Connection Maintenance","text":"

    Upon establishing a connection, it is likely that both Alice and Bob will want to perform some relationship maintenance such as key rotations. Future RFC updates will add these maintenance features.

    "},{"location":"features/0160-connection-protocol/#reference","title":"Reference","text":""},{"location":"features/0160-connection-protocol/#drawbacks","title":"Drawbacks","text":""},{"location":"features/0160-connection-protocol/#prior-art","title":"Prior art","text":""},{"location":"features/0160-connection-protocol/#unresolved-questions","title":"Unresolved questions","text":""},{"location":"features/0160-connection-protocol/#implementations","title":"Implementations","text":"

    The following lists the implementations (if any) of this RFC. Please do a pull request to add your implementation. If the implementation is open source, include a link to the repo or to the implementation within the repo. Please be consistent in the \"Name\" field so that a mechanical processing of the RFCs can generate a list of all RFCs supported by an Aries implementation.

    Name / Link Implementation Notes Aries Framework - .NET passed agent connectathon tests, Feb 2019; MISSING test results Streetcred.id passed agent connectathon tests, Feb 2019; MISSING test results Aries Cloud Agent - Python ported from VON codebase that passed agent connectathon tests, Feb 2019; MISSING test results Aries Static Agent - Python implemented July 2019; MISSING test results Aries Protocol Test Suite ported from Indy Agent codebase that provided agent connectathon tests, Feb 2019; MISSING test results Indy Cloud Agent - Python passed agent connectathon tests, Feb 2019; MISSING test results"},{"location":"features/0183-revocation-notification/","title":"Aries RFC 0183: Revocation Notification 1.0","text":""},{"location":"features/0183-revocation-notification/#summary","title":"Summary","text":"

    This RFC defines the message format which an issuer uses to notify a holder that a previously issued credential has been revoked.

    "},{"location":"features/0183-revocation-notification/#change-log","title":"Change Log","text":""},{"location":"features/0183-revocation-notification/#motivation","title":"Motivation","text":"

    We need a standard protocol for an issuer to notify a holder that a previously issued credential has been revoked.

    For example, suppose a passport agency revokes Alice's passport. The passport agency (an issuer) may want to notify Alice (a holder) that her passport has been revoked so that she knows that she will be unable to use her passport to travel.

    "},{"location":"features/0183-revocation-notification/#tutorial","title":"Tutorial","text":"

    The Revocation Notification protocol is a very simple protocol consisting of a single message:

    This simple protocol allows an issuer to choose to notify a holder that a previously issued credential has been revoked.

    It is the issuer's prerogative whether or not to notify the holder that a credential has been revoked. It is not a security risk if the issuer does not notify the holder that the credential has been revoked, nor if the message is lost. The holder will still be unable to use a revoked credential without this notification.

    "},{"location":"features/0183-revocation-notification/#roles","title":"Roles","text":"

    There are two parties involved in a Revocation Notification: issuer and holder. The issuer sends the revoke message to the holder.

    "},{"location":"features/0183-revocation-notification/#messages","title":"Messages","text":"

    The revoke message sent by the issuer to the holder is as follows:

    {\n  \"@type\": \"https://didcomm.org/revocation_notification/1.0/revoke\",\n  \"@id\": \"<uuid-revocation-notification>\",\n  \"thread_id\": \"<thread_id>\",\n  \"comment\": \"Some comment\"\n}\n

    Description of fields:

    "},{"location":"features/0183-revocation-notification/#reference","title":"Reference","text":""},{"location":"features/0183-revocation-notification/#drawbacks","title":"Drawbacks","text":"

    If we later added support for more general event subscription and notification message flows, this would be redundant.

    "},{"location":"features/0183-revocation-notification/#rationale-and-alternatives","title":"Rationale and alternatives","text":""},{"location":"features/0183-revocation-notification/#prior-art","title":"Prior art","text":""},{"location":"features/0183-revocation-notification/#unresolved-questions","title":"Unresolved questions","text":""},{"location":"features/0183-revocation-notification/#implementations","title":"Implementations","text":"

    The following lists the implementations (if any) of this RFC. Please do a pull request to add your implementation. If the implementation is open source, include a link to the repo or to the implementation within the repo. Please be consistent in the \"Name\" field so that a mechanical processing of the RFCs can generate a list of all RFCs supported by an Aries implementation.

    Name / Link Implementation Notes"},{"location":"features/0193-coin-flip/","title":"Aries RFC 0193: Coin Flip Protocol 1.0","text":""},{"location":"features/0193-coin-flip/#summary","title":"Summary","text":"

    Specifies a safe way for two parties who are remote from one another and who do not trust one another to pick a random, binary outcome that neither can manipulate.

    "},{"location":"features/0193-coin-flip/#change-log","title":"Change Log","text":""},{"location":"features/0193-coin-flip/#motivation","title":"Motivation","text":"

    To guarantee fairness, it is often important to pick one party in a protocol to make a choice about what to do next. We need a way to do this that more or less mirrors the randomness of flipping a coin.

    "},{"location":"features/0193-coin-flip/#tutorial","title":"Tutorial","text":""},{"location":"features/0193-coin-flip/#name-and-version","title":"Name and Version","text":"

    This defines the coinflip protocol, version 1.x, as identified by the following PIURI:

    https://github.com/hyperledger/aries-rfcs/features/0193-coin-flip/1.0\n
    "},{"location":"features/0193-coin-flip/#roles","title":"Roles","text":"

    There are 2 roles in the protocol: Recorder and Caller. These role names parallel the roles in a physical coin flip: the Recorder performs a process that freezes/records the state of a flipped coin, and the Caller announces the state that they predict, before the state is known. If the caller predicts the state correctly, then the caller chooses what happens next; otherwise, the recorder chooses.

    "},{"location":"features/0193-coin-flip/#algorithm","title":"Algorithm","text":"

    Before describing the messages, let's review the algorithm that will be used. This algorithm is not new; it is a simple commitment scheme described on Wikipedia and implemented in various places. The RFC merely formalizes a simple commitment scheme for DIDComm in such a way that the Caller chooses a side without knowing whether it is the win or the lose side.

    1. Recorder chooses a random UUID. A version 4 UUID is recommended, though any UUID version should be accepted. Note that the UUID is represented in lower case, with hyphens, and without enclosing curly braces. Suppose this value is 01bf7abd-aa80-4389-bf8c-dba0f250bb1b. This UUID is called salt.

    2. Recorder builds two side strings by salting win and lose with the salt -- i.e., win01bf7abd-aa80-4389-bf8c-dba0f250bb1b and lose01bf7abd-aa80-4389-bf8c-dba0f250bb1b. Recorder then computes a SHA256 hash of each side string -- 0C192E004440D8D6D6AF06A7A03A2B182903E9F048D4E7320DF6301DF0C135A5 and C587E50CB48B1B0A3B5136BA9D238B739A6CD599EE2D16994537B75CA595C091 in our example -- and randomly selects one side string as side1 and the other one as side2. Recorder sends them to Caller using the propose message described below. These hashes commit the Recorder to all inputs without revealing which one is win or lose; they are the Recorder's way of posing the Caller the question, \"underside or topside?\"

    3. Caller announces their committed choice -- for instance, side2, using the 'call' message described below. This commits Caller to a particular side of the virtual coin.

    4. Recorder uses a 'reveal' message to reveal the salt. Caller is now able to rebuild both side strings, and both parties discover whether Caller guessed the win side or not. If Caller guessed the win side, Caller won; otherwise Recorder won. Neither party is able to manipulate the outcome: Caller can verify that Recorder proposed two valid options, i.e. one winning side and one losing side, and Recorder does not reveal the salt until after Caller has committed to a choice.
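
The four steps can be condensed into a small sketch (Python; the helper names are illustrative and the message plumbing is omitted):

```python
import hashlib
import random
import uuid

def propose_sides():
    # Steps 1-2: Recorder salts 'win' and 'lose', hashes each side
    # string, and shuffles so Caller cannot tell which is which.
    salt = str(uuid.uuid4())  # lower case, hyphenated, no braces
    hashes = [hashlib.sha256((outcome + salt).encode()).hexdigest().upper()
              for outcome in ('win', 'lose')]
    random.shuffle(hashes)
    return salt, hashes[0], hashes[1]

def reveal_winner(salt, side1, side2, choice):
    # Step 4: both parties recompute the side strings from the
    # revealed salt and discover who won.
    win_hash = hashlib.sha256(('win' + salt).encode()).hexdigest().upper()
    lose_hash = hashlib.sha256(('lose' + salt).encode()).hexdigest().upper()
    # Caller also verifies the proposal was one win side and one lose side.
    assert {side1, side2} == {win_hash, lose_hash}, 'invalid proposal'
    chosen = side1 if choice == 'side1' else side2
    return 'caller' if chosen == win_hash else 'recorder'
```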

    "},{"location":"features/0193-coin-flip/#states","title":"States","text":"

    The algorithm and the corresponding states are pictured in the following diagram:

    Note: This diagram was made in draw.io. To make changes: - upload the drawing HTML from this folder to the [draw.io](https://draw.io) site (Import From... Device), - make changes, - export the picture as PNG and HTML to your local copy of this repo, and - submit a pull request.

    This diagram only depicts the so-called \"happy path\". It is possible to experience problems for various reasons. If either party detects such an event, they should abandon the protocol and emit a problem-report message to the other party. The problem-report message is adopted into this protocol for that purpose. Some values of code that may be used in such messages include:

    "},{"location":"features/0193-coin-flip/#reference","title":"Reference","text":""},{"location":"features/0193-coin-flip/#messages","title":"Messages","text":""},{"location":"features/0193-coin-flip/#propose","title":"propose","text":"

    The protocol begins when Recorder sends to Caller a propose message that embodies Steps 1 and 2 in the algorithm above. It looks like this:

    {\n  \"@type\": \"https://github.com/hyperledger/aries-rfcs/features/0193-coin-flip/1.0/propose\",\n  \"@id\": \"518be002-de8e-456e-b3d5-8fe472477a86\",\n  \"side1\": \"C587E50CB48B1B0A3B5136BA9D238B739A6CD599EE2D16994537B75CA595C091\",\n  \"side2\": \"0C192E004440D8D6D6AF06A7A03A2B182903E9F048D4E7320DF6301DF0C135A5\",\n  \"comment\": \"Make your choice and let's see who goes first.\",\n  \"choice-id\": \"did:sov:SLfEi9esrjzybysFxQZbfq;spec/tictactoe/1.0/who-goes-first\",\n  \"caller-wins\": \"did:example:abc123\",  // Meaning of value defined in superprotocol\n  \"recorder-wins\": \"did:example:xyz456\", // Meaning of value defined in superprotocol\n  // Optional; connects to superprotocol\n  \"~thread\": { \n    \"pthid\": \"a2be4118-4f60-bacd-c9a0-dfb581d6fd96\" \n  }\n}\n

    The @type and @id fields are standard for DIDComm. The side1 and side2 fields convey the data required by Step 2 of the algorithm. The optional comment field follows localization conventions and is irrelevant unless the coin flip intends to invite human participation. The ~thread.pthid decorator is optional but should be common; it identifies the thread of the parent interaction (the superprotocol).

    The choice-id field formally names a choice that a superprotocol has defined, and tells how the string values of the caller-wins and recorder-wins fields will be interpreted. In the example above, the choice is defined in the Tic-Tac-Toe Protocol, which also specifies that caller-wins and recorder-wins will contain DIDs of the parties playing the game. Some other combinations that might make sense include:

    The ~timing.expires_time decorator may be used to impose a time limit on the processing of this message. If used, the protocol must restart if the subsequent call message is not received by this time limit.

    "},{"location":"features/0193-coin-flip/#call","title":"call","text":"

    This message is sent from Caller to Recorder, and embodies Step 3 of the algorithm. It looks like this:

    {\n  \"@type\": \"https://github.com/hyperledger/aries-rfcs/features/0193-coin-flip/1.0/call\",\n  \"@id\": \"1173fe5f-86c9-47d7-911b-b8eac7d5f2ad\",\n  \"choice\": \"side2\",\n  \"comment\": \"I pick side 2.\",\n  \"~thread\": { \n    \"thid\": \"518be002-de8e-456e-b3d5-8fe472477a86\",\n    \"sender_order\": 1 \n  }\n}\n

    Note the use of ~thread.thid and sender_order: 1 to connect this call to the preceding propose.

    The ~timing.expires_time decorator may be used to impose a time limit on the processing of this message. If used, the protocol must restart if the subsequent reveal message is not received by this time limit.

    "},{"location":"features/0193-coin-flip/#reveal","title":"reveal","text":"

    This message is sent from Recorder to Caller, and embodies Step 4 of the algorithm. It looks like this:

    {\n  \"@type\": \"https://github.com/hyperledger/aries-rfcs/features/0193-coin-flip/1.0/reveal\",\n  \"@id\": \"e2a9454d-783d-4663-874e-29ad10776115\",\n  \"salt\": \"01bf7abd-aa80-4389-bf8c-dba0f250bb1b\",\n  \"winner\": \"caller\",\n  \"comment\": \"You win.\",\n  \"~thread\": { \n    \"thid\": \"518be002-de8e-456e-b3d5-8fe472477a86\",\n    \"sender_order\": 1 \n  }\n}\n

    Note the use of ~thread.thid and sender_order: 1 to connect this reveal to the preceding call.

    The Caller should validate this message as follows:

    Having validated the message thus far, Caller determines the winner by checking whether the self-computed hash of win<salt> equals the hash given in the propose message at the position chosen in the call message. If it does, the value of the winner field must be caller; if not, it must be recorder. The winner field must be present in the message, and its value must be correct, for the reveal message to be deemed fully valid. This confirms that both parties understand the outcome, and it prevents a Recorder from asserting a false outcome that is accepted by careless validation logic on the Caller side.
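
That validation logic might look like the following sketch, operating on message dicts shaped like the examples in this protocol (the function name is invented):

```python
import hashlib

def validate_reveal(propose, call, reveal):
    # Recompute the winning side string's hash from the revealed salt,
    # compare it with the hash at the position the Caller chose, and
    # insist that the winner field names the same party.
    win_hash = hashlib.sha256(('win' + reveal['salt']).encode()).hexdigest().upper()
    chosen_hash = propose[call['choice']]  # call['choice'] is 'side1' or 'side2'
    expected = 'caller' if chosen_hash == win_hash else 'recorder'
    if reveal.get('winner') != expected:
        raise ValueError('reveal message asserts the wrong winner')
    return expected
```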

    The ~timing.expires_time decorator may be used to impose a time limit on the processing of this message. If used, the protocol must restart if the subsequent ack or the next message in the superprotocol is not received before the time limit.

    "},{"location":"features/0193-coin-flip/#drawbacks","title":"Drawbacks","text":"

    The protocol is a bit chatty.

    "},{"location":"features/0193-coin-flip/#rationale-and-alternatives","title":"Rationale and alternatives","text":"

    It may be desirable to pick among more than 2 alternatives. This RFC could be extended easily to provide more options than win and lose. The algorithm itself would not change.

    "},{"location":"features/0193-coin-flip/#prior-art","title":"Prior art","text":"

    As mentioned in the introduction, the algorithm used in this protocol is a simple and well known form of cryptographic commitment, and is documented on Wikipedia. It is not new to this RFC.

    "},{"location":"features/0193-coin-flip/#unresolved-questions","title":"Unresolved questions","text":""},{"location":"features/0193-coin-flip/#implementations","title":"Implementations","text":"

    The following lists the implementations (if any) of this RFC. Please do a pull request to add your implementation. If the implementation is open source, include a link to the repo or to the implementation within the repo. Please be consistent in the \"Name\" field so that a mechanical processing of the RFCs can generate a list of all RFCs supported by an Aries implementation.

    Name / Link Implementation Notes"},{"location":"features/0211-route-coordination/","title":"0211: Mediator Coordination Protocol","text":""},{"location":"features/0211-route-coordination/#summary","title":"Summary","text":"

    A protocol to coordinate mediation configuration between a mediating agent and the recipient.

    "},{"location":"features/0211-route-coordination/#application-scope","title":"Application Scope","text":"

    This protocol is needed when using an edge agent and a mediator agent from different vendors. Edge agents and mediator agents from the same vendor may use whatever protocol they wish without sacrificing interoperability.

    "},{"location":"features/0211-route-coordination/#motivation","title":"Motivation","text":"

    Use of the forward message in the Routing Protocol requires an exchange of information. The Recipient must know which endpoint and routing key(s) to share, and the Mediator needs to know which keys should be routed via this relationship.

    "},{"location":"features/0211-route-coordination/#protocol","title":"Protocol","text":"

    Name: coordinate-mediation

    Version: 1.0

    Base URI: https://didcomm.org/coordinate-mediation/1.0/

    "},{"location":"features/0211-route-coordination/#roles","title":"Roles","text":"

    mediator - The agent that will be receiving forward messages on behalf of the recipient. recipient - The agent for whom the forward message payload is intended.

    "},{"location":"features/0211-route-coordination/#flow","title":"Flow","text":"

    A recipient may discover an agent capable of routing using the Feature Discovery Protocol. If the protocol is supported with the mediator role, a recipient may send a mediate-request to initiate a routing relationship.

    First, the recipient sends a mediate-request message to the mediator. If the mediator is willing to route messages, it will respond with a mediate-grant message. The recipient will share the routing information in the grant message with other contacts.

    When a new key is used by the recipient, it must be registered with the mediator to enable route identification. This is done with a keylist-update message.

    The keylist-update and keylist-query methods are used over time to identify and remove keys that are no longer in use by the recipient.
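
The first two steps of the flow can be sketched as message builders (Python; the structure follows the examples below, while the helper names and IDs are invented for illustration):

```python
import uuid

BASE = 'https://didcomm.org/coordinate-mediation/1.0'

def mediate_request():
    # Step 1: recipient asks the mediator for a routing relationship.
    return {'@id': str(uuid.uuid4()), '@type': BASE + '/mediate-request'}

def keylist_update(recipient_key, action='add'):
    # Registers a newly used recipient key with the mediator
    # (or retires one with action='remove').
    assert action in ('add', 'remove')
    return {
        '@id': str(uuid.uuid4()),
        '@type': BASE + '/keylist-update',
        'updates': [{'recipient_key': recipient_key, 'action': action}],
    }
```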

    "},{"location":"features/0211-route-coordination/#reference","title":"Reference","text":"

    Note on terms: Early versions of this protocol included the concept of terms for mediation. This concept has been removed from this version due to a need for further discussion on representing terms in DIDComm in general and lack of use of these terms in current implementations.

    "},{"location":"features/0211-route-coordination/#mediation-request","title":"Mediation Request","text":"

    This message serves as a request from the recipient to the mediator, asking for permission (and the routing information needed) to publish the mediator's endpoint as the recipient's inbound route.

    {\n    \"@id\": \"123456781\",\n    \"@type\": \"<baseuri>/mediate-request\"\n}\n
    "},{"location":"features/0211-route-coordination/#mediation-deny","title":"Mediation Deny","text":"

    This message serves as notification of the mediator denying the recipient's request for mediation.

    {\n    \"@id\": \"123456781\",\n    \"@type\": \"<baseuri>/mediate-deny\"\n}\n
    "},{"location":"features/0211-route-coordination/#mediation-grant","title":"Mediation Grant","text":"

    A route grant message is a signal from the mediator to the recipient that permission is given to distribute the included information as an inbound route.

    {\n    \"@id\": \"123456781\",\n    \"@type\": \"<baseuri>/mediate-grant\",\n    \"endpoint\": \"http://mediators-r-us.com\",\n    \"routing_keys\": [\"did:key:z6Mkfriq1MqLBoPWecGoDLjguo1sB9brj6wT3qZ5BxkKpuP6\"]\n}\n

    endpoint: The endpoint reported to mediation client connections.

    routing_keys: List of keys in intended routing order. Key used as recipient of forward messages.

    "},{"location":"features/0211-route-coordination/#keylist-update","title":"Keylist Update","text":"

    Used to notify the mediator of keys in use by the recipient.

    {\n    \"@id\": \"123456781\",\n    \"@type\": \"<baseuri>/keylist-update\",\n    \"updates\":[\n        {\n            \"recipient_key\": \"did:key:z6MkpTHR8VNsBxYAAWHut2Geadd9jSwuBV8xRoAnwWsdvktH\",\n            \"action\": \"add\"\n        }\n    ]\n}\n

    recipient_key: Key subject of the update.

    action: One of add or remove.

    "},{"location":"features/0211-route-coordination/#keylist-update-response","title":"Keylist Update Response","text":"

    Confirmation of requested keylist updates.

    {\n    \"@id\": \"123456781\",\n    \"@type\": \"<baseuri>/keylist-update-response\",\n    \"updated\": [\n        {\n            \"recipient_key\": \"did:key:z6MkpTHR8VNsBxYAAWHut2Geadd9jSwuBV8xRoAnwWsdvktH\",\n            \"action\": \"\", // \"add\" or \"remove\"\n            \"result\": \"\" // [client_error | server_error | no_change | success]\n        }\n    ]\n}\n

    recipient_key: Key subject of the update.

    action: One of add or remove.

    result: One of client_error, server_error, no_change, success; describes the resulting state of the keylist update.

    "},{"location":"features/0211-route-coordination/#key-list-query","title":"Key List Query","text":"

    Query mediator for a list of keys registered for this connection.

    {\n    \"@id\": \"123456781\",\n    \"@type\": \"<baseuri>/keylist-query\",\n    \"paginate\": {\n        \"limit\": 30,\n        \"offset\": 0\n    }\n}\n

    paginate is optional.

    "},{"location":"features/0211-route-coordination/#key-list","title":"Key List","text":"

    Response to key list query, containing retrieved keys.

    {\n    \"@id\": \"123456781\",\n    \"@type\": \"<baseuri>/keylist\",\n    \"keys\": [\n        {\n            \"recipient_key\": \"did:key:z6MkpTHR8VNsBxYAAWHut2Geadd9jSwuBV8xRoAnwWsdvktH\"\n        }\n    ],\n    \"pagination\": {\n        \"count\": 30,\n        \"offset\": 30,\n        \"remaining\": 100\n    }\n}\n

    pagination is optional.

    "},{"location":"features/0211-route-coordination/#encoding-of-keys","title":"Encoding of keys","text":"

    All keys are encoded using the did:key method as per RFC0360.

    "},{"location":"features/0211-route-coordination/#prior-art","title":"Prior art","text":"

    There was an Indy HIPE that described a similar approach but never made it past the PR process. That HIPE led to a partial implementation of this protocol inside the Aries Cloud Agent Python.

    "},{"location":"features/0211-route-coordination/#future-considerations","title":"Future Considerations","text":""},{"location":"features/0211-route-coordination/#unresolved-questions","title":"Unresolved questions","text":""},{"location":"features/0211-route-coordination/#implementations","title":"Implementations","text":"

    The following lists the implementations (if any) of this RFC. Please do a pull request to add your implementation. If the implementation is open source, include a link to the repo or to the implementation within the repo. Please be consistent in the \"Name\" field so that a mechanical processing of the RFCs can generate a list of all RFCs supported by an Aries implementation.

    Name / Link Implementation Notes Aries Cloud Agent - Python Added in ACA-Py 0.6.0 MISSING test results DIDComm mediator Open source cloud-based mediator."},{"location":"features/0212-pickup/","title":"0212: Pickup Protocol","text":""},{"location":"features/0212-pickup/#summary","title":"Summary","text":"

    A protocol to facilitate a recipient picking up messages held at a message holder (such as a mediator).

    "},{"location":"features/0212-pickup/#motivation","title":"Motivation","text":"

    Messages can be picked up simply by sending a message to the message holder with a return_route decorator specified. This mechanism is implicit, and lacks some desired behavior made possible by more explicit messages. This protocol is the explicit companion to the implicit method of picking up messages.

    "},{"location":"features/0212-pickup/#tutorial","title":"Tutorial","text":""},{"location":"features/0212-pickup/#roles","title":"Roles","text":"

    message_holder - The agent that has messages waiting for pickup by the recipient. recipient - The agent who is picking up messages. batch_sender - A message_holder that is capable of returning messages in a batch. batch_recipient - A recipient that is capable of receiving and processing a batch message.

    "},{"location":"features/0212-pickup/#flow","title":"Flow","text":"

    status can be used to see how many messages are pending. batch retrieval can be executed when many messages ...

    "},{"location":"features/0212-pickup/#reference","title":"Reference","text":""},{"location":"features/0212-pickup/#statusrequest","title":"StatusRequest","text":"

    Sent by the recipient to the message_holder to request a status message.

    {\n    \"@id\": \"123456781\",\n    \"@type\": \"https://didcomm.org/messagepickup/1.0/status-request\"\n}\n

    ### Status

    Status details about pending messages.

    {\n    \"@id\": \"123456781\",\n    \"@type\": \"https://didcomm.org/messagepickup/1.0/status\",\n    \"message_count\": 7,\n    \"duration_waited\": 3600,\n    \"last_added_time\": \"2019-05-01 12:00:00Z\",\n    \"last_delivered_time\": \"2019-05-01 12:00:01Z\",\n    \"last_removed_time\": \"2019-05-01 12:00:01Z\",\n    \"total_size\": 8096\n}\n

    message_count is the only required attribute. The others may be present if offered by the message_holder.
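    The rule that message_count is the only required status attribute can be sketched as a small validation helper. This is a hypothetical illustration only, not part of the protocol; the attribute names are taken from the status message shown above, and rejecting undocumented attributes is a strict choice made just for this sketch.

    ```python
    # Hypothetical validator for a messagepickup/1.0 "status" message.
    # "message_count" is the only required payload attribute; the others
    # may be present if offered by the message_holder.
    REQUIRED = {"@id", "@type", "message_count"}
    OPTIONAL = {"duration_waited", "last_added_time", "last_delivered_time",
                "last_removed_time", "total_size"}

    def validate_status(msg: dict) -> bool:
        """Return True if msg is a plausible status message."""
        if msg.get("@type") != "https://didcomm.org/messagepickup/1.0/status":
            return False
        if not REQUIRED.issubset(msg):
            return False
        # Strict sketch: reject attributes outside the documented set.
        return set(msg) <= REQUIRED | OPTIONAL

    print(validate_status({
        "@id": "123456781",
        "@type": "https://didcomm.org/messagepickup/1.0/status",
        "message_count": 7,
    }))  # True
    ```
    
    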

    "},{"location":"features/0212-pickup/#batch-pickup","title":"Batch Pickup","text":"

    A request to have multiple waiting messages sent inside a batch message.

    {\n    \"@id\": \"123456781\",\n    \"@type\": \"https://didcomm.org/messagepickup/1.0/batch-pickup\",\n    \"batch_size\": 10\n}\n

    ### Batch

    A message that contains multiple waiting messages.

    {\n    \"@id\": \"123456781\",\n    \"@type\": \"https://didcomm.org/messagepickup/1.0/batch\",\n    \"messages~attach\": [\n        {\n            \"@id\" : \"06ca25f6-d3c5-48ac-8eee-1a9e29120c31\",\n            \"message\" : \"{\n                ...\n            }\"\n        },\n\n        {\n            \"@id\" : \"344a51cf-379f-40ab-ab2c-711dab3f53a9a\",\n            \"message\" : \"{\n                ...\n            }\"\n        }\n    ]\n}\n

    "},{"location":"features/0212-pickup/#message-query-with-message-id-list","title":"Message Query With Message Id List","text":"

    A request to read one or more messages, specified by an array of message ids.

    {\n    \"@id\": \"123456781\",\n    \"@type\": \"https://didcomm.org/messagepickup/1.0/list-pickup\",\n    \"message_ids\": [\n        \"06ca25f6-d3c5-48ac-8eee-1a9e29120c31\",\n        \"344a51cf-379f-40ab-ab2c-711dab3f53a9a\"\n    ]\n}\n

    message_ids is the array of ids of the messages to pick up. Any message id in message_ids could be delivered to the recipient in several ways (a push notification or an enveloped message).

    ### Message List Query Response

    A response to a query with a message id list.

    {\n    \"@type\": \"https://didcomm.org/messagepickup/1.0/list-response\",\n    \"messages~attach\": [\n        {\n            \"@id\" : \"06ca25f6-d3c5-48ac-8eee-1a9e29120c31\",\n            \"message\" : \"{\n                ...\n            }\"\n        },\n        {\n            \"@id\" : \"344a51cf-379f-40ab-ab2c-711dab3f53a9a\",\n            \"message\" : \"{\n                ...\n            }\"\n        }\n    ]\n}\n

    "},{"location":"features/0212-pickup/#noop","title":"Noop","text":"

    Used to receive another message implicitly. This message has no expected behavior when received.

    {\n    \"@id\": \"123456781\",\n    \"@type\": \"https://didcomm.org/messagepickup/1.0/noop\"\n}\n

    "},{"location":"features/0212-pickup/#prior-art","title":"Prior art","text":"

    Concepts here borrow heavily from a document written by Andrew Whitehead of BCGov.

    "},{"location":"features/0212-pickup/#unresolved-questions","title":"Unresolved questions","text":""},{"location":"features/0212-pickup/#implementations","title":"Implementations","text":"

    The following lists the implementations (if any) of this RFC. Please do a pull request to add your implementation. If the implementation is open source, include a link to the repo or to the implementation within the repo. Please be consistent in the \"Name\" field so that a mechanical processing of the RFCs can generate a list of all RFCs supported by an Aries implementation.

    Name / Link Implementation Notes"},{"location":"features/0213-transfer-policy/","title":"0213: Transfer Policy Protocol","text":""},{"location":"features/0213-transfer-policy/#summary","title":"Summary","text":"

    A protocol to share and request changes to policy that relates to message transfer.

    "},{"location":"features/0213-transfer-policy/#motivation","title":"Motivation","text":"

    An explicit policy enables clear expectations.

    "},{"location":"features/0213-transfer-policy/#tutorial","title":"Tutorial","text":""},{"location":"features/0213-transfer-policy/#roles","title":"Roles","text":"

    policy_holder - uses the policy to manage messages directed to the recipient. recipient - the agent the policy relates to.

    "},{"location":"features/0213-transfer-policy/#reference","title":"Reference","text":""},{"location":"features/0213-transfer-policy/#policy-publish","title":"Policy Publish","text":"

    Used by the policy holder to share the current policy. This can be sent unsolicited or in response to a policy_share_request.

    {\n    \"@id\": \"123456781\",\n    \"@type\": \"https://didcomm.org/transferpolicy/1.0/policy\",\n    \"queue_max_duration\": 86400,\n    \"message_count_limit\": 1000,\n    \"message_size_limit\": 65536,\n    \"queue_size_limit\": 65536000,\n    \"pickup_allowed\": true,\n    \"delivery_retry_count_limit\": 5,\n    \"delivery_retry_count_seconds\": 86400,\n    \"delivery_retry_backoff\": \"exponential\"\n}\n

    ### Policy Share Request

    Used to ask for a policy message to be sent.

    {\n    \"@id\": \"123456781\",\n    \"@type\": \"https://didcomm.org/transferpolicy/1.0/policy_share_request\"\n}\n

    "},{"location":"features/0213-transfer-policy/#policy-change-request","title":"Policy Change Request","text":"

    Sent to request a policy change. The expected response is a policy message.

    {\n    \"@id\": \"123456781\",\n    \"@type\": \"https://didcomm.org/transferpolicy/1.0/policy_change_request\",\n    \"queue_max_duration\": 86400,\n    \"message_count_limit\": 1000,\n    \"message_size_limit\": 65536,\n    \"queue_size_limit\": 65536000,\n    \"pickup_allowed\": true,\n    \"delivery_retry_count_limit\": 5,\n    \"delivery_retry_count_seconds\": 86400,\n    \"delivery_retry_backoff\": \"exponential\"\n}\n

    Only the attributes that you desire to change need to be included.

    "},{"location":"features/0213-transfer-policy/#prior-art","title":"Prior art","text":"

    Concepts here borrow heavily from a document written by Andrew Whitehead of BCGov.

    "},{"location":"features/0213-transfer-policy/#unresolved-questions","title":"Unresolved questions","text":""},{"location":"features/0213-transfer-policy/#implementations","title":"Implementations","text":"

    The following lists the implementations (if any) of this RFC. Please do a pull request to add your implementation. If the implementation is open source, include a link to the repo or to the implementation within the repo. Please be consistent in the \"Name\" field so that a mechanical processing of the RFCs can generate a list of all RFCs supported by an Aries implementation.

    Name / Link Implementation Notes"},{"location":"features/0214-help-me-discover/","title":"Aries RFC 0214: \"Help Me Discover\" Protocol","text":""},{"location":"features/0214-help-me-discover/#summary","title":"Summary","text":"

    Describes how one party can ask another party for help discovering an unknown person, organization, thing, or chunk of data.

    "},{"location":"features/0214-help-me-discover/#motivation","title":"Motivation","text":"

    Asking a friend to help us discover something is an extremely common human interaction: \"Dad, I need a good mechanic. Do you know one who lives near me?\"

    Similar needs exist between devices in highly automated environments, as when a drone lands in a hangar and queries a dispatcher agent to find maintenance robots that can repair an ailing motor.

    We need a way to perform these workflows with DIDComm.

    "},{"location":"features/0214-help-me-discover/#tutorial","title":"Tutorial","text":""},{"location":"features/0214-help-me-discover/#name-and-version","title":"Name and version","text":"

    This is the \"Help Me Discover\" protocol, version 1.0. It is uniquely identified by the following PIURI:

    https://didcomm.org/help-me-discover/1.0\n
    "},{"location":"features/0214-help-me-discover/#roles-and-states","title":"Roles and States","text":"

    This protocol embodies a standard request-response pattern, and therefore has requester and responder roles. A request message describes what's wanted. A response message conveys whatever knowledge the responder wants to offer to be helpful. Standard state evolution applies:

    "},{"location":"features/0214-help-me-discover/#requirements","title":"Requirements","text":"

    The following requirements do not change this simple framework, but they introduce some complexity into the messages:

    "},{"location":"features/0214-help-me-discover/#messages","title":"Messages","text":""},{"location":"features/0214-help-me-discover/#request","title":"request","text":"

    A simple request message looks like this:

    {\n    \"@type\": \"https://didcomm.org/help-me-discover/1.0/request\",\n    \"@id\": \"a2248fb5-d46e-4898-a781-2f03e5f23964\"\n    // human-readable, localizable, optional\n    \"comment\": \"any ideas?\",\n    // please help me discover match for this\n    \"desired\": { \n        \"all\": [ // equivalent of boolean AND -- please match this list\n            // first criterion: profession must equal \"mechanic\"\n            {\"profession\": \"mechanic\", \"id\": \"prof\"},\n            // second criterion in \"all\" list: any of the following (boolean OR)\n            {\n                \"any\": [\n                    // average rating > 3.5\n                    {\"averageRating\": 3.5, \"op\": \">\", \"id\": \"rating\"},\n                    // list of certifications contains \"ASE\"\n                    {\"certifications\": \"ASE\", \"op\": \"CONTAINS\", \"id\": \"cert\"},\n                    // zipCode must be in this list\n                    {\"zipCode\": [\"12345\", \"12346\"], \"op\": \"IN\", \"id\": \"where\"}\n                ], // end of \"any\" list\n                \"n\": 2, // match at least 2 from the list\n                \"id\": \"2-of-3\"\n            }\n        ],\n        \"id\": \"everything\"\n    }\n}\n

    In plain language, this particular request says:

    Please help me discover someone who's a mechanic, and who possesses at least 2 of the following 3 characteristics: they have an average rating greater than 3.5 stars; they have an ASE certification; they reside in zip code 12345 or 12346.

    The data type of desired is a criterion object. A criterion object can be of type all (boolean AND), type any (boolean OR), or op (a particular attribute is tested against a value with a specific operator). The all and any objects can nest one another arbitrarily deep.

    Parsing these criteria, and performing matches against them, can be done with the SGL library, which has ports for JavaScript and Python. Other ports should be trivial; it's only a couple hundred lines of code. The hardest part of the work is giving the library an object model that contains candidates against which matching can be done.

    Notice that each criterion object has an id property. This is helpful because responses can now refer to the criteria by number to describe what they've matched.
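    As a rough illustration of how all/any/op criteria nest, and how matched criterion ids feed a response, here is a minimal, hypothetical evaluator. This is a sketch only, not the SGL library; the function name and the subset of operators handled are assumptions for this example.

    ```python
    # Hypothetical evaluator for "desired" criterion objects.
    # Returns the ids of the criteria that a candidate satisfies, so a
    # responder could build the "matches" list in its response.
    def matches(criterion, candidate):
        matched = []

        def walk(c):
            if "all" in c:                      # boolean AND over sub-criteria
                oks = [walk(sub) for sub in c["all"]]
                ok = all(oks)
            elif "any" in c:                    # boolean OR: at least n must hold
                oks = [walk(sub) for sub in c["any"]]
                ok = sum(oks) >= c.get("n", 1)
            else:                               # op criterion: attribute vs value
                key = next(k for k in c if k not in ("op", "id"))
                op, want, have = c.get("op", "=="), c[key], candidate.get(key)
                if have is None:
                    ok = False
                elif op == "==":
                    ok = have == want
                elif op == ">":
                    ok = have > want
                elif op == "CONTAINS":
                    ok = want in have
                elif op == "IN":
                    ok = have in want
                else:
                    ok = False                  # unsupported operator in this sketch
            if ok and "id" in c:
                matched.append(c["id"])
            return ok

        walk(criterion)
        return matched

    # The mechanic request from the tutorial, against a candidate like "Alice"
    # (matches everything except the "where" criterion).
    request_desired = {
        "all": [
            {"profession": "mechanic", "id": "prof"},
            {"any": [
                {"averageRating": 3.5, "op": ">", "id": "rating"},
                {"certifications": "ASE", "op": "CONTAINS", "id": "cert"},
                {"zipCode": ["12345", "12346"], "op": "IN", "id": "where"},
            ], "n": 2, "id": "2-of-3"},
        ],
        "id": "everything",
    }
    alice = {"profession": "mechanic", "averageRating": 4.2, "certifications": ["ASE"]}
    print(matches(request_desired, alice))
    ```
    
    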

    See Reference for fancier examples of requests.

    "},{"location":"features/0214-help-me-discover/#response","title":"response","text":"

    A response message looks like this:

    {\n    \"@type\": \"https://didcomm.org/help-me-discover/1.0/response\",\n    \"@id\": \"5f2396b5-d84e-689e-78a1-2fa2248f03e4\"\n    \"~thread\": { \"thid\": \"a2248fb5-d46e-4898-a781-2f03e5f23964\" }\n    // human-readable, localizable, optional\n    \"comment\": \"here's the best I've got\", \n    \"candidates\": [\n        {\n            \"id\": \"Alice\",\n            \"@type\": \"person\",\n            \"matches\": [\"prof\",\"rating\",\"cert\",\"2-of-3\",\"everything\"]\n        },\n        {\n            \"id\": \"Bob\",\n            \"@type\": \"drone\",\n            \"matches\": [\"prof\",\"cert\",\"where\",\"2-of-3\",\"everything\"]\n        },\n        {\n            \"id\": \"Carol\",\n            \"matches\": [\"rating\",\"cert\",\"where\"]\n        }\n    ]\n}\n

    In plain language, this response says:

    I found 3 candidates for you. One that I'll call \"Alice\" matches everything except your where criterion. One called \"Bob\" matches everything except your rating criterion. One called \"Carol\" matches your rating, cert, and where criteria, but because she didn't match prof, she wasn't an overall match.

    "},{"location":"features/0214-help-me-discover/#using-a-help-me-discover-response-in-subsequent-interactions","title":"Using a \"Help me discover\" response in subsequent interactions","text":"

    A candidate in a response message like the one shown above can be referenced in subsequent interactions by using the RFC 0xxx: Linkable DIDComm Message Paths mechanism. For example, if Fred wanted to ask for an introduction to Bob after engaging in the sample request-response sequence shown above, he could send a request message in the Introduce Protocol, where to (the party to whom he'd like to be introduced) included a discovered property that referenced the candidate with id equal to \"Bob\":

    {\n  \"@type\": \"https://didcomm.org/introduce/1.0/request\",\n  \"@id\": \"df3b699d-3aa9-4fd0-bb67-49594da545bd\",\n  \"to\": {\n    \"discovered\": \"didcomm:///5f2396b5-d84e-689e-78a1-2fa2248f03e4/.candidates%7B.id+%3D%3D%3D+%22Bob%22%7D\"\n  }\n}\n
    "},{"location":"features/0214-help-me-discover/#accuracy-trustworthiness-and-best-effort","title":"Accuracy, Trustworthiness, and Best Effort","text":"

    As with these types of interactions in \"real life\", the \"help me discover\" protocol cannot make any guarantees about the suitability of the answers it generates. The responder could be malicious, misinformed, or simply lazy. The contract for the protocol is:

    The requester must verify results independently, if their need for trust is high.

    "},{"location":"features/0214-help-me-discover/#privacy-considerations","title":"Privacy Considerations","text":"

    Just because Alice knows that Bob is a political dissident who uses a particular handle in online forms does not mean Alice should divulge that information to anybody who engages in the \"Help Me Discover\" protocol with her. When matching criteria focus on people, Alice should be careful and use good judgment about how much she honors a particular request for discovery. In particular, if Alice possesses data about Bob that was shared with her in a previous Present Proof Protocol, the terms of sharing may not permit her to divulge what she knows about Bob to an arbitrary third party. See the Data Consent Receipt RFC.

    These issues probably do not apply when the thing being discovered is not a private individual.

    "},{"location":"features/0214-help-me-discover/#reference","title":"Reference","text":""},{"location":"features/0214-help-me-discover/#discover-someone-who-can-prove","title":"Discover someone who can prove","text":"

    A request message can ask for someone that is capable of proving using verifiable credentials, as per RFC 0037:

    {\n    \"@type\": \"https://didcomm.org/help-me-discover/1.0/request\",\n    \"@id\": \"248fb52a-4898-a781-d46e-e5f239642f03\"\n    \"desired\": { \n        // either subjectRole or subjectDid:\n        //   - subjectRole has value of role in protocol\n        //   - subjectDid has value of a DID (useful in N-Wise settings)\n        \"verb\": \"prove\", \n        \"subjectRole\": \"introducer\", \n        \"car.engine.rating\": \"4\", \n        \"op\": \">\", \n        \"id\": \"engineRating\"\n    }\n}\n

    In plain language, this particular request says:

    Please help me discover someone who can act as introducer in a protocol, and can prove that a car's engine rating is greater than 4.

    Another example might be:

    {\n    \"@id\": \"a2248fb5-d46e-4898-a781-2f03e5f23964\",\n    \"@type\": \"https://didcomm.org/help-me-discover/1.0/request\",\n    \"comment\": \"blood glucose\",\n    \"desired\": {\n        \"all\": [\n            {\n                \"id\": \"prof\",\n                \"profession\": \"medical-lab\"\n            },\n            {\n                \"id\": \"glucose\",\n                \"provides\": {\n                    \"from\": \"bloodtests\",\n                    \"just\": [\n                        \"glucose\"\n                    ],\n                    \"subject\": \"did:peer:introducer\"\n                }\n            }\n        ],\n        \"id\": \"everything\"\n    }\n}\n

    This says:

    Please help me discover someone that has profession = \"medical-lab\" and can provide measurements of the introducer's blood-glucose levels."},{"location":"features/0214-help-me-discover/#drawbacks","title":"Drawbacks","text":"

    If we are not careful, this protocol could be used to discover attributes about third parties in a way that subverts privacy. See Privacy Considerations.

    "},{"location":"features/0214-help-me-discover/#unresolved-questions","title":"Unresolved questions","text":""},{"location":"features/0214-help-me-discover/#implementations","title":"Implementations","text":"

    The following lists the implementations (if any) of this RFC. Please do a pull request to add your implementation. If the implementation is open source, include a link to the repo or to the implementation within the repo. Please be consistent in the \"Name\" field so that a mechanical processing of the RFCs can generate a list of all RFCs supported by an Aries implementation.

    Name / Link Implementation Notes"},{"location":"features/0234-signature-decorator/","title":"Aries RFC 0234: Signature Decorator","text":""},{"location":"features/0234-signature-decorator/#rfc-archived","title":"RFC ARCHIVED","text":"

    DO NOT USE THIS RFC.

    Use the signed form of the attachment decorator (RFC 0017) instead of this decorator.

    "},{"location":"features/0234-signature-decorator/#summary","title":"Summary","text":"

    The ~sig field-level decorator enables non-repudiation by allowing an Agent to add a digital signature over a portion of a DIDComm message.

    "},{"location":"features/0234-signature-decorator/#motivation","title":"Motivation","text":"

    While today we support a standard way of authenticating messages in a repudiable way, we also see the need for non-repudiable digital signatures in use cases where strong authenticity is necessary, such as signing a bank loan. There are additional benefits in being able to prove the provenance of a piece of data with a digital signature. These are all use cases that would benefit from a standardized format for non-repudiable digital signatures.

    This RFC outlines a field-level decorator that can be used to provide non-repudiable digital signatures in DIDComm messages. It also highlights a standard way to encode data such that it can be deterministically verified later.

    "},{"location":"features/0234-signature-decorator/#tutorial","title":"Tutorial","text":"

    This RFC introduces a new field-level decorator named ~sig and maintains a registry of standard Signature Schemes applicable with it.

    The ~sig field decorator may be used with any field of data. Its value MUST match the json object format of the chosen signature scheme.

    We'll use the following message as an example:

    {\n    \"@type\": \"https://didcomm.org/example/1.0/test_message\",\n    \"@id\": \"df3b699d-3aa9-4fd0-bb67-49594da545bd\",\n    \"msg\": {\n        \"text\": \"Hello World!\",\n        \"speaker\": \"Bob\"\n    }\n}\n

    Digitally signing the msg object with the ed25519Sha512_single scheme results in a transformation of the original message to this:

    {\n    \"@type\": \"https://didcomm.org/example/1.0/test_message\",\n    \"@id\": \"df3b699d-3aa9-4fd0-bb67-49594da545bd\",\n    \"msg~sig\": {\n      \"@type\": \"https://didcomm.org/signature/1.0/ed25519Sha512_single\",\n      \"sig_data\": \"base64URL(64bit_integer_from_unix_epoch|msg_object)\",\n      \"signature\": \"base64URL(digital signature function output)\",\n      \"signer\": \"base64URL(inlined_signing_verkey)\"\n    }\n}\n

    The original msg object has been replaced with its ~sig-decorated counterpart in order to prevent message bloat.

    When an Agent receives a DIDComm message with a field decorated with ~sig, it runs the appropriate signature scheme algorithm and restores the DIDComm message's structure back to its original form.

    "},{"location":"features/0234-signature-decorator/#reference","title":"Reference","text":""},{"location":"features/0234-signature-decorator/#applying-the-digital-signature","title":"Applying the digital signature","text":"

    In general, the steps to construct a ~sig are:

    1. Choose a signature scheme. This determines the ~sig decorator's message type URI (the @type seen above) and the signature algorithm.
    2. Serialize the JSON object to be authenticated to a sequence of bytes (msg in the example above). This will be the plaintext input to the signature scheme.
    3. Construct the contents of the new ~sig object according to the chosen signature scheme with the plaintext as input.
    4. Replace the original object (msg in the example above) with the new ~sig object. The new object's label MUST be equal to the label of the original object appended with \"~sig\".
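    Steps 2 and 3 above can be sketched for the sig_data field of the ed25519Sha512_single scheme: an 8-byte Unix timestamp concatenated with the serialized object, base64URL-encoded. The big-endian byte order and the JSON serialization shown here are assumptions for illustration, and the actual ed25519 signing step (which requires a cryptography library) is omitted.

    ```python
    import base64
    import json
    import struct
    import time

    def make_sig_data(msg, timestamp=None):
        """Build sig_data: base64URL(64-bit Unix-epoch integer | serialized msg).

        Assumes big-endian packing of the timestamp and plain json.dumps
        serialization; the real signature input is whatever bytes were signed.
        """
        ts = int(time.time()) if timestamp is None else timestamp
        plaintext = struct.pack(">Q", ts) + json.dumps(msg).encode("utf-8")
        return base64.urlsafe_b64encode(plaintext).decode("ascii")

    sig_data = make_sig_data({"text": "Hello World!", "speaker": "Bob"})
    # The ed25519 signature would then be computed over the decoded sig_data
    # bytes and placed in the "signature" field (not shown here).
    ```
    
    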
    "},{"location":"features/0234-signature-decorator/#verifying-the-digital-signature","title":"Verifying the digital signature","text":"

    The outcome of a successful signature verification is the replacement of the ~sig-decorated object with its original representation:

    1. Select the signature scheme according to the message type URI (ed25519Sha512_single in the example above).
    2. Run the signature scheme's verification algorithm with the ~sig-decorated object as input.
    3. The software MUST cease further processing of the DIDComm message if the verification algorithm fails.
    4. Replace the ~sig-decorated object with the output of the scheme's verification algorithm.

    The end result MUST be semantically identical to the original DIDComm message before application of the signature scheme (e.g. the original example message above).

    "},{"location":"features/0234-signature-decorator/#additional-considerations","title":"Additional considerations","text":"

    The data to authenticate is base64URL-encoded and then embedded as-is to prevent false-negative signature verifications, which could otherwise occur because JSON data has no easy way to canonicalize its structure. By including the exact data in base64URL encoding, the receiver can be certain that the data signed is the same as what was received.

    "},{"location":"features/0234-signature-decorator/#signature-schemes","title":"Signature Schemes","text":"

    This decorator should support a specific set of signature schemes while remaining extensible. The list of currently supported schemes is outlined below.

    Signature Scheme Scheme Spec ed25519Sha512_single spec

    TODO provide template in this RFC directory.

    To add a new signature scheme to this registry, follow the template provided to detail the new scheme as well as provide some test cases to produce and verify the signature scheme is working.

    "},{"location":"features/0234-signature-decorator/#drawbacks","title":"Drawbacks","text":"

    Since digital signatures are non-repudiable, it's worth noting the privacy implications of using this functionality. In the event that a signer has chosen to share a message using a non-repudiable signature, they forgo the ability to prevent the verifier from sharing this signature on to other parties. This has potentially negative implications with regards to consent and privacy.

    Therefore, this functionality should only be used if non-repudiable digital signatures are absolutely necessary.

    "},{"location":"features/0234-signature-decorator/#rationale-and-alternatives","title":"Rationale and alternatives","text":"

    JSON Web Signatures are an alternative to this specification in widespread use. We diverged from this specification for the following reasons:

    "},{"location":"features/0234-signature-decorator/#prior-art","title":"Prior art","text":"

    IETF RFC 7515 (JSON Web Signatures)

    "},{"location":"features/0234-signature-decorator/#unresolved-questions","title":"Unresolved questions","text":"

    Does there need to be a signature suite agreement protocol similar to TLS cipher suites? - No; rather, the receiver of the message can send an error response if they're unable to validate the signature.

    How should multiple signatures be represented? - While not supported in this version, one solution would be to support [digital_sig1, digital_sig2] for signature and [verkey1, verkey2] for signer.

    "},{"location":"features/0234-signature-decorator/#implementations","title":"Implementations","text":"

    The following lists the implementations (if any) of this RFC. Please do a pull request to add your implementation. If the implementation is open source, include a link to the repo or to the implementation within the repo. Please be consistent in the \"Name\" field so that a mechanical processing of the RFCs can generate a list of all RFCs supported by an Aries implementation.

    Name / Link Implementation Notes Aries Static Agent - Python ed25519sha256_single Aries Framework - .NET ed25519sha256_single Aries Framework - Go ed25519sha256_single"},{"location":"features/0234-signature-decorator/ed25519sha256_single/","title":"The ed25519sha256_single signature scheme","text":""},{"location":"features/0234-signature-decorator/ed25519sha256_single/#tutorial","title":"Tutorial","text":""},{"location":"features/0234-signature-decorator/ed25519sha256_single/#application","title":"Application","text":"

    This scheme computes a single ed25519 digital signature over the input message. Its output is a ~sig object with the following contents:

    {\n    \"@type\": \"https://didcomm.org/signature/1.0/ed25519Sha512_single\",\n    \"sig_data\": \"base64URL(64bit_integer_from_unix_epoch|msg)\",\n    \"signature\": \"base64URL(ed25519 signature)\",\n    \"signer\": \"base64URL(inlined_ed25519_signing_verkey)\"\n}\n
    "},{"location":"features/0234-signature-decorator/ed25519sha256_single/#verification","title":"Verification","text":"

    The successful outcome of this scheme is the plaintext.

    1. base64URL-decode signer
    2. base64URL-decode signature
    3. Verify the ed25519 signature over sig_data with the key provided in signer
    4. Further processing is halted if verification fails and an \"authentication failure\" error is returned
    5. base64URL-decode the sig_data
    6. Strip out the first 8 bytes
    7. Return the remaining bytes
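    Steps 5 through 7 above (recovering the plaintext once the ed25519 signature itself has verified) can be sketched as follows. The padding restoration is an assumption for this sketch, since base64URL strings often omit trailing padding; the signature check in steps 1-4 requires a cryptography library and is not shown.

    ```python
    import base64

    def recover_plaintext(sig_data):
        """Decode sig_data and strip the leading 8-byte Unix-epoch prefix."""
        padded = sig_data + "=" * (-len(sig_data) % 4)  # restore stripped padding
        raw = base64.urlsafe_b64decode(padded)
        return raw[8:]  # drop the 64-bit timestamp, return the remaining bytes
    ```
    
    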
    "},{"location":"features/0249-rich-schema-contexts/","title":"Aries RFC 0249: Aries Rich Schema Contexts","text":""},{"location":"features/0249-rich-schema-contexts/#summary","title":"Summary","text":"

    Every rich schema object may have an associated @context. Contexts are JSON or JSON-LD objects. They are the standard mechanism for defining shared semantic meaning among rich schema objects.

    Context objects are processed in a generic way defined in Rich Schema Objects Common.

    "},{"location":"features/0249-rich-schema-contexts/#motivation","title":"Motivation","text":"

    @context is JSON-LD\u2019s namespacing mechanism. Contexts allow rich schema objects to use a common vocabulary when referring to common attributes, i.e. they provide an explicit shared semantic meaning.

    "},{"location":"features/0249-rich-schema-contexts/#tutorial","title":"Tutorial","text":""},{"location":"features/0249-rich-schema-contexts/#intro-to-context","title":"Intro to @context","text":"

    @context is a JSON-LD construct that allows for namespacing and the establishment of a common vocabulary.

    A Context object is immutable, so it is not possible to update an existing Context. If the Context needs to be evolved, a new Context with a new version or name needs to be created.

    Context object may be stored in either JSON or JSON-LD format.

    "},{"location":"features/0249-rich-schema-contexts/#example-context","title":"Example Context","text":"

    An example of the content field of a Context object:

    {\n    \"@context\": [\n        \"did:sov:UVj5w8DRzcmPVDpUMr4AZhJ\",\n        \"did:sov:JjmmTqGfgvCBnnPJRas6f8xT\",\n        \"did:sov:3FtTB4kzSyApkyJ6hEEtxNH4\",\n        {\n            \"dct\": \"http://purl.org/dc/terms/\",\n            \"rdf\": \"http://www.w3.org/1999/02/22-rdf-syntax-ns#\",\n            \"rdfs\": \"http://www.w3.org/2000/01/rdf-schema#\",\n            \"Driver\": \"did:sov:2mCyzXhmGANoVg5TnsEyfV8\",\n            \"DriverLicense\": \"did:sov:36PCT3Vj576gfSXmesDdAasc\",\n            \"CategoryOfVehicles\": \"DriverLicense:CategoryOfVehicles\"\n        }\n    ]\n}\n

    "},{"location":"features/0249-rich-schema-contexts/#data-registry-storage","title":"Data Registry Storage","text":"

    Aries will provide a means for writing contexts to and reading contexts from a verifiable data registry (such as a distributed ledger).

    @context will be written to the ledger in a generic way defined in Rich Schema Objects Common.

    "},{"location":"features/0249-rich-schema-contexts/#aries-data-registry-interface","title":"Aries Data Registry Interface","text":"

    Aries Data Registry Interface methods for adding and retrieving @context from the ledger comply with the generic approach described in Rich Schema Objects Common.

    This means the following methods can be used: - write_rich_schema_object - read_rich_schema_object_by_id - read_rich_schema_object_by_metadata

    "},{"location":"features/0249-rich-schema-contexts/#reference","title":"Reference","text":"

    More information on the Verifiable Credential data model use of @context may be found here.

    More information on @context from the JSON-LD specification may be found here and here.

    "},{"location":"features/0249-rich-schema-contexts/#drawbacks","title":"Drawbacks","text":"

    Requiring a @context for each rich schema object introduces more complexity.

    "},{"location":"features/0249-rich-schema-contexts/#rationale-and-alternatives","title":"Rationale and alternatives","text":""},{"location":"features/0249-rich-schema-contexts/#unresolved-questions","title":"Unresolved questions","text":""},{"location":"features/0249-rich-schema-contexts/#implementations","title":"Implementations","text":"

    The following lists the implementations (if any) of this RFC. Please do a pull request to add your implementation. If the implementation is open source, include a link to the repo or to the implementation within the repo. Please be consistent in the \"Name\" field so that a mechanical processing of the RFCs can generate a list of all RFCs supported by an Aries implementation.

    Name / Link Implementation Notes"},{"location":"features/0281-rich-schemas/","title":"Aries RFC 0281: Aries Rich Schemas","text":""},{"location":"features/0281-rich-schemas/#summary","title":"Summary","text":"

    The proposed schemas are JSON-LD objects. This allows credentials issued according to the proposed schemas to have a clear semantic meaning, so that the verifier can know what the issuer intended. They support explicitly typed properties and semantic inheritance. A schema may include other schemas as property types, or extend another schema with additional properties. For example a schema for \"employee\" may inherit from the schema for \"person.\"

    Schema objects are processed in a generic way defined in Rich Schema Objects Common.

    "},{"location":"features/0281-rich-schemas/#motivation","title":"Motivation","text":"

    Many organizations, such as HL7, which publishes the FHIR standard for health care data exchange, have invested time and effort into creating data schemas that are already in use. Many schemas are shared publicly via web sites such as https://schema.org/, whose mission is, \"to create, maintain, and promote schemas for structured data on the Internet, on web pages, in email messages, and beyond.\"

    These schemas ought to be usable as the basis for verifiable credentials.

    Although verifiable credentials are the primary use case for schemas considered in this document, other future uses may include defining message formats or objects in a verifiable data registry.

    "},{"location":"features/0281-rich-schemas/#interoperability","title":"Interoperability","text":"

    Existing applications make use of schemas to organize and semantically describe data. Using those same schemas within Aries verifiable credentials provides a means of connecting existing applications with this emerging technology. This allows for an easy migration path for those applications to incorporate verifiable credentials and join the Aries ecosystem.

    Aries is only one of several verifiable credentials ecosystems. Using schemas which may be shared among these ecosystems allows for semantic interoperability between them, and enables a path toward true multi-lateral credential exchange.

    Using existing schemas, created in accordance with widely-supported common standards, allows Aries verifiable credentials to benefit from the decades of effort and thought that went into those standards and to work with other applications which also adhere to those standards.

    "},{"location":"features/0281-rich-schemas/#re-use","title":"Re-use","text":"

    Rich schemas can be re-used within the Aries credential ecosystem. Because these schemas are hierarchical and composable, even unrelated schemas may share partial semantic meaning due to the commonality of sub-schemas within both. For example, a driver license schema and an employee record are not related schemas, but both may include a person schema.

    A schema that was created for a particular use-case and accepted within a trust framework may be re-used within other trust frameworks for their use-cases. The visibility of these schemas across trust boundaries increases the ability of these schemas to be examined in greater detail and evaluated for fitness of purpose. Over time the schemas will gain reputation.

    "},{"location":"features/0281-rich-schemas/#extensibility","title":"Extensibility","text":"

    Applications that are built on top of the Aries frameworks can use these schemas as a basis for complex data objects for use within the application, or exposed through external APIs.

    "},{"location":"features/0281-rich-schemas/#immutability","title":"Immutability","text":"

    One important aspect of relying on schemas to provide the semantic meaning of data within a verifiable credential, is that the meaning of the credential properties should not change. It is not enough for entities within the ecosystem to have a shared understanding of the data in the present, it may be necessary for them to have an understanding of the credential at the time it was issued and signed. This depends on the trust framework within which the credential was issued and the needs of the parties involved. A verifiable data registry can provide immutable storage of schemas.

    "},{"location":"features/0281-rich-schemas/#tutorial","title":"Tutorial","text":""},{"location":"features/0281-rich-schemas/#intro-to-schemas","title":"Intro to Schemas","text":"

    Schema objects are used to enforce structure and semantic meaning on a set of data. They allow Issuers to assert, and Holders and Verifiers to understand, a particular semantic meaning for the properties in a credential.

    Rich schemas are JSON-LD objects. Examples of the type of schemas supported here may be found at schema.org. At this time we do not support other schema representations such as RDFS, JSON Schema, XML Schema, OWL, etc.

    "},{"location":"features/0281-rich-schemas/#properties","title":"Properties","text":"

    Rich Schema properties follow the generic template defined in Rich Schema Common.

    Rich Schema's content field is a JSON-LD-serialized string with the following fields:

    "},{"location":"features/0281-rich-schemas/#id","title":"@id","text":"

    A rich schema must have an @id property. The value of this property must be equal to the id field which is a DID (see Identification of Rich Schema Objects).

    A rich schema may refer to the @id of another rich schema to define a parent schema. A property of a rich schema may use the @id of another rich schema as the value of its @type or @id property.

    A mapping object will contain the @id of the rich schema being mapped.

    A presentation definition will contain the @id of any schemas a holder may use to present proofs to a verifier.

    "},{"location":"features/0281-rich-schemas/#type","title":"@type","text":"

    A rich schema must have a @type property. The value of this property must be (or map to, via a context object) a URI.

    "},{"location":"features/0281-rich-schemas/#context","title":"@context","text":"

    A rich schema may have a @context property. If present, the value of this property must be a context object or a URI which can be dereferenced to obtain a context object.

    "},{"location":"features/0281-rich-schemas/#use-in-verifiable-credentials","title":"Use in Verifiable Credentials","text":"

    These schemas will be used in conjunction with the JSON-LD representation of the verifiable credentials data model to specify which properties may be included as part of the verifiable credential's credentialSubject property, as well as the types of the property values.

    The @id of a rich schema may be used as an additional value of the type property of a verifiable credential. Because the type values of a verifiable credential are not required to be dereferenced, an additional reference to the rich schema should be made through the credentialSchema property so that the rich schema can support assertion of the structure and semantic meaning of the claims in the credential. This may be done as a direct reference to the rich schema @id, or via another rich schema object which references the rich schema @id, such as a credential definition (as would be the case for anonymous credentials), as discussed in the mapping section of the rich schema overview RFC.
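
    For illustration only, a credential following this pattern might look like the sketch below; the DID value is a placeholder, and the credentialSchema type label \"RichSchema\" is an assumption, not something this RFC defines.

```python
# Sketch (with assumptions): a verifiable credential that names a rich
# schema both as an additional `type` value and, dereferenceably, via
# `credentialSchema`. The DID is a placeholder and the "RichSchema"
# type label is assumed for illustration.
credential = {
    "@context": ["https://www.w3.org/2018/credentials/v1"],
    "type": [
        "VerifiableCredential",
        "did:sov:2f9F8ZmxuvDqRiqqY29x6dx9oU4qwFTkPbDpWtwGbdUsrCD",
    ],
    "credentialSchema": {
        "id": "did:sov:2f9F8ZmxuvDqRiqqY29x6dx9oU4qwFTkPbDpWtwGbdUsrCD",
        "type": "RichSchema",  # assumed label, not defined by this RFC
    },
    "credentialSubject": {"recipeIngredient": "sugar"},
}
```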

    "},{"location":"features/0281-rich-schemas/#data-registry-storage","title":"Data Registry Storage","text":"

    Aries will provide a means for writing schema objects to and reading schema objects from a verifiable data registry (such as a distributed ledger).

    As discussed previously, the ability to specify the exact schema that was used to issue a verifiable credential, and the assurance that the meaning of that schema has not changed, may be critical for the trust framework. Verifiable data registries which provide immutability guarantees provide this assurance. Some alternative storage mechanisms do not. Hashlinks, which may be used to verify the hash of web-based schemas, are one example. These can be used to inform a verifier that a schema has changed, but do not provide access to the original version of the schema in the event the original schema has been updated.

    "},{"location":"features/0281-rich-schemas/#example-schema","title":"Example Schema","text":"

    An example of the content field of a Rich Schema object:

       \"@id\": \"did:sov:2f9F8ZmxuvDqRiqqY29x6dx9oU4qwFTkPbDpWtwGbdUsrCD\",\n   \"@type\": \"rdfs:Class\",\n   \"@context\": {\n    \"schema\": \"http://schema.org/\",\n    \"bibo\": \"http://purl.org/ontology/bibo/\",\n    \"dc\": \"http://purl.org/dc/elements/1.1/\",\n    \"dcat\": \"http://www.w3.org/ns/dcat#\",\n    \"dct\": \"http://purl.org/dc/terms/\",\n    \"dcterms\": \"http://purl.org/dc/terms/\",\n    \"dctype\": \"http://purl.org/dc/dcmitype/\",\n    \"eli\": \"http://data.europa.eu/eli/ontology#\",\n    \"foaf\": \"http://xmlns.com/foaf/0.1/\",\n    \"owl\": \"http://www.w3.org/2002/07/owl#\",\n    \"rdf\": \"http://www.w3.org/1999/02/22-rdf-syntax-ns#\",\n    \"rdfa\": \"http://www.w3.org/ns/rdfa#\",\n    \"rdfs\": \"http://www.w3.org/2000/01/rdf-schema#\",\n    \"schema\": \"http://schema.org/\",\n    \"skos\": \"http://www.w3.org/2004/02/skos/core#\",\n    \"snomed\": \"http://purl.bioontology.org/ontology/SNOMEDCT/\",\n    \"void\": \"http://rdfs.org/ns/void#\",\n    \"xsd\": \"http://www.w3.org/2001/XMLSchema#\",\n    \"xsd1\": \"hhttp://www.w3.org/2001/XMLSchema#\"\n  },\n  \"@graph\": [\n    {\n      \"@id\": \"schema:recipeIngredient\",\n      \"@type\": \"rdf:Property\",\n      \"rdfs:comment\": \"A single ingredient used in the recipe, e.g. sugar, flour or garlic.\",\n      \"rdfs:label\": \"recipeIngredient\",\n      \"rdfs:subPropertyOf\": {\n        \"@id\": \"schema:supply\"\n      },\n      \"schema:domainIncludes\": {\n        \"@id\": \"schema:Recipe\"\n      },\n      \"schema:rangeIncludes\": {\n        \"@id\": \"schema:Text\"\n      }\n    },\n    {\n      \"@id\": \"schema:ingredients\",\n      \"schema:supersededBy\": {\n        \"@id\": \"schema:recipeIngredient\"\n      }\n    }\n  ]\n
    recipeIngredient schema from schema.org.

    "},{"location":"features/0281-rich-schemas/#data-registry-storage_1","title":"Data Registry Storage","text":"

    Aries will provide a means for writing schemas to and reading schemas from a verifiable data registry (such as a distributed ledger).

    A Schema will be written to the ledger in a generic way defined in Rich Schema Objects Common.

    "},{"location":"features/0281-rich-schemas/#aries-data-registry-interface","title":"Aries Data Registry Interface","text":"

    Aries Data Registry Interface methods for adding and retrieving a Schema from the ledger comply with the generic approach described in Rich Schema Objects Common.

    This means the following methods can be used: - write_rich_schema_object - read_rich_schema_object_by_id - read_rich_schema_object_by_metadata

    "},{"location":"features/0281-rich-schemas/#reference","title":"Reference","text":"

    More information on the Verifiable Credential data model use of schemas may be found here.

    "},{"location":"features/0281-rich-schemas/#drawbacks","title":"Drawbacks","text":"

    Rich schema objects introduce more complexity.

    "},{"location":"features/0281-rich-schemas/#rationale-and-alternatives","title":"Rationale and alternatives","text":""},{"location":"features/0281-rich-schemas/#unresolved-questions","title":"Unresolved questions","text":""},{"location":"features/0281-rich-schemas/#implementations","title":"Implementations","text":"

    The following lists the implementations (if any) of this RFC. Please do a pull request to add your implementation. If the implementation is open source, include a link to the repo or to the implementation within the repo. Please be consistent in the \"Name\" field so that a mechanical processing of the RFCs can generate a list of all RFCs supported by an Aries implementation.

    Name / Link Implementation Notes"},{"location":"features/0303-v01-credential-exchange/","title":"Aries RFC 0303: V0.1 Credential Exchange (Deprecated)","text":""},{"location":"features/0303-v01-credential-exchange/#summary","title":"Summary","text":"

    The 0.1 version of the ZKP Credential Exchange protocol (based on Hyperledger Indy) covering both issuing credentials and presenting proof. These messages were implemented to enable demonstrating credential exchange amongst interoperating agents for IIW 28 in Mountain View, CA. The use of these message types continues today (November 2019), so they are being added as an RFC for historical completeness and to enable reference in Aries Interop Profile.

    "},{"location":"features/0303-v01-credential-exchange/#motivation","title":"Motivation","text":"

    Enables the exchange of Indy ZKP-based verifiable credentials - issuing verifiable credentials and proving claims from issued verifiable credentials.

    "},{"location":"features/0303-v01-credential-exchange/#tutorial","title":"Tutorial","text":"

    This RFC defines minimal credential exchange protocols. For more details of a complete credential exchange protocol, see the Issue Credentials and Present Proof RFCs.

    "},{"location":"features/0303-v01-credential-exchange/#issuing-a-credential","title":"Issuing a credential:","text":"
    1. The issuer sends the holder a Credential Offer
    2. The holder responds with a Credential Request to the issuer
    3. The issuer sends a Credential Issue to the holder, issuing the credential
    "},{"location":"features/0303-v01-credential-exchange/#presenting-a-proof","title":"Presenting a proof:","text":"
    1. The verifier sends the holder/prover a Proof Request
    2. The holder/prover constructs a proof to satisfy the proof requests and sends the proof to the verifier
    "},{"location":"features/0303-v01-credential-exchange/#reference","title":"Reference","text":"

    The following messages are supported in this credential exchange protocol.

    "},{"location":"features/0303-v01-credential-exchange/#issue-credential-protocol","title":"Issue Credential Protocol","text":"

    The process begins with a credential-offer. The thread decorator is implied for all messages except the first.

    The <libindy json string> element is used in most messages and is the string returned from libindy for the given purpose - an escaped JSON string. The agent must process the string if there is a need to extract a data element from the JSON - for example, to get the cred-def-id from the credential-offer.
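
    For illustration, extracting the cred-def-id from a credential offer might look like the Python sketch below. The outer message shape follows the Credential Offer defined in this protocol; the offer's internal field name cred_def_id is assumed from libindy's credential-offer format.

```python
import json

# The offer_json value is an escaped JSON string produced by libindy;
# the agent must parse it to reach individual fields.
# The internal field name "cred_def_id" is assumed from libindy's
# credential-offer format.
def cred_def_id_from_offer(message: dict) -> str:
    offer = json.loads(message["offer_json"])  # escaped string -> dict
    return offer["cred_def_id"]

offer_msg = {
    "@type": "https://didcomm.org/credential-issuance/0.1/credential-offer",
    "@id": "placeholder-offer-id",
    "comment": "some comment",
    "offer_json": json.dumps({
        "schema_id": "4cU:2:simple:1.0",        # placeholder values
        "cred_def_id": "4cU:3:CL:127:default",
        "nonce": "400158458326564244715163",
    }),
}
```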

    Acknowledgments and Errors should be signalled via adopting the standard ack and problem-report message types, respectively.

    "},{"location":"features/0303-v01-credential-exchange/#credential-offer","title":"Credential Offer","text":"
    {\n    \"@type\": \"https://didcomm.org/credential-issuance/0.1/credential-offer\",\n    \"@id\": \"<uuid-offer>\",\n    \"comment\": \"some comment\",\n    \"credential_preview\": <json-ld object>,\n    \"offer_json\": <libindy json string>\n}\n
    "},{"location":"features/0303-v01-credential-exchange/#credential-request","title":"Credential Request","text":"
    {\n    \"@type\": \"https://didcomm.org/credential-issuance/0.1/credential-request\",\n    \"@id\": \"<uuid-request>\",\n    \"comment\": \"some comment\",\n    \"request\": <libindy json string>\n}\n
    "},{"location":"features/0303-v01-credential-exchange/#credential-issue","title":"Credential Issue","text":"
    {\n    \"@type\": \"https://didcomm.org/credential-issuance/0.1/credential-issue\",\n    \"@id\": \"<uuid-credential>\",\n    \"issue\": <libindy json string>\n}\n
    "},{"location":"features/0303-v01-credential-exchange/#presentation-protocol","title":"Presentation Protocol","text":"

    This message family is used to present a proof; the verifier initiates the process. The thread decorator is implied on every message other than the first. The ack and problem-report messages are to be adopted by this message family.

    "},{"location":"features/0303-v01-credential-exchange/#presentation-request","title":"Presentation Request","text":"
    {\n    \"@type\": \"https://didcomm.org/credential-presentation/0.1/presentation-request\",\n    \"@id\": \"<uuid-request>\",\n    \"comment\": \"some comment\",\n    \"request\": <libindy json string>\n}\n
    "},{"location":"features/0303-v01-credential-exchange/#credential-presentation","title":"Credential Presentation","text":"
    {\n    \"@type\": \"https://didcomm.org/credential-presentation/0.1/credential-presentation\",\n    \"@id\": \"<uuid-presentation>\",\n    \"comment\": \"some comment\",\n    \"presentation\": <libindy json string>\n}\n
    "},{"location":"features/0303-v01-credential-exchange/#drawbacks","title":"Drawbacks","text":"

    The RFC is not technically needed, but is useful to have as an Archived RFC of a feature in common usage.

    "},{"location":"features/0303-v01-credential-exchange/#rationale-and-alternatives","title":"Rationale and alternatives","text":"

    N/A

    "},{"location":"features/0303-v01-credential-exchange/#prior-art","title":"Prior art","text":"

    N/A

    "},{"location":"features/0303-v01-credential-exchange/#unresolved-questions","title":"Unresolved questions","text":"

    N/A

    "},{"location":"features/0303-v01-credential-exchange/#implementations","title":"Implementations","text":"

    The following lists the implementations (if any) of this RFC. Please do a pull request to add your implementation. If the implementation is open source, include a link to the repo or to the implementation within the repo. Please be consistent in the \"Name\" field so that a mechanical processing of the RFCs can generate a list of all RFCs supported by an Aries implementation.

    Name / Link Implementation Notes Aries Framework - .NET .NET framework for building agents of all types; MISSING test results Streetcred.id Commercial mobile and web app built using Aries Framework - .NET; MISSING test results Aries Cloud Agent - Python Contributed by the government of British Columbia; MISSING test results OSMA - Open Source Mobile Agent Open Source mobile app built on Aries Framework - .NET; MISSING test results"},{"location":"features/0309-didauthz/","title":"Aries RFC 0309: DIDAuthZ","text":""},{"location":"features/0309-didauthz/#summary","title":"Summary","text":"

    DIDAuthZ is an attribute-based resource discovery and authorization protocol for Layer 2 of the ToIP Stack[1]. It enables a requesting party to discover protected resources and obtain credentials that grant limited access to these resources with the authorization of their owner. These credentials can be configured such that the requesting party may further delegate unto others the authority to access these resources on their behalf.

    "},{"location":"features/0309-didauthz/#motivation","title":"Motivation","text":"

    In the online world, individuals frequently consent to a service provider gaining access to their protected resources located on a different service provider. Individuals are challenged with an authentication mechanism, informed of the resources being requested, consent to the use of their resources, and can later revoke access at any time. OAuth 2.0[6], and other frameworks built on top of it, were developed to address this need.

    A DIDComm protocol[2] can address these use cases and enhance them with secure end-to-end encryption[3] independent of the transport used. The risk of correlation of the individual's relationships with other parties can be mitigated with the use of peer DIDs[4]. With a sufficiently flexible grammar, the encoding of the access scope can be fine-grained down to the individual items that are requested, congruent with the principle of selective disclosure[5].

    It is expected that future higher-level protocols and governance frameworks[1] can leverage DIDAuthZ to enable authorized sharing of an identity owner's attributes held by a third party.

    "},{"location":"features/0309-didauthz/#tutorial","title":"Tutorial","text":""},{"location":"features/0309-didauthz/#roles","title":"Roles","text":"

    DIDAuthZ adapts the following roles from OAuth 2.0[6] and UMA 2.0[7]:

    Resource Server (RS): An agent holding the protected resources. These resources MAY be credentials of which the subject MAY be a third party identity owner. The RS is also a resource owner at the root of the chain of delegation.
    Resource Owner (RO): An agent capable of granting access to a protected resource. The RO is a delegate of the RS to the extent encoded in a credential issued by the RS.
    Authorization Server (AS): An agent that protects, on the resource owner's behalf, resources held by the resource server. The AS is a delegate of the RO capable of issuing and refreshing access credentials.
    Requesting Party (RP): An agent that requests access to the resources held by the resource server. The RP is a delegate of the AS to the extent encoded in a credential issued by the AS.

    "},{"location":"features/0309-didauthz/#transaction-flow","title":"Transaction Flow","text":"

    The requesting party initiates a transaction by communicating directly with the authorization server with prior knowledge of their location.

    (1) RP requests a resource

    The requesting party requests the authorization server for a resource. A description of the resource requested and the desired access is included in this request.

    TODO does the request need to also include proof of \"user engagement\"?

    (2) AS requests authorization from the RO

    The authorization server processes the request and determines if the resource owner's authorization is needed. The authorization server MUST obtain the resource owner's authorization if no previous grant of authorization is found in valid state. Otherwise, the authorization server MAY issue new access credentials to the requesting party without any interaction with the resource owner. In such a case, the authorization server MUST revoke all access credentials previously issued to the requesting party.

    The authorization server interacts with the resource owner through their existing DIDComm connection to obtain their authorization.

    (3) AS issues an access token to the RP

    The authorization server issues access credentials to the requesting party.

    (4) AS introduces the RP to the RS

    The authorization server connects the requesting party to the resource server via the Introduce Protocol[9].

    "},{"location":"features/0309-didauthz/#access-credentials","title":"Access Credentials","text":"

    Access credentials are chained delegate credentials[17] used to access the protected resources. Embedded in them is proof of the chain of delegation and authorization.

    (1) RS delegates unto RO

    The resource server issues a credential to the resource owner that identifies the latter as the owner of a subset of resources hosted by the former. It also identifies the resource owner's chosen authorization server in their respective role. The resources will also have been registered at the authorization server.

    (2) RO delegates unto AS

    The resource owner issues a grant-type credential to the authorization server at the end of each AS-RO interaction. This credential is derived from the one issued by the RS. It authorizes the AS to authorize access to the RP with a set scope.

    (3) AS issues access credential to RP

    The authorization server issues an access credential to the requesting party derived from the grant credential issued by the resource owner for this transaction. This credential encodes the same access scope as found in the parent credential.

    (4) RP presents proof of access credential to RS

    The requesting party shows proof of this access credential when attempting to access the resource on the resource server.
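
    The four steps above form a delegation chain from RS to RP. As an illustrative toy model only (the real chained credentials and their proof formats are defined in Aries RFC 0104, not here), the chain can be sketched as:

```python
from dataclasses import dataclass
from typing import Optional

# Toy model: each link records who delegated to whom, the access scope,
# and its parent credential. Real chained credentials (RFC 0104) embed
# cryptographic proof of this chain; none of that is modeled here.
@dataclass
class ChainedCredential:
    issuer: str
    subject: str
    scope: frozenset
    parent: Optional["ChainedCredential"] = None

    def chain(self):
        # Walk back to the root (the resource server) and return the
        # delegation path in issuance order.
        link, path = self, []
        while link:
            path.append((link.issuer, link.subject))
            link = link.parent
        return list(reversed(path))

scope = frozenset({"read:record-42"})          # placeholder scope
rs_to_ro = ChainedCredential("RS", "RO", scope)
ro_to_as = ChainedCredential("RO", "AS", scope, parent=rs_to_ro)
as_to_rp = ChainedCredential("AS", "RP", scope, parent=ro_to_as)
```

    Verifying the presented proof at the RS then amounts to checking that the chain roots at the RS and that each link's scope is within its parent's.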

    "},{"location":"features/0309-didauthz/#revocation","title":"Revocation","text":"

    The resource server makes available a revocation registry and grants read/write access to both the resource owner and the authorization server.

    "},{"location":"features/0309-didauthz/#reference","title":"Reference","text":""},{"location":"features/0309-didauthz/#discovery-of-authorization-servers","title":"Discovery of authorization servers","text":"

    The resource owner advertises their chosen authorization server to other parties with a new type of service definition in their DID document[8]:

    {\n  \"@context\": [\"https://www.w3.org/ns/did/v1\", \"https://w3id.org/security/v1\"],\n  \"id\": \"did:example:123456789abcdefghi\",\n  \"publicKey\": [{\n    \"id\": \"did:example:123456789abcdefghi#keys-1\",\n    \"type\": \"Ed25519VerificationKey2018\",\n    \"controller\": \"did:example:123456789abcdefghi\",\n    \"publicKeyBase58\": \"H3C2AVvLMv6gmMNam3uVAjZpfkcJCwDwnZn6z3wXmqPV\"\n  }],\n  \"service\": [{\n    \"id\": \"did:example:123456789abcdefghi#did-authz\",\n    \"type\": \"did-authz\",\n    \"serviceEndpoint\": \"did:example:xyzabc456#auth-svc\"\n  }]\n}\n

    TODO define json-ld context for new service type

    The mechanisms by which the resource owner discovers authorization servers are beyond the scope of this specification.

    Authorization servers MUST make available a DID document containing metadata about their service endpoints and capabilities at a well-known location.

    TODO define \"well-known locations\" in several transports

    TODO register well-known URI for http transport as per IETF RFC 5785

    "},{"location":"features/0309-didauthz/#discovery-of-revocation-registry","title":"Discovery of revocation registry","text":"

    TODO

    "},{"location":"features/0309-didauthz/#resources","title":"Resources","text":""},{"location":"features/0309-didauthz/#describing-resources","title":"Describing resources","text":"

    TODO

    "},{"location":"features/0309-didauthz/#describing-access-scope","title":"Describing access scope","text":"

    TODO

    "},{"location":"features/0309-didauthz/#registering-resources","title":"Registering resources","text":"

    TODO

    "},{"location":"features/0309-didauthz/#requesting-resources","title":"Requesting resources","text":"

    TODO

    "},{"location":"features/0309-didauthz/#protocol-messages","title":"Protocol messages","text":"

    TODO

    "},{"location":"features/0309-didauthz/#gathering-consent-from-the-resource-owner","title":"Gathering consent from the resource owner","text":"

    TODO didcomm messages

    "},{"location":"features/0309-didauthz/#credentials","title":"Credentials","text":"

    TODO format of these credentials, JWTs or JSON-LDs?

    TODO

    "},{"location":"features/0309-didauthz/#drawbacks","title":"Drawbacks","text":"

    (None)

    "},{"location":"features/0309-didauthz/#rationale-and-alternatives","title":"Rationale and alternatives","text":""},{"location":"features/0309-didauthz/#prior-art","title":"Prior art","text":"

    Aries RFC 0167

    The Data Consent Lifecycle[10] is a reference implementation of data privacy agreements based on the GDPR framework[11]. The identity owner grants access to a verifier in the form of a proof of possession of a credential issued by the issuer. The identity owner may grant access to several verifiers in this manner. Access cannot be revoked on a per-verifier basis. To revoke access to a verifier, the identity owner's credential needs to be revoked, which in turn revokes all existing proofs the identity owner may have provided. The identity owner does not have the means to revoke access to a third party without directly involving the issuer.

    OAuth 2.0

    OAuth 2.0[6] is a role-based authorization framework in widespread use that enables a third-party to obtain limited access to HTTP services on behalf of a resource owner. The access token's scope is a simple data structure composed of space-delimited strings more suitable for a role-based authorization model than an attribute-based model.

    Although OAuth 2.0 allows different types of tokens to be issued to clients as credentials, only the use of bearer tokens was formalized[12]. As a result, most implementations use bearer tokens as credentials. An expiry is optionally set on these tokens, but they nevertheless pose an unacceptable security risk in an SSI context and other contexts with high-value resources, and they need extra layers of security to address the risks of theft and impersonation. M. Jones and D. Hardt recommend the use of TLS to protect these tokens[12], but this transport is not guaranteed as a DIDComm message travels from the sender to the recipient. The specification for MAC tokens[13] never materialized and its TLS Channel Binding Support was never specified, therefore not solving the issue of unwanted TLS termination in a hop. There is ongoing work in the draft for OAuth 2.0 Token Binding[14] that binds tokens to the cryptographic key material produced by the client, but it also relies on TLS as the means of transport.

    OpenID Connect 1.0

    OIDC[15] is \"a simple layer on top of the OAuth 2.0 protocol\" that standardizes simple data structures that contain claims about the end-user's identity.

    Being based upon OAuth 2.0, it suffers from the same security weaknesses - see the extensive section on Security Considerations that references the OAuth 2.0 Threat Model and Security Considerations[16].

    User-Managed Access 2.0

    UMA[7] is an extension to OAuth 2.0 that formalizes the authorization server's role as a delegate of the resource owner in order for the latter to grant access to requesting parties asynchronously and independently from the time of access. It relies on pre-defined resource scopes[18] and is thus more suited to role-based access control.

    "},{"location":"features/0309-didauthz/#unresolved-questions","title":"Unresolved questions","text":""},{"location":"features/0309-didauthz/#implementations","title":"Implementations","text":"

    The following lists the implementations (if any) of this RFC. Please do a pull request to add your implementation. If the implementation is open source, include a link to the repo or to the implementation within the repo. Please be consistent in the \"Name\" field so that a mechanical processing of the RFCs can generate a list of all RFCs supported by an Aries implementation.

    Name / Link Implementation Notes"},{"location":"features/0309-didauthz/#references","title":"References","text":"
    1. Matthew Davie, Dan Gisolfi, Daniel Hardman, John Jordan, Darrell O'Donnell, Drummond Reed: Aries RFC 0289: Trust over IP Stack, status PROPOSED
    2. Daniel Hardman: Aries RFC 0005 - DIDComm, status DEMONSTRATED
    3. Kyle Den Hartog, Stephen Curran, Mike Lodder: Aries RFC 0019 - Encryption Envelope, status ACCEPTED
    4. Oskar Deventer, Christian Lundkvist, Marton Csernai, Kyle Den Hartog, Markus Sabadello, Sam Curren, Dan Gisolfi, Mike Varley, Sven Hammann, John Jordan, Lovesh Harchandani, Devin Fisher, Tobias Looker, Brent Zundel, Stephen Curran: Peer DID Method Specification, W3C Editor's Draft 16 October 2019
    5. The Sovrin Foundation: The Sovrin Glossary, v2.0
    6. Ed. D. Hardt: IETF RFC 6749 - The OAuth 2.0 Authorization Framework, October 2012
    7. Ed. E. Maler, M. Machulak, J. Richer, T. Hardjono: IETF I-D - User-Managed Access (UMA) 2.0 Grant for OAuth 2.0 Authorization, February 2019
    8. Drummond Reed, Manu Sporny, Dave Longley, Christopher Allen, Ryan Grant, Markus Sabadello: Decentralized Identifiers (DIDs) v1.0, W3C First Public Working Draft 07 November 2019
    9. Daniel Hardman, Sam Curren, Stephen Curran, Tobias Looker, George Aristy: Aries RFC 0028 - Introduce Protocol 1.0, status PROPOSED
    10. Jan Lindquist, Dativa; Paul Knowles, Dativa; Mark Lizar, OpenConsent; Harshvardhan J. Pandit, ADAPT Centre, Trinity College Dublin: Aries RFC 0167 - Data Consent Lifecycle, status PROPOSED
    11. Intersoft Consulting: General Data Protection Regulation, November 2019
    12. M. Jones, D. Hardt: IETF RFC 6750 - The OAuth 2.0 Authorization Framework: Bearer Token Usage, October 2012
    13. J. Richer, W. Mills, P. Hunt: IETF I-D - OAuth 2.0 Message Authentication Code (MAC) Tokens, January 2014
    14. M. Jones, B. Campbell, J. Bradley, W. Denniss: IETF I-D - OAuth 2.0 Token Binding, October 2018
    15. N. Sakimura, J. Bradley, M. Jones, B. de Medeiros, C. Mortimore: OpenID Connect Core 1.0, November 2014
    16. T. Lodderstedt, M. McGloin, P. Hunt: OAuth 2.0 Threat Model and Security Considerations, January 2013
    17. Daniel Hardman, Lovesh Harchandani: Aries RFC 0104 - Chained Credentials, status PROPOSED
    18. E. Maler, M. Machulak, J. Richer, T. Hardjono: Federated Authorization for User-Managed Access (UMA) 2.0, February 2019
    "},{"location":"features/0317-please-ack/","title":"Aries RFC 0317: Please ACK Decorator","text":""},{"location":"features/0317-please-ack/#retirement-of-please_ack","title":"Retirement of ~please_ack","text":"

    The please_ack decorator was initially added to Aries Interop Protocol 2.0. However, this was done prior to any attempt at an implementation. When such an attempt was made, it was found that the decorator is not practical as a general purpose mechanism. The capability assumed the feature would be general purpose and could be applied outside of the protocols with which it was used. That assumption proved impossible to implement: the ~please_ack decorator cannot be supported without altering every protocol with which it is used, and so it is not practical. Instead, any protocol that can benefit from such a feature can be extended to explicitly support it.

    For the \"on\": [\"OUTCOME\"] type of ACK, the problem manifests in two ways. First, the definition of OUTCOME is protocol (and in fact, protocol message) specific. The definition of \"complete\" for each message is specific to each message, so there is no \"general purpose\" way to know when an OUTCOME ACK is to be sent. Second, the addition of a ~please_ack decorator changes the protocol state machine for a given protocol, introducing additional states, and hence, additional state handling. Supporting \"on\": [\"OUTCOME\"] processing requires making changes to all protocols, which would be better handled on a per protocol basis, and where useful (which, it was found, is rare), adding messages and states. For example, what is the point of an extra ACK message on an OUTCOME in the middle of a protocol that itself results in the sending of the response message?

    Our experimentation found that it would be easier to achieve a general purpose \"on\": [\"RECEIPT\"] capability, but even then there were problems. Most notably, the capability is most useful when added to the last message of a protocol, where the message sender would like confirmation that the recipient got the message. However, it is precisely that use of the feature that also introduces breaking changes to the protocol state machine for the protocols to which it applies, requiring per protocol updates. So while the feature would be marginally useful in some cases, the complexity cost of the capability -- and the lack of demand for its creation -- led us to retire the entire RFC.

    For more details on the great work done by Alexander Sukhachev @alexsdsr, please see these pull requests, including both the changes proposed in the PRs and the subsequent conversations about the features.

    Many thanks to Alexander for the effort he put into trying to implement this capability.

    "},{"location":"features/0317-please-ack/#summary","title":"Summary","text":"

    Explains how one party can request an acknowledgment in order to clarify the status of a process.

    "},{"location":"features/0317-please-ack/#motivation","title":"Motivation","text":"

    An acknowledgment or ACK is one of the most common procedures in protocols of all types. The ACK message is defined in Aries RFC 0015-acks and is adopted into other protocols for use at explicit points in the execution of a protocol. In addition to receiving ACKs at predefined places in a protocol, agents also need the ability to request additional ACKs at other points in an instance of a protocol. Such requests may or may not be answered by the other party, hence the \"please\" in the name of decorator.

    "},{"location":"features/0317-please-ack/#tutorial","title":"Tutorial","text":"

    If you are not familiar with the tutorial section of the ACK message, please review that first.

    Agents interact in very complex ways. They may use multiple transport mechanisms, across varied protocols, through long stretches of time. While we usually expect messages to arrive as sent, and to be processed as expected, a vital tool in the agent communication repertoire is the ability to request and receive acknowledgments to confirm a shared understanding.

    "},{"location":"features/0317-please-ack/#requesting-an-ack-please_ack","title":"Requesting an ack (~please_ack)","text":"

    A protocol may stipulate that an ack is always necessary in certain circumstances. Launch mechanics for spacecraft do this, because the stakes for a miscommunication are so high. In such cases, there is no need to request an ack, because it is hard-wired into the protocol definition. However, acks make a channel more chatty, and in doing so they may lead to more predictability and correlation for point-to-point communications. Requiring an ack is not always the right choice. For example, an ack should probably be optional at the end of credential issuance (\"I've received your credential. Thanks.\") or proving (\"I've received your proof, and it satisfied me. Thanks.\").

    In addition, circumstances at a given moment may make an ad hoc ack desirable even when it would normally NOT be needed. Suppose Alice likes to bid at online auctions. Normally she may submit a bid and be willing to wait for the auction to unfold organically to see the effect. But if she's bidding on a high-value item and is about to put her phone in airplane mode because her plane's ready to take off, she may want an immediate ACK that the bid was accepted.

    The dynamic need for acks is expressed with the ~please_ack message decorator. An example of the decorator looks like this:

    {\n  \"~please_ack\": {\n    \"on\": [\"RECEIPT\"]\n  }\n}\n

    This says, \"Please send me an ack as soon as you receive this message.\"
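    As an illustration, the decorator can be attached programmatically before a message is packed. A minimal sketch in Python; the helper name add_please_ack and the sample bid message are hypothetical, not part of any Aries library:

    ```python
    import json

    # The only values the "on" field may contain, per this RFC
    ALLOWED = {"RECEIPT", "OUTCOME"}

    def add_please_ack(message: dict, on: list) -> dict:
        """Attach a ~please_ack decorator to a copy of an outgoing message
        (hypothetical helper for illustration)."""
        if not on or not set(on) <= ALLOWED:
            raise ValueError("'on' must be a non-empty subset of %s" % ALLOWED)
        decorated = dict(message)  # leave the caller's message untouched
        decorated["~please_ack"] = {"on": list(on)}
        return decorated

    # Alice's auction bid, decorated so the auction house acks on receipt
    bid = add_please_ack({"@id": "bid-42", "@type": "example/auction/1.0/bid"}, ["RECEIPT"])
    print(json.dumps(bid["~please_ack"]))
    ```

    The decorator is added to a copy so the undecorated message can still be logged or retried without the ack request.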

    "},{"location":"features/0317-please-ack/#examples","title":"Examples","text":"

    Suppose AliceCorp and Bob are involved in credential issuance. Alice is an issuer; Bob wants to hold the issued credential.

    "},{"location":"features/0317-please-ack/#on-receipt","title":"On Receipt","text":"

    In the final required message of the issue-credential protocol, AliceCorp sends the credential to Bob. But AliceCorp wants to know for sure that Bob has received it, for its own accounting purposes. So it decorates the final message with an ack request:

    {\n  \"~please_ack\": {\n    \"on\": [\"RECEIPT\"]\n  }\n}\n

    Bob honors this request and returns an ack as soon as he receives it and stores its payload.

    "},{"location":"features/0317-please-ack/#on-outcome","title":"On Outcome","text":"

    Same as with the previous example, AliceCorp wants an acknowledgement from Bob. However, in contrast to the previous example that just requests an acknowledgement on receipt of message, this time AliceCorp wants to know for sure Bob verified the validity of the credential. To do this AliceCorp decorates the issue-credential message with an ack request for the OUTCOME.

    {\n  \"~please_ack\": {\n    \"on\": [\"OUTCOME\"]\n  }\n}\n

    Bob honors this request and returns an ack as soon as he has verified the validity of the issued credential.

    "},{"location":"features/0317-please-ack/#reference","title":"Reference","text":""},{"location":"features/0317-please-ack/#please_ack-decorator","title":"~please_ack decorator","text":""},{"location":"features/0317-please-ack/#on","title":"on","text":"

    The only field for the please ack decorator. Required array. Describes the circumstances under which an ack is desired. Possible values in this array include RECEIPT and OUTCOME.

    If both values are present, it means an acknowledgement is requested for both the receipt and the outcome of the message.
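    A receiver can inspect the decorator to decide when acks are owed. A sketch, assuming hypothetical helper names (acks_requested, handle_unpacked):

    ```python
    def acks_requested(message: dict) -> set:
        """Return the set of requested ack circumstances ("RECEIPT", "OUTCOME")."""
        return set(message.get("~please_ack", {}).get("on", []))

    def handle_unpacked(message: dict) -> None:
        """Illustrative receive handler honoring both ack circumstances."""
        wanted = acks_requested(message)
        if "RECEIPT" in wanted:
            pass  # queue an ack immediately, before further processing
        # ... protocol-specific processing of the message happens here ...
        if "OUTCOME" in wanted:
            pass  # queue an ack once the (protocol-defined) outcome is known
    ```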

    "},{"location":"features/0317-please-ack/#receipt","title":"RECEIPT","text":"

    The RECEIPT acknowledgement is the simplest ack mechanism and requests that an ack be sent on receipt of the message. This way of requesting an ack verifies whether the other agent successfully received the message; it implicitly means the agent was able to unpack the message.

    "},{"location":"features/0317-please-ack/#outcome","title":"OUTCOME","text":"

    The OUTCOME acknowledgement is the more advanced ack mechanism and requests that an ack be sent on the outcome of the message. By default, outcome means the message has been handled and processed, without business logic playing a role in the decision.

    In the context of the issue credential protocol, by default, this would mean an ack is requested as soon as the received credential is verified to be valid. It doesn't mean the actual contents of the credential are acknowledged. For the issue credential protocol it makes more sense to send the acknowledgement after the contents of the credential have also been verified.

    Therefore, protocols can override the definition of outcome in the context of that protocol. Examples of protocols overriding this behavior are Issue Credential Protocol 2.0, Present Proof Protocol 2.0, and Revocation Notification Protocol 1.0.

    "},{"location":"features/0317-please-ack/#drawbacks","title":"Drawbacks","text":"

    None specified.

    "},{"location":"features/0317-please-ack/#rationale-and-alternatives","title":"Rationale and alternatives","text":"

    The first version of this RFC was a lot more advanced, but also introduced a lot of complexity. Many complex features were removed so it could be included in AIP 2.0 in a simpler form. More advanced features from the initial RFC can be added back when needed.

    "},{"location":"features/0317-please-ack/#prior-art","title":"Prior art","text":"

    None specified.

    "},{"location":"features/0317-please-ack/#unresolved-questions","title":"Unresolved questions","text":""},{"location":"features/0317-please-ack/#implementations","title":"Implementations","text":"

    The following lists the implementations (if any) of this RFC. Please do a pull request to add your implementation. If the implementation is open source, include a link to the repo or to the implementation within the repo. Please be consistent in the \"Name\" field so that a mechanical processing of the RFCs can generate a list of all RFCs supported by an Aries implementation.

    Name / Link Implementation Notes"},{"location":"features/0327-crypto-service/","title":"Aries RFC 0327: Crypto service Protocol 1.0","text":""},{"location":"features/0327-crypto-service/#summary","title":"Summary","text":"

    Within a decentralized data economy that takes a user-centric approach, the user is the one who should control data flows and all interactions, even on 3rd-party platforms. To achieve that, we start to talk about access to, rather than ownership of, the data. Within the space we can identify services which deal with users' data but don't necessarily need to be able to see it. In this category we have services like archives, data vaults, data transportation (IM, email, sent files), etc. To better support privacy in such cases, this document proposes a protocol which uses underlying security mechanisms within agents, like Lox, to provide an API for cryptographic operations like asymmetric encryption/decryption, signature/verification, and delegation (proxy re-encryption), letting those services provide an additional security layer on their platforms via a connection to SSI wallet agents.

    "},{"location":"features/0327-crypto-service/#motivation","title":"Motivation","text":"

    Identity management and key management are complex topics with which even big players have problems. To help companies and their products build secure and privacy-preserving services with SSI, they need a simple mechanism for accessing the cryptographic operations within the components of the wallets.

    "},{"location":"features/0327-crypto-service/#todays-best-practice-approach-to-cryptographically-secured-services","title":"Today's 'Best Practice' approach to cryptographically secured Services","text":"

    Many 3rd-party services today provide solutions like secure storage, encrypted communication, and secure data transportation, and to achieve that they use secret keys to provide cryptography for their use cases. The problem is that in many cases those keys are generated and/or stored within the 3rd-party services - either in the client app or in the backend - which requires the user's explicit trust in the 3rd party's good intentions.

    Even in the case that a 3rd party has the best possible intentions of keeping the user's secrets safe and private, there is still an increased risk of the user's keys leaking or being compromised while being stored with a (centralized) 3rd-party service.

    Last but not least, the user's use of multiple such cryptographically secured services would lead to the distribution of the user's secrets over different systems, where the user needs to keep track of them and manage them via different 3rd-party tools.

    "},{"location":"features/0327-crypto-service/#vision-seperation-of-service-business-logic-and-identity-artefacts","title":"Vision - separation of Service-(Business-)Logic and Identity Artefacts","text":"

    In the context of SSI and decentralized identity, the ideal solution is that the keys are generated within the user's agent and that the private (secret) key never leaves that place. This would be a clear separation of a service's business logic from the user's keys, which we also count among the user's unique sets of identifying information (identity artefacts).

    After separating these two domains, there follows the obvious need to provide a general crypto API to the user's wallet which allows supporting generic use cases where a cryptographic layer is required in the 3rd-party service's business logic, for example:

    The desired outcome would be an Agent which is able to expose a standardized Crypto Services API to external 3rd-party services, which can then implement cryptographically secured applications without needing access to the actual user secrets.

    "},{"location":"features/0327-crypto-service/#tutorial","title":"Tutorial","text":""},{"location":"features/0327-crypto-service/#name-and-version","title":"Name and Version","text":"

    This defines the crypto-service protocol, version 1.0, as identified by the following PIURI:

    TODO: Add PIURI when ready\n
    "},{"location":"features/0327-crypto-service/#roles","title":"Roles","text":"

    The minimum set of roles involved in the crypto-service protocol is a sender and a receiver. The sender requests a specific cryptographic operation from the receiver, and the receiver provides the result in the form of a payload or an error. The protocol could include more roles (e.g. a proxy) which could be involved in processes like delegation (proxy re-encryption), etc.

    "},{"location":"features/0327-crypto-service/#constraints","title":"Constraints","text":"

    Each message which is sent to the agent requires an up-front established relationship between sender and receiver in the form of an authorization. This means that the sender is allowed to use only the specific keys which are meant for it. The sender must not be able to trigger any operation with keys which were never used within its service.
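    One way to enforce this constraint is a per-sender allow-list of key IDs, consulted before any operation runs. A sketch; the table contents and the function name authorize are illustrative, not part of any Aries agent API:

    ```python
    # Illustrative authorization table: sender DID -> set of key IDs it may use
    AUTHORIZED_KEYS = {
        "did:example:sender": {"did:example:123456789abcdefghi#keys-1"},
    }

    def authorize(sender_did: str, key_id: str) -> None:
        """Reject crypto-service requests that reference keys the sender
        has never been authorized to use within its service."""
        if key_id not in AUTHORIZED_KEYS.get(sender_did, set()):
            raise PermissionError("%s may not use %s" % (sender_did, key_id))
    ```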

    "},{"location":"features/0327-crypto-service/#reference","title":"Reference","text":""},{"location":"features/0327-crypto-service/#examples","title":"Examples","text":"

    Specific use case example:

    A platform providing secure document transportation between parties and archiving functionality.

    Actors:

    Here is how it could work:

    In this scenario DocuArch has no way to learn what is in the payload sent between Sender and Receiver, as only the person in possession of the private key is able to decrypt the payload - which is the Receiver. Therefore the decrypted payload is only available in the Receiver's client-side app, which communicates with the Agent on behalf of the user's DID identity.

    Such features within the Agent allow companies to build faster and more secure systems, as the identity management and key management parts come from Agents and companies just interact with them via the API.

    "},{"location":"features/0327-crypto-service/#messages","title":"Messages","text":"

    Protocol: did:sov:1234;spec/crypto-service/1.0

    encrypt

        {\n        \"@id\": \"1234567889\",\n        \"@type\": \"did:sov:1234;spec/crypto-service/1.0/encrypt\",\n        \"payload\": \"Text to be encrypted\",\n        \"key_id\": \"did:example:123456789abcdefghi#keys-1\"\n\n    }\n

    decrypt

        {\n        \"@id\": \"1234567889\",\n        \"@type\": \"did:sov:1234;spec/crypto-service/1.0/decrypt\",\n        \"encryptedPayload\": \"ASDD@J(!@DJ!DASD!@F\",\n        \"key_id\": \"did:example:123456789abcdefghi#keys-1\"\n\n    }\n

    sign

        {\n        \"@id\": \"1234567889\",\n        \"@type\": \"did:sov:1234;spec/crypto-service/1.0/sign\",\n        \"payload\": \"I say so!\",\n        \"key_id\": \"did:example:123456789abcdefghi#keys-1\"\n\n    }\n

    verify

        {\n        \"@id\": \"1234567889\",\n        \"@type\": \"did:sov:1234;spec/crypto-service/1.0/verify\",\n        \"signature\": \"12312d8u182d812d9182d91827d179\",\n        \"key_id\": \"did:example:123456789abcdefghi#keys-1\"\n\n    }\n

    delegate

        {\n        \"@id\": \"1234567889\",\n        \"@type\": \"did:sov:1234;spec/crypto-service/1.0/delegate\",\n        \"delegate\": \"did:example:ihgfedcba987654321\",\n        \"key_id\": \"did:example:123456789abcdefghi#keys-1\"\n\n    }\n
    "},{"location":"features/0327-crypto-service/#message-catalog","title":"Message Catalog","text":"

    TODO: add error codes and response messages/statuses

    "},{"location":"features/0327-crypto-service/#responses","title":"Responses","text":"

    TODO

    "},{"location":"features/0327-crypto-service/#drawbacks","title":"Drawbacks","text":""},{"location":"features/0327-crypto-service/#-potentialy-expose-agent-for-different-types-of-attacts-eg-someone-would-try-to-decrypt-your-private-documents-without-you-being-notice-of-that","title":"- Potentially exposes the Agent to different types of attacks: e.g. someone could try to decrypt your private documents without you being notified of it.","text":""},{"location":"features/0327-crypto-service/#rationale-and-alternatives","title":"Rationale and alternatives","text":"

    We cannot expect that every service will switch directly to DIDComm and other features of the agents. Not all features are even desirable to have within an agent. But if the Agent can expose a base API for identity management and crypto operations, this would allow others to build much richer and more secure applications and platforms on top of it.

    We are not aware of any alternatives at the moment. Anyone?

    "},{"location":"features/0327-crypto-service/#prior-art","title":"Prior art","text":"

    A similar approach is taken in the HSM world, where an API is exposed to the outside world without exposing the keys. Here we take the same approach in the context of the KMS within the Agent.

    "},{"location":"features/0327-crypto-service/#unresolved-questions","title":"Unresolved questions","text":""},{"location":"features/0327-crypto-service/#implementations","title":"Implementations","text":"


    Name / Link Implementation Notes"},{"location":"features/0334-jwe-envelope/","title":"Aries RFC 0334: JWE envelope 1.0","text":""},{"location":"features/0334-jwe-envelope/#summary","title":"Summary","text":"

    Agents need to use a common set of algorithms when exchanging and persisting data. This RFC supplies a cipher suite and examples for DIDComm envelopes.

    "},{"location":"features/0334-jwe-envelope/#motivation","title":"Motivation","text":"

    The goal of this RFC is to define cipher suites for Anoncrypt and Authcrypt such that we can achieve better compatibility with JOSE. We also aim to supply both a compliant suite and a constrained device suite. The compliant suite is suitable for implementations that contain AES hardware acceleration or desire to use NIST / FIPS algorithms (where possible).

    "},{"location":"features/0334-jwe-envelope/#encryption-algorithms","title":"Encryption Algorithms","text":"

    The next two sub-sections describe the encryption algorithms that must be supported. On devices with AES hardware acceleration, or where compliance is required, AES-GCM is the recommended algorithm. Otherwise, XChacha20Poly1305 should be used.

    "},{"location":"features/0334-jwe-envelope/#content-encryption-algorithms","title":"Content Encryption Algorithms","text":"

    The following table defines the supported content encryption algorithms for DIDComm JWE envelopes:

    Content Encryption Encryption Algorithm identifier Authcrypt/Anoncrypt Reference A256CBC-HS512 (512 bit) AES_256_CBC_HMAC_SHA_512 Authcrypt/Anoncrypt ECDH-1PU section 2.1 and RFC 7518 section 5.2.5 AES-GCM (256 bit) A256GCM Anoncrypt RFC7518 section 5.1 and more specifically RFC7518 section 5.3 XChacha20Poly1305 XC20P Anoncrypt xchacha draft 03"},{"location":"features/0334-jwe-envelope/#key-encryption-algorithms","title":"Key Encryption Algorithms","text":"

    The following table defines the supported key wrapping encryption algorithms for DIDComm JWE envelopes:

    Key Encryption Encryption algorithm identifier Anoncrypt/Authcrypt ECDH-ES + AES key wrap ECDH-ES+A256KW Anoncrypt ECDH-1PU + AES key wrap ECDH-1PU+A256KW Authcrypt"},{"location":"features/0334-jwe-envelope/#curves-support","title":"Curves support","text":"

    The following curves are supported:

    Curve Name Curve identifier X25519 (aka Curve25519) X25519 (default) NIST P256 (aka SECG secp256r1 and ANSI X9.62 prime256v1, ref here) P-256 NIST P384 (aka SECG secp384r1, ref here) P-384 NIST P521 (aka SECG secp521r1, ref here) P-521

    Other curves are optional.

    "},{"location":"features/0334-jwe-envelope/#security-consideration-for-curves","title":"Security Consideration for Curves","text":"

    As noted in the ECDH-1PU IETF draft security considerations section, all implementations must ensure the following:

    When performing an ECDH key agreement between a static private key\nand any untrusted public key, care should be taken to ensure that the\npublic key is a valid point on the same curve as the private key.\nFailure to do so may result in compromise of the static private key.\nFor the NIST curves P-256, P-384, and P-521, appropriate validation\nroutines are given in Section 5.6.2.3.3 of [NIST.800-56A]. For the\ncurves used by X25519 and X448, consult the security considerations\nof [RFC7748].\n
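    For the NIST curves, the partial public-key validation amounts to a range check plus verifying that the point satisfies the curve equation y^2 = x^3 - 3x + b (mod p). A sketch for P-256, using the domain parameters from FIPS 186-4; real implementations should use a vetted cryptographic library rather than this illustration:

    ```python
    # NIST P-256 domain parameters (FIPS 186-4)
    P = 0xFFFFFFFF00000001000000000000000000000000FFFFFFFFFFFFFFFFFFFFFFFF
    B = 0x5AC635D8AA3A93E7B3EBBD55769886BC651D06B0CC53B0F63BCE3C3E27D2604B

    def is_valid_p256_point(x: int, y: int) -> bool:
        """Partial public-key validation per NIST SP 800-56A 5.6.2.3.3:
        coordinates are in range and the point satisfies
        y^2 = x^3 - 3x + b (mod p)."""
        if not (0 <= x < P and 0 <= y < P):
            return False
        return (y * y - (x * x * x - 3 * x + B)) % P == 0
    ```

    Skipping this check before an ECDH computation is exactly the static-key compromise the quoted warning describes.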

    "},{"location":"features/0334-jwe-envelope/#jwe-examples","title":"JWE Examples","text":"

    AES GCM encryption and key wrapping examples are found in Appendix C of the JSON Web Algorithm specs.

    The Proposed JWE Formats below lists a combination of content encryption and key wrapping algorithms formats.

    "},{"location":"features/0334-jwe-envelope/#drawbacks","title":"Drawbacks","text":""},{"location":"features/0334-jwe-envelope/#rationale-and-alternatives","title":"Rationale and alternatives","text":"

    Our approach for AuthCrypt compliance is to use the NIST approved One-Pass Unified Model for ECDH scheme described in SP 800-56A Rev. 3. The JOSE version is defined as ECDH-1PU in this IETF draft.

    Aries agents currently use the envelope described in RFC0019. This envelope uses libsodium (NaCl) encryption/decryption, which is based on the Salsa20Poly1305 algorithm.

    Another prior effort towards enhancing JWE compliance is to use XChacha20Poly1305 encryption and ECDH-SS key wrapping mode. See Aries-RFCs issue-133 and Go JWE Authcrypt package for an implementation detail. As ECDH-SS is not specified by JOSE, a new recipient header field, spk, was needed to contain the static encrypted public key of the sender. Additionally (X)Chacha20Poly1305 key wrapping is also not specified by JOSE. For these reasons, this option is mentioned here as reference only.

    "},{"location":"features/0334-jwe-envelope/#jwe-formats","title":"JWE formats","text":""},{"location":"features/0334-jwe-envelope/#anoncrypt-using-ecdh-es-key-wrapping-mode-and-xc20p-content-encryption","title":"Anoncrypt using ECDH-ES key wrapping mode and XC20P content encryption","text":"
     {\n  \"protected\": base64url({\n      \"typ\": \"didcomm-envelope-enc\",\n      \"enc\": \"XC20P\", // or \"A256GCM\"\n  }),\n  \"recipients\": [\n    {\n      \"header\": {\n        \"kid\": base64url(recipient KID), // e.g: base64url(\"urn:123\") or base64url(jwk thumbprint as KID)\n        \"alg\": \"ECDH-ES+A256KW\",\n        \"epk\": { // defining X25519 key as an example JWK, but this can be EC key as well \n          \"kty\": \"OKP\",\n          \"crv\": \"X25519\",\n          \"x\": \"-3bLMSHYDG3_LVNh-MJvoYs_a2sAEPr4jwFfFjTrmUo\" // sender's ephemeral public key value raw (no padding) base64url encoded\n        },\n        \"apu\": base64url(epk.x value above),\n        \"apv\": base64url(recipients[*].header.kid)\n      },\n      \"encrypted_key\": \"Sls6zrMW335GJsJe0gJU4x1HYC4TRBZS1kTS1GATEHfH_xGpNbrYLg\"\n    }\n  ],\n  \"aad\": \"base64url(sha256(concat('.',sort([recipients[0].kid, ..., recipients[n].kid])))))\",\n  \"iv\": \"K0PfgxVxLiW0Dslx\",\n  \"ciphertext\": \"Sg\",\n  \"tag\": \"PP31yGbQGBz9zgq9kAxhCA\"\n}\n

    The typ header field is the DIDComm Transports value as mentioned in RFC-0025. That RFC states the prefix application/, but according to IANA media types the prefix is implied and therefore not needed here.

    "},{"location":"features/0334-jwe-envelope/#anoncrypt-using-ecdh-es-key-wrapping-mode-and-a256gcm-content-encryption","title":"Anoncrypt using ECDH-ES key wrapping mode and A256GCM content encryption","text":"
    {\n  \"protected\": base64url({\n          \"typ\": \"didcomm-envelope-enc\",\n          \"enc\": \"A256GCM\", // \"XC20P\"\n  }),\n  \"recipients\": [\n    {\n      \"header\": {\n        \"kid\": base64url(recipient KID),\n        \"alg\": \"ECDH-ES+XC20PKW\", // or \"ECDH-ES+A256KW\" with \"epk\" as EC key\n        \"epk\": {\n          \"kty\": \"OKP\",\n          \"crv\": \"X25519\",\n          \"x\": \"aOH-76BRwkHf0nbGokaBsO6shW9McEs6jqVXaF0GNn4\" // sender's ephemeral public key value raw (no padding) base64url encoded\n        },\n        \"apu\": base64url(epk.x value above),\n        \"apv\": base64url(recipients[*].header.kid)\n      },\n      \"encrypted_key\": \"wXzKi-XXb6fj_KSY5BR5hTUsZIiAQKrxblTo3d50B1KIeFwBR98fzQ\"\n    }\n  ],\n  \"aad\": \"base64url(sha256(concat('.',sort([recipients[0].kid, ..., recipients[n].kid])))))\",\n  \"iv\": \"9yjR8zvgeQDZFbIS\",\n  \"ciphertext\": \"EvIk_Rr6Nd-0PqQ1LGimSqbKyx_qZjGnmt6nBDdCWUcd15yp9GTeYqN_q_FfG7hsO8c\",\n  \"tag\": \"9wP3dtNyJERoR7FGBmyF-w\"\n}\n

    In the above two examples, apu is the encoded ephemeral key used to encrypt the cek stored in encrypted_key and apv is the encoded key id of the static public key of the recipient. Both are raw (no padding) base64Url encoded. kid is the value of a key ID in a DID document that should be resolvable to fetch the raw public key used.
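    The aad expression used in the envelope examples above can be computed as follows. This sketch assumes the expression means: sort the recipient kids, join them with '.', hash with SHA-256, then raw (unpadded) base64url encode the digest:

    ```python
    import base64
    import hashlib

    def b64url(data: bytes) -> str:
        """Raw (no padding) base64url encoding, as used throughout JWE."""
        return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

    def compute_aad(recipient_kids: list) -> str:
        """aad = base64url(sha256('.'-joined, sorted recipient kids)),
        matching the aad expression in the envelope examples above."""
        joined = ".".join(sorted(recipient_kids)).encode()
        return b64url(hashlib.sha256(joined).digest())
    ```

    Sorting before joining makes the aad independent of recipient ordering, so sender and recipients derive the same value.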

    "},{"location":"features/0334-jwe-envelope/#authcrypt-using-ecdh-1pu-key-wrapping-mode","title":"Authcrypt using ECDH-1PU key wrapping mode","text":"
    {\n    \"protected\": base64url({\n        \"typ\": \"didcomm-envelope-enc\",\n        \"enc\":\"A256CBC-HS512\", // or one of: \"A128CBC-HS256\", \"A192CBC-HS384\"\n        \"skid\": base64url(sender KID),\n        \"alg\": \"ECDH-1PU+A256KW\", // or \"ECDH-1PU+XC20P\" with \"epk\" as X25519 key\n        \"apu\": base64url(\"skid\" header value),\n        \"apv\": base64url(sha256(concat('.',sort([recipients[0].kid, ..., recipients[n].kid]))))),\n        \"epk\": {\n            \"kty\": \"EC\",\n            \"crv\": \"P-256\",\n            \"x\": \"gfdM68LgZWhHwdVyMAPh1oWqV_NcYGR4k7Bjk8uBGx8\",\n            \"y\": \"Gwtgz-Bl_2BQYdh4f8rd7y85LE7fyfdnb0cWyYCrAb4\"\n        }\n    }),\n    \"recipients\": [\n        {\n            \"header\": {\n                \"kid\": base64url(recipient KID)\n            },\n            \"encrypted_key\": \"base64url(encrypted CEK)\"\n        },\n       ...\n    ],\n    \"aad\": \"base64url(sha256(concat('.',sort([recipients[0].kid, ..., recipients[n].kid])))))\",\n    \"iv\": \"base64url(content encryption IV)\",\n    \"ciphertext\": \"base64url(XC20P(DIDComm payload, base64Url(json($protected)+'.'+$aad), content encryption IV, CEK))\"\n    \"tag\": \"base64url(AEAD Authentication Tag)\"\n}\n

    The recipients headers represent an ephemeral key that can be used to derive the key to be used for AEAD decryption of the CEK, following the ECDH-1PU encryption scheme.

    The function XC20P in the example above is defined as the XChaCha20Poly1305 cipher function. This can be replaced by the AES-CBC+HMAC-SHA family of cipher functions for authcrypt, or the AES-GCM cipher function for anoncrypt.

    "},{"location":"features/0334-jwe-envelope/#concrete-examples","title":"Concrete examples","text":"

    See concrete anoncrypt and authcrypt examples

    "},{"location":"features/0334-jwe-envelope/#jwe-detached-mode-nested-envelopes","title":"JWE detached mode nested envelopes","text":"

    There are situations in DIDComm messaging where an envelope could be nested inside another envelope -- particularly RFC 0046: Mediators and Relays. Normally, nesting envelopes implies that the envelope payloads will incur additional encryption and encoding operations at each parent level in the nesting. This section describes a mechanism to extract the nested payloads outside the nesting structure to avoid these additional operations.

    "},{"location":"features/0334-jwe-envelope/#detached-mode","title":"Detached mode","text":"

    JWS defines detached mode where the payload can be removed. As stated in IETF RFC7515, this strategy has the following benefit:

    Note that this method needs no support from JWS libraries, as applications can use this method by modifying the inputs and outputs of standard JWS libraries.

    We will leverage a similar detached mode for JWE in the mechanism described below.

    "},{"location":"features/0334-jwe-envelope/#mechanism","title":"Mechanism","text":"

    Sender:

    1. Creates the \"final\" JWE intended for the recipient (normal JWE operation).
    2. Extracts the ciphertext and replaces it with an empty string.
    3. Creates the nested envelopes around the \"final\" JWE (but with the empty string ciphertext).
    4. Sends the nested envelope (normal JWE) plus the ciphertext from the \"final\" JWE.

    Mediator:

    1. Decrypts its layer (normal JWE operation). The detached ciphertext(s) are filtered out prior to invoking the JWE library (normal JWE structure).
    2. Removes the next detached ciphertext from the structure and inserts it back into the ciphertext field for the next nesting level.

    Receiver:

    1. Decrypts the \"final\" JWE (normal JWE operation).

    The detached ciphertext steps are repeated at each nesting level. In this case, an array of ciphertexts is sent along with the nested envelope.
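    The sender's step 2 and the mediator's reattachment step above can be sketched as simple detach/reattach operations on the JWE JSON structure; the function names are illustrative:

    ```python
    def detach(jwe: dict) -> tuple:
        """Sender step 2: pull the ciphertext out of the 'final' JWE,
        leaving an empty string in its place, and return both."""
        inner = dict(jwe)
        ciphertext = inner["ciphertext"]
        inner["ciphertext"] = ""
        return inner, ciphertext

    def reattach(jwe: dict, ciphertext: str) -> dict:
        """Mediator step: insert the next detached ciphertext back into
        the ciphertext field for the next nesting level."""
        restored = dict(jwe)
        restored["ciphertext"] = ciphertext
        return restored
    ```

    Because only the (already opaque) ciphertext string moves in and out, no re-encryption of the nested payload is needed at any level.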

    This solution has the following characteristics:

    "},{"location":"features/0334-jwe-envelope/#serialization","title":"Serialization","text":"

    The serialization format for the extracted ciphertexts needs additional thought for both compact and JSON modes. As a starting point:

    For illustration, the following compact serialization represents nesting due to two mediators (the second mediator being closest to the Receiver).

    First Mediator receives:

      BASE64URL(UTF8(JWE Protected Header for First Mediator)) || '.' ||\n  BASE64URL(JWE Encrypted Key for First Mediator) || '.' ||\n  BASE64URL(JWE Initialization Vector for First Mediator) || '.' ||\n  BASE64URL(JWE Ciphertext for First Mediator) || '.' ||\n  BASE64URL(JWE Authentication Tag for First Mediator) || '.' ||\n  BASE64URL(JWE Ciphertext for Receiver) || '.' ||\n  BASE64URL(JWE Ciphertext for Second Mediator)\n

    Second Mediator receives:

      BASE64URL(UTF8(JWE Protected Header for Second Mediator)) || '.' ||\n  BASE64URL(JWE Encrypted Key for Second Mediator) || '.' ||\n  BASE64URL(JWE Initialization Vector for Second Mediator) || '.' ||\n  BASE64URL(JWE Ciphertext for Second Mediator) || '.' ||\n  BASE64URL(JWE Authentication Tag for Second Mediator) || '.' ||\n  BASE64URL(JWE Ciphertext for Receiver)\n

    Finally, the Receiver has a normal JWE (as usual):

      BASE64URL(UTF8(JWE Protected Header for Receiver)) || '.' ||\n  BASE64URL(JWE Encrypted Key for Receiver) || '.' ||\n  BASE64URL(JWE Initialization Vector for Receiver) || '.' ||\n  BASE64URL(JWE Ciphertext for Receiver) || '.' ||\n  BASE64URL(JWE Authentication Tag for Receiver)\n

    This illustration extends the serialization shown in RFC 7516.
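    In the extended compact serialization above, the detached ciphertexts are simply appended as extra dot-separated segments after the normal five-part JWE. A mediator can split them off before handing the token to a standard JWE library (a sketch with placeholder segment values):

```python
def split_detached_compact(token):
    """Split an extended compact serialization into the normal
    five-part JWE and the list of appended detached ciphertexts."""
    parts = token.split(".")
    jwe = ".".join(parts[:5])   # header.encrypted_key.iv.ciphertext.tag
    detached = parts[5:]        # ciphertexts for the inner layers
    return jwe, detached

# Extended token as received by the First Mediator
# (two appended segments: Receiver's and Second Mediator's ciphertexts).
token = "HDR1.KEY1.IV1.CT1.TAG1.CT_RECEIVER.CT_MEDIATOR2"
jwe, detached = split_detached_compact(token)
# After decrypting `jwe`, the mediator moves the last detached segment
# into the ciphertext slot of the next nesting level and forwards the
# remaining detached ciphertext(s) appended to that envelope.
```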

    "},{"location":"features/0334-jwe-envelope/#prior-art","title":"Prior art","text":""},{"location":"features/0334-jwe-envelope/#unresolved-questions","title":"Unresolved questions","text":""},{"location":"features/0334-jwe-envelope/#implementations","title":"Implementations","text":"

    The following lists the implementations (if any) of this RFC. Please submit a pull request to add your implementation. If the implementation is open source, include a link to the repo or to the implementation within the repo. Please be consistent in the \"Name\" field so that a mechanical processing of the RFCs can generate a list of all RFCs supported by an Aries implementation.

    Name / Link Implementation Notes

    Note: Aries Framework - Go is almost done with a first draft implementation of this RFC.

    "},{"location":"features/0334-jwe-envelope/anoncrypt-examples/","title":"Table of Contents","text":"

    Created by gh-md-toc

    "},{"location":"features/0334-jwe-envelope/anoncrypt-examples/#anoncrypt-jwe-concrete-examples","title":"Anoncrypt JWE Concrete examples","text":"

    The following examples use the JWE anoncrypt packer to encrypt the payload secret message, with the aad value derived from the concatenation of the recipients' KIDs (ASCII sorted) joined by . for non-compact serializations (JWE Compact serialization does not support AAD).
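    The aad derivation used throughout these examples -- sorted recipient kids joined with ., hashed with sha256, raw (no padding) base64URL encoded -- can be sketched as follows (the kid values here are hypothetical placeholders):

```python
import base64
import hashlib

def compute_aad(recipient_kids):
    """aad = raw (unpadded) base64URL of sha256 over the ASCII-sorted
    recipient kids joined with '.'."""
    kid_list = ".".join(sorted(recipient_kids))
    digest = hashlib.sha256(kid_list.encode("ascii")).digest()
    return base64.urlsafe_b64encode(digest).rstrip(b"=").decode("ascii")

# Order of the input kids does not matter: they are sorted first.
aad = compute_aad(["kidB", "kidA", "kidC"])
# A 32-byte sha256 digest base64URL-encodes to 43 characters, no padding.
```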

    "},{"location":"features/0334-jwe-envelope/anoncrypt-examples/#notes","title":"Notes","text":""},{"location":"features/0334-jwe-envelope/anoncrypt-examples/#1-a256gcm-content-encryption","title":"1 A256GCM Content Encryption","text":"

    The packer generates the following protected headers for A256GCM content encryption in the examples below: - Generated protected headers: {\"cty\":\"application/didcomm-plain+json\",\"enc\":\"A256GCM\",\"typ\":\"application/didcomm-encrypted+json\"} - raw (no padding) base64URL encoded: eyJjdHkiOiJhcHBsaWNhdGlvbi9kaWRjb21tLXBsYWluK2pzb24iLCJlbmMiOiJBMjU2R0NNIiwidHlwIjoiYXBwbGljYXRpb24vZGlkY29tbS1lbmNyeXB0ZWQranNvbiJ9
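    The raw base64URL value above can be reproduced with the standard library -- compact JSON serialization (no whitespace) followed by unpadded base64URL encoding (a sketch):

```python
import base64
import json

header = {
    "cty": "application/didcomm-plain+json",
    "enc": "A256GCM",
    "typ": "application/didcomm-encrypted+json",
}
# Compact JSON (no spaces), then base64URL with '=' padding stripped.
serialized = json.dumps(header, separators=(",", ":")).encode("utf-8")
encoded = base64.urlsafe_b64encode(serialized).rstrip(b"=").decode("ascii")
```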

    "},{"location":"features/0334-jwe-envelope/anoncrypt-examples/#11-multi-recipients-jwes","title":"1.1 Multi recipients JWEs","text":""},{"location":"features/0334-jwe-envelope/anoncrypt-examples/#111-nist-p-256-keys","title":"1.1.1 NIST P-256 keys","text":""},{"location":"features/0334-jwe-envelope/anoncrypt-examples/#112-nist-p-384-keys","title":"1.1.2 NIST P-384 keys","text":""},{"location":"features/0334-jwe-envelope/anoncrypt-examples/#113-nist-p-521-keys","title":"1.1.3 NIST P-521 keys","text":""},{"location":"features/0334-jwe-envelope/anoncrypt-examples/#114-x25519-keys","title":"1.1.4 X25519 keys","text":""},{"location":"features/0334-jwe-envelope/anoncrypt-examples/#12-single-recipient-jwes","title":"1.2 Single Recipient JWEs","text":"

    These examples pack a message with a single recipient using the Flattened JWE JSON serialization and the Compact JWE serialization formats, as mentioned in the notes above.

    "},{"location":"features/0334-jwe-envelope/anoncrypt-examples/#121-nist-p-256-key","title":"1.2.1 NIST P-256 key","text":""},{"location":"features/0334-jwe-envelope/anoncrypt-examples/#122-nist-p-384-key","title":"1.2.2 NIST P-384 key","text":""},{"location":"features/0334-jwe-envelope/anoncrypt-examples/#123-nist-p-521-key","title":"1.2.3 NIST P-521 key","text":""},{"location":"features/0334-jwe-envelope/anoncrypt-examples/#124-x25519-key","title":"1.2.4 X25519 key","text":""},{"location":"features/0334-jwe-envelope/anoncrypt-examples/#2-xc20p-content-encryption","title":"2 XC20P content encryption","text":""},{"location":"features/0334-jwe-envelope/anoncrypt-examples/#21-multi-recipients-jwes","title":"2.1 Multi recipients JWEs","text":"

    The packer generates the following protected headers for XC20P content encryption in the examples below: - Generated protected headers: {\"cty\":\"application/didcomm-plain+json\",\"enc\":\"XC20P\",\"typ\":\"application/didcomm-encrypted+json\"} - raw (no padding) base64URL encoded: eyJjdHkiOiJhcHBsaWNhdGlvbi9kaWRjb21tLXBsYWluK2pzb24iLCJlbmMiOiJYQzIwUCIsInR5cCI6ImFwcGxpY2F0aW9uL2RpZGNvbW0tZW5jcnlwdGVkK2pzb24ifQ

    The same notes above apply here.

    "},{"location":"features/0334-jwe-envelope/anoncrypt-examples/#211-nist-p-256-keys","title":"2.1.1 NIST P-256 keys","text":""},{"location":"features/0334-jwe-envelope/anoncrypt-examples/#212-nist-p-384-keys","title":"2.1.2 NIST P-384 keys","text":""},{"location":"features/0334-jwe-envelope/anoncrypt-examples/#213-nist-p-521-keys","title":"2.1.3 NIST P-521 keys","text":""},{"location":"features/0334-jwe-envelope/anoncrypt-examples/#214-x25519-keys","title":"2.1.4 X25519 keys","text":""},{"location":"features/0334-jwe-envelope/anoncrypt-examples/#22-single-recipient-jwes","title":"2.2 Single Recipient JWEs","text":"

    These examples pack a message with a single recipient using the Flattened JWE JSON serialization and the Compact JWE serialization formats, as mentioned in the notes above.

    "},{"location":"features/0334-jwe-envelope/anoncrypt-examples/#221-nist-p-256-key","title":"2.2.1 NIST P-256 key","text":""},{"location":"features/0334-jwe-envelope/anoncrypt-examples/#222-nist-p-384-key","title":"2.2.2 NIST P-384 key","text":""},{"location":"features/0334-jwe-envelope/anoncrypt-examples/#223-nist-p-521-key","title":"2.2.3 NIST P-521 key","text":""},{"location":"features/0334-jwe-envelope/anoncrypt-examples/#224-x25519-key","title":"2.2.4 X25519 key","text":""},{"location":"features/0334-jwe-envelope/authcrypt-examples/","title":"Table of Contents","text":"

    Created by gh-md-toc

    "},{"location":"features/0334-jwe-envelope/authcrypt-examples/#authcrypt-jwe-concrete-examples","title":"Authcrypt JWE Concrete examples","text":"

    The following examples use the JWE authcrypt packer to encrypt the payload secret message, with the aad value derived from the concatenation of the recipients' KIDs (ASCII sorted) joined by . for non-compact serializations (JWE Compact serialization does not support AAD).

    "},{"location":"features/0334-jwe-envelope/authcrypt-examples/#notes","title":"Notes","text":""},{"location":"features/0334-jwe-envelope/authcrypt-examples/#1-a256gcm-content-encryption","title":"1 A256GCM Content Encryption","text":""},{"location":"features/0334-jwe-envelope/authcrypt-examples/#11-multi-recipients-jwes","title":"1.1 Multi recipients JWEs","text":""},{"location":"features/0334-jwe-envelope/authcrypt-examples/#111-nist-p-256-keys","title":"1.1.1 NIST P-256 keys","text":"

    The packer generates the following protected headers that includes the skid: - Generated protected headers: {\"cty\":\"application/didcomm-plain+json\",\"enc\":\"A256GCM\",\"skid\":\"6PBTUbcLB7-Z4fuAFn42oC1PaMsNmjheq1FeZEUgV_8\",\"typ\":\"application/didcomm-encrypted+json\"} - raw (no padding) base64URL encoded: eyJjdHkiOiJhcHBsaWNhdGlvbi9kaWRjb21tLXBsYWluK2pzb24iLCJlbmMiOiJBMjU2R0NNIiwic2tpZCI6IjZQQlRVYmNMQjctWjRmdUFGbjQyb0MxUGFNc05tamhlcTFGZVpFVWdWXzgiLCJ0eXAiOiJhcHBsaWNhdGlvbi9kaWRjb21tLWVuY3J5cHRlZCtqc29uIn0

    "},{"location":"features/0334-jwe-envelope/authcrypt-examples/#112-nist-p-384-keys","title":"1.1.2 NIST P-384 keys","text":"

    The packer generates the following protected headers that includes the skid: - Generated protected headers: {\"cty\":\"application/didcomm-plain+json\",\"enc\":\"A256GCM\",\"skid\":\"0Bz8yRwu9eC8Gi7cYOwAKMJ8jysInhAtwH8k8m9MX04\",\"typ\":\"application/didcomm-encrypted+json\"} - raw (no padding) base64URL encoded: eyJjdHkiOiJhcHBsaWNhdGlvbi9kaWRjb21tLXBsYWluK2pzb24iLCJlbmMiOiJBMjU2R0NNIiwic2tpZCI6IjBCejh5Und1OWVDOEdpN2NZT3dBS01KOGp5c0luaEF0d0g4azhtOU1YMDQiLCJ0eXAiOiJhcHBsaWNhdGlvbi9kaWRjb21tLWVuY3J5cHRlZCtqc29uIn0 - Sender key JWK format:

    {\n  \"kty\": \"EC\",\n  \"crv\": \"P-384\",\n  \"x\": \"UW1LtMXuZdFS0gyp0_F19uxHqECvCcJA7SmeeuRSSc_PQfsbZWXt5L0KyLYpNIQb\",\n  \"y\": \"FBdPcUvanB7igwkX0NN5rOvH3OKZ1gQHhcad7cCy6QNYKKz7lBWUUOmzypee31pS\",\n  \"d\": \"wrXW0wsFKjvpTWqOAd1mohRublQs4P014-U4_K-eTRFmzhkyLJqNn91dH_AHUc4-\"\n}\n
    - Sender kid (jwk thumbprint raw base64 URL encoded): 0Bz8yRwu9eC8Gi7cYOwAKMJ8jysInhAtwH8k8m9MX04 - Recipient 1 key JWK format:
    {\n  \"kty\": \"EC\",\n  \"crv\": \"P-384\",\n  \"x\": \"k3W_RR59uUG3HFlhhqNNmBDyFdMtWHxKAsJaxLBqQgQer3d3aAN-lfdxzGnHtwj1\",\n  \"y\": \"VMSy5zxFEGaGRINailLTUH6NlP0JO2qu0j0_UbS7Ng1b8JkzHDnDbjGgsLqVJaMM\",\n  \"d\": \"iM5K8uqNvFYJnuToMDBguGwUIlu1erT-K0g7NYtJrQnHZOumS8yIC4MCNC60Ch91\"\n}\n
    - Recipient 1 kid (jwk thumbprint raw base64 URL encoded): obCHRLVDx634Cax_Kr3B8fd_-xj5kAj0r0Kvvvmq1z8 - Recipient 2 key JWK format:
    {\n  \"kty\": \"EC\",\n  \"crv\": \"P-384\",\n  \"x\": \"W3iUHCzh_PWzUKONKeHwIKcjWNN--c7OlL2H23lV13C9tlkqOleFUmioW-AeitEk\",\n  \"y\": \"CIzVD6KsuDLyKQPm0r62LPZikkT2kiXJpLjcVO3op2kgePQkZ31xniKE0VbUBnTH\",\n  \"d\": \"V_vQwOqHVCGxSjX_dN8H5VXvOGYDRTGI00mNXwB0I0mKDd8kqCJmNtGlf-eUrbub\"\n}\n
    - Recipient 2 kid (jwk thumbprint raw base64 URL encoded): PfuTIXG60dvOwnFOfMxJ0i59_L7vqNytROX_bLRR-3M - Recipient 3 key JWK format:
    {\n  \"kty\": \"EC\",\n  \"crv\": \"P-384\",\n  \"x\": \"bsX8qtEtj5IDLp9iDUKlgdu_3nluupFtFBrfIK1nza1bGZQRlZ3JG3PdBzVAoePz\",\n  \"y\": \"QX_2v0BHloNS7iWoB4CcO9UWHdtirMVmbNcB8ZGczCJOfUyjYcQxGr0RU_tGkFC4\",\n  \"d\": \"rQ-4ZmWn09CsCqRQJhpQhDeUZXeZ3cy_Pei-fchVPFTa2FnAzvjwEF2Nsm2f3MmR\"\n}\n
    - Recipient 3 kid (jwk thumbprint raw, no padding, base64 URL encoded): VTVlkyBsoW4ey0sh7TMJBErLGeBeKQsOttFRrXD6eqI - List of kids used for AAD for the above recipients (sorted kid values joined with .): PfuTIXG60dvOwnFOfMxJ0i59_L7vqNytROX_bLRR-3M.VTVlkyBsoW4ey0sh7TMJBErLGeBeKQsOttFRrXD6eqI.obCHRLVDx634Cax_Kr3B8fd_-xj5kAj0r0Kvvvmq1z8 - Resulting AAD value (sha256 of above list raw, no padding, base64 URL encoded): m72Q9j28hFk0imbFVzqY4KfTE77L8itJoX75N3hwiwA - Finally, packing the payload outputs the following JWE (pretty printed for readability):
    {\n  \"protected\": \"eyJjdHkiOiJhcHBsaWNhdGlvbi9kaWRjb21tLXBsYWluK2pzb24iLCJlbmMiOiJBMjU2R0NNIiwic2tpZCI6IjBCejh5Und1OWVDOEdpN2NZT3dBS01KOGp5c0luaEF0d0g4azhtOU1YMDQiLCJ0eXAiOiJhcHBsaWNhdGlvbi9kaWRjb21tLWVuY3J5cHRlZCtqc29uIn0\",\n  \"recipients\": [\n    {\n      \"header\": {\n        \"alg\": \"ECDH-1PU+A256KW\",\n        \"apu\": \"MEJ6OHlSd3U5ZUM4R2k3Y1lPd0FLTUo4anlzSW5oQXR3SDhrOG05TVgwNA\",\n        \"apv\": \"b2JDSFJMVkR4NjM0Q2F4X0tyM0I4ZmRfLXhqNWtBajByMEt2dnZtcTF6OA\",\n        \"kid\": \"obCHRLVDx634Cax_Kr3B8fd_-xj5kAj0r0Kvvvmq1z8\",\n        \"epk\": {\n          \"kty\": \"EC\",\n          \"crv\": \"P-384\",\n          \"x\": \"cjJHtV4VkMCww9ig94-_4e4yMfo2WI4Rh4dZh6NkYFvz-EGylA7RLSO5TRC-JJ_G\",\n          \"y\": \"RJe2QisAYpfuTWTV6KVeoLGshsJqYokbcSUqdMxrFGXSp4ZMNrW4yj410Xsn6hy6\"\n        }\n      },\n      \"encrypted_key\": \"o0ZZ_xNtmUPcpQAK3kzjOLp8xWBJ31tr-ORQjXtwpqgTuvM_nvhk_w\"\n    },\n    {\n      \"header\": {\n        \"alg\": \"ECDH-1PU+A256KW\",\n        \"apu\": \"MEJ6OHlSd3U5ZUM4R2k3Y1lPd0FLTUo4anlzSW5oQXR3SDhrOG05TVgwNA\",\n        \"apv\": \"b2JDSFJMVkR4NjM0Q2F4X0tyM0I4ZmRfLXhqNWtBajByMEt2dnZtcTF6OA\",\n        \"kid\": \"PfuTIXG60dvOwnFOfMxJ0i59_L7vqNytROX_bLRR-3M\",\n        \"epk\": {\n          \"kty\": \"EC\",\n          \"crv\": \"P-384\",\n          \"x\": \"u1HYhdUJGx49J6wSLYM_JLHTkJrkR7wMSm5uYZMH7ZpcC3qF8MUyKTuKN0FGCBcN\",\n          \"y\": \"K-XI-KAGd2jHebNq44yQrDA6Ubs5M99mIlre0chzI13bSLDOuUG4RJ8yjYjXysWF\"\n        }\n      },\n      \"encrypted_key\": \"iCV1_peiRwnsrrBQWmp7GOd-taee-Yk8t6XqJCZPGziglDpGBu_ZhA\"\n    },\n    {\n      \"header\": {\n        \"alg\": \"ECDH-1PU+A256KW\",\n        \"apu\": \"MEJ6OHlSd3U5ZUM4R2k3Y1lPd0FLTUo4anlzSW5oQXR3SDhrOG05TVgwNA\",\n        \"apv\": \"b2JDSFJMVkR4NjM0Q2F4X0tyM0I4ZmRfLXhqNWtBajByMEt2dnZtcTF6OA\",\n        \"kid\": \"VTVlkyBsoW4ey0sh7TMJBErLGeBeKQsOttFRrXD6eqI\",\n        \"epk\": {\n          \"kty\": \"EC\",\n          \"crv\": \"P-384\",\n          \"x\": 
\"Twps_QU6ShP18uQFNCcdOx9sU9YrHBznNnSbhQD474tLUcnslq5Trubq3ogp-LTX\",\n          \"y\": \"oSES1a5xve9e-lKQ3NMN5_CW9Sii9rTorqUMggDzodLsRGm0Jud3HAy2-uE956Xq\"\n        }\n      },\n      \"encrypted_key\": \"dLDKyXeZJDcB_i1Tnn_EUxqCc2ukneaummXF_FwcbpnMH8B0eVizvA\"\n    }\n  ],\n  \"aad\": \"m72Q9j28hFk0imbFVzqY4KfTE77L8itJoX75N3hwiwA\",\n  \"iv\": \"nuuuri2fyNl3jBo6\",\n  \"ciphertext\": \"DCWevJuEo5dx-MmqPvw\",\n  \"tag\": \"Pyt1S_Smg9Pnd1u_5Z7nbA\"\n}\n

    "},{"location":"features/0334-jwe-envelope/authcrypt-examples/#113-nist-p-521-keys","title":"1.1.3 NIST P-521 keys","text":"

    The packer generates the following protected headers that includes the skid: - Generated protected headers: {\"cty\":\"application/didcomm-plain+json\",\"enc\":\"A256GCM\",\"skid\":\"oq-WBIGQm-iHiNRj6nId4-E1QtY8exzp8C56SziUfeU\",\"typ\":\"application/didcomm-encrypted+json\"} - raw (no padding) base64URL encoded: eyJjdHkiOiJhcHBsaWNhdGlvbi9kaWRjb21tLXBsYWluK2pzb24iLCJlbmMiOiJBMjU2R0NNIiwic2tpZCI6Im9xLVdCSUdRbS1pSGlOUmo2bklkNC1FMVF0WThleHpwOEM1NlN6aVVmZVUiLCJ0eXAiOiJhcHBsaWNhdGlvbi9kaWRjb21tLWVuY3J5cHRlZCtqc29uIn0 - Sender key JWK format:

    {\n  \"kty\": \"EC\",\n  \"crv\": \"P-521\",\n  \"x\": \"AXKDGPnD6hlQIre8aEeu33bQffkfl-eQfQXgzNXQX7XFYt5GKA1N6w4-f0_Ci7fQNKGkQuCoAu5-6CNk9M_cHiDi\",\n  \"y\": \"Ae4-APhoZAmM99MdY9io9IZA43dN7dA006wlFb6LJ9bcusJOi5R-o3o3FhCjt5KTv_JxYbo6KU4PsBwQ1eeKyJ0U\",\n  \"d\": \"AP9l2wmQ85P5XD84CkEQVWHaX_46EDvHxLWHEKsHFSQYjEh6BDSuyy1TUNv68v8kpbLCDjvsBc3cIBqC4_T1r4pU\"\n}\n
    - Sender kid (jwk thumbprint raw base64 URL encoded): oq-WBIGQm-iHiNRj6nId4-E1QtY8exzp8C56SziUfeU - Recipient 1 key JWK format:
    {\n  \"kty\": \"EC\",\n  \"crv\": \"P-521\",\n  \"x\": \"ALmUHkd9Gi2NApJojNzzA34Qdd1-KLnq6jd2UJ9wl-xJzTQ2leg8qi3-hrFs7NqNfxqO6vE5bBoWYFeAcf3LqJOU\",\n  \"y\": \"AN-MutmkAXGzlgzSQJRnctHDcjQQNpRek-8BeqyUDXdZKNGKSMEAzw6Hnl3VdvsvihQfrxcajpx5PSnwxbbdakHq\",\n  \"d\": \"AKv-YbKdI6y8NRMP-e17-RjZyRTfGf0Xh9Og5g7q7aq0xS2mO59ttIJ67XHW5SPTBQDbltdUcydKroWNUIGvhKNv\"\n}\n
    - Recipient 1 kid (jwk thumbprint raw base64 URL encoded): wax1T_hGUvM0NmlbFJi2RizQ_gWajumI5j0Hx7CbgAw - Recipient 2 key JWK format:
    {\n  \"kty\": \"EC\",\n  \"crv\": \"P-521\",\n  \"x\": \"ALmUHkd9Gi2NApJojNzzA34Qdd1-KLnq6jd2UJ9wl-xJzTQ2leg8qi3-hrFs7NqNfxqO6vE5bBoWYFeAcf3LqJOU\",\n  \"y\": \"AN-MutmkAXGzlgzSQJRnctHDcjQQNpRek-8BeqyUDXdZKNGKSMEAzw6Hnl3VdvsvihQfrxcajpx5PSnwxbbdakHq\",\n  \"d\": \"AKv-YbKdI6y8NRMP-e17-RjZyRTfGf0Xh9Og5g7q7aq0xS2mO59ttIJ67XHW5SPTBQDbltdUcydKroWNUIGvhKNv\"\n}\n
    - Recipient 2 kid (jwk thumbprint raw base64 URL encoded): XmLVV-CqMkTGQIe6-KecWZWtZVwORTMP2y5aqMPV7P4 - Recipient 3 key JWK format:
    {\n  \"kty\": \"EC\",\n  \"crv\": \"P-521\",\n  \"x\": \"AHCbpo-299Q0Fk71CtBoPu-40-Z0UOu4cGZfgtHHwcu3ciMWVR8IWF4bgvFpAPfKG8Dqx7JJWO8uEgLE67A7aQOL\",\n  \"y\": \"AQ_JBjS3lt8zz3njFhUoJwEdSJMyrSfGPCLpaWkKuRo25k3im-7IjY8T43gvzZXYwV3PKKR3iJ1jnQCrYmfRrmva\",\n  \"d\": \"ACgCw3U3eWTYD5vcygoOpoGPost9TojYJH9FllyRuqwlS3L8dkZu7vKhFyoEg6Bo8AqcOUj5Mtgxhd6Wu02YvqK3\"\n}\n
    - Recipient 3 kid (jwk thumbprint raw, no padding, base64 URL encoded): pRJtTY7V1pClPu8WEgEZonzaHq3K0El9Vcb8qmjucSg - List of kids used for AAD for the above recipients (sorted kid values joined with .): S8s7FFL7f0fUMXt93WOWC-3PJrV1iuAmB_ZlCDyjXqs.XmLVV-CqMkTGQIe6-KecWZWtZVwORTMP2y5aqMPV7P4.pRJtTY7V1pClPu8WEgEZonzaHq3K0El9Vcb8qmjucSg - Resulting AAD value (sha256 of above list raw, no padding, base64 URL encoded): tOS8nLSCERw2V9WOZVo6cenGuM4DJvHse1dsvTk8_As - Finally, packing the payload outputs the following JWE (pretty printed for readability):
    {\n  \"protected\": \"eyJjdHkiOiJhcHBsaWNhdGlvbi9kaWRjb21tLXBsYWluK2pzb24iLCJlbmMiOiJBMjU2R0NNIiwic2tpZCI6Im9xLVdCSUdRbS1pSGlOUmo2bklkNC1FMVF0WThleHpwOEM1NlN6aVVmZVUiLCJ0eXAiOiJhcHBsaWNhdGlvbi9kaWRjb21tLWVuY3J5cHRlZCtqc29uIn0\",\n  \"recipients\": [\n    {\n      \"header\": {\n        \"alg\": \"ECDH-1PU+A256KW\",\n        \"apu\": \"b3EtV0JJR1FtLWlIaU5SajZuSWQ0LUUxUXRZOGV4enA4QzU2U3ppVWZlVQ\",\n        \"apv\": \"WG1MVlYtQ3FNa1RHUUllNi1LZWNXWld0WlZ3T1JUTVAyeTVhcU1QVjdQNA\",\n        \"kid\": \"XmLVV-CqMkTGQIe6-KecWZWtZVwORTMP2y5aqMPV7P4\",\n        \"epk\": {\n          \"kty\": \"EC\",\n          \"crv\": \"P-521\",\n          \"x\": \"AP630J9yi2UFBfRWKucXB8eu9-SSKbbbD1fzFhLgbI3xTRTRNMGm-U5EGHbplMLsOfP2pNxtAgo2-d6abiZiD6gg\",\n          \"y\": \"AE1Grtp1iFvySLN4yHVvE0kYWChqVfkO_kHEMujjL6vVu_AAOvl3aogquLv1zgduitCPbKRTno89r3rv0L0Kuj0M\"\n        }\n      },\n      \"encrypted_key\": \"FSYpXFfgPlSfj91VFQ4zAs0Wb3CEpWcBcGeW4nld9szVfb_WRbqTtA\"\n    },\n    {\n      \"header\": {\n        \"alg\": \"ECDH-1PU+A256KW\",\n        \"apu\": \"b3EtV0JJR1FtLWlIaU5SajZuSWQ0LUUxUXRZOGV4enA4QzU2U3ppVWZlVQ\",\n        \"apv\": \"WG1MVlYtQ3FNa1RHUUllNi1LZWNXWld0WlZ3T1JUTVAyeTVhcU1QVjdQNA\",\n        \"kid\": \"S8s7FFL7f0fUMXt93WOWC-3PJrV1iuAmB_ZlCDyjXqs\",\n        \"epk\": {\n          \"kty\": \"EC\",\n          \"crv\": \"P-521\",\n          \"x\": \"Acw0XM1IZl63ltysb-ivw8zBhZ-Wz54SaXM_vGGea8Sa5w6VWdZflp1tibzHkfu4novFFpNbKtnCKi-28AqQnOYZ\",\n          \"y\": \"AajoBj0KMrlaIA17RKnShFNzIb1S81oLYZu5MXzAg-XvT8_q83dXajOCiYJLo3taUvHTlcPjkHMG3_8442DgWpU_\"\n        }\n      },\n      \"encrypted_key\": \"3ct4awH6xyp9BjA74Q_j6ot6F32okEYXbS2e6NIkiAgs-JGyEPWoxw\"\n    },\n    {\n      \"header\": {\n        \"alg\": \"ECDH-1PU+A256KW\",\n        \"apu\": \"b3EtV0JJR1FtLWlIaU5SajZuSWQ0LUUxUXRZOGV4enA4QzU2U3ppVWZlVQ\",\n        \"apv\": \"WG1MVlYtQ3FNa1RHUUllNi1LZWNXWld0WlZ3T1JUTVAyeTVhcU1QVjdQNA\",\n        \"kid\": \"pRJtTY7V1pClPu8WEgEZonzaHq3K0El9Vcb8qmjucSg\",\n        \"epk\": 
{\n          \"kty\": \"EC\",\n          \"crv\": \"P-521\",\n          \"x\": \"APu-ArpY-GUntHG7BzTvUauKVP_YpCcVnZFX6r_VvYY2iPbFSZYxvUdUbX3TGK-Q92rTHNaNnutjbPcrCaBpJecM\",\n          \"y\": \"AONhGq1vGU20Wdrx1FT5SBdLOIvqOK_pxhTJZhS0Vwi_JYQdKN6PHrX9GyJ23ZhaY3bBKX6V2uzRJzV8Qam1FUbz\"\n        }\n      },\n      \"encrypted_key\": \"U51txv9yfZASl8tlT7GbNtLjeAqTHUVT4O9MEqBKaYIdAcA7Qd7dnw\"\n    }\n  ],\n  \"aad\": \"tOS8nLSCERw2V9WOZVo6cenGuM4DJvHse1dsvTk8_As\",\n  \"iv\": \"LJl-9ygxPGMAmVHP\",\n  \"ciphertext\": \"HOfi-W7mcQv93scr1z8\",\n  \"tag\": \"zaM6OfzhVhYCsqD2VW5ztw\"\n}\n

    "},{"location":"features/0334-jwe-envelope/authcrypt-examples/#114-x25519-keys","title":"1.1.4 X25519 keys","text":"

    The packer generates the following protected headers that includes the skid: - Generated protected headers: {\"cty\":\"application/didcomm-plain+json\",\"enc\":\"A256GCM\",\"skid\":\"X5INSMIv_w4Q7pljH7xjeUrRAKiBGHavSmOYyyiRugc\",\"typ\":\"application/didcomm-encrypted+json\"} - raw (no padding) base64URL encoded: eyJjdHkiOiJhcHBsaWNhdGlvbi9kaWRjb21tLXBsYWluK2pzb24iLCJlbmMiOiJBMjU2R0NNIiwic2tpZCI6Ilg1SU5TTUl2X3c0UTdwbGpIN3hqZVVyUkFLaUJHSGF2U21PWXl5aVJ1Z2MiLCJ0eXAiOiJhcHBsaWNhdGlvbi9kaWRjb21tLWVuY3J5cHRlZCtqc29uIn0 - Sender key JWK format:

    {\n  \"kty\": \"OKP\",\n  \"crv\": \"X25519\",\n  \"x\": \"WKkktGWkUB9hDITcqa1Z6MC8rcWy8fWtxuT7xwQF1lw\",\n  \"d\": \"-LEcVt6bW_ah9gY7H_WknTsg1MXq8yc42SrSJhqP0Vo\"\n}\n
    - Sender kid (jwk thumbprint raw base64 URL encoded): X5INSMIv_w4Q7pljH7xjeUrRAKiBGHavSmOYyyiRugc - Recipient 1 key JWK format:
    {\n  \"kty\": \"OKP\",\n  \"crv\": \"X25519\",\n  \"x\": \"NJzDtIa7vjz-isjaI-6GKGDe2EUx26-D44d6jLILeBI\",\n  \"d\": \"MEBNdr6Tpb0XfD60NeHby-Tkmlpgr7pvVe7Q__sBbGw\"\n}\n
    - Recipient 1 kid (jwk thumbprint raw base64 URL encoded): 2UR-nzYjVhsq0cZakWjE38-wUdG0S2EIrLZ8Eh0KVO0 - Recipient 2 key JWK format:
    {\n  \"kty\": \"OKP\",\n  \"crv\": \"X25519\",\n  \"x\": \"_aiA8rwrayc2k9EL-mkqtSh8onyl_-EzVif3L-q-R20\",\n  \"d\": \"ALBfdypF_lAbBtWXhwvq9Rs7TGjcLd-iuDh0s3yWr2Y\"\n}\n
    - Recipient 2 kid (jwk thumbprint raw base64 URL encoded): dvDd4h1rHj-onj-Xz9O1KRIgkMhh3u23d-94brHbBKo - Recipient 3 key JWK format:
    {\n  \"kty\": \"OKP\",\n  \"crv\": \"X25519\",\n  \"x\": \"zuHJfrIarLGFga0OwZqDlvlI5P1bb9DFhAtdnI54pwQ\",\n  \"d\": \"8BVFAqxPHXB5W-EBxr-EjdUmA4HqY1gwDjiYvt0UxUk\"\n}\n
    - Recipient 3 kid (jwk thumbprint raw, no padding, base64 URL encoded): hj57wbrmOygTc_ktMPqKMHdiL85FdiGJa5DKzoLIzeU - List of kids used for AAD for the above recipients (sorted kid values joined with .): 2UR-nzYjVhsq0cZakWjE38-wUdG0S2EIrLZ8Eh0KVO0.dvDd4h1rHj-onj-Xz9O1KRIgkMhh3u23d-94brHbBKo.hj57wbrmOygTc_ktMPqKMHdiL85FdiGJa5DKzoLIzeU - Resulting AAD value (sha256 of above list raw, no padding, base64 URL encoded): L-QV1cHI5u8U9BQa8_S4CFW-LhKNXHCjmqydtQYuSLw - Finally, packing the payload outputs the following JWE (pretty printed for readability):
    {\n  \"protected\": \"eyJjdHkiOiJhcHBsaWNhdGlvbi9kaWRjb21tLXBsYWluK2pzb24iLCJlbmMiOiJBMjU2R0NNIiwic2tpZCI6Ilg1SU5TTUl2X3c0UTdwbGpIN3hqZVVyUkFLaUJHSGF2U21PWXl5aVJ1Z2MiLCJ0eXAiOiJhcHBsaWNhdGlvbi9kaWRjb21tLWVuY3J5cHRlZCtqc29uIn0\",\n  \"recipients\": [\n    {\n      \"header\": {\n        \"alg\": \"ECDH-1PU+A256KW\",\n        \"apu\": \"WDVJTlNNSXZfdzRRN3Bsakg3eGplVXJSQUtpQkdIYXZTbU9ZeXlpUnVnYw\",\n        \"apv\": \"MlVSLW56WWpWaHNxMGNaYWtXakUzOC13VWRHMFMyRUlyTFo4RWgwS1ZPMA\",\n        \"kid\": \"2UR-nzYjVhsq0cZakWjE38-wUdG0S2EIrLZ8Eh0KVO0\",\n        \"epk\": {\n          \"kty\": \"OKP\",\n          \"crv\": \"X25519\",\n          \"x\": \"IcuAA7zPN0mLt4GSZLQJ6f8p3yPALQaSyupbSRpDnwA\"\n        }\n      },\n      \"encrypted_key\": \"_GoKcbrlbPR8hdgpDdpotO4WvAKOzyOEXo5A2RlxVaEb0enFej2DFQ\"\n    },\n    {\n      \"header\": {\n        \"alg\": \"ECDH-1PU+A256KW\",\n        \"apu\": \"WDVJTlNNSXZfdzRRN3Bsakg3eGplVXJSQUtpQkdIYXZTbU9ZeXlpUnVnYw\",\n        \"apv\": \"MlVSLW56WWpWaHNxMGNaYWtXakUzOC13VWRHMFMyRUlyTFo4RWgwS1ZPMA\",\n        \"kid\": \"dvDd4h1rHj-onj-Xz9O1KRIgkMhh3u23d-94brHbBKo\",\n        \"epk\": {\n          \"kty\": \"OKP\",\n          \"crv\": \"X25519\",\n          \"x\": \"_BVh0oInkDiqnTkHKLvNMa8cldr79TZS00MJCYwZo3Y\"\n        }\n      },\n      \"encrypted_key\": \"gacTLNP-U5mYAHJLG9F97R52aG244NfLeWg_Dj4Fy0C96oIIN-3psw\"\n    },\n    {\n      \"header\": {\n        \"alg\": \"ECDH-1PU+A256KW\",\n        \"apu\": \"WDVJTlNNSXZfdzRRN3Bsakg3eGplVXJSQUtpQkdIYXZTbU9ZeXlpUnVnYw\",\n        \"apv\": \"MlVSLW56WWpWaHNxMGNaYWtXakUzOC13VWRHMFMyRUlyTFo4RWgwS1ZPMA\",\n        \"kid\": \"hj57wbrmOygTc_ktMPqKMHdiL85FdiGJa5DKzoLIzeU\",\n        \"epk\": {\n          \"kty\": \"OKP\",\n          \"crv\": \"X25519\",\n          \"x\": \"alPo4cjEjondCmz8mw8tntYxlpGPSLaqe3SSI_wu11s\"\n        }\n      },\n      \"encrypted_key\": \"q2RpqrdZA9mvVBGTvMNHg3P6SysnuCpfraLWhRseiQ1ImJWdLq53TA\"\n    }\n  ],\n  \"aad\": \"L-QV1cHI5u8U9BQa8_S4CFW-LhKNXHCjmqydtQYuSLw\",\n  
\"iv\": \"J-OEJGFWvJ6rw9dX\",\n  \"ciphertext\": \"BvFi1vAzq0Uostj0_ms\",\n  \"tag\": \"C6itmqZ7ehMx9FF70fdGGQ\"\n}\n

    "},{"location":"features/0334-jwe-envelope/authcrypt-examples/#12-single-recipient-jwes","title":"1.2 Single Recipient JWEs","text":"

    These examples pack a message with a single recipient using the Flattened JWE JSON serialization and the Compact JWE serialization formats, as mentioned in the notes above.

    "},{"location":"features/0334-jwe-envelope/authcrypt-examples/#121-nist-p-256-key","title":"1.2.1 NIST P-256 key","text":""},{"location":"features/0334-jwe-envelope/authcrypt-examples/#122-nist-p-384-key","title":"1.2.2 NIST P-384 key","text":""},{"location":"features/0334-jwe-envelope/authcrypt-examples/#123-nist-p-521-key","title":"1.2.3 NIST P-521 key","text":""},{"location":"features/0334-jwe-envelope/authcrypt-examples/#124-x25519-key","title":"1.2.4 X25519 key","text":""},{"location":"features/0334-jwe-envelope/authcrypt-examples/#2-xc20p-content-encryption","title":"2 XC20P content encryption","text":""},{"location":"features/0334-jwe-envelope/authcrypt-examples/#21-multi-recipients-jwes","title":"2.1 Multi recipients JWEs","text":""},{"location":"features/0334-jwe-envelope/authcrypt-examples/#211-nist-p-256-keys","title":"2.1.1 NIST P-256 keys","text":"

    The packer generates the following protected headers that includes the skid: - Generated protected headers: {\"cty\":\"application/didcomm-plain+json\",\"enc\":\"XC20P\",\"skid\":\"T1jGtZoU-Xa_5a1QKexUU0Jq9WKDtS7TCowVvjoFH04\",\"typ\":\"application/didcomm-encrypted+json\"} - raw (no padding) base64URL encoded: eyJjdHkiOiJhcHBsaWNhdGlvbi9kaWRjb21tLXBsYWluK2pzb24iLCJlbmMiOiJYQzIwUCIsInNraWQiOiJUMWpHdFpvVS1YYV81YTFRS2V4VVUwSnE5V0tEdFM3VENvd1Z2am9GSDA0IiwidHlwIjoiYXBwbGljYXRpb24vZGlkY29tbS1lbmNyeXB0ZWQranNvbiJ9 - Sender key JWK format:

    {\n  \"kty\": \"EC\",\n  \"crv\": \"P-256\",\n  \"x\": \"46OXm1dUTO3MB-8zoxbn-9dk0khgeIqsKFO-nTJ9keM\",\n  \"y\": \"8IlrwB-dl5bFd5RT4YAbgAdj5Y-a9zhc9wCMnXDZDvA\",\n  \"d\": \"58GZDz9_opy-nEeaJ_cyEL63TO-l063aV5nLADCgsGY\"\n}\n
    - Sender kid (jwk thumbprint raw base64 URL encoded): T1jGtZoU-Xa_5a1QKexUU0Jq9WKDtS7TCowVvjoFH04 - Recipient 1 key JWK format:
    {\n  \"kty\": \"EC\",\n  \"crv\": \"P-256\",\n  \"x\": \"r9MRjEQ7CBxAgMyEG3ZjIlkGCuRX0rTaBdbkAcY17hA\",\n  \"y\": \"MRSgHQycDFPdSABGv5V0Qd-2q7ebs_x0_fNFyabGgXU\",\n  \"d\": \"LK9yfSxuET5n5uZDNO-64sJKWxJs7LTkqhA4mAuKQnE\"\n}\n
    - Recipient 1 kid (jwk thumbprint raw base64 URL encoded): dmfXisqWjRT-tFpODOD-G0CBF6zjHywNUjrrD3IFmcs - Recipient 2 key JWK format:
    {\n  \"kty\": \"EC\",\n  \"crv\": \"P-256\",\n  \"x\": \"PMhlaU_KNEWou004AEyAFoJi8vNOnY75ROiRzzjhDR0\",\n  \"y\": \"tEcJNRv2rqYlYWeRloRabcp2lRorRaZTLM0ZNBoEyN0\",\n  \"d\": \"t1-QysBdkbkpqEBDo_JPsi-6YqD24UoAGBrruI2XNhA\"\n}\n
    - Recipient 2 kid (jwk thumbprint raw base64 URL encoded): 2_Sf_YshIFhQ11NH9muAxLWwyFUvJnfXbYFOAC-8HTw - Recipient 3 key JWK format:
    {\n  \"kty\": \"EC\",\n  \"crv\": \"P-256\",\n  \"x\": \"V9dWH69KZ_bvrxdWgt5-o-KnZLcGuWjAKVWMueiQioM\",\n  \"y\": \"lvsUBieuXV6qL4R3L94fCJGu8SDifqh3fAtN2plPWX4\",\n  \"d\": \"llg97kts4YxIF-r3jn7wcZ-zV0hLcn_AydIKHDF-HJc\"\n}\n
    - Recipient 3 kid (jwk thumbprint raw, no padding, base64 URL encoded): mKtrI7SV3z2U9XyhaaTYlQFX1ANi6Wkli8b3NWVq4C4 - List of kids used for AAD for the above recipients (sorted kid values joined with .): 2_Sf_YshIFhQ11NH9muAxLWwyFUvJnfXbYFOAC-8HTw.dmfXisqWjRT-tFpODOD-G0CBF6zjHywNUjrrD3IFmcs.mKtrI7SV3z2U9XyhaaTYlQFX1ANi6Wkli8b3NWVq4C4 - Resulting AAD value (sha256 of above list raw, no padding, base64 URL encoded): PNKzNc6e0MtDtIGamjsx2fytSu6t8GygofQbzTrtMNA - Finally, packing the payload outputs the following JWE (pretty printed for readability):
    {\n  \"protected\": \"eyJjdHkiOiJhcHBsaWNhdGlvbi9kaWRjb21tLXBsYWluK2pzb24iLCJlbmMiOiJYQzIwUCIsInNraWQiOiJUMWpHdFpvVS1YYV81YTFRS2V4VVUwSnE5V0tEdFM3VENvd1Z2am9GSDA0IiwidHlwIjoiYXBwbGljYXRpb24vZGlkY29tbS1lbmNyeXB0ZWQranNvbiJ9\",\n  \"recipients\": [\n    {\n      \"header\": {\n        \"alg\": \"ECDH-1PU+XC20PKW\",\n        \"apu\": \"VDFqR3Rab1UtWGFfNWExUUtleFVVMEpxOVdLRHRTN1RDb3dWdmpvRkgwNA\",\n        \"apv\": \"ZG1mWGlzcVdqUlQtdEZwT0RPRC1HMENCRjZ6akh5d05VanJyRDNJRm1jcw\",\n        \"kid\": \"dmfXisqWjRT-tFpODOD-G0CBF6zjHywNUjrrD3IFmcs\",\n        \"epk\": {\n          \"kty\": \"EC\",\n          \"crv\": \"P-256\",\n          \"x\": \"80NGcUh0mIy_XrcaAqD7GCHF0FU2W5j4Jt-wfwxvJVs\",\n          \"y\": \"KpsNL9A-FGgL7S97ce8wcWYc9J1Q6_luxKAFIu7BNIw\"\n        }\n      },\n      \"encrypted_key\": \"wGQO8LX7o9JmYI0PIGUruU7i6ybZYefsTanZuo7hIDyn21ix6fSFPOmvgjPxZ8q_-hZF2yGYtudfLiuPzXlybWJkmTlP9PcY\"\n    },\n    {\n      \"header\": {\n        \"alg\": \"ECDH-1PU+XC20PKW\",\n        \"apu\": \"VDFqR3Rab1UtWGFfNWExUUtleFVVMEpxOVdLRHRTN1RDb3dWdmpvRkgwNA\",\n        \"apv\": \"ZG1mWGlzcVdqUlQtdEZwT0RPRC1HMENCRjZ6akh5d05VanJyRDNJRm1jcw\",\n        \"kid\": \"2_Sf_YshIFhQ11NH9muAxLWwyFUvJnfXbYFOAC-8HTw\",\n        \"epk\": {\n          \"kty\": \"EC\",\n          \"crv\": \"P-256\",\n          \"x\": \"4YrbAQCLLya1XqRvjfcYdonllWQulrLP7zE0ooclKXA\",\n          \"y\": \"B3tI8lsWHRwBQ19pAFzXiBkLgpE6leTeQT6b709gllE\"\n        }\n      },\n      \"encrypted_key\": \"5tY3t1JI8L6s974kmXbzKMaePHygNan2Qqpd1B0BiqBsjaHNUH2Unv1IMGiT3oQD0xXeVPAxQq7vNZgANitxBbgG_uxGiRld\"\n    },\n    {\n      \"header\": {\n        \"alg\": \"ECDH-1PU+XC20PKW\",\n        \"apu\": \"VDFqR3Rab1UtWGFfNWExUUtleFVVMEpxOVdLRHRTN1RDb3dWdmpvRkgwNA\",\n        \"apv\": \"ZG1mWGlzcVdqUlQtdEZwT0RPRC1HMENCRjZ6akh5d05VanJyRDNJRm1jcw\",\n        \"kid\": \"mKtrI7SV3z2U9XyhaaTYlQFX1ANi6Wkli8b3NWVq4C4\",\n        \"epk\": {\n          \"kty\": \"EC\",\n          \"crv\": \"P-256\",\n          \"x\": 
\"-e9kPGp2rmtpFs2zzTaY6xfeXjr1Xua1vHCQZRKJ54s\",\n          \"y\": \"Mc7b8U06KHV__1-XMaReilLxa63LcICqsPtkZGXEkEs\"\n        }\n      },\n      \"encrypted_key\": \"zVQUQytYv4EmQS0zye3IsXiN_2ol-Qn2nvyaJgEPvNdwFuzTFPOupTl-PeOhkRvxPfuLlw5TKnSRyPUejP8zyHbBgUZ6gDmz\"\n    }\n  ],\n  \"aad\": \"PNKzNc6e0MtDtIGamjsx2fytSu6t8GygofQbzTrtMNA\",\n  \"iv\": \"UKgm1XTPf1QFDXoRWlf-KrsBRQKSwpBA\",\n  \"ciphertext\": \"pbwy8HEnr1hPA0Jt5ho\",\n  \"tag\": \"nUazXvxpMXGoL1__92CAyA\"\n}\n

    "},{"location":"features/0334-jwe-envelope/authcrypt-examples/#212-nist-p-384-keys","title":"2.1.2 NIST P-384 keys","text":"

    The packer generates the following protected headers that includes the skid: - Generated protected headers: {\"cty\":\"application/didcomm-plain+json\",\"enc\":\"XC20P\",\"skid\":\"xXdnS3M4Bb497A0ko9c6H0D4NNbj1XpwGr4Tk9Fcw7k\",\"typ\":\"application/didcomm-encrypted+json\"} - raw (no padding) base64URL encoded: eyJjdHkiOiJhcHBsaWNhdGlvbi9kaWRjb21tLXBsYWluK2pzb24iLCJlbmMiOiJYQzIwUCIsInNraWQiOiJ4WGRuUzNNNEJiNDk3QTBrbzljNkgwRDROTmJqMVhwd0dyNFRrOUZjdzdrIiwidHlwIjoiYXBwbGljYXRpb24vZGlkY29tbS1lbmNyeXB0ZWQranNvbiJ9 - Sender key JWK format:

    {\n  \"kty\": \"EC\",\n  \"crv\": \"P-384\",\n  \"x\": \"bfuATmVQ_jxLIgfuhKNYrNRNu-VnK4FzTCCVRvycgekS8fIuC4rZS9uQi6Q2Ujwd\",\n  \"y\": \"XkVJ93cLKpeZeCMEOsHRKk4rse1zXpzY6yUibEtwZG9nFWF05Ro8OQs5fZVK2TWC\",\n  \"d\": \"OVzGxGyyaHGJpx1MoSwPjmWPas28sfq1tj7UkYFoK3ENsujmzUduAW6HwyaBlXRW\"\n}\n
    - Sender kid (jwk thumbprint raw base64 URL encoded): xXdnS3M4Bb497A0ko9c6H0D4NNbj1XpwGr4Tk9Fcw7k - Recipient 1 key JWK format:
    {\n  \"kty\": \"EC\",\n  \"crv\": \"P-384\",\n  \"x\": \"xhk5K7x4xw9OJpkFhmsY39jceQqx57psvcZstiNZmKbXD7kT9ajfGKFA6YA-ali5\",\n  \"y\": \"7Hj32-JDMNDYWRGy3f-0E9lbUGp6yURMaZ9M36Q_FPgljKgHa9i0Fn1ogr_zEmO3\",\n  \"d\": \"Pc3r6eg15XZeKgTDMPcGjf_SvImZxG4bDzgCh3QShClAwMdmoNbzPZGhBByNrlvO\"\n}\n
    - Recipient 1 kid (jwk thumbprint raw base64 URL encoded): aIlhDTWJmT-_Atad5EBbvbZPkPnz2IYT85I6T44kcE4 - Recipient 2 key JWK format:
    {\n  \"kty\": \"EC\",\n  \"crv\": \"P-384\",\n  \"x\": \"wqW3DUkUAT0Cyk3hq0KVJbqtPSJOoqulp_Tqa29jBEPliIJ9rnq7cRkJyxArCYAj\",\n  \"y\": \"ZfBtdTTVRh9SeQDCwsgAo15cCX2I-7J6xdyxDPyH8LBhbUA_8npHvNquKGta9p8x\",\n  \"d\": \"krddjYsOD4YIIkNjWXTrYV9rOVlmLNaeoLHChJ5oUr4c21LHxGL4xTI1bEoXKgJ2\"\n}\n
    - Recipient 2 kid (jwk thumbprint raw base64 URL encoded): 02WdA5ip_Amam611KA6fdoTs533yZH-ovfpt8t9zVjg - Recipient 3 key JWK format:
    {\n  \"kty\": \"EC\",\n  \"crv\": \"P-384\",\n  \"x\": \"If6iEafrkcL53mVYCbm5rmnwAw3kjb13gUjBoDePggO7xMiSFyej4wbTabdCyfbg\",\n  \"y\": \"nLX6lEce-9r19NA_nI5mGK3YFLiX9IYRgXZZCUd_Br91PaE8Mr1JR01utAPoGx36\",\n  \"d\": \"jriJKFpQfzJtOrp7PhGvH0osHJQJbZrAKjD95itivioVawzMz9wcI_h9VsFV3ff0\"\n}\n
    - Recipient 3 kid (jwk thumbprint raw, no padding, base64 URL encoded): zeqnfYLFWtnJ_e5npBs7CtM5KkToyyM9kCKIFlcyId0 - List of kids used for AAD for the above recipients (sorted kid values joined with .): 02WdA5ip_Amam611KA6fdoTs533yZH-ovfpt8t9zVjg.aIlhDTWJmT-_Atad5EBbvbZPkPnz2IYT85I6T44kcE4.zeqnfYLFWtnJ_e5npBs7CtM5KkToyyM9kCKIFlcyId0 - Resulting AAD value (sha256 of above list raw, no padding, base64 URL encoded): CftHmHttuxR6mRrHe-zBXV2UEvL2wvZEt5yeFDhYSF8 - Finally, packing the payload outputs the following JWE (pretty printed for readability):
    {\n  \"protected\": \"eyJjdHkiOiJhcHBsaWNhdGlvbi9kaWRjb21tLXBsYWluK2pzb24iLCJlbmMiOiJYQzIwUCIsInNraWQiOiJ4WGRuUzNNNEJiNDk3QTBrbzljNkgwRDROTmJqMVhwd0dyNFRrOUZjdzdrIiwidHlwIjoiYXBwbGljYXRpb24vZGlkY29tbS1lbmNyeXB0ZWQranNvbiJ9\",\n  \"recipients\": [\n    {\n      \"header\": {\n        \"alg\": \"ECDH-1PU+XC20PKW\",\n        \"apu\": \"eFhkblMzTTRCYjQ5N0Ewa285YzZIMEQ0Tk5iajFYcHdHcjRUazlGY3c3aw\",\n        \"apv\": \"YUlsaERUV0ptVC1fQXRhZDVFQmJ2YlpQa1BuejJJWVQ4NUk2VDQ0a2NFNA\",\n        \"kid\": \"aIlhDTWJmT-_Atad5EBbvbZPkPnz2IYT85I6T44kcE4\",\n        \"epk\": {\n          \"kty\": \"EC\",\n          \"crv\": \"P-384\",\n          \"x\": \"k7SRlQ7EwCR8VZ-LF92zOgvpFDAed0mN3mmZeCHHDznZp5TLQShFT9TdnwgsvJFP\",\n          \"y\": \"ZHzkS9BD-I2DtNPhbXuTzf6vUnykdZPus9xZnRu1rWgxVtLQ8j-Jp4YoJgdQmcOu\"\n        }\n      },\n      \"encrypted_key\": \"BO597Rs1RU3ZU-WdzWPgRnPmRULcFBihZxE7Jvl3qw3VUmR5RUXY0Xy9k_dWRnuRCh9Yzxef7tXlqVMaL4KBCfaAbAEOReQw\"\n    },\n    {\n      \"header\": {\n        \"alg\": \"ECDH-1PU+XC20PKW\",\n        \"apu\": \"eFhkblMzTTRCYjQ5N0Ewa285YzZIMEQ0Tk5iajFYcHdHcjRUazlGY3c3aw\",\n        \"apv\": \"YUlsaERUV0ptVC1fQXRhZDVFQmJ2YlpQa1BuejJJWVQ4NUk2VDQ0a2NFNA\",\n        \"kid\": \"02WdA5ip_Amam611KA6fdoTs533yZH-ovfpt8t9zVjg\",\n        \"epk\": {\n          \"kty\": \"EC\",\n          \"crv\": \"P-384\",\n          \"x\": \"QT9Q_zU9VE3K9r_50mKh7iG8SxYeXVvwnhykphMAk8akfnTeB7FIRC2MzFat9JMT\",\n          \"y\": \"3HeQPqQ_BS5vy2e2L7kgMhHNwNQ2K1pmL9LImrBg8XROuc9EaAGnFSQ439bZXg9y\"\n        }\n      },\n      \"encrypted_key\": \"oKVlxrYhp8Bvr6s6CW7DxTSCMIFMkqLjDP9sCIkLoetHlXM5Mngq46CUqHusKTceHdSOL8sGUbeSBo6lXRKArywtjiVVyStW\"\n    },\n    {\n      \"header\": {\n        \"alg\": \"ECDH-1PU+XC20PKW\",\n        \"apu\": \"eFhkblMzTTRCYjQ5N0Ewa285YzZIMEQ0Tk5iajFYcHdHcjRUazlGY3c3aw\",\n        \"apv\": \"YUlsaERUV0ptVC1fQXRhZDVFQmJ2YlpQa1BuejJJWVQ4NUk2VDQ0a2NFNA\",\n        \"kid\": \"zeqnfYLFWtnJ_e5npBs7CtM5KkToyyM9kCKIFlcyId0\",\n        \"epk\": {\n          
\"kty\": \"EC\",\n          \"crv\": \"P-384\",\n          \"x\": \"GGFw14WnABx5S__MLwjy7WPgmPzCNbygbJikSqwx1nQ7APAiIyLeiAeZnAFQSr8C\",\n          \"y\": \"Bjev4lkaRbd4Ery0vnO8Ox4QgIDGbuflmFq0HhL-QHIe3KhqxrqZqbQYGlDNudEv\"\n        }\n      },\n      \"encrypted_key\": \"S8vnyPjW_19Hws3-igk-cVTSqVTY0_D9SWahnYnWBFBqTdx0b0e8hf06Oiou31Ww-Y3p8Z3O_okqQGzZMWUMLSxUPeCR2ZWx\"\n    }\n  ],\n  \"aad\": \"CftHmHttuxR6mRrHe-zBXV2UEvL2wvZEt5yeFDhYSF8\",\n  \"iv\": \"jTaCuNXs4QdX6HuWvl5AsqIEv4nh2JMP\",\n  \"ciphertext\": \"7y463zoRKgVfpKh3EBw\",\n  \"tag\": \"8YKdJpF2DnQQwEkBcbuEnw\"\n}\n

    "},{"location":"features/0334-jwe-envelope/authcrypt-examples/#213-nist-p-521-keys","title":"2.1.3 NIST P-521 keys","text":"

    The packer generates the following protected headers that includes the skid: - Generated protected headers: {\"cty\":\"application/didcomm-plain+json\",\"enc\":\"XC20P\",\"skid\":\"bq3OI5517dSIMeD9K3lTqvkvvkmsRtifD6tvjlrKYsU\",\"typ\":\"application/didcomm-encrypted+json\"} - raw (no padding) base64URL encoded: eyJjdHkiOiJhcHBsaWNhdGlvbi9kaWRjb21tLXBsYWluK2pzb24iLCJlbmMiOiJYQzIwUCIsInNraWQiOiJicTNPSTU1MTdkU0lNZUQ5SzNsVHF2a3Z2a21zUnRpZkQ2dHZqbHJLWXNVIiwidHlwIjoiYXBwbGljYXRpb24vZGlkY29tbS1lbmNyeXB0ZWQranNvbiJ9 - Sender key JWK format:

    {\n  \"kty\": \"EC\",\n  \"crv\": \"P-521\",\n  \"x\": \"ACN9T83BbPNn1eRyo-TrL0GyC7kBNQvgUxk55fCeQKDSTVhbzCKia7WecCUshyEF-BOQbfEsOIUCq3g7xY3VEeth\",\n  \"y\": \"APDIfDv6abLQ-Zb_p8PxwJe1x3U0-PdgXLNbtS7evGuUROHt79SVkpfXcZ3UaEc6cMoFfd2oMvbmUjCMM4-Sgipn\",\n  \"d\": \"AXCGyR9uXY8vDr7D4HvMxep-d5biQzgHR6WsdOF4R5M9qYb8FhRIQCMbmDSZzCuqgGgXrPRMPm5-omvWVeYqwwa3\"\n}\n
    - Sender kid (jwk thumbprint raw base64 URL encoded): bq3OI5517dSIMeD9K3lTqvkvvkmsRtifD6tvjlrKYsU - Recipient 1 key JWK format:
    {\n  \"kty\": \"EC\",\n  \"crv\": \"P-521\",\n  \"x\": \"AZi-AxJkB09qw8dBnNrz53xM-wER0Y5IYXSEWSTtzI5Sdv_5XijQn9z-vGz1pMdww-C75GdpAzp2ghejZJSxbAd6\",\n  \"y\": \"AZzRvW8NBytGNbF3dyNOMHB0DHCOzGp8oYBv_ZCyJbQUUnq-TYX7j8-PlKe9Ce5acxZzrcUKVtJ4I8JgI5x9oXIW\",\n  \"d\": \"AHGOZNkAcQCdDZOpQRdbH-f89mpjY_kGmtEpTExd51CcRlHhXuuAr6jcgb8YStwy9FN7vCU1y5LnJfKhGUGrP2a4\"\n}\n
    - Recipient 1 kid (jwk thumbprint raw base64 URL encoded): 7icoqReWFlpF16dzZD3rBgK1cJ265WzfF9sJJXqOe0M - Recipient 2 key JWK format:
    {\n  \"kty\": \"EC\",\n  \"crv\": \"P-521\",\n  \"x\": \"ASRvWU-d_XI2S1UQb7PoXIXTLXd8vgLmFb-9mmmIrzTMmIXFXpsDN9_1-Xg_r3qkEg-zBjTi327GIseWFGMa0Mrp\",\n  \"y\": \"AJ0VyjDn4Rn6SKamFms4593mW5K936d4Jr7-J5OjJqTZtS6APgNkrwFjhKPHQfg7o8T4pmX7vlfFY5Flx7IOYJuw\",\n  \"d\": \"ALzWMohuwSqkiqqEhijiBoH6kJ580Dtxe7CfgqEboc5DG0pMtAUf-a91VbmR1U8bQox-B4_YRXoFLRns2tI_wPYz\"\n}\n
    - Recipient 2 kid (jwk thumbprint raw base64 URL encoded): BUEVQ3FlDsml4JYrLCwwsL5BUZt-hYwb2B0SoJ6dzHc - Recipient 3 key JWK format:
    {\n  \"kty\": \"EC\",\n  \"crv\": \"P-521\",\n  \"x\": \"AB2ke_2nVg95OP3Xb4Fg0Gg4KgfZZf3wBEYoOlGhXmHNCj56G10vnOe1hGRKIoD-JkPWuulcUtsIUO7r3Rz2mLP0\",\n  \"y\": \"AJTaqfF8d4cFv_fP4Uoqq-uCCObmyPsD1CphbCuCZumarfzjA5SpAQCdfz3No4Nhn53OqdcTkm654Yvfj1vOp5t6\",\n  \"d\": \"Af6Ba1x6i6glhRcR2RmZMZJ5BJXibpMB0TqjY_2Fe2LekS9QQK21JtrF20dj_gahxcrnfcn8oJ2xCrEMKaexgcsb\"\n}\n
    - Recipient 3 kid (jwk thumbprint raw, no padding, base64 URL encoded): C9iN-jkTFBbTz3Yv3FquR3dAsHYnAIg1_hT0jsefLDE - List of kids used for AAD for the above recipients (sorted kid values joined with .): 7icoqReWFlpF16dzZD3rBgK1cJ265WzfF9sJJXqOe0M.BUEVQ3FlDsml4JYrLCwwsL5BUZt-hYwb2B0SoJ6dzHc.C9iN-jkTFBbTz3Yv3FquR3dAsHYnAIg1_hT0jsefLDE - Resulting AAD value (sha256 of above list raw, no padding, base64 URL encoded): VBNrffp39h1F6sg0dzkArcd2WjpKeqEvqt6HNXaVfKU - Finally, packing the payload outputs the following JWE (pretty printed for readability):
    {\n  \"protected\": \"eyJjdHkiOiJhcHBsaWNhdGlvbi9kaWRjb21tLXBsYWluK2pzb24iLCJlbmMiOiJYQzIwUCIsInNraWQiOiJicTNPSTU1MTdkU0lNZUQ5SzNsVHF2a3Z2a21zUnRpZkQ2dHZqbHJLWXNVIiwidHlwIjoiYXBwbGljYXRpb24vZGlkY29tbS1lbmNyeXB0ZWQranNvbiJ9\",\n  \"recipients\": [\n    {\n      \"header\": {\n        \"alg\": \"ECDH-1PU+XC20PKW\",\n        \"apu\": \"YnEzT0k1NTE3ZFNJTWVEOUszbFRxdmt2dmttc1J0aWZENnR2amxyS1lzVQ\",\n        \"apv\": \"N2ljb3FSZVdGbHBGMTZkelpEM3JCZ0sxY0oyNjVXemZGOXNKSlhxT2UwTQ\",\n        \"kid\": \"7icoqReWFlpF16dzZD3rBgK1cJ265WzfF9sJJXqOe0M\",\n        \"epk\": {\n          \"kty\": \"EC\",\n          \"crv\": \"P-521\",\n          \"x\": \"ABd71Xomy3mv-mkAipKb18UQ-1xXt7tGDDwf0k5fpLADg1qK--Jhn8TdzyjTuve7rJQrlCJH4GjuQjCWVs4T7J_T\",\n          \"y\": \"ANrWrk69QRi4cr8ZbU2vF_0jSjTIUn-fQCHJtxLg3uuvLtzGW7oIEkUFJq_sTZXL_gaPdFIWlI4aIjKRgzOUP_ze\"\n        }\n      },\n      \"encrypted_key\": \"lZa-4LTyaDP01wmN8bvoD69MLl3VY2H_wNaNJ7kYzTFExlgYTPNrFJ5XL6T_h1DUULX0TYJVxbIWQeJ_x_7i-xSv7-BHbFcm\"\n    },\n    {\n      \"header\": {\n        \"alg\": \"ECDH-1PU+XC20PKW\",\n        \"apu\": \"YnEzT0k1NTE3ZFNJTWVEOUszbFRxdmt2dmttc1J0aWZENnR2amxyS1lzVQ\",\n        \"apv\": \"N2ljb3FSZVdGbHBGMTZkelpEM3JCZ0sxY0oyNjVXemZGOXNKSlhxT2UwTQ\",\n        \"kid\": \"BUEVQ3FlDsml4JYrLCwwsL5BUZt-hYwb2B0SoJ6dzHc\",\n        \"epk\": {\n          \"kty\": \"EC\",\n          \"crv\": \"P-521\",\n          \"x\": \"ALGN2OH1_DKtEZ-990uL1kzHYhYmZD-stOdL6_NMReCKEPZil7Z1tsq0g9l0HNi6DWuMjNyiJCfDd1erWpByFAOX\",\n          \"y\": \"AQgB2aE_3GltqbWzKbWbLa6Fdq6jO4A3LrYUnNDNIuHY6eRH9sRU0yWjmcmWCoukT98wksXJ3isHr9-NqFuZLehi\"\n        }\n      },\n      \"encrypted_key\": \"bybMPkSjuSz8lLAPFJHrxjl1buE8cfONEzvQ2U64h8L0QEZPLK_VewbXVflEPNrOo3oTWlI_878GIKvkxJ8cJOD6a0kZmr87\"\n    },\n    {\n      \"header\": {\n        \"alg\": \"ECDH-1PU+XC20PKW\",\n        \"apu\": \"YnEzT0k1NTE3ZFNJTWVEOUszbFRxdmt2dmttc1J0aWZENnR2amxyS1lzVQ\",\n        \"apv\": \"N2ljb3FSZVdGbHBGMTZkelpEM3JCZ0sxY0oyNjVXemZGOXNKSlhxT2UwTQ\",\n  
      \"kid\": \"C9iN-jkTFBbTz3Yv3FquR3dAsHYnAIg1_hT0jsefLDE\",\n        \"epk\": {\n          \"kty\": \"EC\",\n          \"crv\": \"P-521\",\n          \"x\": \"AZKyI6Mg8OdKUYqo3xuKjHiVrlV56_qGBzdwr86QSnebq3Y69Z0qETiTumQv5J3ECmZzs4DiETryRuzdHc2RkKBZ\",\n          \"y\": \"ARJJT7MWjTWWB7leblQgg7PYn_0deScO7AATlcnukFsLbzly0LHs1msVXaerQUCHPg2t-sYGxDP7w0iaDHB8k3Tj\"\n        }\n      },\n      \"encrypted_key\": \"nMGoNk1brn9uO9hlSa7NwVgFUMXnxpKKPkuFHSE2aM_N8q8wJbVBLC9rJ9sPIiSU20tq2sJXaAcoMteajOX6wj_Hzl1uRT1e\"\n    }\n  ],\n  \"aad\": \"VBNrffp39h1F6sg0dzkArcd2WjpKeqEvqt6HNXaVfKU\",\n  \"iv\": \"h0bbZygiAx9MMO2Huxym_QnwrXZHhdyQ\",\n  \"ciphertext\": \"LABYmf_sfPNGgls0wvk\",\n  \"tag\": \"z1rZOEgyryiW_3d5gxnMUQ\"\n}\n

    "},{"location":"features/0334-jwe-envelope/authcrypt-examples/#214-x25519-keys","title":"2.1.4 X25519 keys","text":"

    The packer generates the following protected headers that includes the skid: - Generated protected headers: {\"cty\":\"application/didcomm-plain+json\",\"enc\":\"XC20P\",\"skid\":\"j8E-tcw1Z_eOCoKEH-7a9T532r8zXfcavbPZlofN0Ek\",\"typ\":\"application/didcomm-encrypted+json\"} - raw (no padding) base64URL encoded: eyJjdHkiOiJhcHBsaWNhdGlvbi9kaWRjb21tLXBsYWluK2pzb24iLCJlbmMiOiJYQzIwUCIsInNraWQiOiJqOEUtdGN3MVpfZU9Db0tFSC03YTlUNTMycjh6WGZjYXZiUFpsb2ZOMEVrIiwidHlwIjoiYXBwbGljYXRpb24vZGlkY29tbS1lbmNyeXB0ZWQranNvbiJ9 - Sender key JWK format:

    {\n  \"kty\": \"OKP\",\n  \"crv\": \"X25519\",\n  \"x\": \"g3Lpdd_DRgjK28qi0sR0-hI-zv7a1X52vpzKc6ZM1Qs\",\n  \"d\": \"cPU_Io7RRHNb_xkQ_D6u3ER4vSjvsILDCKwOj8kVHXQ\"\n}\n
    - Sender kid (jwk thumbprint raw base64 URL encoded): j8E-tcw1Z_eOCoKEH-7a9T532r8zXfcavbPZlofN0Ek - Recipient 1 key JWK format:
    {\n  \"kty\": \"OKP\",\n  \"crv\": \"X25519\",\n  \"x\": \"VlhpUXj-oGs9ge-VLrmYF7Xuzy73YchIfckaYcQefBw\",\n  \"d\": \"QFHCCy0wzgJ_AlGMnjetTd0tnDaZ_7yqJODSV0d-kkg\"\n}\n
    - Recipient 1 kid (jwk thumbprint raw base64 URL encoded): _DHSbVaMeZxriDJn5VoHXYXo6BJacwZx_fGIBfCiJ5c - Recipient 2 key JWK format:
    {\n  \"kty\": \"OKP\",\n  \"crv\": \"X25519\",\n  \"x\": \"y52sexwATOR5J5znNp94MFx19J0rkgzNyLESMVhkE2M\",\n  \"d\": \"6NwEk3_8lKOwLaZM2YkLdW9MF2zDqMjAx_G-uDoAAkw\"\n}\n
    - Recipient 2 kid (jwk thumbprint raw base64 URL encoded): n2MxD23PaCkz7vptma_1j9X2JdUoCFLzrtYuDvOA0Kc - Recipient 3 key JWK format:
    {\n  \"kty\": \"OKP\",\n  \"crv\": \"X25519\",\n  \"x\": \"BYL51mNvx1LKD2wDfga_7GZc0YYI82HhRmHtXfiz_ko\",\n  \"d\": \"MLd_nsRRb_CSzc6Ou8TZFm-A17ZpT1Aen6fIvC6ZuV8\"\n}\n
    - Recipient 3 kid (jwk thumbprint raw, no padding, base64 URL encoded): HHN2ZcES5ps7gCjK-06bCE4EjX_hh7nq2cWd-GfnI5s - List of kids used for AAD for the above recipients (sorted kid values joined with .): HHN2ZcES5ps7gCjK-06bCE4EjX_hh7nq2cWd-GfnI5s._DHSbVaMeZxriDJn5VoHXYXo6BJacwZx_fGIBfCiJ5c.n2MxD23PaCkz7vptma_1j9X2JdUoCFLzrtYuDvOA0Kc - Resulting AAD value (sha256 of above list raw, no padding, base64 URL encoded): K1oFStibrX4x6LplTB0-tO3cwGiZzMvG_6w0LfguVuI - Finally, packing the payload outputs the following JWE (pretty printed for readability):
    {\n  \"protected\": \"eyJjdHkiOiJhcHBsaWNhdGlvbi9kaWRjb21tLXBsYWluK2pzb24iLCJlbmMiOiJYQzIwUCIsInNraWQiOiJqOEUtdGN3MVpfZU9Db0tFSC03YTlUNTMycjh6WGZjYXZiUFpsb2ZOMEVrIiwidHlwIjoiYXBwbGljYXRpb24vZGlkY29tbS1lbmNyeXB0ZWQranNvbiJ9\",\n  \"recipients\": [\n    {\n      \"header\": {\n        \"alg\": \"ECDH-1PU+XC20PKW\",\n        \"apu\": \"ajhFLXRjdzFaX2VPQ29LRUgtN2E5VDUzMnI4elhmY2F2YlBabG9mTjBFaw\",\n        \"apv\": \"X0RIU2JWYU1lWnhyaURKbjVWb0hYWVhvNkJKYWN3WnhfZkdJQmZDaUo1Yw\",\n        \"kid\": \"_DHSbVaMeZxriDJn5VoHXYXo6BJacwZx_fGIBfCiJ5c\",\n        \"epk\": {\n          \"kty\": \"OKP\",\n          \"crv\": \"X25519\",\n          \"x\": \"77VAbpx5xn2iavmhzZATXwGnxjRyxjBbtNzojdWP7wo\"\n        }\n      },\n      \"encrypted_key\": \"dvBscDJj2H6kZJgfdqazZ9pXZxUzai-mcExsdr11-RNvxxPd4_Cy6rolLSsY6ugm1sCo9BgRhAW1e6vxgTnY3Ctv0_xZIhvr\"\n    },\n    {\n      \"header\": {\n        \"alg\": \"ECDH-1PU+XC20PKW\",\n        \"apu\": \"ajhFLXRjdzFaX2VPQ29LRUgtN2E5VDUzMnI4elhmY2F2YlBabG9mTjBFaw\",\n        \"apv\": \"X0RIU2JWYU1lWnhyaURKbjVWb0hYWVhvNkJKYWN3WnhfZkdJQmZDaUo1Yw\",\n        \"kid\": \"n2MxD23PaCkz7vptma_1j9X2JdUoCFLzrtYuDvOA0Kc\",\n        \"epk\": {\n          \"kty\": \"OKP\",\n          \"crv\": \"X25519\",\n          \"x\": \"sZtHwxjaS51BR2SBGC32jFvUgVlABZ7rkBFqJk8ktXM\"\n        }\n      },\n      \"encrypted_key\": \"2gIQKw_QpnfGbIOso_XesSGWC9ZKu4-ox1eqRu71aS-nBWAbFrdJPqSY7gzAOGUNqg_o6mC1q7coG69G9yen37DIjcoR6mD1\"\n    },\n    {\n      \"header\": {\n        \"alg\": \"ECDH-1PU+XC20PKW\",\n        \"apu\": \"ajhFLXRjdzFaX2VPQ29LRUgtN2E5VDUzMnI4elhmY2F2YlBabG9mTjBFaw\",\n        \"apv\": \"X0RIU2JWYU1lWnhyaURKbjVWb0hYWVhvNkJKYWN3WnhfZkdJQmZDaUo1Yw\",\n        \"kid\": \"HHN2ZcES5ps7gCjK-06bCE4EjX_hh7nq2cWd-GfnI5s\",\n        \"epk\": {\n          \"kty\": \"OKP\",\n          \"crv\": \"X25519\",\n          \"x\": \"48AJF8kNoxfHXpUtBApRMUcTf8B0Ho4i_6CvGT4arGY\"\n        }\n      },\n      \"encrypted_key\": 
\"o_toInYq_NP45UqqFg461O6ruUNSQNKrBXRDA06JQ-faMUUfMGRtzNHK-FzrhtodZLW5bRFFFry9aFjwg5aYloe2JG9-fEcw\"\n    }\n  ],\n  \"aad\": \"K1oFStibrX4x6LplTB0-tO3cwGiZzMvG_6w0LfguVuI\",\n  \"iv\": \"tcThx2bVV8jhteYknijC-vxSED_BKPF8\",\n  \"ciphertext\": \"DUZLQAnWzApBFdwlZDg\",\n  \"tag\": \"YLuHzCD4xSTDxe_0AWukyw\"\n}\n
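The kid and AAD derivations described in the examples above can be sketched in Python. This is a non-normative sketch: it assumes the JWK thumbprint is computed per RFC 7638 (required members only, lexicographic order, no whitespace), and that the AAD is the SHA-256 of the sorted recipient kids joined with `.`, both base64URL-encoded without padding, as the examples state.

```python
import base64
import hashlib
import json

def b64url(data: bytes) -> str:
    # raw (no padding) base64URL encoding, as used throughout these examples
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def okp_kid(jwk: dict) -> str:
    # RFC 7638 JWK thumbprint of an OKP key: required members only,
    # lexicographic key order, no whitespace, then SHA-256 + base64URL
    required = {"crv": jwk["crv"], "kty": jwk["kty"], "x": jwk["x"]}
    canonical = json.dumps(required, separators=(",", ":"), sort_keys=True)
    return b64url(hashlib.sha256(canonical.encode()).digest())

def recipients_aad(kids: list) -> str:
    # sorted kid values joined with ".", then SHA-256 + base64URL
    joined = ".".join(sorted(kids))
    return b64url(hashlib.sha256(joined.encode()).digest())

# Recipient kids from the X25519 example above
kids = [
    "_DHSbVaMeZxriDJn5VoHXYXo6BJacwZx_fGIBfCiJ5c",
    "n2MxD23PaCkz7vptma_1j9X2JdUoCFLzrtYuDvOA0Kc",
    "HHN2ZcES5ps7gCjK-06bCE4EjX_hh7nq2cWd-GfnI5s",
]
print(recipients_aad(kids))
```

Note that the kids sort by raw code point, which is why `HHN2…` precedes `_DHS…` in the AAD list above.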


    "},{"location":"features/0334-jwe-envelope/authcrypt-examples/#22-single-recipient-jwes","title":"2.2 Single Recipient JWEs","text":"

    The following examples pack a message for a single recipient using the Flattened JWE JSON serialization and the Compact JWE serialization formats, as mentioned in the notes above.

    "},{"location":"features/0334-jwe-envelope/authcrypt-examples/#221-nist-p-256-key","title":"2.2.1 NIST P-256 key","text":""},{"location":"features/0334-jwe-envelope/authcrypt-examples/#222-nist-p-384-key","title":"2.2.2 NIST P-384 key","text":""},{"location":"features/0334-jwe-envelope/authcrypt-examples/#223-nist-p-521-key","title":"2.2.3 NIST P-521 key","text":""},{"location":"features/0334-jwe-envelope/authcrypt-examples/#224-x25519-key","title":"2.2.4 X25519 key","text":""},{"location":"features/0335-http-over-didcomm/","title":"0335: HTTP Over DIDComm","text":""},{"location":"features/0335-http-over-didcomm/#summary","title":"Summary","text":"

    Allows HTTP traffic to be routed over a DIDComm channel, so applications built to communicate over HTTP can make use of DID-based communication.

    "},{"location":"features/0335-http-over-didcomm/#motivation","title":"Motivation","text":"

    This protocol allows a client-server system that doesn't use DIDs or DIDComm to piggyback on top of a DID-based infrastructure, gaining the benefits of DIDs by using agents as HTTP proxies.

    Example use case: Carl wants to apply for a car loan from his friendly neighborhood used car dealer. The dealer wants a proof of his financial stability from his bank, but he doesn't want to expose the identity of his bank, and his bank doesn't want to develop a custom in-house system (using DIDs) for anonymity. HTTP over DIDComm allows Carl to introduce his car dealer to his bank, using Aries agents and protocols, while all they need to do is install a standard agent to carry arbitrary HTTP messages.

    HTTP over DIDComm turns a dev + ops problem, of redesigning and deploying your server and client to use DID communication, into an ops problem - deploying Aries infrastructure in front of your server and to your clients.

    Using HTTP over DIDComm as opposed to HTTPS between a client and server offers some key benefits: - The client and server can use methods provided by Aries agents to verify their trust in the other party - for example, by presenting verifiable credential proofs. In particular, this allows decentralized client verification and trust, as opposed to client certs. - The client and server can be blind to each others' identities (for example, using fresh peer DIDs and communicating through a router), even while using their agents to ensure trust.

    "},{"location":"features/0335-http-over-didcomm/#tutorial","title":"Tutorial","text":""},{"location":"features/0335-http-over-didcomm/#name-and-version","title":"Name and Version","text":"

    This is the HTTP over DIDComm protocol. It is uniquely identified by the URI:

    \"https://didcomm.org/http-over-didcomm/1.0\"\n
    "},{"location":"features/0335-http-over-didcomm/#concepts","title":"Concepts","text":"

    This RFC assumes that you are familiar with DID communication, and the ~purpose decorator.

    This protocol introduces a new message type which carries an HTTP message, and a method by which an Aries agent can serve as an HTTP proxy. The Aries agent determines the target agent to route the HTTP message through (for example, by parsing the HTTP message's request target), and when the target agent receives the message, it serves the message over HTTP.

    The specifics of determining the target agent or route are not part of this specification, allowing room for a wide array of uses: - A network of enterprise servers behind agents, with the agents being a known, managed pool, with message routing controlled by business logic. - A privacy mix network, with in-browser agents making requests, and routing agents sending messages on random walks through the network until an agent serves the request over the public internet. - A network of service providers behind a routing network, accessed by clients, with any provider able to handle the same class of requests, so routing is based on efficiency/load. - A network of service providers behind a routing network, accessed by clients, where the routing network hides the identity of the service provider and client from each other.

    "},{"location":"features/0335-http-over-didcomm/#protocol-flow","title":"Protocol Flow","text":"

    This protocol takes an HTTP request-response loop and stretches it out across DIDComm, with agents in the middle serving as DIDComm relays, passing along messages.

    The entities involved in this protocol are as follows: - The client and server: the HTTP client and server, which could communicate via HTTP, but in this protocol communicate over DIDComm. - The client agent: the Aries agent which receives the HTTP request from the client, converts it to a DIDComm message, sends it to the server agent, and translates the reply from the server agent into an HTTP response. - The server agent: the Aries agent which receives the DIDComm request message from the client agent, creates an HTTP request for the server, receives the HTTP response, and translates it into a DIDComm message which it sends to the client agent.

    Before a message can be sent, the server must register with its agent using the ~purpose decorator, registering on one or more purpose tags.

    When a client sends an HTTP request to a client agent, the agent may need to maintain an open connection, or store a record of the client's identity/IP address, so the client can receive the coming response.

    The client agent can include some logic to decide whether to send the message, and may need to include some logic to decide where to route the message (note that in some architectures, another agent along the route makes the decision, so the agent might always send to the same target). If it does, it constructs a request DIDComm message (defined below) and sends it to the chosen server agent.

    The route taken by the DIDComm message between the client and server agents is not covered by this RFC.

    The server agent receives the request DIDComm message. It can include some logic to decide whether to permit the message to continue to the server. If so, it makes an HTTP request using the data in the request DIDComm message, and sends it to the server.

    Note: in some use cases, it might make sense for the server agent to act as a transparent proxy, so the server thinks it's talking directly to the client, while in others it might make sense to override client identity information so the server thinks it's connecting to the server agent, for example, as a gateway. In this case, the client agent could anonymize the request, rather than leaving it up to the server agent.

    This same anonymization can be done in the other direction as well.

    The communication happens in reverse when the server sends an HTTP response to its agent, which may again decide whether to permit it to continue. If so, the contents of the HTTP response are encoded into a response DIDComm message (defined below), sent to the client agent, which also makes a go/no-go decision, does some logic (for example, looking up its thread-id to client database) to figure out where the original request in this thread came from, encodes the response data into an HTTP response, and sends that response to the client.

    "},{"location":"features/0335-http-over-didcomm/#message-format","title":"Message Format","text":"

    DIDComm messages for this protocol look as follows:

    "},{"location":"features/0335-http-over-didcomm/#request","title":"request","text":"
    {\n  \"@type\": \"https://didcomm.org/http-over-didcomm/1.0/request\",\n  \"@id\": \"2a0ec6db-471d-42ed-84ee-f9544db9da4b\",\n  \"~purpose\": [],\n  \"method\": <method>,\n  \"resource-uri\": <resource uri value>,\n  \"version\": <version>,\n  \"headers\": [],\n  \"body\": b64enc(body)\n}\n

    The body field is optional.

    The resource-uri field is also optional - if omitted, the server agent must set the URI based on the server to which the message is being sent.

    Each element of the headers array is an object with two elements: {\"name\": \"<header-name>\", \"value\": \"<header-value>\"}.
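Constructing a request message from an HTTP request can be sketched as follows. The function name and its parameters are illustrative, not a normative API; the message fields and the base64 body encoding follow the format defined above.

```python
import base64
import json
import uuid

def http_request_to_didcomm(method, uri, version, headers, body=None, purpose=None):
    # Illustrative sketch: wrap an HTTP request in a
    # http-over-didcomm/1.0 request message as defined above.
    msg = {
        "@type": "https://didcomm.org/http-over-didcomm/1.0/request",
        "@id": str(uuid.uuid4()),
        "~purpose": purpose or [],
        "method": method,
        "resource-uri": uri,
        "version": version,
        "headers": [{"name": n, "value": v} for n, v in headers],
    }
    if body is not None:  # the body field is optional
        msg["body"] = base64.b64encode(body).decode()
    return msg

print(json.dumps(http_request_to_didcomm(
    "GET", "https://example.com/data", "1.1",
    [("Accept", "application/json")], purpose=["web-proxy"])))
```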

    "},{"location":"features/0335-http-over-didcomm/#response","title":"response","text":"
    {\n  \"@type\": \"https://didcomm.org/http-over-didcomm/1.0/response\",\n  \"@id\": \"63d6f6cf-b723-4eaf-874b-ae13f3e3e5c5\",\n  \"~thread\": {\n    \"thid\": \"2a0ec6db-471d-42ed-84ee-f9544db9da4b\",\n    \"sender_order\": 1\n  },\n  \"status\": {\n      \"code\":\"\",\n      \"string\":\"\"\n  },\n  \"version\": <version>,\n  \"headers\": [],\n  \"body\": b64enc(body)\n}\n

    Responses need to indicate their target - the client who sent the request. Response DIDComm messages must include a ~thread decorator so the client agent can correlate thread IDs with its stored HTTP connections.

    The body field is optional.

    Each element of the headers array is an object with two elements: {\"name\": \"<header-name>\", \"value\": \"<header-value>\"}.
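The server agent's side of the exchange can be sketched the same way. This is an illustrative helper (not a normative API) showing the key requirement: the response message's ~thread.thid must carry the @id of the originating request so the client agent can correlate it with the stored HTTP connection.

```python
import base64
import json
import uuid

def http_response_to_didcomm(request_id, code, reason, version, headers, body=None):
    # Illustrative sketch: wrap an HTTP response in a
    # http-over-didcomm/1.0 response message, correlated to the
    # originating request via ~thread.thid.
    msg = {
        "@type": "https://didcomm.org/http-over-didcomm/1.0/response",
        "@id": str(uuid.uuid4()),
        "~thread": {"thid": request_id, "sender_order": 1},
        "status": {"code": code, "string": reason},
        "version": version,
        "headers": [{"name": n, "value": v} for n, v in headers],
    }
    if body is not None:  # the body field is optional
        msg["body"] = base64.b64encode(body).decode()
    return msg

resp = http_response_to_didcomm(
    "2a0ec6db-471d-42ed-84ee-f9544db9da4b", "200", "OK", "1.1",
    [("Content-Type", "application/json")], b"{}")
print(json.dumps(resp))
```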

    "},{"location":"features/0335-http-over-didcomm/#receiving-http-messages-over-didcomm","title":"Receiving HTTP Messages Over DIDComm","text":"

    Aries agents intended to receive HTTP-over-DIDComm messages have many options for how they handle them, with configuration dependent on intended use. For example: - Serve the message over the internet, configured to use a DNS, etc. - Send the message to a specific server, set in configuration, for an enterprise system where a single server is behind an agent. - Send the message to a server which registered for the message's purpose.

    In cases where a specific server or application is always the target of certain messages, the server/application should register with the server agent on the specific purpose decorator. In cases where the agent may need to invoke additional logic, the agent itself can register a custom handler.

    An agent may implement filtering to accept or reject requests based on any combination of the purpose, sender, and request contents.

    "},{"location":"features/0335-http-over-didcomm/#purpose-value","title":"Purpose Value","text":"

    The purpose values used in the message should be values whose meanings are agreed upon by the client and server. For example, the purpose value can: - indicate the required capabilities of the server that handles a request - contain an anonymous identifier for the server, which has previously been communicated to the client.

    For example, to support the use of DIDComm as a client-anonymizing proxy, agents could use a purpose value like \"web-proxy\" to indicate that the HTTP request (received by the server agent) should be made on the web.

    "},{"location":"features/0335-http-over-didcomm/#reference","title":"Reference","text":""},{"location":"features/0335-http-over-didcomm/#determining-the-recipient-did-by-the-resource-uri","title":"Determining the recipient DID by the Resource URI","text":"

    In an instance of the HTTP over DIDComm protocol, it is assumed that the client agent has the necessary information to be able to determine the DID of the server agent based on the resource-uri provided in the request. It's reasonable to implement a configuration API to allow a sender or administrator to specify the DID to be used for a particular URI.
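One simple way to realize such a configuration is a URI-prefix routing table. The table contents and function name below are hypothetical, for illustration only:

```python
# Hypothetical URI-prefix -> DID routing table (not a normative API);
# a deployment would populate this via its configuration API.
ROUTES = {
    "https://api.example.com/": "did:peer:example-server-agent",
}

def resolve_recipient_did(resource_uri):
    # longest-prefix match so that more specific routes win
    for prefix in sorted(ROUTES, key=len, reverse=True):
        if resource_uri.startswith(prefix):
            return ROUTES[prefix]
    return None

print(resolve_recipient_did("https://api.example.com/loans"))
# -> did:peer:example-server-agent
```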

    "},{"location":"features/0335-http-over-didcomm/#-alive-timeout","title":"Keep-Alive & Timeout","text":"

    The client agent should respect the timeout parameter of the Keep-Alive header if the request's Connection header specifies keep-alive.

    If a client making an HTTP request expects a response over the same HTTP connection, its agent should keep this connection alive while it awaits a DIDComm response from the server agent, which it should recognize by the ~thread decorator in the response message. Timing information can be provided in an optional ~timing decorator.

    Agents implementing this RFC can make use of the ~transport decorator to enable response along the same transport.

    "},{"location":"features/0335-http-over-didcomm/#when-the-client-agent-is-the-server-agent","title":"When the Client Agent is the Server Agent","text":"

    There is a degenerate case of this protocol where the client and server agents are the same agent. In this case, instead of constructing DIDComm messages, sending them to yourself, and then unpacking them, it would be reasonable to take incoming HTTP messages, apply any pre-send logic (filtering, etc.), apply any post-receive logic, and then send them out over HTTP, as a simple proxy.

    To support this optimization/simplification, the client agent should recognize if the recipient DID is its own, after determining the DID from the resource URI.

    "},{"location":"features/0335-http-over-didcomm/#http-error-codes","title":"HTTP Error Codes","text":"

    Failures within the DIDComm protocol can inform the status code returned to the client.

    If the client agent has waited for the duration specified in the request's keep-alive timeout field without receiving a response, it should respond with a standard 504 Gateway Timeout status.

    Error codes which are returned by the server will be transported over DIDComm as normal.

    "},{"location":"features/0335-http-over-didcomm/#why-http1x","title":"Why HTTP/1.x?","text":"

    The DIDComm messages in this protocol wrap HTTP/1(.x) messages for a few reasons: - Wire-level benefits of HTTP/2 are lost by wrapping in DIDComm and sending over another transport (which could itself be HTTP/2) - DIDComm is not, generally, intended to be a streaming or latency-critical transport layer, so HTTP responses, for example, can be sent complete, including their bodies, instead of being split into frames which are sent over DIDComm separately.

    The agents are free to support communicating with the client/server using HTTP/2 - the agents simply wait until they've received a complete request or response, before sending it onwards over DIDComm.

    "},{"location":"features/0335-http-over-didcomm/#https","title":"HTTPS","text":"

    The client and server can use HTTPS to communicate with their agents - this protocol only specifies that the messages sent over DIDComm are HTTP, not HTTPS.

    "},{"location":"features/0335-http-over-didcomm/#partial-use-of-http-over-didcomm","title":"Partial use of HTTP over DIDComm","text":"

    This protocol specifies the behaviour of clients, servers, and their agents. However, the client-side and server-side are decoupled by design, meaning a custom server or client, which obeys all the semantics in this RFC while diverging on technical details, can interoperate with other compliant applications.

    For example, a client-side agent can construct request messages based on internal logic rather than a request from an external application. On the server side, an agent can handle requests and send responses directly by registering its own listener on a purpose value, rather than having a separate application register.

    "},{"location":"features/0335-http-over-didcomm/#drawbacks","title":"Drawbacks","text":"

    The cost may be too high: wrapping messages in HTTP, and then wrapping those in DIDComm envelopes, takes time to pack and unpack and increases message size. Small messages and simple formats would benefit from being encoded as JSON payloads within custom DIDComm message formats, instead of being wrapped in HTTP messages within DIDComm messages. Large data might benefit from being sent over another channel, encrypted, with identification, decryption, and authentication information sent over DIDComm.

    "},{"location":"features/0335-http-over-didcomm/#rationale-and-alternatives","title":"Rationale and alternatives","text":"

    The main alternative to the method proposed in this RFC is to implement DIDComm in your non-DIDComm applications, if you want them to be able to communicate with each other over DIDComm.

    Another alternative to sending HTTP messages over DIDComm is sending HTTPS over DIDComm, by establishing a TLS connection between the client and server over the DIDComm transport. This offers some tradeoffs and drawbacks which make it an edge case - it identifies the server with a certificate, it breaks the anonymity offered by DIDComm, and it is not necessary for security since DIDComm itself is securely encrypted and authenticated, and DIDComm messages can be transported over HTTPS as well.

    "},{"location":"features/0335-http-over-didcomm/#prior-art","title":"Prior art","text":"

    VPNs and onion routing (like Tor) provide solutions for similar use cases, but none so far use DIDs, which enable more complex use cases with privacy preservation.

    TLS/HTTPS, being HTTP over TLS, provides a similar transport-layer secure channel to HTTP over DIDComm. Note, this is why this RFC doesn't specify a means to perform HTTPS over DIDComm - DIDComm serves the same role as TLS does in HTTPS, but offers additional benefits: - Verifiable yet anonymous authentication of the client, for example, using delegated credentials. - Access to DIDComm mechanisms, such as using the introduce protocol to connect the client and server.

    "},{"location":"features/0335-http-over-didcomm/#unresolved-questions","title":"Unresolved questions","text":""},{"location":"features/0335-http-over-didcomm/#implementations","title":"Implementations","text":"

    The following lists the implementations (if any) of this RFC. Please do a pull request to add your implementation. If the implementation is open source, include a link to the repo or to the implementation within the repo. Please be consistent in the \"Name\" field so that a mechanical processing of the RFCs can generate a list of all RFCs supported by an Aries implementation.

    Implementation Notes may need to include a link to test results.

    Name / Link Implementation Notes"},{"location":"features/0347-proof-negotiation/","title":"Aries RFC 0347: Proof Negotiation","text":""},{"location":"features/0347-proof-negotiation/#summary","title":"Summary","text":"

    This RFC proposes an extension to Aries RFC 0037: Present Proof Protocol 1.0 by taking the concept of groups out of the DID credential manifest and including it in the present proof protocol. In addition to the rules described in the credential manifest, an option to provide alternative attributes with a weight is introduced here. Also, the possibility to include not only attributes, but also credentials and OpenID logins in a proof by using a \"type\" was taken from the DID credential manifest. The goal is to make proof presentation more flexible, allowing attributes to be required or optional as well as allowing a choose-from-a-list scenario. So far, a proof request had to be answered with a proof response containing all attributes listed in the request. To this, this RFC adds a way to mark attributes as optional, so that they are communicated as nice-to-have to the user of a wallet.

    "},{"location":"features/0347-proof-negotiation/#motivation","title":"Motivation","text":"

    We see a need in corporate identity and access management for a login process handling not only user authentication against an application, but also determining which privileges the user is being granted inside the application and which data the user must or may provide. Aries can provide this by combining a proof request with proof negotiation.

    "},{"location":"features/0347-proof-negotiation/#use-case-example","title":"Use Case Example","text":"

    A bank needs a customer to prove they are creditworthy using Aries-based Self-Sovereign Identity. The bank wants to make the proof of creditworthiness flexible, so that an identity owner can offer different sets and combinations of credentials: for instance, a choice between a certificate of creditworthiness from another trusted bank, or alternatively a set of credentials proving ownership of real estate and a large fortune in a bank account. Optionally, an identity owner can add certain credentials to the proof to further prove worthiness in order to be able to obtain larger loans.

    "},{"location":"features/0347-proof-negotiation/#tutorial","title":"Tutorial","text":"

    A proof request sent to an identity owner defines the attributes to be included in the proof response, i.e. the ones to prove. To add a degree of flexibility to the process, it is possible to request attributes as necessary (meaning they have to be included in the response for it to be valid) or to allow the identity owner to pick one or several attributes from a list. Furthermore, attributes can be marked as optional. For users, this procedure may look like the example of a privacy-friendly access permission process shown in the manifesto of Okuna, an open-source social network that is still in development at the time of this writing (click on \"continue with Okuna\" to see said example). Backend-wise, this may be implemented as follows:

    "},{"location":"features/0347-proof-negotiation/#proof-request-with-attribute-negotiation","title":"Proof Request with attribute negotiation","text":"

    This feature can be implemented by building on top of the credential manifest developed by the Decentralized Identity Foundation. One feature the above concept by the Decentralized Identity Foundation lacks is a way of assigning a weight to attributes within the category \"one of\". It is possible that future implementations using this concept will want to prefer certain attributes over others in the same group if both are given, so a way of assigning these different priorities to attributes should be possible. Below is the above example of a proof request, to which a rule \"pick_weighted\" and a group D were added. Furthermore, the categories \"groups_required\" and \"groups_optional\" were added to differentiate between required and optional attributes, which the credential manifest does not.

    Example of a proof presentation request (from verifier):

    {\n    \"@type\": \"https://didcomm.org/present-proof/1.0/request-presentation\",\n    \"@id\": \"98fd8d82-81a6-4409-acc2-c35ea39d0f28\",\n    \"comment\": \"some comment\",\n    \"request_presentations~attach\": [\n        {\n            \"@id\": \"libindy-request-presentation-0\",\n            \"mime-type\": \"application/json\",\n            \"data\":  {\n                \"base64\": \"<yaml-formatted string describing attachments, base64url-encoded because of libindy>\"\n            }\n        }\n    ]\n}\n
    The base64url-encoded content above decodes to the following data structure, a presentation preview:
    {\n    \"@type\": \"https://didcomm.org/present-proof/1.0/presentation-preview\",\n    \"@context\": \"https://path.to/schemas/credentials\",\n    \"comment\":\"some comment\",\n    \"~thread\": {\n        \"thid\": \"98fd8d82-81a6-4409-acc2-c35ea39d0f28\",\n        \"sender_order\": 0\n    }\n    \"credential\":\"proof_request\", // verifiable claims elements\n    \"groups_required\": [ // these groups are the key feature to this RFC\n            {\n                \"rule\":\"all\",\n                \"from\": [\"A\", \"B\"]\n            },\n            {\n                \"rule\": \"pick\",\n                \"count\": 1,\n                \"from\": [\"C\"]\n            },\n            {\n                \"rule\": \"pick_weighted\",\n                \"count\": 1,\n                \"from\": [\"D\"]\n            }\n        ],\n        \"groups_optional\": [\n            {\n                \"rule\": \"all\",\n                \"from\": [\"D\"]\n            }\n        ],\n    \"inputs\": [\n        {\n            \"type\": \"data\",\n            \"name\": \"routing_number\",\n            \"group\": [\"A\"],\n            \"cred_def_id\": \"<cred_def_id>\",\n            // \"mime-type\": \"<mime-type>\" is missing, so this defaults to a json-formatted string; if it was non-null, 'value' would be interpreted as a base64url-encoded string representing a binary BLOB with mime-type telling how to interpret it after base64url-decoding\n            \"value\": {\n                \"type\": \"string\",\n                \"maxLength\": 9\n            },\n        },\n        {\n            \"type\": \"data\",\n            \"name\": \"account_number\",\n            \"group\": [\"A\"], \n            \"cred_def_id\": \"<cred_def_id>\",\n            \"value\": {\n                \"type\": \"string\",\n                \"value\": \"12345678\"\n        },\n        {\n            \"type\": \"data\",\n            \"name\": \"current_residence_duration\",\n            \"group\": [\"A\"],\n  
          \"cred_def_id\": \"<cred_def_id>\",\n            \"value\": {\n                \"type\": \"number\",\n                \"maximum\": 150\n            }\n        },\n        {\n            \"type\": \"credential\",\n            \"group\": [\"C\"],\n            \"schema\": \"https://eu.com/claims/IDCard\",\n            \"constraints\": {\n                \"subset\": [\"prop1\", \"prop2.foo.bar\"],\n                \"issuers\": [\"did:foo:gov1\", \"did:bar:gov2\"]\n            }\n        },\n        {\n            \"type\": \"credential\",\n            \"group\": [\"C\"],\n            \"schema\": \"hub://did:foo:123/Collections/schema.us.gov/Passport\",\n            \"constraints\": {\n                \"issuers\": [\"did:foo:gov1\", \"did:bar:gov2\"]\n            }\n\n        },\n        {\n            \"type\": \"credential\",\n            \"group\": [\"B\"],\n            \"schema\": [\"https://claims.linkedin.com/WorkHistory\", \"https://about.me/WorkHistory\"],\n            \"constraints\": {\n                \"issuers\": [\"did:foo:auditor1\", \"did:bar:auditor2\"]\n            }\n        },\n        {\n            \"type\": \"credential\",\n            \"group\": [\"B\"],\n            \"schema\": \"https://claims.fico.org/CreditHistory\",\n            \"constraints\": {\n                \"issuers\": [\"did:foo:bank1\", \"did:bar:bank2\"]\n            }\n        },\n        {\n            \"type\": \"openid\",\n            \"group\": [\"A\"],\n            \"redirect\": \"https://login.microsoftonline.com/oauth/\"\n            \"parameters\": {\n                \"client_id\": \"dhfiuhsdre\",\n                \"scope\": \"openid+profile\"                    \n            }\n        },\n        {\n            \"type\": \"credential\",\n            \"group\": [\"D\"],\n            \"schema\": \"https://some.login.com/someattribute\",\n            \"constraints\": {\n                \"issuers\": [\"did:foo:iss1\", \"did:foo:iss2\"]\n            },\n            
\"weight\": 0.8\n        },\n        {\n            \"type\": \"credential\",\n            \"group\": [\"D\"],\n            \"schema\": \"https://some.otherlogin.com/someotherattribute\",\n            \"constraints\": {\n                \"issuers\": [\"did:foox:iss1\", \"did:foox:iss2\"]\n            },\n            \"weight\": 0.2\n        }\n    ],\n    \"predicates\": [\n        {\n            \"name\": \"<attribute_name>\",\n            \"cred_def_id\": \"<cred_def_id>\",\n            \"predicate\": \"<predicate>\",\n            \"threshold\": <threshold>\n        }\n    ]\n}\n

    "},{"location":"features/0347-proof-negotiation/#valid-proof-response-with-attribute-negotiation","title":"Valid Proof Response with attribute negotiation","text":"

    The following data structure is an example of a valid answer to the above proof request. It contains all attributes from groups A and B as well as one credential each from groups C and D. Note that the provided credential from group D is the one weighted 0.2, as the owner did not have, or was not willing to provide, the one weighted 0.8.

    Valid proof presentation:

    {\n    \"@type\": \"https://didcomm.org/present-proof/1.0/proof-presentation\",\n    \"@id\": \"98fd8d82-81a6-4409-acc2-c35ea39d0f28\",\n    \"comment\": \"some comment\",\n    \"presentations~attach\": [\n        {\n            \"@id\": \"libindy-presentation-0\",\n            \"mime-type\": \"application/json\",\n            \"data\": {\n                \"base64\": \"<yaml-formatted string describing attachments, base64url-encoded because of libindy>\"\n            }\n        }\n    ]\n}\n
    The base64url-encoded content above would decode to this data:
    {\n    \"@type\": \"https://didcomm.org/present-proof/1.0/presentation-preview\",\n    \"@context\": \"https://path.to/schemas/credentials\"\n    \"comment\":\"some comment\",\n    \"~thread\": {\n        \"thid\": \"98f38d22-71b6-4449-adc2-c33ea39d1f29\",\n        \"sender_order\": 1,\n        \"received_orders\": {did:sov:abcxyz\":1}\n    }\n    \"credential\":\"proof_response\", // verifiable claims elements\n    \"inputs_provided\": [\n        {\n            \"type\": \"data\",\n            \"field\": \"routing_number\",\n            \"value\": \"123456\"\n        },\n        {\n            \"type\": \"data\",\n            \"field\": \"account_number\",\n            \"value\": \"12345678\"\n        },\n        {\n            \"type\": \"data\",\n            \"field\": \"current_residence_duration\",\n            \"value\": 8\n        },      \n        {\n            \"type\": \"credential\",\n            \"schema\": [\"https://claims.linkedin.com/WorkHistory\", \"https://about.me/WorkHistory\"],\n            \"issuer\": \"did:foo:auditor1\"\n        },\n        {\n            \"type\": \"credential\",\n            \"schema\": \"https://claims.fico.org/CreditHistory\",\n            \"issuer\": \"did:foo:bank1\"\n        },\n        {\n            \"type\": \"openid\",\n            \"redirect\": \"https://login.microsoftonline.com/oauth/\"\n            \"client_id\": \"dhfiuhsdre\",\n            \"profile\": \"...\"\n        },\n        {\n            \"type\": \"credential\",\n            \"schema\": \"https://eu.com/claims/IDCard\"\n            \"issuer\": \"did:foo:gov1\"\n        },\n        {\n        \"type\": \"credential\",\n            \"group\": [\"D\"],\n            \"schema\": \"https://some.otherlogin.com/someotherattribute\",\n            \"issuer\": \"did:foox:iss1\"\n        }\n    ],\n    \"predicates\": [ // empty in this case\n    ]\n}\n

    "},{"location":"features/0347-proof-negotiation/#reference","title":"Reference","text":"

    The \"@id\"-Tag and thread decorator in the above JSON-messages is taken from RFC 0008.

    "},{"location":"features/0347-proof-negotiation/#drawbacks","title":"Drawbacks","text":"

    If a user needs to choose from a list of credentials each time a proof request with a \"pick_one\" rule arrives, some users may dislike this, as the process requires a significant amount of user interaction and, thereby, time. This could be mitigated by an 'optional' rule which requests all of the options the 'pick one' rule offers. Wallets can then offer two pre-settings: \"privacy first\", which shares as little data as possible at the cost of more user interaction, and \"usability first\", which automatically selects the 'optional' rule and sends more data without asking the user every time. The example dialog from the Okuna manifesto referred to above shows a good way to implement this: it offers the user the most privacy-friendly option by default (which is what the GDPR requires) or the provision of optional data. Furthermore, the optional data can be customized to include or exclude specific data.

    "},{"location":"features/0347-proof-negotiation/#rationale-and-alternatives","title":"Rationale and alternatives","text":"

    Not implementing proof negotiation would limit Aries-based systems to a binary yes-or-no approach to authentication and authorization of a user, while proof negotiation adds flexibility. An alternative way of implementing proof negotiation is performing it ahead of the proof request in a separate request and response. Without this feature, a proof request might need to be repeated over and over again with a different list of requested attributes each time, until a list is transferred that the specific user can satisfy. That process would be unnecessarily complicated and is simplified by the concept described here.

    "},{"location":"features/0347-proof-negotiation/#prior-art","title":"Prior art","text":"

    RFC 0037: Present Proof is the foundation on which this RFC builds, using groups from the credential manifest by the Decentralized Identity Foundation, a \"format that normalizes the definition of requirements for the issuance of a credential\".

    "},{"location":"features/0347-proof-negotiation/#unresolved-questions","title":"Unresolved questions","text":""},{"location":"features/0347-proof-negotiation/#implementations","title":"Implementations","text":"Name / Link Implementation Notes"},{"location":"features/0348-transition-msg-type-to-https/","title":"Aries RFC 0348: Transition Message Type to HTTPs","text":""},{"location":"features/0348-transition-msg-type-to-https/#summary","title":"Summary","text":"

    Per issue #225, the Aries community has agreed to change the prefix for protocol message types that currently use did:sov:BzCbsNYhMrjHiqZDTUASHg;spec/ to use https://didcomm.org/. Examples of the two message type forms are:

    This RFC follows the guidance in RFC 0345 about community-coordinated updates to (try to) ensure that independently deployed, interoperable agents remain interoperable throughout this transition.

    The transition from the old to new formats will occur in four steps:

    Note: Any RFCs that already use the new \"https\" message type should continue to use the new format in all cases\u2014accepting and sending. New protocols defined in new and updated RFCs should use the new \"https\" format.

    The community coordination triggers between the steps above will be as follows:

    "},{"location":"features/0348-transition-msg-type-to-https/#motivation","title":"Motivation","text":"

    To enable agent builders to independently update their code bases and deployed agents while maintaining interoperability.

    "},{"location":"features/0348-transition-msg-type-to-https/#tutorial","title":"Tutorial","text":"

    The general mechanism for this type of transition is documented in RFC 0345 about community-coordinated updates.

    The specific sequence of events to make this particular transition is outlined in the summary section of this RFC.

    "},{"location":"features/0348-transition-msg-type-to-https/#reference","title":"Reference","text":"

    See the summary section of this RFC for the details of this transition.

    "},{"location":"features/0348-transition-msg-type-to-https/#drawbacks","title":"Drawbacks","text":"

    None identified.

    "},{"location":"features/0348-transition-msg-type-to-https/#rationale-and-alternatives","title":"Rationale and alternatives","text":"

    This approach balances the speed of adoption with the need for independent deployment and interoperability.

    "},{"location":"features/0348-transition-msg-type-to-https/#prior-art","title":"Prior art","text":"

    The approach outlined in RFC 0345 about community-coordinated updates is a well-known pattern for using deprecation to make breaking changes in an ecosystem. That said, this is the first attempt to use this approach in Aries. Adjustments to the transition plan will be made as needed, and RFC 0345 will be updated based on lessons learned in executing this plan.

    "},{"location":"features/0348-transition-msg-type-to-https/#unresolved-questions","title":"Unresolved questions","text":""},{"location":"features/0348-transition-msg-type-to-https/#implementations","title":"Implementations","text":"

    The following table lists the status of various agent code bases and deployments with respect to the steps of this transition. Agent builders MUST update this table as they complete steps of the transition.

    Name / Link Implementation Notes Aries Protocol Test Suite No steps completed Aries Toolbox Completed Step 1 code change. Aries Framework - .NET Completed Step 1 code change Trinsic.id No steps completed Aries Cloud Agent - Python Completed Step 1 code change Aries Static Agent - Python No steps completed Aries Framework - Go Completed Step 2 Connect.Me No steps completed Verity No steps completed Pico Labs Completed Step 2 even though deprecated IBM Completed Step 1 code change IBM Agent Completed Step 1 Aries Cloud Agent - Pico Completed Step 2 code change Aries Framework JavaScript Completed Step 2 code change"},{"location":"features/0351-purpose-decorator/","title":"Aries RFC 0351: Purpose Decorator","text":""},{"location":"features/0351-purpose-decorator/#summary","title":"Summary","text":"

    This RFC allows Aries agents to serve as mediators or relays for applications that don't use DIDComm. It introduces: - A new decorator, the ~purpose decorator, which defines the intent, usage, or contents of a message - A means for a recipient, who is not DIDComm-enabled, to register with an agent for messages with a particular purpose - A means for a sender, who is not DIDComm-enabled, to send messages with a given purpose through its agent to a target agent - Guidance for creating a protocol which uses the ~purpose decorator to relay messages over DIDComm for non-DIDComm applications

    "},{"location":"features/0351-purpose-decorator/#motivation","title":"Motivation","text":"

    This specification allows applications that aren't Aries agents to communicate JSON messages over DIDComm using Aries agents analogously to mediators. Any agent which implements this protocol can relay arbitrary new types of message for clients - without having to be updated and redeployed.

    The purpose decorator can be used to implement client interfaces for Aries agents. For example: - A client application built using an Aries framework can use the purpose decorator for client-level messaging and protocols - Multiple client applications can connect to an agent, for example to process different types of messages, or to log for auditing purposes - A server with a remote API can include an Aries agent using the purpose decorator to provide a remote API over DIDComm - Multiple client applications can use a single agent to perform transactions on the agent owner's identity

    "},{"location":"features/0351-purpose-decorator/#tutorial","title":"Tutorial","text":"

    This RFC assumes familiarity with mediators and relays, attachments, and message threading.

    "},{"location":"features/0351-purpose-decorator/#the-purpose-decorator","title":"The ~purpose Decorator","text":"

    The ~purpose decorator is a JSON array which describes the semantics of a message - the role it plays within a protocol implemented using this RFC, for example, or the meaning of the data contained within. The purpose is the mechanism for determining which recipient(s) should be sent a message.

    Example: \"~purpose\": [\"team:01453\", \"delivery\", \"geotag\", \"cred\"]

    Each element of the purpose array is a string. An agent provides some means for recipients to register on a purpose, or class of purposes, by indicating the particular string values they are interested in.

    The particular registration semantics are TBD. Some possible formats include: - A tagging system, where if a recipient registers on a list \"foo\", \"bar\", it will be forwarded messages with purposes [\"foo\", \"quux\"] and [\"baz\", \"bar\"] - A hierarchical system, where if a recipient registers on a list \"foo\", \"bar\", it will receive any message with purpose [\"foo\", \"bar\", ...] but not [\"foo\", \"baz\", ...] or [\"baz\", \"foo\", \"bar\", ...] - A hierarchical system with wildcards: \"*\", \"foo\" might match any message with purpose [..., \"foo\", ...]
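The candidate semantics above can be illustrated with small matching predicates (hypothetical helpers only; the RFC explicitly leaves registration semantics TBD):

```python
def matches_tags(registered, purpose):
    # Tagging system: deliver if any registered tag appears in the
    # message's purpose array.
    return bool(set(registered) & set(purpose))

def matches_prefix(registered, purpose):
    # Hierarchical system: deliver only if the registered list is a
    # prefix of the message's purpose array.
    return purpose[:len(registered)] == registered
```

For example, a registration on `["foo", "bar"]` matches `["foo", "quux"]` under the tagging rule but not under the hierarchical rule.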

    "},{"location":"features/0351-purpose-decorator/#handling-multiple-listeners","title":"Handling Multiple Listeners","text":""},{"location":"features/0351-purpose-decorator/#priority","title":"Priority","text":"

    When multiple applications register for overlapping purposes, the agent needs a means to determine which application should receive the message, or which should receive it first. When an application registers on a purpose, it should set an integer priority. When the agent receives a message, it compares the priorities of all matching listeners and delivers the message to the one with the lowest value first.

    "},{"location":"features/0351-purpose-decorator/#fall-through","title":"Fall-Through","text":"

    In some cases, an application that received a message can allow other listeners to process it afterwards. In such cases, while handling the message, the application can indicate to the agent that the message may fall through, in which case the agent will provide it to the next listener.

    Optionally, agents can support an always-falls-through configuration, for applications which: - Will always fall through on the messages they receive, and - Can always safely process concurrently with subsequent applications handling the same message.

    This allows the agent to send the message to such listeners concurrently with the next highest-priority listener that does not always-fall-through.
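Priority and fall-through together can be sketched as a simple dispatch loop (a minimal sketch assuming a tag-overlap matching rule; the listener fields and the returned delivery list are illustrative, not defined by this RFC):

```python
def dispatch(message, listeners):
    """Deliver a message to matching listeners in priority order
    (lowest number first), stopping after the first listener that
    does not fall through. Returns the names of listeners reached."""
    matching = sorted(
        (l for l in listeners if set(l["purposes"]) & set(message["~purpose"])),
        key=lambda l: l["priority"],
    )
    delivered = []
    for listener in matching:
        delivered.append(listener["name"])
        if not listener.get("falls_through", False):
            break  # first non-fall-through listener consumes the message
    return delivered
```

An always-falls-through listener such as an audit log would simply set `falls_through` permanently, letting the agent deliver to it concurrently with the next listener.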

    "},{"location":"features/0351-purpose-decorator/#example-protocol","title":"Example Protocol","text":"

    This is an example protocol which makes use of the ~purpose decorator and other Aries ../../concepts to provide a message format that can carry arbitrary payloads for non-DIDComm edge applications.

    "},{"location":"features/0351-purpose-decorator/#key-concepts","title":"Key Concepts","text":"

    This RFC allows messages to be sent over DIDComm by applications that are not DIDComm-enabled, by using Aries agents as intermediaries. Both the sender and the recipient can be non-DIDComm applications.

    "},{"location":"features/0351-purpose-decorator/#non-didcomm-sender","title":"Non-DIDComm Sender","text":"

    If the sender of the message is not a DIDComm-enabled agent, then it must rely on an agent as a trusted intermediary. This agent is assumed to have configured settings for message timing, communication endpoints, etc.

    1. The sender constructs a JSON message, and provides this to its agent, alongside specifying the purpose, and likely some indication of the destination of the message.
    2. The agent determines the recipient agent - this could be by logic, for example, based on the purpose decorator, or a DID specified by the sender.
    3. The agent wraps the sender's message and purpose in a DIDComm message, and sends it to the recipient agent.
    "},{"location":"features/0351-purpose-decorator/#non-didcomm-recipient","title":"Non-DIDComm Recipient","text":"

    A non-DIDComm recipient relies on trusted agents to relay messages to it, and can register with any number of agents for any number of purposes.

    1. The recipient registers with a trusted agent on certain purpose values.
    2. The agent receives a DIDComm message, and sees it has a purpose decorator.
    3. The agent looks through its recipient registry for all recipients which registered on a matching purpose.
    4. The Agent reverses the wrapping done by the sender agent, and forwards the wrapped message to all matching registered recipients.
    "},{"location":"features/0351-purpose-decorator/#message-format","title":"Message Format","text":"

    A DIDComm message, for a protocol implemented using this RFC, requires: - A means to wrap the payload message - A ~purpose decorator

    This example protocol wraps the message within the data field.

    {\n  \"@id\": \"123456789\",\n  \"@type\": \"https://example.org/didcomm-message\",\n  \"~purpose\": [],\n  \"data\" : {}\n}\n

    For example:

    {\n  \"@id\": \"123456789\",\n  \"@type\": \"https://example.org/didcomm-message\",\n  \"~purpose\": [\"metrics\", \"latency\"],\n  \"data\": {\"mean\": 346457, \"median\": 2344}\n}\n

    "},{"location":"features/0351-purpose-decorator/#reference","title":"Reference","text":"

    This section provides guidance for implementing protocols using this decorator.

    "},{"location":"features/0351-purpose-decorator/#threading-timing","title":"Threading & Timing","text":"

    If a protocol implemented using this RFC requires back and forth communication, use the ~thread decorator and transport return routing. This allows the recipient's agent to relay replies from the recipient to the sender.

    For senders and recipients that aren't Aries agents, their respective agents must maintain context to correlate the DIDComm message thread with the message thread of the communication protocol used with the non-DIDComm application.

    If a message is threaded, it can be useful to include a ~timing decorator for timing information. The sender's agent can construct this decorator from timing parameters (eg, timeout) in the communication channel with the sender, or have preconfigured settings.

    "},{"location":"features/0351-purpose-decorator/#communication-with-non-didcomm-edge-applications","title":"Communication with Non-DIDComm Edge Applications","text":"

    An organization using agents to relay messages for non-DIDComm edge applications is expected to secure the connections between their relay agents and their non-DIDComm edge applications, for example by running the agent as a service in the same container. If it is necessary for the organization to have a separate endpoint or mediator agent, it is recommended to place a thin relay agent as close as possible to the edge application, so internal messages sent to the mediator are also secured by DIDComm.

    "},{"location":"features/0351-purpose-decorator/#drawbacks","title":"Drawbacks","text":"

    TODO

    "},{"location":"features/0351-purpose-decorator/#rationale-and-alternatives","title":"Rationale and alternatives","text":""},{"location":"features/0351-purpose-decorator/#prior-art","title":"Prior art","text":""},{"location":"features/0351-purpose-decorator/#unresolved-questions","title":"Unresolved questions","text":""},{"location":"features/0351-purpose-decorator/#implementations","title":"Implementations","text":"

    The following lists the implementations (if any) of this RFC. Please do a pull request to add your implementation. If the implementation is open source, include a link to the repo or to the implementation within the repo. Please be consistent in the \"Name\" field so that a mechanical processing of the RFCs can generate a list of all RFCs supported by an Aries implementation.

    Implementation Notes may need to include a link to test results.

    Name / Link Implementation Notes"},{"location":"features/0360-use-did-key/","title":"Aries RFC 0360: did:key Usage","text":""},{"location":"features/0360-use-did-key/#summary","title":"Summary","text":"

    A number of RFCs that have been defined reference what amounts to a \"naked\" public key, such that the sender relies on the receiver knowing what type the key is and how it can be used. The application of this RFC will result in the replacement of \"naked\" verkeys (public keys) in some DIDComm/Aries protocols with the did:key ledgerless DID method, a format that concisely conveys useful information about the use of the key, including the public key type. While did:key is less a DID method than a transformation from a public key and type to an opinionated DIDDoc, it provides a versioning mechanism for supporting new/different cryptographic formats, and its use makes clear how a public key is intended to be used. The method also enables support for using standard DID resolution mechanisms that may simplify the use of the key. The use of a DID to represent a public key is seen as odd by some in the community. Should a representation be found that has better properties than a plain public key but is constrained to being \"just a key\", we will consider changing from the did:key representation.

    To Do: Update link DID Key Method link (above) from Digital Bazaar to W3C repositories when they are created and populated.

    While it is well known in the Aries community that did:key is fundamentally different from the did:peer method that is the basis of Aries protocols, it must be re-emphasized here. This RFC does NOT imply any changes to the use of did:peer in Aries, nor does it change the content of a did:peer DIDDoc. This RFC only changes references to plain public keys in the JSON of some RFCs to use did:key in place of a plain text string.

    Should this RFC be ACCEPTED, a community coordinated update will be used to apply updates to the agent code bases and impacted RFCs.

    "},{"location":"features/0360-use-did-key/#motivation","title":"Motivation","text":"

    When one Aries agent inserts a public key into the JSON of an Aries message (for example, the ~service decorator), it assumes that the recipient agent will use the key in the intended way. At the time this RFC is being written, this is easy because only one key type is in use by all agents. However, in order to enable the use of different cryptography algorithms, public key references must be extended to at least include the key type. The preferred and concise way to do that is the use of the multicodec mechanism, which provides a registry of encodings for known key types that are prefixed to the public key in a standard and concise way. did:key extends that mechanism by providing a templated way to transform the combination of public key and key type into a DID-standard DIDDoc.

    At the cost of adding/building a did:key resolver we get a DID standard way to access the key and key type, including specific information on how the key can be used. The resolver may be trivial or complex. In a trivial version, the key type is assumed, and the key can be easily extracted from the string. In a more complete implementation, the key type can be checked, and standard DID URL handling can be used to extract parts of the DIDDoc for specific purposes. For example, in the ed25519 did:key DIDDoc, the existence of the keyAgreement entry implies that the key can be used in a Diffie-Hellman exchange, without the developer guessing, or using the key incorrectly.

    Note that simply knowing the key type is not necessarily sufficient to be able to use the key. The cryptography supporting the processing data using the key must also be available in the agent. However, the multicodec and did:key capabilities will simplify adding support for new key types in the future.

    "},{"location":"features/0360-use-did-key/#tutorial","title":"Tutorial","text":"

    An example of the use of the replacement of a verkey with did:key can be found in the ~service decorator RFC. Notably in the example at the beginning of the tutorial section, the verkeys in the recipientKeys and routingKeys items would be changed from native keys to use did:key as follows:

    {\n    \"@type\": \"somemessagetype\",\n    \"~service\": {\n        \"recipientKeys\": [\"did:key:z6MkmjY8GnV5i9YTDtPETC2uUAW6ejw3nk5mXF5yci5ab7th\"],\n        \"routingKeys\": [\"did:key:z6MkmjY8GnV5i9YTDtPETC2uUAW6ejw3nk5mXF5yci5ab7th\"],\n        \"serviceEndpoint\": \"https://example.com/endpoint\"\n    }\n}\n

    Thus, 8HH5gYEeNc3z7PYXmd54d4x6qAfCNrqQqEB3nS7Zfu7K becomes did:key:z6MkmjY8GnV5i9YTDtPETC2uUAW6ejw3nk5mXF5yci5ab7th using the following transformations:
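A minimal sketch of that transformation, with self-contained base58btc helpers (the did:key specification remains the definitive source; the ed25519 multicodec prefix 0xed 0x01 is assumed here):

```python
# Base58btc alphabet used by multibase 'z' encoding.
BASE58 = "123456789ABCDEFGHJKLMNPQRSTUVWXYZabcdefghijkmnopqrstuvwxyz"

def b58decode(s: str) -> bytes:
    n = 0
    for ch in s:
        n = n * 58 + BASE58.index(ch)
    raw = n.to_bytes((n.bit_length() + 7) // 8, "big")
    pad = len(s) - len(s.lstrip("1"))  # leading '1's encode zero bytes
    return b"\x00" * pad + raw

def b58encode(b: bytes) -> str:
    n = int.from_bytes(b, "big")
    out = ""
    while n:
        n, r = divmod(n, 58)
        out = BASE58[r] + out
    pad = len(b) - len(b.lstrip(b"\x00"))
    return "1" * pad + out

def verkey_to_did_key(verkey: str) -> str:
    raw = b58decode(verkey)       # 32-byte ed25519 public key
    prefixed = b"\xed\x01" + raw  # multicodec prefix for ed25519-pub
    return "did:key:z" + b58encode(prefixed)  # 'z' = base58btc multibase

# Reproduces the RFC's example pair.
assert (verkey_to_did_key("8HH5gYEeNc3z7PYXmd54d4x6qAfCNrqQqEB3nS7Zfu7K")
        == "did:key:z6MkmjY8GnV5i9YTDtPETC2uUAW6ejw3nk5mXF5yci5ab7th")
```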

    The transformation above is only for illustration within this RFC. The did:key specification is the definitive source for the appropriate transformations.

    The did:key method uses the strings that are the DID, public key and key type to construct (\"resolve\") a DIDDoc based on a template defined by the did:key specification. Further, the did:key resolver generates, in the case of an ed25519 public signing key, a key that can be used as part of a Diffie-Hellman exchange appropriate for encryption in the keyAgreement section of the DIDDoc. Presumably, as the did:key method supports other key types, similar DIDDoc templates will become part of the specification. Key types that don't support a signing/key exchange transformation would not have a keyAgreement entry in the resolved DIDDoc.

    The following currently implemented RFCs would be affected by acceptance of this RFC. In these RFCs, the JSON items that currently contain naked public keys (mostly the items recipientKeys and routingKeys) would be changed to use did:key references where applicable. Note that in these items public DIDs could also be used if applicable for a given use case.

    Service entries in did:peer DIDDocs (such as in RFCs 0094-cross-domain-messaging and 0067-didcomm-diddoc-conventions) should NOT use a did:key public key representation. Instead, service entries in the DIDDoc should reference keys defined internally in the DIDDoc where appropriate.

    To Do: Discuss the use of did:key (or not) in the context of encryption envelopes. This will be part of the ongoing discussion about JWEs and the upcoming discussions about JWMs\u2014a soon-to-be-proposed specification. That conversation will likely go on in the DIF DIDComm Working Group.

    "},{"location":"features/0360-use-did-key/#reference","title":"Reference","text":"

    See the did:key specification. Note that the specification is still evolving.

    "},{"location":"features/0360-use-did-key/#drawbacks","title":"Drawbacks","text":"

    The did:key standard is not finalized.

    The DIDDoc \"resolved\" from a did:key probably has more entries in it than are needed for DIDComm. That said, the entries in the DIDDoc make it clear to a developer how they can use the public key.

    "},{"location":"features/0360-use-did-key/#rationale-and-alternatives","title":"Rationale and alternatives","text":"

    We should not stick with the status quo and assume that all agents will always know the type of keys being used and how to use them.

    We should at minimum move to a scheme like multicodecs such that the key is self-documenting and supports the versioning of cryptographic algorithms. However, even if we do that, we still have to document for developers how they should (and should not) use the public key.

    Another logical alternative is to use a JWK. However, that representation only adds the type of the key (same as multicodecs) at the cost of being significantly more verbose.

    "},{"location":"features/0360-use-did-key/#prior-art","title":"Prior art","text":"

    To do - there are other instances of this pattern being used. Insert those here.

    "},{"location":"features/0360-use-did-key/#unresolved-questions","title":"Unresolved questions","text":""},{"location":"features/0360-use-did-key/#implementations","title":"Implementations","text":"

    The following lists the implementations (if any) of this RFC. Please do a pull request to add your implementation. If the implementation is open source, include a link to the repo or to the implementation within the repo. Please be consistent in the \"Name\" field so that a mechanical processing of the RFCs can generate a list of all RFCs supported by an Aries implementation.

    Implementation Notes

    Name / Link Implementation Notes"},{"location":"features/0418-rich-schema-encoding/","title":"Aries RFC 0418: Aries Rich Schema Encoding Objects","text":""},{"location":"features/0418-rich-schema-encoding/#summary","title":"Summary","text":"

    The introduction of rich schemas and their associated greater range of possible attribute value data types require correspondingly rich transformation algorithms. The purpose of the new encoding object is to specify the algorithm used to perform transformations of each attribute value data type into a canonical data encoding in a deterministic way.

    The initial use for these will be the transformation of attribute value data into 256-bit integers so that they can be incorporated into the anonymous credential signature schemes we use. The transformation algorithms will also allow for extending the cryptographic schemes and various sizes of canonical data encodings (256-bit, 384-bit, etc.). The transformation algorithms will allow for broader use of predicate proofs, and avoid hashed values as much as possible, as they do not support predicate proofs.

    Encoding objects are processed in a generic way defined in Rich Schema Objects Common.

    "},{"location":"features/0418-rich-schema-encoding/#motivation","title":"Motivation","text":"

    All attribute values to be signed in anonymous credentials must be transformed into 256-bit integers in order to support the Camenisch-Lysyanskaya signature scheme.

    The current methods for creating a credential only accept attributes which are encoded as 256-bit integers. The current possible source attribute types are integers and strings. No configuration method exists at this time to specify which transformation method will be applied to a particular attribute. All encoded attribute values rely on an implicit understanding of how they were encoded.

    The current set of canonical encodings consists of integers and hashed strings. The introduction of encoding objects allows for a means of extending the current set of canonical encodings to include integer representations of dates, lengths, boolean values, and floating point numbers. All encoding objects describe how an input is transformed into an encoding of an attribute value according to the transformation algorithm selected by the issuer.

    "},{"location":"features/0418-rich-schema-encoding/#tutorial","title":"Tutorial","text":""},{"location":"features/0418-rich-schema-encoding/#intro-to-encoding-objects","title":"Intro to Encoding Objects","text":"

    Encoding objects are JSON objects that describe the input types, transformation algorithms, and output encodings. The encoding object is stored on the ledger.

    "},{"location":"features/0418-rich-schema-encoding/#properties","title":"Properties","text":"

    Encoding's properties follow the generic template defined in Rich Schema Common.

    Encoding's content field is a JSON-serialized string with the following fields:

    "},{"location":"features/0418-rich-schema-encoding/#example-encoding","title":"Example Encoding","text":"

    An example of the content field of an Encoding object:

    {\n    \"input\": {\n        \"id\": \"DateRFC3339\",\n        \"type\": \"string\"\n    },\n    \"output\": {\n        \"id\": \"UnixTime\",\n        \"type\": \"256-bit integer\"\n    },\n    \"algorithm\": {\n        \"description\": \"This encoding transforms an\n            RFC3339-formatted datetime object into the number\n            of seconds since January 1, 1970 (the Unix epoch).\",\n        \"documentation\": URL to specific github commit,\n        \"implementation\": URL to implementation\n    },\n    \"testVectors\": URL to specific github commit\n}\n

    "},{"location":"features/0418-rich-schema-encoding/#transformation-algorithms","title":"Transformation Algorithms","text":"

    The purpose of a transformation algorithm is to deterministically convert a value into a different encoding. For example, an attribute value may be a string representation of a date, but the CL-signature signing mechanism requires all inputs to be 256-bit integers. The transformation algorithm takes this string value as input, parses it, and encodes it as a 256-bit integer.
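The DateRFC3339-to-UnixTime algorithm from the example Encoding object above can be sketched as follows (the function name is illustrative, not from an Aries library; the trailing 'Z' is normalized because older versions of Python's `datetime.fromisoformat` do not accept it):

```python
from datetime import datetime

def rfc3339_to_unix(value: str) -> int:
    """Deterministically map an RFC3339 datetime string to Unix seconds."""
    dt = datetime.fromisoformat(value.replace("Z", "+00:00"))
    return int(dt.timestamp())

assert rfc3339_to_unix("1970-01-02T00:00:00Z") == 86400
```

Both the issuer (before signing) and the verifier (when checking revealed values or evaluating predicates) would apply the same function, which is why the algorithm must be deterministic.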

    It is anticipated that the encodings used for CL signatures and their associated transformation algorithms will be used primarily by two entities. First, the issuer will use the transformation algorithm to prepare credential values for signing. Second, the verifier will use the transformation algorithm to verify that revealed values were correctly encoded and signed, and to properly transform values against which predicates may be evaluated.

    "},{"location":"features/0418-rich-schema-encoding/#integer-representation","title":"Integer Representation","text":"

    In order to properly encode values as integers for use in predicate proofs, a common 256-bit integer representation is needed. Predicate proofs are kept simple by requiring all inputs to be represented as positive integers. To accomplish this, we introduce a zero-offset and map all integer results onto a range from 9 to 2^256 - 10. The zero point in this range is 2^255.

    Any transformation algorithm which outputs an integer value should use this representation.
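A sketch of that zero-offset representation: valid encodings occupy [9, 2^256 - 10] with the zero point at 2^255 (values outside this range are reserved or special, as described in the Reserved Values section):

```python
ZERO_POINT = 2**255
MIN_VALID, MAX_VALID = 9, 2**256 - 10

def encode_signed(value: int) -> int:
    """Map a signed integer onto the positive 256-bit range."""
    encoded = ZERO_POINT + value
    if not MIN_VALID <= encoded <= MAX_VALID:
        raise ValueError("integer outside the representable range")
    return encoded

assert encode_signed(0) == 2**255
assert encode_signed(-1) == 2**255 - 1
```

Because the offset is constant, ordering is preserved: if a < b then encode_signed(a) < encode_signed(b), which is what makes range predicates over encoded values work.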

    "},{"location":"features/0418-rich-schema-encoding/#floating-point-representation","title":"Floating Point Representation","text":"

    In order to retain the provided precision of floating point values, we use Q number format, a binary, fixed-point number format. We use 64 fractional bits.
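A sketch of that fixed-point encoding, assuming the float is scaled by 2^64 and rounded (in practice this would then be combined with the zero-offset mapping above to produce the final positive integer):

```python
FRACTIONAL_BITS = 64

def to_q64(value: float) -> int:
    """Encode a float in Q fixed-point format with 64 fractional bits."""
    return round(value * (1 << FRACTIONAL_BITS))

def from_q64(encoded: int) -> float:
    """Recover the float from its Q64 fixed-point encoding."""
    return encoded / (1 << FRACTIONAL_BITS)

assert from_q64(to_q64(0.25)) == 0.25
```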

    "},{"location":"features/0418-rich-schema-encoding/#reserved-values","title":"Reserved Values","text":"

    For integer and floating point representations, there are some reserved numeric strings which have a special meaning.

    Special Value Representation Description -\u221e 8 The largest negative number. Always less than any other valid integer. \u221e 2^256 - 9 The largest positive number. Always greater than any other valid integer. NULL 7 Indicates that the value of a field is not supplied. Not a valid value for comparisons. NaN 2^256 - 8 Floating point NaN. Not a valid value for comparisons. reserved 1 to 6 Reserved for future use. reserved 2^256 - 7 to 2^256 - 1 Reserved for future use."},{"location":"features/0418-rich-schema-encoding/#documentation","title":"Documentation","text":"

    The value of the documentation field is intended to be a URL which, when dereferenced, will provide specific information about the transformation algorithm such that it may be implemented. We recommend that the URL reference some immutable content, such as a specific github commit, an IPFS file, etc.

    "},{"location":"features/0418-rich-schema-encoding/#implementation","title":"Implementation","text":"

    The value of the implementation field is intended to be a URL which, when dereferenced, will provide a reference implementation of the transformation algorithm.

    "},{"location":"features/0418-rich-schema-encoding/#test-vectors","title":"Test Vectors","text":"

    Test vectors are very important. Although not comprehensive, a set of public test vectors allows for multiple implementations to verify adherence to the transformation algorithm for the set. Test vectors should consist of a set of comma-separated input/output pairs. The input values should be read from the file as strings. The output values should be byte strings encoded as hex values.

    The value of the test_vectors field is intended to be a URL which, when dereferenced, will provide the file of test vectors. We recommend that the URL reference some immutable content, such as a specific github commit, an IPFS file, etc.

    "},{"location":"features/0418-rich-schema-encoding/#data-registry-storage","title":"Data Registry Storage","text":"

    Aries will provide a means for writing contexts to and reading contexts from a verifiable data registry (such as a distributed ledger).

    An Encoding object will be written to the ledger in a generic way defined in Rich Schema Objects Common.

    "},{"location":"features/0418-rich-schema-encoding/#aries-data-registry-interface","title":"Aries Data Registry Interface","text":"

    Aries Data Registry Interface methods for adding and retrieving an Encoding object from the ledger comply with the generic approach described in Rich Schema Objects Common.

    This means the following methods can be used: - write_rich_schema_object - read_rich_schema_object_by_id - read_rich_schema_object_by_metadata

    "},{"location":"features/0418-rich-schema-encoding/#reference","title":"Reference","text":"

    The following is a reference implementation of various transformation algorithms.

    Here is the paper that defines Camenisch-Lysyanskaya signatures.

    "},{"location":"features/0418-rich-schema-encoding/#drawbacks","title":"Drawbacks","text":"

    This increases the complexity of issuing verifiable credentials and verifying the accompanying verifiable presentations.

    "},{"location":"features/0418-rich-schema-encoding/#rationale-and-alternatives","title":"Rationale and alternatives","text":"

    Encoding attribute values as integers is already part of using anonymous credentials, however the current method is implicit, and relies on use of a common implementation library for uniformity. If we do not include encodings as part of the Rich Schema effort, we will be left with an incomplete set of possible predicates, a lack of explicit mechanisms for issuers to specify which encoding methods they used, and a corresponding lack of verifiablity of signed attribute values.

    In another design that was considered, the encoding on the ledger was actually a function an end user could call, with the ledger nodes performing the transformation algorithm and returning the encoded value. The benefit of such a design would have been the guarantee of uniformity across encoded values. This design was rejected because of the unfeasibility of using the ledger nodes for such calculations and the privacy implications of submitting attribute values to a public ledger.

    "},{"location":"features/0418-rich-schema-encoding/#prior-art","title":"Prior art","text":"

    A description of a prior effort to add encodings to Indy may be found in this jira ticket and pull request.

    What the prior effort lacked was a corresponding enhancement of schema infrastructure which would have provided the necessary typing of attribute values.

    "},{"location":"features/0418-rich-schema-encoding/#unresolved-questions","title":"Unresolved questions","text":""},{"location":"features/0418-rich-schema-encoding/#implementations","title":"Implementations","text":"

    The following lists the implementations (if any) of this RFC. Please do a pull request to add your implementation. If the implementation is open source, include a link to the repo or to the implementation within the repo. Please be consistent in the \"Name\" field so that a mechanical processing of the RFCs can generate a list of all RFCs supported by an Aries implementation.

    Name / Link Implementation Notes"},{"location":"features/0428-prepare-issue-rich-credential/","title":"0428: Prerequisites to Issue Rich Credential","text":""},{"location":"features/0428-prepare-issue-rich-credential/#summary","title":"Summary","text":"

    Describes the prerequisites an issuer must ensure are in place before issuing a rich credential.

    "},{"location":"features/0428-prepare-issue-rich-credential/#motivation","title":"Motivation","text":"

    To inform issuers of the steps they should take in order to make sure they have the necessary rich schema objects in place before they use them to issue credentials.

    "},{"location":"features/0428-prepare-issue-rich-credential/#tutorial","title":"Tutorial","text":""},{"location":"features/0428-prepare-issue-rich-credential/#rich-schema-credential-workflow","title":"Rich Schema Credential Workflow","text":"
    1. The issuer checks the ledger to see if the credential definition he wants to use is already present.
    2. If not, the issuer checks the ledger to see if the mapping he wants to use is already present.
      1. If not, the issuer checks the ledger to see if the schemas he wants to use are already present.
        1. If not, anchor the context used by each schema to the ledger.
        2. Anchor the schemas on the ledger. Schema objects may refer to one or more context objects.
      2. Anchor to the ledger the mapping object that associates each claim with one or more encoding objects and a corresponding attribute. (The issuer selects schema properties and associated encodings to be included as claims in the credential. Encoding objects refer to transformation algorithms, documentation, and code which implements the transformation. The claim is the data; the attribute is the transformed data represented as a 256 bit integer that is signed. The mapping object refers to the schema objects and encoding objects.)
    3. Anchor a credential definition that refers to a single mapping object. The credential definition contains public keys for each attribute. The credential definition refers to the issuer DID.
    4. Using the credential definition, mapping, and schema(s) issue to the holder a credential based on the credential definition and the supplied claim data. The Issue Credential Protocol 1.0 will be the model for another RFC containing minor modifications to issue a credential using the new rich schema objects.

    Subsequent credentials may be issued by repeating only the last step.
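The check-then-anchor flow above can be sketched as follows, using a plain dict as a stand-in for the ledger; every identifier and field name here is hypothetical, not an Aries API:

```python
def ensure_prerequisites(ledger: dict, cred_def: dict) -> None:
    """Anchor context, schema, mapping, and credential definition objects
    that are not yet present on the (dict stand-in) ledger."""
    if cred_def["id"] in ledger:
        return                                   # step 1: already anchored
    mapping = cred_def["mapping"]
    if mapping["id"] not in ledger:              # step 2: mapping missing
        for schema in mapping["schemas"]:
            if schema["id"] not in ledger:       # step 2.1: schema missing
                ledger[schema["context"]["id"]] = schema["context"]  # 2.1.1
                ledger[schema["id"]] = schema    # step 2.1.2
        ledger[mapping["id"]] = mapping          # step 2.2
    ledger[cred_def["id"]] = cred_def            # step 3
```

Once the credential definition is anchored, the function returns immediately on later calls, mirroring the note that subsequent credentials repeat only the issuance step.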

    "},{"location":"features/0428-prepare-issue-rich-credential/#reference","title":"Reference","text":""},{"location":"features/0428-prepare-issue-rich-credential/#unresolved-questions","title":"Unresolved questions","text":"

    RFCs for Rich Schema Mappings and Rich Schema Credential Definitions are incomplete.

    "},{"location":"features/0428-prepare-issue-rich-credential/#implementations","title":"Implementations","text":"

    The following lists the implementations (if any) of this RFC. Please do a pull request to add your implementation. If the implementation is open source, include a link to the repo or to the implementation within the repo. Please be consistent in the \"Name\" field so that a mechanical processing of the RFCs can generate a list of all RFCs supported by an Aries implementation.

    Name / Link Implementation Notes"},{"location":"features/0429-prepare-req-rich-pres/","title":"0429: Prerequisites to Request Rich Presentation","text":""},{"location":"features/0429-prepare-req-rich-pres/#summary","title":"Summary","text":"

    Describes the prerequisites a verifier must ensure are in place before requesting a rich presentation.

    "},{"location":"features/0429-prepare-req-rich-pres/#motivation","title":"Motivation","text":"

    To inform verifiers of the steps they should take in order to make sure they have the necessary rich schema objects in place before they use them to request proofs.

    "},{"location":"features/0429-prepare-req-rich-pres/#tutorial","title":"Tutorial","text":""},{"location":"features/0429-prepare-req-rich-pres/#rich-schema-presentation-definition-workflow","title":"Rich Schema Presentation Definition Workflow","text":"
    1. The verifier checks his wallet or the ledger to see if the presentation definition already exists. (The verifier determines which attributes or predicates he needs a holder to present to satisfy the verifier's business rules. Presentation definitions specify desired attributes and predicates.)
    2. If not, the verifier creates a new presentation definition and stores the presentation definition in his wallet locally and, optionally, anchors it to the verifiable data registry. (Anchoring the presentation definition to the verifiable data registry allows other verifiers to easily use it. It can be done by writing the full presentation definition's content to the ledger, or just writing a digital fingerprint/hash of the content.)
    3. Using the presentation definition, request a presentation from the holder. The Present Proof Protocol 1.0 will be the model for another RFC containing minor modifications for presenting a proof based on verifiable credentials using the new rich schema objects.
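The fingerprint option in step 2 can be sketched as follows; the dict registry and the choice of SHA-256 over canonical JSON are assumptions for illustration, not specified by the RFC:

```python
import hashlib
import json

def anchor_fingerprint(registry: dict, presentation_definition: dict) -> str:
    """Anchor a digital fingerprint (hash) of a presentation definition
    instead of its full content, so other verifiers can verify it by hash."""
    canonical = json.dumps(presentation_definition, sort_keys=True)
    digest = hashlib.sha256(canonical.encode()).hexdigest()
    registry[digest] = canonical  # or store only the hash as a key
    return digest
```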
    "},{"location":"features/0429-prepare-req-rich-pres/#reference","title":"Reference","text":""},{"location":"features/0429-prepare-req-rich-pres/#unresolved-questions","title":"Unresolved questions","text":"

    The RFC for Rich Schema Presentation Definitions is incomplete.

    "},{"location":"features/0429-prepare-req-rich-pres/#implementations","title":"Implementations","text":"

    The following lists the implementations (if any) of this RFC. Please do a pull request to add your implementation. If the implementation is open source, include a link to the repo or to the implementation within the repo. Please be consistent in the \"Name\" field so that a mechanical processing of the RFCs can generate a list of all RFCs supported by an Aries implementation.

    Name / Link Implementation Notes"},{"location":"features/0434-outofband/","title":"Aries RFC 0434: Out-of-Band Protocol 1.1","text":""},{"location":"features/0434-outofband/#summary","title":"Summary","text":"

    The Out-of-band protocol is used when you wish to engage with another agent and you don't have a DIDComm connection to use for the interaction.

    "},{"location":"features/0434-outofband/#motivation","title":"Motivation","text":"

    The use of the invitation in the Connection and DID Exchange protocols has been relatively successful, but has some shortcomings, as follows.

    "},{"location":"features/0434-outofband/#connection-reuse","title":"Connection Reuse","text":"

    A common pattern we have seen in the early days of Aries agents is a user with a browser getting to a point where a connection is needed between the website's (enterprise) agent and the user's mobile agent. A QR invitation is displayed, scanned and a protocol is executed to establish a connection. Life is good!

    However, with the current invitation processes, when the same user returns to the same page, the same process is executed (QR code, scan, etc.) and a new connection is created between the two agents. There is no way for the user's agent to say \"Hey, I've already got a connection with you. Let's use that one!\"

    We need the ability to reuse a connection.

    "},{"location":"features/0434-outofband/#connection-establishment-versioning","title":"Connection Establishment Versioning","text":"

    In the existing Connections and DID Exchange invitation handling, the inviter dictates what connection establishment protocol all invitees will use. A more sustainable approach is for the inviter to offer the invitee a list of supported protocols and allow the invitee to use one that it supports.

    "},{"location":"features/0434-outofband/#handling-of-all-out-of-band-messages","title":"Handling of all Out-of-Band Messages","text":"

    We currently have two sets of out-of-band messages that cannot be delivered via DIDComm because there is no channel. We'd like to align those messages into a single \"out-of-band\" protocol so that their handling can be harmonized inside an agent, and a common QR code handling mechanism can be used.

    "},{"location":"features/0434-outofband/#urls-and-qr-code-handling","title":"URLs and QR Code Handling","text":"

    We'd like to have the specification of QR handling harmonized into a single RFC (this one).

    "},{"location":"features/0434-outofband/#tutorial","title":"Tutorial","text":""},{"location":"features/0434-outofband/#key-concepts","title":"Key Concepts","text":"

    The Out-of-band protocol is used when an agent doesn't know if it has a connection with another agent. This could be because you are trying to establish a new connection with that agent, you have connections but don't know who the other party is, or if you want to have a connection-less interaction. Since there is no DIDComm connection to use for the messages of this protocol, the messages are plaintext and sent out-of-band, such as via a QR code, in an email message or any other available channel. Since the delivery of out-of-band messages will often be via QR codes, this RFC also covers the use of QR codes.

    Two well known use cases for using an out-of-band protocol are:

    In both cases, there is only a single out-of-band protocol message sent. The message responding to the out-of-band message is a DIDComm message from an appropriate protocol.

    Note that the website-to-agent model is not the only such interaction enabled by the out-of-band protocol, and a QR code is not the only delivery mechanism for out-of-band messages. However, they are useful as examples of the purpose of the protocol.

    "},{"location":"features/0434-outofband/#roles","title":"Roles","text":"

    The out-of-band protocol has two roles: sender and receiver.

    "},{"location":"features/0434-outofband/#sender","title":"sender","text":"

    The agent that generates the out-of-band message and makes it available to the other party.

    "},{"location":"features/0434-outofband/#receiver","title":"receiver","text":"

    The agent that receives the out-of-band message and decides how to respond. There is no out-of-band protocol message with which the receiver will respond. Rather, if they respond, they will use a message from another protocol that the sender understands.

    "},{"location":"features/0434-outofband/#states","title":"States","text":"

    The state machines for the sender and receiver are a bit odd for the out-of-band protocol because it consists of a single message that kicks off a co-protocol and ends when evidence of the co-protocol's launch is received, in the form of some response. In the following state machine diagrams we generically describe the response message from the receiver as being a DIDComm message.

    The sender state machine is as follows:

    Note the \"optional\" reference under the second event in the await-response state. That is to indicate that an out-of-band message might be a single use message with a transition to done, or reusable message (received by many receivers) with a transition back to await-response.

    The receiver state machine is as follows:

    Worth noting is the first event of the done state, where the receiver may receive the message multiple times. This represents, for example, an agent returning to the same website and being greeted with instances of the same QR code each time.

    "},{"location":"features/0434-outofband/#messages","title":"Messages","text":"

    The out-of-band protocol consists of a single message that is sent by the sender.

    "},{"location":"features/0434-outofband/#invitation-httpsdidcommorgout-of-bandverinvitation","title":"Invitation: https://didcomm.org/out-of-band/%VER/invitation","text":"
    {\n  \"@type\": \"https://didcomm.org/out-of-band/%VER/invitation\",\n  \"@id\": \"<id used for context as pthid>\",\n  \"label\": \"Faber College\",\n  \"goal_code\": \"issue-vc\",\n  \"goal\": \"To issue a Faber College Graduate credential\",\n  \"accept\": [\n    \"didcomm/aip2;env=rfc587\",\n    \"didcomm/aip2;env=rfc19\"\n  ],\n  \"handshake_protocols\": [\n    \"https://didcomm.org/didexchange/1.0\",\n    \"https://didcomm.org/connections/1.0\"\n  ],\n  \"requests~attach\": [\n    {\n      \"@id\": \"request-0\",\n      \"mime-type\": \"application/json\",\n      \"data\": {\n        \"json\": \"<json of protocol message>\"\n      }\n    }\n  ],\n  \"services\": [\"did:sov:LjgpST2rjsoxYegQDRm7EL\"]\n}\n

    The items in the message are:

    If only the handshake_protocols item is included, the initial interaction will complete with the establishment (or reuse) of the connection. Either side may then use that connection for any purpose. A common use case (but not required) would be for the sender to initiate another protocol after the connection is established to accomplish some shared goal.

    If only the requests~attach item is included, no new connection is expected to be created, although one could be used if the receiver knows such a connection already exists. The receiver responds to one of the messages in the requests~attach array. The requests~attach item might include the first message of a protocol from the sender, or might be a please-play-the-role message requesting the receiver initiate a protocol. If the protocol requires a further response from the sender to the receiver, the receiver must include a ~service decorator for the sender to use in responding.

    If both the handshake_protocols and requests~attach items are included in the message, the receiver should first establish a connection and then respond (using that connection) to one of the messages in the requests~attach message. If a connection already exists between the parties, the receiver may respond immediately to the requests~attach message using the established connection.

    "},{"location":"features/0434-outofband/#reuse-messages","title":"Reuse Messages","text":"

    While the receiver is expected to respond with an initiating message from a handshake_protocols or requests~attach item using an offered service, the receiver may be able to respond by reusing an existing connection. Specifically, if an existing connection was created from an out-of-band invitation with the same services DID as the new invitation message, the connection MAY be reused. The receiver may choose not to reuse the existing connection for privacy reasons and instead repeat a handshake protocol, producing a redundant connection.

    If a message has a service block instead of a DID in the services list, you may enable reuse by encoding the key and endpoint of the service block in a Peer DID numalgo 2 and using that DID instead of a service block.

    If the receiver desires to reuse the existing connection and a requests~attach item is included in the message, the receiver SHOULD respond to one of the attached messages using the existing connection.

    If the receiver desires to reuse the existing connection and no requests~attach item is included in the message, the receiver SHOULD attempt to do so with the reuse and reuse-accepted messages. This will notify the inviter that the existing connection should be used, along with the context that can be used for follow-on interactions.

    While the invitation message is passed unencrypted and out-of-band, both the handshake-reuse and handshake-reuse-accepted messages MUST be encrypted and transmitted as normal DIDComm messages.

    "},{"location":"features/0434-outofband/#reuse-httpsdidcommorgout-of-bandverhandshake-reuse","title":"Reuse: https://didcomm.org/out-of-band/%VER/handshake-reuse","text":"
    {\n  \"@type\": \"https://didcomm.org/out-of-band/%VER/handshake-reuse\",\n  \"@id\": \"<id>\",\n  \"~thread\": {\n    \"thid\": \"<same as @id>\",\n    \"pthid\": \"<The @id of the Out-of-Band invitation>\"\n  }\n}\n

    The items in the message are:

    Sending or receiving this message does not change the state of the existing connection.

    When the inviter receives the handshake-reuse message, they MUST respond with a handshake-reuse-accepted message to notify the invitee that the request to reuse the existing connection was successful.

    "},{"location":"features/0434-outofband/#reuse-accepted-httpsdidcommorgout-of-bandverhandshake-reuse-accepted","title":"Reuse Accepted: https://didcomm.org/out-of-band/%VER/handshake-reuse-accepted","text":"
    {\n  \"@type\": \"https://didcomm.org/out-of-band/%VER/handshake-reuse-accepted\",\n  \"@id\": \"<id>\",\n  \"~thread\": {\n    \"thid\": \"<The Message @id of the reuse message>\",\n    \"pthid\": \"<The @id of the Out-of-Band invitation>\"\n  }\n}\n

    The items in the message are:

    If the invitee does not receive this message, they should fall back to the regular process of forming a new connection. This message is a mechanism by which the invitee can detect a situation where the inviter no longer has a record of the connection and is unable to decrypt and process the handshake-reuse message.

    After sending this message, the inviter may continue any desired protocol interactions based on the context matched by the pthid present in the handshake-reuse message.

    "},{"location":"features/0434-outofband/#responses","title":"Responses","text":"

    The following table summarizes the different forms of the out-of-band invitation message depending on the presence (or not) of the handshake_protocols item, the requests~attach item and whether or not a connection between the agents already exists.

    handshake_protocols Present? requests~attach Present? Existing connection? Receiver action(s) No No No Impossible Yes No No Uses the first supported protocol from handshake_protocols to make a new connection using the first supported services entry. No Yes No Send a response to the first supported request message using the first supported services entry. Include a ~service decorator if the sender is expected to respond. No No Yes Impossible Yes Yes No Use the first supported protocol from handshake_protocols to make a new connection using the first supported services entry, and then send a response message to the first supported attachment message using the new connection. Yes No Yes Send a handshake-reuse message. No Yes Yes Send a response message to the first supported request message using the existing connection. Yes Yes Yes Send a response message to the first supported request message using the existing connection.

    Both the goal_code and goal fields SHOULD be used with the localization service decorator. The two fields are to enable both human and machine handling of the out-of-band message. goal_code is to specify a generic, protocol level outcome for sending the out-of-band message (e.g. issue verifiable credential, request proof, etc.) that is suitable for machine handling and possibly human display, while goal provides context specific guidance, targeting mainly a person controlling the receiver's agent. The list of goal_code values is provided in the Message Catalog section of this RFC.

    "},{"location":"features/0434-outofband/#the-services-item","title":"The services Item","text":"

    As mentioned in the description above, the services item array is intended to be analogous to the service block of a DIDDoc. When not reusing an existing connection, the receiver scans the array and selects (according to the rules described below) a service entry to use for the response to the out-of-band message.

    There are two forms of entries in the services item array:

    The following is an example of a two entry array, one of each form:

    {\n  \"@type\": \"https://didcomm.org/out-of-band/%VER/invitation\",\n  \"@id\": \"<id used for context as pthid>\",\n  \"label\": \"Faber College\",\n  \"handshake_protocols\": [\"https://didcomm.org/didexchange/1.0\"],\n  \"services\": [\n    {\n      \"id\": \"#inline\",\n      \"type\": \"did-communication\",\n      \"recipientKeys\": [\"did:key:z6MkpTHR8VNsBxYAAWHut2Geadd9jSwuBV8xRoAnwWsdvktH\"],\n      \"routingKeys\": [],\n      \"serviceEndpoint\": \"https://example.com:5000\"\n    },\n    \"did:sov:LjgpST2rjsoxYegQDRm7EL\"\n  ]\n}\n

    The processing rules for the services block are:

    The attributes in the inline form parallel the attributes of a DID Document for increased meaning. The recipientKeys and routingKeys within the inline service block MUST be did:key references.

    As defined in the DIDComm Cross Domain Messaging RFC, if routingKeys is present and non-empty, additional forward message wrapping is necessary in the response message.

    When considering routing and options for out-of-band messages, keep in mind that the more detail in the message, the longer the URL will be and (if used) the more dense (and harder to scan) the QR code will be.

    "},{"location":"features/0434-outofband/#service-endpoint","title":"Service Endpoint","text":"

    The service endpoint used to transmit the response is either present in the out-of-band message or available in the DID Document of a presented DID. If the endpoint is itself a DID, the serviceEndpoint in the DIDDoc of the resolved DID MUST be a URI, and the recipientKeys MUST contain a single key. That key is appended to the end of the list of routingKeys for processing. For more information about message forwarding and routing, see RFC 0094 Cross Domain Messaging.
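    The endpoint-resolution rule above can be sketched in Python. Note this is an illustrative sketch: `effective_routing` is a hypothetical helper name, and the `resolve_did` callback stands in for whatever resolver returns the did-communication service of the resolved DID's DIDDoc.

```python
def effective_routing(service, resolve_did):
    """Resolve the serviceEndpoint for sending a response, following one
    level of DID indirection: when the endpoint is itself a DID, the
    resolved serviceEndpoint must be a URI, and the DID's single
    recipient key is appended to the end of the routingKeys list."""
    endpoint = service["serviceEndpoint"]
    routing = list(service.get("routingKeys", []))
    if isinstance(endpoint, str) and endpoint.startswith("did:"):
        resolved = resolve_did(endpoint)  # hypothetical resolver callback
        keys = resolved["recipientKeys"]
        if len(keys) != 1:
            raise ValueError("endpoint DID must declare exactly one recipientKey")
        routing.append(keys[0])
        endpoint = resolved["serviceEndpoint"]
    return endpoint, routing
```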

    "},{"location":"features/0434-outofband/#adoption-messages","title":"Adoption Messages","text":"

    The problem_report message MAY be adopted by the out-of-band protocol if the agent wants to respond with problem reports to invalid messages, such as attempting to reuse a single-use invitation.

    "},{"location":"features/0434-outofband/#constraints","title":"Constraints","text":"

    An existing connection can only be reused based on a DID in the services list in an out-of-band message.

    "},{"location":"features/0434-outofband/#reference","title":"Reference","text":""},{"location":"features/0434-outofband/#messages-reference","title":"Messages Reference","text":"

    The full description of the message in this protocol can be found in the Tutorial section of this RFC.

    "},{"location":"features/0434-outofband/#localization","title":"Localization","text":"

    The goal_code and goal fields SHOULD have localization applied. See the purpose of those fields in the message type definitions section and the message catalog section (immediately below).

    "},{"location":"features/0434-outofband/#message-catalog","title":"Message Catalog","text":""},{"location":"features/0434-outofband/#goal_code","title":"goal_code","text":"

    The following values are defined for the goal_code field:

    Code (cd) English (en) issue-vc To issue a credential request-proof To request a proof create-account To create an account with a service p2p-messaging To establish a peer-to-peer messaging relationship"},{"location":"features/0434-outofband/#goal","title":"goal","text":"

    The goal localization values are use case specific and localization is left to the agent implementor to enable using the techniques defined in the ~l10n RFC.

    "},{"location":"features/0434-outofband/#roles-reference","title":"Roles Reference","text":"

    The roles are defined in the Tutorial section of this RFC.

    "},{"location":"features/0434-outofband/#states-reference","title":"States Reference","text":""},{"location":"features/0434-outofband/#initial","title":"initial","text":"

    No out-of-band messages have been sent.

    "},{"location":"features/0434-outofband/#await-response","title":"await-response","text":"

    The sender has shared an out-of-band message with the intended receiver(s), and the sender has not yet received all of the responses. For a single-use out-of-band message, there will be only one response; for a multi-use out-of-band message, there is no defined limit on the number of responses.

    "},{"location":"features/0434-outofband/#prepare-response","title":"prepare-response","text":"

    The receiver has received the out-of-band message and is preparing a response. The response will not be an out-of-band protocol message, but a message from another protocol chosen based on the contents of the out-of-band message.

    "},{"location":"features/0434-outofband/#done","title":"done","text":"

    The out-of-band protocol has been completed. Note that if the out-of-band message was intended to be available to many receivers (a multiple use message), the sender returns to the await-response state rather than going to the done state.

    "},{"location":"features/0434-outofband/#errors","title":"Errors","text":"

    There is an optional courtesy error message stemming from an out-of-band message that the sender could provide if they have sufficient recipient information. If the out-of-band message is a single-use message, the sender receives multiple responses, and each receiver's response includes a way for the sender to respond with a DIDComm message, then all but the first MAY be answered with a problem_report.

    "},{"location":"features/0434-outofband/#error-message-example","title":"Error Message Example","text":"
    {\n  \"@type\": \"https://didcomm.org/out-of-band/%VER/problem_report\",\n  \"@id\": \"5678876542345\",\n  \"~thread\": { \"pthid\": \"<@id of the OutofBand message>\" },\n  \"description\": {\n    \"en\": \"The invitation has expired.\",\n    \"code\": \"expired-invitation\"\n  },\n  \"impact\": \"thread\"\n}\n

    See the problem-report protocol for details on the items in the example.

    "},{"location":"features/0434-outofband/#flow-overview","title":"Flow Overview","text":"

    In an out-of-band message the sender gives information to the receiver about the kind of DIDComm protocol response messages it can handle and how to deliver the response. The receiver uses that information to determine what DIDComm protocol/message to use in responding to the sender, and (from the service item or an existing connection) how to deliver the response to the sender.

    The handling of the response is specified by the protocol used.

    To Do: Make sure that the following remains in the DID Exchange/Connections RFCs

    Any Published DID that expresses support for DIDComm by defining a service that follows the DIDComm conventions serves as an implicit invitation. If an invitee wishes to connect to any Published DID, they need not wait for an out-of-band invitation message. Rather, they can designate their own label and initiate the appropriate protocol (e.g. 0160-Connections or 0023-DID-Exchange) for establishing a connection.

    "},{"location":"features/0434-outofband/#standard-out-of-band-message-encoding","title":"Standard Out-of-Band Message Encoding","text":"

    Using a standard out-of-band message encoding allows for easier interoperability between multiple projects and software platforms. Using a URL for that standard encoding provides a built-in fallback flow for users who are unable to automatically process the message. Those new users will load the URL in a browser as a default behavior, and may be presented with instructions on how to install software capable of processing the message. Users who are already onboarded will be able to process the message without loading it in a browser, via mobile app URL capture or via capability detection after it is loaded in a browser.

    The standard out-of-band message format is a URL with a Base64Url encoded JSON object as a query parameter.

    Please note the difference between Base64Url and Base64 encoding.

    The URL format is as follows, with some elements described below:

    https://<domain>/<path>?oob=<outofbandMessage>\n

    <domain> and <path> should be kept as short as possible, and the full URL SHOULD return human readable instructions when loaded in a browser. This is intended to aid new users. The oob query parameter is required and is reserved to contain the out-of-band message string. Additional path elements or query parameters are allowed, and can be leveraged to provide coupons or other promise of payment for new users.

    To do: We need to rationalize this https:// approach with the use of a special protocol (e.g. didcomm://) that will enable handling of the URL on mobile devices to automatically invoke an installed app on both Android and iOS. A user must be able to process the out-of-band message on the device of the agent (e.g. when the mobile device can't scan the QR code because it is displayed on a web page on that same device).

    The <outofbandMessage> is an agent plaintext message (not a DIDComm message) that has been Base64Url encoded such that the resulting string can be safely used in a URL.

    outofband_message = base64UrlEncode(<outofbandMessage>)\n

    During Base64Url encoding, whitespace from the JSON string SHOULD be eliminated to keep the resulting out-of-band message string as short as possible.
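    The encoding steps above can be sketched in Python using only the standard library; `encode_oob_url` is a hypothetical helper name. Compact JSON separators remove the whitespace, and `base64.urlsafe_b64encode` produces the Base64Url alphabet (with padding, as in the worked example in this RFC).

```python
import base64
import json

def encode_oob_url(base_url: str, message: dict) -> str:
    """Encode an out-of-band message as the `oob` query parameter of a URL.

    Compact separators strip whitespace from the JSON string, keeping the
    resulting out-of-band message string as short as possible."""
    compact = json.dumps(message, separators=(",", ":"))
    encoded = base64.urlsafe_b64encode(compact.encode("utf-8")).decode("ascii")
    return f"{base_url}?oob={encoded}"

invitation = {
    "@type": "https://didcomm.org/out-of-band/1.0/invitation",
    "@id": "69212a3a-d068-4f9d-a2dd-4741bca89af3",
    "label": "Faber College",
    "services": ["did:sov:LjgpST2rjsoxYegQDRm7EL"],
}
url = encode_oob_url("http://example.com/ssi", invitation)
```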

    "},{"location":"features/0434-outofband/#example-out-of-band-message-encoding","title":"Example Out-of-Band Message Encoding","text":"

    Invitation:

    {\n  \"@type\": \"https://didcomm.org/out-of-band/1.0/invitation\",\n  \"@id\": \"69212a3a-d068-4f9d-a2dd-4741bca89af3\",\n  \"label\": \"Faber College\",\n  \"goal_code\": \"issue-vc\",\n  \"goal\": \"To issue a Faber College Graduate credential\",\n  \"handshake_protocols\": [\"https://didcomm.org/didexchange/1.0\", \"https://didcomm.org/connections/1.0\"],\n  \"services\": [\"did:sov:LjgpST2rjsoxYegQDRm7EL\"]\n}\n

    Whitespace removed:

    {\"@type\":\"https://didcomm.org/out-of-band/1.0/invitation\",\"@id\":\"69212a3a-d068-4f9d-a2dd-4741bca89af3\",\"label\":\"Faber College\",\"goal_code\":\"issue-vc\",\"goal\":\"To issue a Faber College Graduate credential\",\"handshake_protocols\":[\"https://didcomm.org/didexchange/1.0\",\"https://didcomm.org/connections/1.0\"],\"services\":[\"did:sov:LjgpST2rjsoxYegQDRm7EL\"]}\n

    Base64Url encoded:

    eyJAdHlwZSI6Imh0dHBzOi8vZGlkY29tbS5vcmcvb3V0LW9mLWJhbmQvMS4wL2ludml0YXRpb24iLCJAaWQiOiI2OTIxMmEzYS1kMDY4LTRmOWQtYTJkZC00NzQxYmNhODlhZjMiLCJsYWJlbCI6IkZhYmVyIENvbGxlZ2UiLCJnb2FsX2NvZGUiOiJpc3N1ZS12YyIsImdvYWwiOiJUbyBpc3N1ZSBhIEZhYmVyIENvbGxlZ2UgR3JhZHVhdGUgY3JlZGVudGlhbCIsImhhbmRzaGFrZV9wcm90b2NvbHMiOlsiaHR0cHM6Ly9kaWRjb21tLm9yZy9kaWRleGNoYW5nZS8xLjAiLCJodHRwczovL2RpZGNvbW0ub3JnL2Nvbm5lY3Rpb25zLzEuMCJdLCJzZXJ2aWNlcyI6WyJkaWQ6c292OkxqZ3BTVDJyanNveFllZ1FEUm03RUwiXX0=\n

    Example URL with Base64Url encoded message:

    http://example.com/ssi?oob=eyJAdHlwZSI6Imh0dHBzOi8vZGlkY29tbS5vcmcvb3V0LW9mLWJhbmQvMS4wL2ludml0YXRpb24iLCJAaWQiOiI2OTIxMmEzYS1kMDY4LTRmOWQtYTJkZC00NzQxYmNhODlhZjMiLCJsYWJlbCI6IkZhYmVyIENvbGxlZ2UiLCJnb2FsX2NvZGUiOiJpc3N1ZS12YyIsImdvYWwiOiJUbyBpc3N1ZSBhIEZhYmVyIENvbGxlZ2UgR3JhZHVhdGUgY3JlZGVudGlhbCIsImhhbmRzaGFrZV9wcm90b2NvbHMiOlsiaHR0cHM6Ly9kaWRjb21tLm9yZy9kaWRleGNoYW5nZS8xLjAiLCJodHRwczovL2RpZGNvbW0ub3JnL2Nvbm5lY3Rpb25zLzEuMCJdLCJzZXJ2aWNlcyI6WyJkaWQ6c292OkxqZ3BTVDJyanNveFllZ1FEUm03RUwiXX0=\n

    Out-of-band message URLs can be transferred via any method that can send text, including an email, SMS, posting on a website, or QR Code.

    Example URL encoded as a QR Code:

    Example Email Message:

    To: alice@alum.faber.edu\nFrom: studentrecords@faber.edu\nSubject: Your request to connect and receive your graduate verifiable credential\n\nDear Alice,\n\nTo receive your Faber College graduation certificate, click here to [connect](http://example.com/ssi?oob=eyJAdHlwZSI6Imh0dHBzOi8vZGlkY29tbS5vcmcvb3V0LW9mLWJhbmQvMS4wL2ludml0YXRpb24iLCJAaWQiOiI2OTIxMmEzYS1kMDY4LTRmOWQtYTJkZC00NzQxYmNhODlhZjMiLCJsYWJlbCI6IkZhYmVyIENvbGxlZ2UiLCJnb2FsX2NvZGUiOiJpc3N1ZS12YyIsImdvYWwiOiJUbyBpc3N1ZSBhIEZhYmVyIENvbGxlZ2UgR3JhZHVhdGUgY3JlZGVudGlhbCIsImhhbmRzaGFrZV9wcm90b2NvbHMiOlsiaHR0cHM6Ly9kaWRjb21tLm9yZy9kaWRleGNoYW5nZS8xLjAiLCJodHRwczovL2RpZGNvbW0ub3JnL2Nvbm5lY3Rpb25zLzEuMCJdLCJzZXJ2aWNlcyI6WyJkaWQ6c292OkxqZ3BTVDJyanNveFllZ1FEUm03RUwiXX0=) with us, or paste the following into your browser:\n\nhttp://example.com/ssi?oob=eyJAdHlwZSI6Imh0dHBzOi8vZGlkY29tbS5vcmcvb3V0LW9mLWJhbmQvMS4wL2ludml0YXRpb24iLCJAaWQiOiI2OTIxMmEzYS1kMDY4LTRmOWQtYTJkZC00NzQxYmNhODlhZjMiLCJsYWJlbCI6IkZhYmVyIENvbGxlZ2UiLCJnb2FsX2NvZGUiOiJpc3N1ZS12YyIsImdvYWwiOiJUbyBpc3N1ZSBhIEZhYmVyIENvbGxlZ2UgR3JhZHVhdGUgY3JlZGVudGlhbCIsImhhbmRzaGFrZV9wcm90b2NvbHMiOlsiaHR0cHM6Ly9kaWRjb21tLm9yZy9kaWRleGNoYW5nZS8xLjAiLCJodHRwczovL2RpZGNvbW0ub3JnL2Nvbm5lY3Rpb25zLzEuMCJdLCJzZXJ2aWNlcyI6WyJkaWQ6c292OkxqZ3BTVDJyanNveFllZ1FEUm03RUwiXX0=\n\nIf you don't have an identity agent for holding credentials, you will be given instructions on how you can get one.\n\nThanks,\n\nFaber College\nKnowledge is Good\n
    "},{"location":"features/0434-outofband/#url-shortening","title":"URL Shortening","text":"

    It seems inevitable that the length of some out-of-band messages will be too long to produce a usable QR code. Techniques to avoid unusable QR codes have been presented above, including using attachment links for requests, minimizing the routing of the response, and eliminating unnecessary whitespace in the JSON. However, at some point a sender may need to generate a very long URL. In that case, a DIDComm specific URL shortener redirection should be implemented by the sender as follows:

    It will always be possible to generate a usable QR code from the shortened form of the URL.

    "},{"location":"features/0434-outofband/#url-shortening-caveats","title":"URL Shortening Caveats","text":"

    Some HTTP libraries don't support preventing redirects from occurring on receipt of a 301 or 302. In that case the redirect is followed automatically, resulting in a response that MAY have a status of 200 and MAY contain a URL that can be processed as a normal out-of-band message.

    If the agent performs an HTTP GET with an Accept header requesting the application/json MIME type, the response can either contain the message as JSON or result in a redirect. Processing of the response should determine which response type was received and handle the message accordingly.
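    One way to handle the possible outcomes is to classify the shortener's response before acting on it. The sketch below uses the hypothetical helper name `classify_shortener_response` (not part of any Aries API) and assumes the caller has already performed the HTTP GET and captured the status, headers, and body.

```python
import json

def classify_shortener_response(status: int, headers: dict, body: str):
    """Decide how to handle a URL-shortener response.

    Returns ("redirect", location) for a 301/302, ("message", parsed) when
    the body carries the out-of-band message as JSON (e.g. the server
    honored Accept: application/json), or ("url", text) when a library
    followed the redirect automatically and returned the long-form URL,
    to be processed as a normal out-of-band message URL."""
    if status in (301, 302):
        return ("redirect", headers.get("Location"))
    if "application/json" in headers.get("Content-Type", ""):
        return ("message", json.loads(body))
    return ("url", body.strip())
```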

    "},{"location":"features/0434-outofband/#out-of-band-message-publishing","title":"Out-of-Band Message Publishing","text":"

    The sender will publish or transmit the out-of-band message URL in a manner available to the intended receiver. After publishing, the sender is in the await-response state, while the receiver is in the prepare-response state.

    "},{"location":"features/0434-outofband/#out-of-band-message-processing","title":"Out-of-Band Message Processing","text":"

    If the receiver receives an out-of-band message in the form of a QR code, the receiver should attempt to decode the QR code to an out-of-band message URL for processing.

    When the receiver receives the out-of-band message URL, there are two possible user flows, depending on whether the individual has an Aries agent. If the individual is new to Aries, they will likely load the URL in a browser. The resulting page SHOULD contain instructions on how to get started by installing an Aries agent. That install flow will transfer the out-of-band message to the newly installed software.

    A user who has already completed those steps will have the URL received directly by their software. That software will attempt to base64url decode the string and can read the out-of-band message directly from the oob query parameter, without loading the URL. If this process fails, the software should attempt the steps to process a shortened URL.

    NOTE: In receiving the out-of-band message, the base64url decode implementation used MUST correctly decode padded and unpadded base64URL encoded data.
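    A decoder tolerant of both padded and unpadded input can be obtained by re-adding any missing `=` characters before calling the standard library decoder; `decode_oob_param` is a hypothetical helper name for this sketch.

```python
import base64
import json

def decode_oob_param(oob: str) -> dict:
    """Decode the `oob` query parameter of an out-of-band message URL.

    `-len(oob) % 4` computes how many '=' characters are needed to pad
    the string to a multiple of four, so both padded and unpadded
    Base64Url input decode correctly."""
    padded = oob + "=" * (-len(oob) % 4)
    return json.loads(base64.urlsafe_b64decode(padded))
```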

    If the receiver wants to respond to the out-of-band message, they will use the information in the message to prepare the request, including:

    "},{"location":"features/0434-outofband/#correlating-responses-to-out-of-band-messages","title":"Correlating responses to Out-of-Band messages","text":"

    The response to an out-of-band message MUST set its ~thread.pthid equal to the @id property of the out-of-band message.

    Example referencing an explicit invitation:

    {\n  \"@id\": \"a46cdd0f-a2ca-4d12-afbf-2e78a6f1f3ef\",\n  \"@type\": \"https://didcomm.org/didexchange/1.0/request\",\n  \"~thread\": { \"pthid\": \"032fbd19-f6fd-48c5-9197-ba9a47040470\" },\n  \"label\": \"Bob\",\n  \"did\": \"B.did@B:A\",\n  \"did_doc~attach\": {\n    \"base64\": \"eyJ0eXAiOiJKV1Qi... (bytes omitted)\",\n    \"jws\": {\n      \"header\": {\n        \"kid\": \"did:key:z6MkmjY8GnV5i9YTDtPETC2uUAW6ejw3nk5mXF5yci5ab7th\"\n      },\n      \"protected\": \"eyJhbGciOiJFZERTQSIsImlhdCI6MTU4Mzg4... (bytes omitted)\",\n      \"signature\": \"3dZWsuru7QAVFUCtTd0s7uc1peYEijx4eyt5... (bytes omitted)\"\n    }\n  }\n}\n
    "},{"location":"features/0434-outofband/#response-transmission","title":"Response Transmission","text":"

    The response message from the receiver is encoded according to the standards of the DIDComm encryption envelope, using the service block present in (or resolved from) the out-of-band invitation.

    "},{"location":"features/0434-outofband/#reusing-connections","title":"Reusing Connections","text":"

    If an out-of-band invitation has a DID in the services block, and the receiver determines it has previously established a connection with that DID, the receiver MAY send its response on the established connection. See Reuse Messages for details.

    "},{"location":"features/0434-outofband/#receiver-error-handling","title":"Receiver Error Handling","text":"

    If the receiver is unable to process the out-of-band message, the receiver may respond with a Problem Report identifying the problem using a DIDComm message. As with any response, the pthid of the ~thread decorator MUST be the @id of the out-of-band message. The problem report MUST be in the protocol of an expected response. An example of an error that might come up is that the receiver is not able to handle any of the proposed protocols in the out-of-band message. The receiver MAY include in the problem report a ~service decorator that allows the sender to respond to the out-of-band message with a DIDComm message.

    "},{"location":"features/0434-outofband/#response-processing","title":"Response processing","text":"

    The sender MAY look up the corresponding out-of-band message identified in the response's ~thread.pthid to determine whether it should accept the response. Information about the related out-of-band message protocol may be required to provide the sender with context about processing the response and what to do after the protocol completes.

    "},{"location":"features/0434-outofband/#sender-error-handling","title":"Sender Error Handling","text":"

    If the sender receives a Problem Report message from the receiver, the sender has several options for responding. The sender will receive the message as part of an offered protocol in the out-of-band message.

    If the receiver did not include a ~service decorator in the response, the sender can only respond if it is still in session with the receiver. For example, if the sender is a website that displayed a QR code for the receiver to scan, the sender could create a new, presumably adjusted, out-of-band message, encode it and present it to the user in the same way as before.

    If the receiver included a ~service decorator in the response, the sender can provide a new message to the receiver, even a new version of the original out-of-band message, and send it to the receiver. The new message MUST include a ~thread decorator with the thid set to the @id from the problem report message.

    "},{"location":"features/0434-outofband/#drawbacks","title":"Drawbacks","text":""},{"location":"features/0434-outofband/#prior-art","title":"Prior art","text":""},{"location":"features/0434-outofband/#unresolved-questions","title":"Unresolved questions","text":""},{"location":"features/0434-outofband/#implementations","title":"Implementations","text":"

    The following lists the implementations (if any) of this RFC. Please do a pull request to add your implementation. If the implementation is open source, include a link to the repo or to the implementation within the repo. Please be consistent in the \"Name\" field so that a mechanical processing of the RFCs can generate a list of all RFCs supported by an Aries implementation.

    Name / Link Implementation Notes"},{"location":"features/0445-rich-schema-mapping/","title":"Aries RFC 0445: Aries Rich Schema Mapping","text":""},{"location":"features/0445-rich-schema-mapping/#summary","title":"Summary","text":"

    Mappings serve as a bridge between rich schemas and the flat array of signed integers. A mapping specifies the order in which attributes are transformed and signed. It consists of a set of graph paths and the encoding used for the attribute values specified by those graph paths. Each claim in a mapping has a reference to an encoding, and those encodings are defined in encoding objects.

    Mapping objects are processed in a generic way defined in Rich Schema Objects Common.

    "},{"location":"features/0445-rich-schema-mapping/#motivation","title":"Motivation","text":"

    Rich schemas are complex, hierarchical, and possibly nested objects. The Camenisch-Lysyanskaya signature scheme used by Indy requires the attributes to be represented by an array of 256-bit integers. Converting data specified by a rich schema into a flat array of integers requires a mapping object.

    "},{"location":"features/0445-rich-schema-mapping/#tutorial","title":"Tutorial","text":""},{"location":"features/0445-rich-schema-mapping/#intro-to-mappings","title":"Intro to Mappings","text":"

    Mappings are written to the ledger so they can be shared by multiple credential definitions. A Credential Definition may only reference a single Mapping.

    One or more Mappings can be referenced by a Presentation Definition. The mappings serve as a vital part of the verification process. The verifier, upon receipt of a presentation, must not only check that the array of integers signed by the issuer is valid, but also that the attribute values were transformed and ordered according to the mapping referenced in the credential definition.

    A Mapping references one and only one Rich Schema object. If there is no Schema Object a Mapping can reference, a new Schema must be created on the ledger. If a Mapping needs to map attributes from multiple Schemas, then a new Schema embedding the multiple Schemas must be created and stored on the ledger.

    Mappings need to be discoverable.

    A Mapping is a JSON-LD object following the same structure (attributes and graph paths) as the corresponding Rich Schema. A Mapping may contain only a subset of the original Rich Schema's attributes.

    Every Mapping must have two default attributes required by any W3C compatible credential (see W3C verifiable credential specification): issuer and issuanceDate. Additionally, any other attributes that are considered optional by the W3C verifiable credential specification that will be included in the issued credential must be included in the Mapping. For example, credentialStatus or expirationDate. This allows the holder to selectively disclose these attributes in the same way as other attributes from the schema.

    The value of every schema attribute in a Mapping object is an array of pairs, each consisting of: an encoding object (referenced by its id) to be used for the representation of the attribute as an integer, and a rank of the attribute defining the order in which the attribute is signed by the Issuer.

    The value is an array because the same attribute may be used in a Credential Definition multiple times with different encodings.

    Note: The anonymous credential signature scheme currently used by Indy is Camenisch-Lysyanskaya signatures. It is the use of this signature scheme in combination with rich schema objects that necessitates a mapping object. If another signature scheme is used which does not have the same requirements, a mapping object may not be necessary or a different mapping object may need to be defined.

    "},{"location":"features/0445-rich-schema-mapping/#properties","title":"Properties","text":"

    Mapping's properties follow the generic template defined in Rich Schema Common.

    Mapping's content field is a JSON-LD-serialized string with the following fields:

    "},{"location":"features/0445-rich-schema-mapping/#id","title":"@id","text":"

    A Mapping must have an @id property. The value of this property must be equal to the id field which is a DID (see Identification of Rich Schema Objects).

    "},{"location":"features/0445-rich-schema-mapping/#type","title":"@type","text":"

    A Mapping must have a @type property. The value of this property must be (or map to, via a context object) a URI.

    "},{"location":"features/0445-rich-schema-mapping/#context","title":"@context","text":"

    A Mapping may have a @context property. If present, the value of this property must be a context object or a URI which can be dereferenced to obtain a context object.

    "},{"location":"features/0445-rich-schema-mapping/#schema","title":"schema","text":"

    The id of the corresponding Rich Schema.

    "},{"location":"features/0445-rich-schema-mapping/#attributes","title":"attributes","text":"

    A dict of all the schema attributes the Mapping object is going to map to encodings and use in credentials. An attribute may have nested attributes matching the schema structure.

    It must also contain the following default attributes required by any W3C compatible verifiable credential (plus any additional attributes that may have been included from the W3C verifiable credentials data model): - issuer - issuanceDate - any additional attributes

    Every leaf attribute's value (including the default issuer and issuanceDate ones) is an array of the following pairs:

    "},{"location":"features/0445-rich-schema-mapping/#example-mapping","title":"Example Mapping","text":"

    Let's consider a Rich Schema object with the following content:

        '@id': \"did:sov:4e9F8ZmxuvDqRiqqY29x6dx9oU4qwFTkPbDpWtwGbdUsrCD\",\n    '@context': \"did:sov:2f9F8ZmxuvDqRiqqY29x6dx9oU4qwFTkPbDpWtwGbdUsrCD\",\n    '@type': \"rdfs:Class\",\n    \"rdfs:comment\": \"ISO18013 International Driver License\",\n    \"rdfs:label\": \"Driver License\",\n    \"rdfs:subClassOf\": {\n        \"@id\": \"sch:Thing\"\n    },\n    \"driver\": \"Driver\",\n    \"dateOfIssue\": \"Date\",\n    \"dateOfExpiry\": \"Date\",\n    \"issuingAuthority\": \"Text\",\n    \"licenseNumber\": \"Text\",\n    \"categoriesOfVehicles\": {\n        \"vehicleType\": \"Text\",\n        \"dateOfIssue\": \"Date\",\n        \"dateOfExpiry\": \"Date\",\n        \"restrictions\": \"Text\",\n    },\n    \"administrativeNumber\": \"Text\"\n

    Then the corresponding Mapping object may have the following content. Please note that we used all attributes from the original Schema except dateOfExpiry, categoriesOfVehicles/dateOfExpiry and categoriesOfVehicles/restrictions. Also, the licenseNumber attribute is used twice, but with different encodings. Note that no two rank values may be identical.

        \"@id\": \"did:sov:5e9F8ZmxuvDqRiqqY29x6dx9oU4qwFTkPbDpWtwGbdUsrCD\",\n    \"@context\": \"did:sov:2f9F8ZmxuvDqRiqqY29x6dx9oU4qwFTkPbDpWtwGbdUsrCD\",\n    \"@type\": \"rdfs:Class\",\n    \"schema\": \"did:sov:4e9F8ZmxuvDqRiqqY29x6dx9oU4qwFTkPbDpWtwGbdUsrCD\",\n    \"attributes\" : {\n        \"issuer\": [{\n            \"enc\": \"did:sov:9x9F8ZmxuvDqRiqqY29x6dx9oU4qwFTkPbDpWtwGbdUsrCD\",\n            \"rank\": 1\n        }],\n        \"issuanceDate\": [{\n            \"enc\": \"did:sov:119F8ZmxuvDqRiqqY29x6dx9oU4qwFTkPbDpWtwGbdUsrCD\",\n            \"rank\": 2\n        }],\n        \"expirationDate\": [{\n            \"enc\": \"did:sov:119F8ZmxuvDqRiqqY29x6dx9oU4qwFTkPbDpWtwGbdUsrCD\",\n            \"rank\": 11\n        }],\n        \"driver\": [{\n            \"enc\": \"did:sov:1x9F8ZmxuvDqRiqqY29x6dx9oU4qwFTkPbDpWtwGbdUsrCD\",\n            \"rank\": 5\n        }],\n        \"dateOfIssue\": [{\n            \"enc\": \"did:sov:2x9F8ZmxuvDqRiqqY29x6dx9oU4qwFTkPbDpWtwGbdUsrCD\",\n            \"rank\": 4\n        }],\n        \"issuingAuthority\": [{\n            \"enc\": \"did:sov:3x9F8ZmxuvDqRiqqY29x6dx9oU4qwFTkPbDpWtwGbdUsrCD\",\n            \"rank\": 3\n        }],\n        \"licenseNumber\": [\n            {\n                \"enc\": \"did:sov:4x9F8ZmxuvDqRiqqY29x6dx9oU4qwFTkPbDpWtwGbdUsrCD\",\n                \"rank\": 9\n            },\n            {\n                \"enc\": \"did:sov:5x9F8ZmxuvDqRiqqY29x6dx9oU4qwFTkPbDpWtwGbdUsrCD\",\n                \"rank\": 10\n            }\n        ],\n        \"categoriesOfVehicles\": {\n            \"vehicleType\": [{\n                \"enc\": \"did:sov:6x9F8ZmxuvDqRiqqY29x6dx9oU4qwFTkPbDpWtwGbdUsrCD\",\n                \"rank\": 6\n            }],\n            \"dateOfIssue\": [{\n                \"enc\": \"did:sov:7x9F8ZmxuvDqRiqqY29x6dx9oU4qwFTkPbDpWtwGbdUsrCD\",\n                \"rank\": 7\n            }]\n        },\n        \"administrativeNumber\": [{\n            \"enc\": \"did:sov:8x9F8ZmxuvDqRiqqY29x6dx9oU4qwFTkPbDpWtwGbdUsrCD\",\n            \"rank\": 8\n        }]\n    }\n
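
    The uniqueness constraint on rank values can be checked mechanically. The following is a minimal sketch (the helper names are hypothetical, not part of the RFC) that walks a Mapping's nested attributes tree, collects every rank, and verifies that no two are identical:

    ```python
    # Hypothetical helpers (not defined by the RFC): collect every "rank"
    # from a Mapping's "attributes" tree and check uniqueness.
    def collect_ranks(node):
        ranks = []
        for value in node.values():
            if isinstance(value, list):      # leaf: list of {"enc", "rank"} entries
                ranks.extend(entry["rank"] for entry in value)
            elif isinstance(value, dict):    # nested attribute group
                ranks.extend(collect_ranks(value))
        return ranks

    def ranks_are_unique(mapping):
        ranks = collect_ranks(mapping["attributes"])
        return len(ranks) == len(set(ranks))

    # Abbreviated version of the Mapping above (enc DIDs shortened).
    mapping = {
        "attributes": {
            "issuer": [{"enc": "did:sov:9x9...", "rank": 1}],
            "licenseNumber": [
                {"enc": "did:sov:4x9...", "rank": 9},
                {"enc": "did:sov:5x9...", "rank": 10},
            ],
            "categoriesOfVehicles": {
                "vehicleType": [{"enc": "did:sov:6x9...", "rank": 6}],
            },
        }
    }
    assert ranks_are_unique(mapping)
    ```

    Note that an attribute used twice with different encodings (like licenseNumber above) contributes two distinct ranks, which is why the check operates on the flattened list rather than on attribute names.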

    "},{"location":"features/0445-rich-schema-mapping/#data-registry-storage","title":"Data Registry Storage","text":"

    Aries will provide a means for writing contexts to and reading contexts from a verifiable data registry (such as a distributed ledger).

    A Mapping object will be written to the ledger in a generic way defined in Rich Schema Objects Common.

    "},{"location":"features/0445-rich-schema-mapping/#aries-data-registry-interface","title":"Aries Data Registry Interface","text":"

    Aries Data Registry Interface methods for adding and retrieving a Mapping object from the ledger comply with the generic approach described in Rich Schema Objects Common.

    This means the following methods can be used: - write_rich_schema_object - read_rich_schema_object_by_id - read_rich_schema_object_by_metadata

    "},{"location":"features/0445-rich-schema-mapping/#reference","title":"Reference","text":"

    The following is a reference implementation of various transformation algorithms.

    Here is the paper that defines Camenisch-Lysyanskaya signatures.

    "},{"location":"features/0445-rich-schema-mapping/#drawbacks","title":"Drawbacks","text":"

    This increases the complexity of issuing verifiable credentials and verifying the accompanying verifiable presentations.

    "},{"location":"features/0445-rich-schema-mapping/#rationale-and-alternatives","title":"Rationale and alternatives","text":""},{"location":"features/0445-rich-schema-mapping/#unresolved-questions","title":"Unresolved questions","text":""},{"location":"features/0445-rich-schema-mapping/#implementations","title":"Implementations","text":"

    The following lists the implementations (if any) of this RFC. Please do a pull request to add your implementation. If the implementation is open source, include a link to the repo or to the implementation within the repo. Please be consistent in the \"Name\" field so that a mechanical processing of the RFCs can generate a list of all RFCs supported by an Aries implementation.

    Name / Link Implementation Notes"},{"location":"features/0446-rich-schema-cred-def/","title":"Aries RFC 0446: Aries Rich Schema Credential Definition","text":""},{"location":"features/0446-rich-schema-cred-def/#summary","title":"Summary","text":"

    Credential Definition can be used by the Issuer to set public keys for a particular Rich Schema and Mapping. The public keys can be used for signing the credentials by the Issuer according to the order and encoding of attributes defined by the referenced Mapping.

    Credential Definition objects are processed in a generic way defined in Rich Schema Objects Common.

    "},{"location":"features/0446-rich-schema-cred-def/#motivation","title":"Motivation","text":"

    The current format for Indy credential definitions provides a method for issuers to specify a schema and provide public key data for credentials they issue. This ties the schema and public key data values to the issuer's DID. The verifier uses the credential definition to check the validity of each signed credential attribute presented to the verifier.

    The new credential definition object that uses rich schemas is a minor modification of the current Indy credential definition. The new format has the same public key data. In addition to referencing a schema, the new credential definition can also reference a mapping object.

    "},{"location":"features/0446-rich-schema-cred-def/#tutorial","title":"Tutorial","text":""},{"location":"features/0446-rich-schema-cred-def/#intro-to-credential-definition","title":"Intro to Credential Definition","text":"

    Credential definitions are written to the ledger so they can be used by holders and verifiers in the presentation protocol.

    A Credential Definition can reference a single Mapping and a single Rich Schema only.

    Credential Definition is a JSON object.

    Credential Definition should be immutable in most cases. Some applications may treat it as a mutable object, since the Issuer may rotate the keys it contains. However, rotation of the Issuer's keys should be done carefully, as it will invalidate all credentials issued with those keys.

    "},{"location":"features/0446-rich-schema-cred-def/#properties","title":"Properties","text":"

    Credential definition's properties follow the generic template defined in Rich Schema Common.

    Credential Definition's content field is a JSON-serialized string with the following fields:

    "},{"location":"features/0446-rich-schema-cred-def/#signaturetype","title":"signatureType","text":"

    Type of the signature. ZKP scheme CL (Camenisch-Lysyanskaya) is the only type currently supported in Indy. Other signature types, even those that do not support ZKPs, may still make use of the credential definition to link the issuer's public keys with the rich schema against which the verifiable credential was constructed.

    "},{"location":"features/0446-rich-schema-cred-def/#mapping","title":"mapping","text":"

    An id of the corresponding Mapping.

    "},{"location":"features/0446-rich-schema-cred-def/#schema","title":"schema","text":"

    An id of the corresponding Rich Schema. The mapping must reference the same Schema.

    "},{"location":"features/0446-rich-schema-cred-def/#publickey","title":"publicKey","text":"

    Issuer's public keys. Consists of primary and revocation keys.

    "},{"location":"features/0446-rich-schema-cred-def/#example-credential-definition","title":"Example Credential Definition","text":"

    An example of the content field of a Credential Definition object:

    \"signatureType\": \"CL\",\n\"mapping\": \"did:sov:UVj5w8DRzcmPVDpUMr4AZhJ\",\n\"schema\": \"did:sov:U5x5w8DRzcmPVDpUMr4AZhJ\",\n\"publicKey\": {\n    \"primary\": \"...\",\n    \"revocation\": \"...\"\n}\n

    "},{"location":"features/0446-rich-schema-cred-def/#use-in-verifiable-credentials","title":"Use in Verifiable Credentials","text":"

    A ZKP credential created according to the CL signature scheme must reference the Credential Definition used for signing; the Credential Definition is referenced by its id in the credentialSchema property.
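
    As an illustration, the reference might look like the following sketch. The surrounding credential structure and the "type" string are assumptions for illustration only; the normative point is simply that credentialSchema carries the Credential Definition's id.

    ```python
    # Illustrative sketch only: a credential referencing its Credential
    # Definition by id via the credentialSchema property. The id value and
    # the "type" string are example/assumed values, not normative.
    credential = {
        "@context": ["https://www.w3.org/2018/credentials/v1"],
        "type": ["VerifiableCredential"],
        "credentialSchema": {
            "id": "did:sov:3k9F8ZmxuvDqRiqqY29x6dx9oU4qwFTkPbDpWtwGbdUsrCD",  # cred def id (example)
            "type": "CredentialDefinition",  # assumed label for illustration
        },
        "credentialSubject": {"...": "..."},
    }
    assert credential["credentialSchema"]["id"].startswith("did:sov:")
    ```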

    "},{"location":"features/0446-rich-schema-cred-def/#data-registry-storage","title":"Data Registry Storage","text":"

    Aries will provide a means for writing contexts to and reading contexts from a verifiable data registry (such as a distributed ledger).

    A Credential Definition object will be written to the ledger in a generic way defined in Rich Schema Objects Common.

    "},{"location":"features/0446-rich-schema-cred-def/#aries-data-registry-interface","title":"Aries Data Registry Interface","text":"

    Aries Data Registry Interface methods for adding and retrieving a Credential Definition object from the ledger comply with the generic approach described in Rich Schema Objects Common.

    This means the following methods can be used: - write_rich_schema_object - read_rich_schema_object_by_id - read_rich_schema_object_by_metadata

    "},{"location":"features/0446-rich-schema-cred-def/#reference","title":"Reference","text":"

    The following is a reference implementation of various transformation algorithms.

    Here is the paper that defines Camenisch-Lysyanskaya signatures.

    "},{"location":"features/0446-rich-schema-cred-def/#drawbacks","title":"Drawbacks","text":"

    This increases the complexity of issuing verifiable credentials and verifying the accompanying verifiable presentations.

    "},{"location":"features/0446-rich-schema-cred-def/#rationale-and-alternatives","title":"Rationale and alternatives","text":""},{"location":"features/0446-rich-schema-cred-def/#prior-art","title":"Prior art","text":"

    Indy already has Credential Definition support.

    What the prior effort lacked was a corresponding enhancement of schema infrastructure which would have provided the necessary typing of attribute values.

    "},{"location":"features/0446-rich-schema-cred-def/#unresolved-questions","title":"Unresolved questions","text":""},{"location":"features/0446-rich-schema-cred-def/#implementations","title":"Implementations","text":"

    The following lists the implementations (if any) of this RFC. Please do a pull request to add your implementation. If the implementation is open source, include a link to the repo or to the implementation within the repo. Please be consistent in the \"Name\" field so that a mechanical processing of the RFCs can generate a list of all RFCs supported by an Aries implementation.

    Name / Link Implementation Notes"},{"location":"features/0453-issue-credential-v2/","title":"Aries RFC 0453: Issue Credential Protocol 2.0","text":""},{"location":"features/0453-issue-credential-v2/#change-log","title":"Change Log","text":"

    For a period of time, versions 2.1 and 2.2 were defined in this RFC. Those definitions were added prior to any implementations, and to date, there are no known implementations available or planned. An attempt at implementing version 2.1 was not merged into the main branch of Aries Cloud Agent Python, as it was deemed overly complicated and not worth the effort for what amounts to an edge case (issuing multiple credentials of the same type in a single protocol instance). Further, there is a version 3.0 of this protocol that has been specified and implemented that does not include these capabilities. Thus, a decision was made to remove versions 2.1 and 2.2 as not accepted by the community and overly complicated to both implement and migrate from. Those interested in seeing how those capabilities were specified can look at this protocol before they were removed.

    "},{"location":"features/0453-issue-credential-v2/#20propose-credential-and-identifiers","title":"2.0/propose-credential and identifiers","text":"

    Version 2.0 of the protocol is introduced because of breaking changes in the propose-credential message: the (indy-specific) filtration criteria are replaced with a generalized filter attachment to align with the rest of the messages in the protocol. The previous version is 1.1/propose-credential. Version 2.0 also uses <angle brackets> explicitly to mark all values that may vary between instances, such as identifiers and comments.

    The \"formats\" field is added to all the messages to enable linking specific attachment IDs with the format (credential format and version) of the attachment.

    The details that are part of each message type about the different attachment formats serve as a registry of the known formats and versions.

    "},{"location":"features/0453-issue-credential-v2/#summary","title":"Summary","text":"

    Formalizes messages used to issue a credential--whether the credential is JWT-oriented, JSON-LD-oriented, or ZKP-oriented. The general flow is similar, and this protocol intends to handle all of them. If you are using a credential type that doesn't fit this protocol, please raise a GitHub issue.

    "},{"location":"features/0453-issue-credential-v2/#motivation","title":"Motivation","text":"

    We need a standard protocol for issuing credentials. This is the basis of interoperability between Issuers and Holders.

    "},{"location":"features/0453-issue-credential-v2/#tutorial","title":"Tutorial","text":""},{"location":"features/0453-issue-credential-v2/#name-and-version","title":"Name and Version","text":"

    issue-credential, version 2.0

    "},{"location":"features/0453-issue-credential-v2/#roles","title":"Roles","text":"

    There are two roles in this protocol: Issuer and Holder. Technically, the latter role is only potential until the protocol completes; that is, the second party becomes a Holder of a credential by completing the protocol. However, we will use the term Holder throughout, to keep things simple.

    Note: When a holder of credentials turns around and uses those credentials to prove something, they become a Prover. In the sister RFC to this one, 0454: Present Proof Protocol 2.0, the Holder is therefore renamed to Prover. Sometimes in casual conversation, the Holder role here might be called \"Prover\" as well, but more formally, \"Holder\" is the right term at this phase of the credential lifecycle.

    "},{"location":"features/0453-issue-credential-v2/#goals","title":"Goals","text":"

    When the goals of each role are not available because of context, goal codes may be specifically included in protocol messages. This is particularly helpful to differentiate between credentials passed between the same parties for several different reasons. An included goal code should be considered to apply to the entire thread and need not be repeated on each message. Changing the goal code may be done by including the new code in a message. All goal codes are optional, and there is no default.

    "},{"location":"features/0453-issue-credential-v2/#states","title":"States","text":"

    The choreography diagram below details how state evolves in this protocol, in a \"happy path.\" The states include

    "},{"location":"features/0453-issue-credential-v2/#issuer-states","title":"Issuer States","text":""},{"location":"features/0453-issue-credential-v2/#holder-states","title":"Holder States","text":"

    Errors might occur in various places. For example, an Issuer might offer a credential for a price that the Holder is unwilling to pay. All errors are modeled with a problem-report message. Easy-to-anticipate errors reset the flow as shown in the diagrams, and use the code issuance-abandoned; more exotic errors (e.g., server crashed at Issuer headquarters in the middle of a workflow) may have different codes but still cause the flow to be abandoned in the same way. That is, in this version of the protocol, all errors cause the state of both parties (the sender and the receiver of the problem-report) to revert to null (meaning it is no longer engaged in the protocol at all). Future versions of the protocol may allow more granular choices (e.g., requesting and receiving a (re-)send of the issue-credential message if the Holder times out while waiting in the request-sent state).

    The state table outlines the protocol states and transitions.

    "},{"location":"features/0453-issue-credential-v2/#messages","title":"Messages","text":"

    The Issue Credential protocol consists of these messages:

    In addition, the ack and problem-report messages are adopted into the protocol for confirmation and error handling.

    "},{"location":"features/0453-issue-credential-v2/#message-attachments","title":"Message Attachments","text":"

    This protocol is about the messages that must be exchanged to issue verifiable credentials, NOT about the specifics of particular verifiable credential schemes. DIDComm attachments are deliberately used in messages to isolate the protocol flow/semantics from the credential artifacts themselves as separate constructs. Attachments allow credential formats and this protocol to evolve through versioning milestones independently instead of in lockstep. Links are provided in the message descriptions below, to describe how the protocol adapts to specific verifiable credential implementations.

    The attachment items in the messages are arrays. The arrays are provided to support the issuing of different credential formats (e.g. ZKP, JSON-LD JWT, or other) containing the same data (claims). The arrays are not to be used for issuing credentials with different claims. The formats field of each message associates each attachment with the format (and version) of the attachment.

    A registry of attachment formats is provided in this RFC within the message type sections. A sub-section should be added for each attachment format type (and optionally, each version). Updates to the attachment type formats do NOT impact the versioning of the Issue Credential protocol. Formats are flexibly defined. For example, the first definitions are for hlindy/cred-abstract@v2.0 et al., assuming that all Hyperledger Indy implementations and ledgers will use a common format. However, if a specific instance of Indy uses a different format, another format value can be documented as a new registry entry.

    Any of the 0017-attachments RFC embedded inline attachments can be used. In the examples below, base64 is used in most cases, but implementations MUST expect any of the formats.
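
    Putting the formats field and the inline attachment forms together, a receiver resolves each attachment by matching formats[].attach_id against the attachment's @id, then decodes the payload. A minimal sketch (the helper name is hypothetical; only base64 inline attachments are handled here, whereas a real implementation MUST accept the other 0017-attachments forms too):

    ```python
    import base64
    import json

    def attachment_for_format(message, format_id):
        """Return the decoded payload of the attachment linked (via 'formats')
        to the given format value, or None if absent. The attachment array key
        ('filters~attach', 'offers~attach', etc.) varies per message type."""
        attach_key = next(k for k in message if k.endswith("~attach"))
        by_id = {a["@id"]: a for a in message[attach_key]}
        for fmt in message["formats"]:
            if fmt["format"] == format_id:
                att = by_id[fmt["attach_id"]]
                # Only the base64 inline form is handled in this sketch.
                return json.loads(base64.b64decode(att["data"]["base64"]))
        return None

    # Example offer-credential message with an assumed payload.
    msg = {
        "@type": "https://didcomm.org/issue-credential/2.0/offer-credential",
        "formats": [{"attach_id": "attach-1", "format": "hlindy/cred-abstract@v2.0"}],
        "offers~attach": [{
            "@id": "attach-1",
            "mime-type": "application/json",
            "data": {"base64": base64.b64encode(
                json.dumps({"schema_id": "..."}).encode()).decode()},
        }],
    }
    payload = attachment_for_format(msg, "hlindy/cred-abstract@v2.0")
    assert payload == {"schema_id": "..."}
    ```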

    "},{"location":"features/0453-issue-credential-v2/#choreography-diagram","title":"Choreography Diagram","text":"

    Note: This diagram was made in draw.io. To make changes:

    The protocol has 3 alternative beginnings:

    1. The Issuer can begin with an offer.
    2. The Holder can begin with a proposal.
    3. The Holder can begin with a request.

    The offer and proposal messages are part of an optional negotiation phase and may trigger back-and-forth counters. A request is not subject to negotiation; it can only be accepted or rejected.

    "},{"location":"features/0453-issue-credential-v2/#propose-credential","title":"Propose Credential","text":"

    An optional message sent by the potential Holder to the Issuer to initiate the protocol or in response to an offer-credential message when the Holder wants some adjustments made to the credential data offered by Issuer.

    Note: In Hyperledger Indy, where the `request-credential` message can **only** be sent in response to an `offer-credential` message, the `propose-credential` message is the only way for a potential Holder to initiate the workflow.

    Message format:

    {\n    \"@type\": \"https://didcomm.org/issue-credential/%VER/propose-credential\",\n    \"@id\": \"<uuid of propose-message>\",\n    \"goal_code\": \"<goal-code>\",\n    \"comment\": \"<some comment>\",\n    \"credential_preview\": <json-ld object>,\n    \"formats\" : [\n        {\n            \"attach_id\" : \"<attach@id value>\",\n            \"format\" : \"<format-and-version>\"\n        }\n    ],\n    \"filters~attach\": [\n        {\n            \"@id\": \"<attachment identifier>\",\n            \"mime-type\": \"application/json\",\n            \"data\": {\n                \"base64\": \"<bytes for base64>\"\n            }\n        }\n    ]\n}\n

    Description of attributes:

    "},{"location":"features/0453-issue-credential-v2/#propose-attachment-registry","title":"Propose Attachment Registry","text":"Credential Format Format Value Link to Attachment Format Comment DIF Credential Manifest dif/credential-manifest@v1.0 propose-credential attachment format Linked Data Proof VC Detail aries/ld-proof-vc-detail@v1.0 ld-proof-vc-detail attachment format Hyperledger Indy Credential Filter hlindy/cred-filter@v2.0 cred filter format Hyperledger AnonCreds Credential Filter anoncreds/credential-filter@v1.0 Credential Filter format"},{"location":"features/0453-issue-credential-v2/#offer-credential","title":"Offer Credential","text":"

    A message sent by the Issuer to the potential Holder, describing the credential they intend to offer and possibly the price they expect to be paid. In Hyperledger Indy, this message is required, because it forces the Issuer to make a cryptographic commitment to the set of fields in the final credential and thus prevents Issuers from inserting spurious data. In credential implementations where this message is optional, an Issuer can use the message to negotiate the issuing following receipt of a request-credential message.

    Message Format:

    {\n    \"@type\": \"https://didcomm.org/issue-credential/%VER/offer-credential\",\n    \"@id\": \"<uuid of offer message>\",\n    \"goal_code\": \"<goal-code>\",\n    \"replacement_id\": \"<issuer unique id>\",\n    \"comment\": \"<some comment>\",\n    \"credential_preview\": <json-ld object>,\n    \"formats\" : [\n        {\n            \"attach_id\" : \"<attach@id value>\",\n            \"format\" : \"<format-and-version>\"\n        }\n    ],\n    \"offers~attach\": [\n        {\n            \"@id\": \"<attach@id value>\",\n            \"mime-type\": \"application/json\",\n            \"data\": {\n                \"base64\": \"<bytes for base64>\"\n            }\n        }\n    ]\n}\n

    Description of fields:

    It is possible for an Issuer to add a ~timing.expires_time decorator to this message to convey the idea that the offer will expire at a particular point in the future. Such behavior is not a special part of this protocol, and support for it is not a requirement of conforming implementations; the ~timing decorator is simply a general possibility for any DIDComm message. We mention it here just to note that the protocol can be enriched in composable ways.

    "},{"location":"features/0453-issue-credential-v2/#offer-attachment-registry","title":"Offer Attachment Registry","text":"Credential Format Format Value Link to Attachment Format Comment DIF Credential Manifest dif/credential-manifest@v1.0 offer-credential attachment format Hyperledger Indy Credential Abstract hlindy/cred-abstract@v2.0 cred abstract format Linked Data Proof VC Detail aries/ld-proof-vc-detail@v1.0 ld-proof-vc-detail attachment format Hyperledger AnonCreds Credential Offer anoncreds/credential-offer@v1.0 Credential Offer format W3C VC - Data Integrity Proof Credential Offer didcomm/w3c-di-vc-offer@v0.1 Credential Offer format"},{"location":"features/0453-issue-credential-v2/#request-credential","title":"Request Credential","text":"

    This is a message sent by the potential Holder to the Issuer, to request the issuance of a credential. Where circumstances do not require a preceding Offer Credential message (e.g., there is no cost to issuance that the Issuer needs to explain in advance, and there is no need for cryptographic negotiation), this message initiates the protocol. When using the Hyperledger Indy AnonCreds verifiable credential format, this message can only be sent in response to an offer-credential message.

    Message Format:

    {\n    \"@type\": \"https://didcomm.org/issue-credential/%VER/request-credential\",\n    \"@id\": \"<uuid of request message>\",\n    \"goal_code\": \"<goal-code>\",\n    \"comment\": \"<some comment>\",\n    \"formats\" : [\n        {\n            \"attach_id\" : \"<attach@id value>\",\n            \"format\" : \"<format-and-version>\"\n        }\n    ],\n    \"requests~attach\": [\n        {\n            \"@id\": \"<attachment identifier>\",\n            \"mime-type\": \"application/json\",\n            \"data\": {\n                \"base64\": \"<bytes for base64>\"\n            }\n        }\n    ]\n}\n

    Description of Fields:

    "},{"location":"features/0453-issue-credential-v2/#request-attachment-registry","title":"Request Attachment Registry","text":"Credential Format Format Value Link to Attachment Format Comment DIF Credential Manifest dif/credential-manifest@v1.0 request-credential attachment format Hyperledger Indy Credential Request hlindy/cred-req@v2.0 cred request format Linked Data Proof VC Detail aries/ld-proof-vc-detail@v1.0 ld-proof-vc-detail attachment format Hyperledger AnonCreds Credential Request anoncreds/credential-request@v1.0 Credential Request format W3C VC - Data Integrity Proof Credential Request didcomm/w3c-di-vc-request@v0.1 Credential Request format"},{"location":"features/0453-issue-credential-v2/#issue-credential","title":"Issue Credential","text":"

    This message contains a verifiable credential being issued as an attached payload. It is sent in response to a valid Request Credential message.

    Message Format:

    {\n    \"@type\": \"https://didcomm.org/issue-credential/%VER/issue-credential\",\n    \"@id\": \"<uuid of issue message>\",\n    \"goal_code\": \"<goal-code>\",\n    \"replacement_id\": \"<issuer unique id>\",\n    \"comment\": \"<some comment>\",\n    \"formats\" : [\n        {\n            \"attach_id\" : \"<attach@id value>\",\n            \"format\" : \"<format-and-version>\"\n        }\n    ],\n    \"credentials~attach\": [\n        {\n            \"@id\": \"<attachment-id>\",\n            \"mime-type\": \"application/json\",\n            \"data\": {\n                \"base64\": \"<bytes for base64>\"\n            }\n        }\n    ]\n}\n

    Description of fields:

    "},{"location":"features/0453-issue-credential-v2/#credentials-attachment-registry","title":"Credentials Attachment Registry","text":"Credential Format Format Value Link to Attachment Format Comment Linked Data Proof VC aries/ld-proof-vc@v1.0 ld-proof-vc attachment format Hyperledger Indy Credential hlindy/cred@v2.0 credential format Hyperledger AnonCreds Credential anoncreds/credential@v1.0 Credential format W3C VC - Data Integrity Proof Credential didcomm/w3c-di-vc@v0.1 Credential format"},{"location":"features/0453-issue-credential-v2/#adopted-problem-report","title":"Adopted Problem Report","text":"

    The problem-report message is adopted by this protocol. problem-report messages can be used by either party to indicate an error in the protocol.

    "},{"location":"features/0453-issue-credential-v2/#preview-credential","title":"Preview Credential","text":"

    This is not a message but an inner object for other messages in this protocol. It is used to construct a preview of the data for the credential that is to be issued. Its schema follows:

    {\n    \"@type\": \"https://didcomm.org/issue-credential/%VER/credential-preview\",\n    \"attributes\": [\n        {\n            \"name\": \"<attribute name>\",\n            \"mime-type\": \"<type>\",\n            \"value\": \"<value>\"\n        },\n        // more attributes\n    ]\n}\n

    The main element is attributes. It is an array of (object) attribute specifications; the subsections below outline their semantics.

    "},{"location":"features/0453-issue-credential-v2/#attribute-name","title":"Attribute Name","text":"

    The mandatory \"name\" key maps to the attribute name as a string.

    "},{"location":"features/0453-issue-credential-v2/#mime-type-and-value","title":"MIME Type and Value","text":"

    The optional mime-type advises the issuer how to render a binary attribute, to judge its content for applicability before issuing a credential containing it. Its value parses case-insensitively in keeping with MIME type semantics of RFC 2045. If mime-type is missing, its value is null.
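
    The case-insensitive parsing and the null default for a missing key can be sketched as follows (the helper name is hypothetical, for illustration only):

    ```python
    # Minimal sketch: MIME types compare case-insensitively (per RFC 2045
    # semantics), and a missing "mime-type" key is treated as null/None.
    def attribute_mime_type(attribute):
        mt = attribute.get("mime-type")
        return mt.lower() if mt is not None else None

    assert attribute_mime_type(
        {"name": "photo", "mime-type": "Image/PNG", "value": "..."}) == "image/png"
    assert attribute_mime_type({"name": "age", "value": "21"}) is None
    ```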

    The mandatory value holds the attribute value:

    "},{"location":"features/0453-issue-credential-v2/#threading","title":"Threading","text":"

    Threading can be used to initiate a sub-protocol during an issue credential protocol instance. For example, during credential issuance, the Issuer may initiate a child message thread to execute the Present Proof sub-protocol to have the potential Holder (now acting as a Prover) prove attributes about themselves before issuing the credential. Depending on circumstances, this might be a best practice for preventing credential fraud at issuance time.

    If threading were added to all of the above messages, a ~thread decorator would be present, and later messages in the flow would reference the @id of earlier messages to stitch the flow into a single coherent sequence. Details about threading can be found in the 0008: Message ID and Threading RFC.

    "},{"location":"features/0453-issue-credential-v2/#limitations","title":"Limitations","text":"

    The ecosystem may lack smart contracts, so the operation \"issue credential after payment received\" is not atomic. It is possible that a malicious issuer will charge first and then fail to issue the credential. However, this situation should be easy to detect, and an appropriate penalty should be applied in such networks.

    "},{"location":"features/0453-issue-credential-v2/#negotiation-and-preview","title":"Negotiation and Preview","text":"

    Negotiation prior to issuing the credential can be done using the offer-credential and propose-credential messages. A common negotiation use case would be about the data to go into the credential. For that, the credential_preview element is used.

    "},{"location":"features/0453-issue-credential-v2/#drawbacks","title":"Drawbacks","text":"

    None documented

    "},{"location":"features/0453-issue-credential-v2/#rationale-and-alternatives","title":"Rationale and alternatives","text":""},{"location":"features/0453-issue-credential-v2/#prior-art","title":"Prior art","text":"

    See RFC 0036 Issue Credential, v1.x.

    "},{"location":"features/0453-issue-credential-v2/#unresolved-questions","title":"Unresolved questions","text":""},{"location":"features/0453-issue-credential-v2/#implementations","title":"Implementations","text":"

    The following lists the implementations (if any) of this RFC. Please do a pull request to add your implementation. If the implementation is open source, include a link to the repo or to the implementation within the repo. Please be consistent in the \"Name\" field so that a mechanical processing of the RFCs can generate a list of all RFCs supported by an Aries implementation.

    Name / Link Implementation Notes"},{"location":"features/0454-present-proof-v2/","title":"Aries RFC 0454: Present Proof Protocol 2.0","text":""},{"location":"features/0454-present-proof-v2/#change-log","title":"Change Log","text":"

    For a period of time, versions 2.1 and 2.2 were defined in this RFC. Those definitions were added prior to any implementations, and to date, there are no known implementations available or planned. An attempt at implementing version 2.1 of the associated \"issue multiple credentials\" protocol was not merged into the main branch of Aries Cloud Agent Python, as it was deemed overly complicated and not worth the effort for what amounts to an edge case (presenting multiple presentations of the same type in a single protocol instance). Further, there is a version 3.0 of this protocol that has been specified and implemented that does not include these capabilities. Thus, a decision was made to remove versions 2.1 and 2.2 as not accepted by the community and overly complicated to both implement and migrate from. Those interested in seeing how those capabilities were specified can look at this protocol before they were removed.

    "},{"location":"features/0454-present-proof-v2/#20-alignment-with-rfc-0453-issue-credential","title":"2.0 - Alignment with RFC 0453 Issue Credential","text":""},{"location":"features/0454-present-proof-v2/#summary","title":"Summary","text":"

    A protocol supporting a general purpose verifiable presentation exchange regardless of the specifics of the underlying verifiable presentation request and verifiable presentation format.

    "},{"location":"features/0454-present-proof-v2/#motivation","title":"Motivation","text":"

    We need a standard protocol for a verifier to request a presentation from a prover, and for the prover to respond by presenting a proof to the verifier. When doing that exchange, we want to provide a mechanism for the participants to negotiate the underlying format and content of the proof.

    "},{"location":"features/0454-present-proof-v2/#tutorial","title":"Tutorial","text":""},{"location":"features/0454-present-proof-v2/#name-and-version","title":"Name and Version","text":"

    present-proof, version 2.0

    "},{"location":"features/0454-present-proof-v2/#key-concepts","title":"Key Concepts","text":"

    This protocol is about the messages to support the presentation of verifiable claims, not about the specifics of particular verifiable presentation formats. DIDComm attachments are deliberately used in messages to make the protocol agnostic to specific verifiable presentation format payloads. Links are provided in the message data element descriptions to details of specific verifiable presentation implementation data structures.

    Diagrams in this protocol were made in draw.io. To make changes:

    "},{"location":"features/0454-present-proof-v2/#roles","title":"Roles","text":"

    The roles are verifier and prover. The verifier requests the presentation of a proof and verifies the presentation, while the prover prepares the proof and presents it to the verifier. Optionally, although unlikely from a business sense, the prover may initiate an instance of the protocol using the propose-presentation message.

    "},{"location":"features/0454-present-proof-v2/#goals","title":"Goals","text":"

    When the goals of each role are not available because of context, goal codes may be specifically included in protocol messages. This is particularly helpful to differentiate between credentials passed between the same parties for several different reasons. A goal code should be considered to apply to the entire thread and need not be repeated on each message. The goal code may be changed by including the new code in a message. All goal codes are optional, and without default.

    "},{"location":"features/0454-present-proof-v2/#states","title":"States","text":"

    The following states are defined and included in the state transition table below.

    "},{"location":"features/0454-present-proof-v2/#states-for-verifier","title":"States for Verifier","text":""},{"location":"features/0454-present-proof-v2/#states-for-prover","title":"States for Prover","text":"

    For the most part, these states map onto the transitions shown in both the state transition table above, and in the choreography diagram (below) in obvious ways. However, a few subtleties are worth highlighting:

    "},{"location":"features/0454-present-proof-v2/#choreography-diagram","title":"Choreography Diagram","text":""},{"location":"features/0454-present-proof-v2/#messages","title":"Messages","text":"

    The present proof protocol consists of these messages:

    In addition, the ack and problem-report messages are adopted into the protocol for confirmation and error handling.

    The messages that include ~attach attachments may use any form of the embedded attachment. In the examples below, the forms of the attachment are arbitrary.

    The ~attach array is to be used to enable a single presentation to be requested/delivered in different verifiable presentation formats. The ability to have multiple attachments must not be used to request/deliver multiple different presentations in a single instance of the protocol.

    "},{"location":"features/0454-present-proof-v2/#propose-presentation","title":"Propose Presentation","text":"

    An optional message sent by the prover to the verifier to initiate a proof presentation process, or in response to a request-presentation message when the prover wants to propose using a different presentation format or request. Schema:

    {\n    \"@type\": \"https://didcomm.org/present-proof/%VER/propose-presentation\",\n    \"@id\": \"<uuid-propose-presentation>\",\n    \"goal_code\": \"<goal-code>\",\n    \"comment\": \"some comment\",\n    \"formats\" : [\n        {\n            \"attach_id\" : \"<attach@id value>\",\n            \"format\" : \"<format-and-version>\",\n        }\n    ],\n    \"proposals~attach\": [\n        {\n            \"@id\": \"<attachment identifier>\",\n            \"mime-type\": \"application/json\",\n            \"data\": {\n                \"json\": \"<json>\"\n            }\n        }\n    ]\n}\n

    Description of fields:

    If the proposals~attach is not provided, the attach_id item in the formats array should not be provided. That form of the propose-presentation message is to indicate the presentation formats supported by the prover, independent of the verifiable presentation request content.
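That attachment-less form can be sketched as follows (a minimal Python sketch; the helper name is an assumption, not part of the RFC):

```python
import json
import uuid

def build_format_support_proposal(supported_formats):
    """Build the attachment-less propose-presentation that only advertises
    the prover's supported presentation formats (illustrative helper)."""
    return {
        "@type": "https://didcomm.org/present-proof/2.0/propose-presentation",
        "@id": str(uuid.uuid4()),
        # proposals~attach is absent, so no attach_id in the formats entries.
        "formats": [{"format": fmt} for fmt in supported_formats],
    }

msg = build_format_support_proposal(
    ["hlindy/proof-req@v2.0", "dif/presentation-exchange/definitions@v1.0"]
)
print(json.dumps(msg, indent=2))
```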

    "},{"location":"features/0454-present-proof-v2/#negotiation-and-preview","title":"Negotiation and Preview","text":"

    Negotiation prior to the delivery of the presentation can be done using the propose-presentation and request-presentation messages. The common negotiation use cases would be about the claims to go into the presentation and the format of the verifiable presentation.

    "},{"location":"features/0454-present-proof-v2/#propose-attachment-registry","title":"Propose Attachment Registry","text":"Presentation Format Format Value Link to Attachment Format Comment Hyperledger Indy Proof Req hlindy/proof-req@v2.0 proof request format Used to propose as well as request proofs. DIF Presentation Exchange dif/presentation-exchange/definitions@v1.0 propose-presentation attachment format Hyperledger AnonCreds Proof Request anoncreds/proof-request@v1.0 Proof Request format Used to propose as well as request proofs."},{"location":"features/0454-present-proof-v2/#request-presentation","title":"Request Presentation","text":"

    From a verifier to a prover, the request-presentation message describes values that need to be revealed and predicates that need to be fulfilled. Schema:

    {\n    \"@type\": \"https://didcomm.org/present-proof/%VER/request-presentation\",\n    \"@id\": \"<uuid-request>\",\n    \"goal_code\": \"<goal-code>\",\n    \"comment\": \"some comment\",\n    \"will_confirm\": true,\n    \"formats\" : [\n        {\n            \"attach_id\" : \"<attach@id value>\",\n            \"format\" : \"<format-and-version>\",\n        }\n    ],\n    \"request_presentations~attach\": [\n        {\n            \"@id\": \"<attachment identifier>\",\n            \"mime-type\": \"application/json\",\n            \"data\":  {\n                \"base64\": \"<base64 data>\"\n            }\n        }\n    ]\n}\n

    Description of fields:

    "},{"location":"features/0454-present-proof-v2/#presentation-request-attachment-registry","title":"Presentation Request Attachment Registry","text":"Presentation Format Format Value Link to Attachment Format Comment Hyperledger Indy Proof Req hlindy/proof-req@v2.0 proof request format Used to propose as well as request proofs. DIF Presentation Exchange dif/presentation-exchange/definitions@v1.0 propose-presentation attachment format Hyperledger AnonCreds Proof Request anoncreds/proof-request@v1.0 Proof Request format Used to propose as well as request proofs."},{"location":"features/0454-present-proof-v2/#presentation","title":"Presentation","text":"

    This message is a response to a Presentation Request message and contains signed presentations. Schema:

    {\n    \"@type\": \"https://didcomm.org/present-proof/%VER/presentation\",\n    \"@id\": \"<uuid-presentation>\",\n    \"goal_code\": \"<goal-code>\",\n    \"comment\": \"some comment\",\n    \"formats\" : [\n        {\n            \"attach_id\" : \"<attach@id value>\",\n            \"format\" : \"<format-and-version>\",\n        }\n    ],\n    \"presentations~attach\": [\n        {\n            \"@id\": \"<attachment identifier>\",\n            \"mime-type\": \"application/json\",\n            \"data\": {\n                \"sha256\": \"f8dca1d901d18c802e6a8ce1956d4b0d17f03d9dc5e4e1f618b6a022153ef373\",\n                \"links\": [\"https://ibb.co/TtgKkZY\"]\n            }\n        }\n    ]\n}\n

    Description of fields:

    If the prover wants an acknowledgement that the presentation was accepted, this message may be decorated with the ~please-ack decorator using the OUTCOME acknowledgement request. This is not necessary if the verifier has indicated it will send an ack-presentation using the will_confirm property. Outcome in the context of this protocol is the definition of \"successful\" as described in Ack Presentation. Note that this is different from the default behavior as described in 0317: Please ACK Decorator. It is then best practice for the new Verifier to respond with an explicit ack message as described in the please ack decorator RFC.
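The acknowledgement logic above can be sketched like this (an illustrative Python helper; the function name and the exact decorator payload are assumptions based on RFC 0317's OUTCOME acknowledgement request, not normative):

```python
def decorate_presentation(presentation, request_presentation):
    """Add the ~please-ack decorator only when the verifier has not already
    promised an ack-presentation via will_confirm (illustrative helper)."""
    if not request_presentation.get("will_confirm", False):
        # Request an ack on protocol OUTCOME only, per the text above.
        presentation["~please-ack"] = {"on": ["OUTCOME"]}
    return presentation

# The verifier promised an ack, so no decorator is needed.
p = decorate_presentation({"@type": "presentation"}, {"will_confirm": True})
```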

    "},{"location":"features/0454-present-proof-v2/#presentations-attachment-registry","title":"Presentations Attachment Registry","text":"Presentation Format Format Value Link to Attachment Format Comment Hyperledger Indy Proof hlindy/proof@v2.0 proof format DIF Presentation Exchange dif/presentation-exchange/submission@v1.0 propose-presentation attachment format Hyperledger AnonCreds Proof anoncreds/proof@v1.0 Proof format"},{"location":"features/0454-present-proof-v2/#ack-presentation","title":"Ack Presentation","text":"

    A message from the verifier to the prover that the Present Proof protocol was completed successfully and is now in the done state. The message is an adopted ack from the RFC 0015 acks protocol. The definition of \"successful\" in this protocol means the acceptance of the presentation in whole, i.e. the proof is verified and the contents of the proof are acknowledged.

    "},{"location":"features/0454-present-proof-v2/#problem-report","title":"Problem Report","text":"

    A message from the verifier to the prover that follows the presentation message to indicate that the Present Proof protocol was completed unsuccessfully and is now in the abandoned state. The message is an adopted problem-report from the RFC 0015 report-problem protocol. The definition of \"unsuccessful\" from a business sense is up to the verifier. The elements of the problem-report message can provide information to the prover about why the protocol instance was unsuccessful.

    Either party may send a problem-report message earlier in the flow to terminate the protocol before its normal conclusion.

    "},{"location":"features/0454-present-proof-v2/#reference","title":"Reference","text":"

    Details are covered in the Tutorial section.

    "},{"location":"features/0454-present-proof-v2/#drawbacks","title":"Drawbacks","text":"

    The Indy format of the proposal attachment as proposed above does not allow nesting of logic along the lines of \"A and either B or C if D, otherwise A and B\", nor cross-credential options such as proposing a legal name issued by either (for example) a specific financial institution or government entity.

    The verifiable presentation standardization work being conducted in parallel to this in DIF and the W3C Credentials Community Group (CCG) should be included in at least the Registry tables of this document, and ideally used to eliminate the need for presentation format-specific options.

    "},{"location":"features/0454-present-proof-v2/#rationale-and-alternatives","title":"Rationale and alternatives","text":""},{"location":"features/0454-present-proof-v2/#prior-art","title":"Prior art","text":"

    The previous major version of this protocol is RFC 0037 Present Proof protocol and implementations.

    "},{"location":"features/0454-present-proof-v2/#unresolved-questions","title":"Unresolved questions","text":""},{"location":"features/0454-present-proof-v2/#implementations","title":"Implementations","text":"

    The following lists the implementations (if any) of this RFC. Please do a pull request to add your implementation. If the implementation is open source, include a link to the repo or to the implementation within the repo. Please be consistent in the \"Name\" field so that a mechanical processing of the RFCs can generate a list of all RFCs supported by an Aries implementation.

    Name / Link Implementation Notes"},{"location":"features/0482-coprotocol-protocol/","title":"Aries RFC 0482: Coprotocol Protocol 0.5","text":""},{"location":"features/0482-coprotocol-protocol/#summary","title":"Summary","text":"

    Allows coprotocols to interact with one another.

    "},{"location":"features/0482-coprotocol-protocol/#motivation","title":"Motivation","text":"

    We need a standard way for one protocol to invoke another, giving it input, getting its output, detaching, and debugging.

    "},{"location":"features/0482-coprotocol-protocol/#tutorial","title":"Tutorial","text":""},{"location":"features/0482-coprotocol-protocol/#name-and-version","title":"Name and Version","text":"

    The name of this protocol is \"Coprotocol Protocol 0.5\". It is identified by the PIURI \"https://didcomm.org/coprotocol/0.5\".

    "},{"location":"features/0482-coprotocol-protocol/#key-concepts","title":"Key Concepts","text":"

    Please make sure you are familiar with the general concept of coprotocols, as set forth in Aries RFC 0478. A working knowledge of the terminology and mental model explained there is foundational.

    "},{"location":"features/0482-coprotocol-protocol/#roles","title":"Roles","text":"

    The caller role is played by the entity giving input and getting output. The called role is played by the entity getting input and giving output.

    "},{"location":"features/0482-coprotocol-protocol/#states","title":"States","text":"

    The caller's normal state progression is null -> detached -> attached -> done. It is also possible to return to a detached state without ever reaching done.

    The coprotocol's normal state progression is null -> attached -> done.
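The two progressions can be written as small transition tables (a sketch; the event names are assumptions, since the RFC defines the states but not event labels):

```python
# Illustrative transition tables for the two roles, encoding the
# progressions described above, including the caller's ability to
# return to detached without ever reaching done.
CALLER = {
    ("null", "send-bind"): "detached",
    ("detached", "receive-attach"): "attached",
    ("attached", "send-detach"): "detached",
    ("attached", "coprotocol-done"): "done",
}
CALLED = {
    ("null", "receive-bind"): "attached",
    ("attached", "coprotocol-done"): "done",
}

def advance(table, state, event):
    """Return the next state, rejecting transitions the table does not allow."""
    if (state, event) not in table:
        raise ValueError(f"illegal event {event!r} in state {state!r}")
    return table[(state, event)]
```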

    "},{"location":"features/0482-coprotocol-protocol/#messages","title":"Messages","text":"

    Note: the discussion below is about how to launch and interact with any coprotocol. However, for concreteness we frame the walkthru in terms of a co-protocol that makes a payment. You can see an example definition of such a coprotocol in RFC 0478.

    The protocol consists of 5 messages: bind, attach, input, output, and detach, plus the adopted problem-report (for propagating errors).

    The protocol begins with a bind message sent from caller to called. This message basically says, \"I would like to interact with a new coprotocol instance having the following characteristics and the following mapping of identifiers to roles.\" It might look like this:

    {\n    \"@id\": \"4d116a88-1314-4af5-9b3c-848456b8b3dd\",\n    \"@type\": \"https://didcomm.org/coprotocol/1.0/bind\",\n    \"goal_code\": \"aries.buy.make-payment\",\n    \"co_binding_id\": null,\n    \"cast\": [\n        // Recipient of the bind message (id = null) should be payee.\n        {\"role\": \"payee\", \"id\": null},\n        // The payer will be did:peer:abc123.\n        {\"role\": \"payer\", \"id\": \"did:peer:abc123\" }\n    ]\n}\n

    When a called agent receives this message, it should discover what protocol implementations are available that match the criteria, and sort the candidates by preference. (Note that additional criteria can be added besides those shown here; see the Reference section.) This could involve enumerating not-yet-loaded plugins. It could also involve negotiating a protocol with the remote party (e.g., the DID playing the role of payer in the example above) by querying its capabilities using the Discover Features Protocol. Of course, the capabilities of remote parties could also be cached to avoid this delay, or they could be predicted without confirmation, if circumstances suggest that's the best tradeoff. Once the candidates are sorted by preference, the best match should be selected. The coprotocol is NOT launched, but it is awaiting launch. The called agent should now generate an attach message that acknowledges the request to bind and tells the caller how to interact:

    {\n    \"@id\": \"b3dd4d11-6a88-9b3c-4af5-848456b81314\",\n    \"@type\": \"https://didcomm.org/coprotocol/1.0/attach\",\n    \"~thread\": { \"pthid\": \"4d116a88-1314-4af5-9b3c-848456b8b3dd\"},\n    // This is the best match.\n    \"piuri\": \"https://didcomm.org/pay-with-venmo/1.3\"\n}\n

    The @id of the bind message (also the ~thread.pthid of the attach response) becomes a permanent identifier for the coprotocol binding. Both the caller and the coprotocol instance code can use it to lookup state as needed. The caller can now kick off/invoke the protocol with an input message:

    {\n    \"@id\": \"56b81314-6a88-9b3c-4af5-b3dd4d118484\",\n    \"@type\": \"https://didcomm.org/coprotocol/1.0/input\",\n    \"~thread\": { \"pthid\": \"4d116a88-1314-4af5-9b3c-848456b8b3dd\"},\n    \"interaction_point\": \"invoke\",\n    \"data\": [\n        \"amount\": 1.23,\n        \"currency\": \"INR\",\n        \"bill_of_sale\": {\n            // describes what's being purchased\n        }\n    ]\n}\n

    This allows the caller to invoke the bound coprotocol instance, and to pass it any number of named inputs.

    Later, when the coprotocol instance wants to emit an output from called to caller, it uses an output message (in this case, one matching the preauth interaction point declared in the sample coprotocol definition in RFC 0478):

    {\n    \"@id\": \"9b3c56b8-6a88-f513-4a14-4d118484b3dd\",\n    \"@type\": \"https://didcomm.org/coprotocol/1.0/output\",\n    \"~thread\": { \"pthid\": \"4d116a88-1314-4af5-9b3c-848456b8b3dd\"},\n    \"interaction_point\": \"preauth\",\n    \"data\": [\n        \"code\": \"6a884d11-13149b3c\",\n    ]\n}\n

    If a caller wants to detach, it uses a detach message. This leaves the coprotocol running on called; all outputs that it emits are sent to the bitbucket, and it advances on its normal state trajectory as if it were a wholly independent protocol:

    {\n    \"@id\": \"7a3c56b8-5b88-d413-4a14-ca118484b3ee\",\n    \"@type\": \"https://didcomm.org/coprotocol/1.0/detach\",\n    \"~thread\": { \"pthid\": \"4d116a88-1314-4af5-9b3c-848456b8b3dd\"}\n}\n

    A caller can re-attach by sending a new bind message; this time, the co_binding_id field should have the coprotocol binding id from the original attach message. Other fields in the message are optional; if present, they constitute a check that the binding in question has the properties the caller expects. The reattachment is confirmed by a new attach message.
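A re-attaching bind might be constructed like this (an illustrative sketch; the helper name is an assumption, and goal_code is included only as the optional consistency check described above):

```python
import uuid

def build_rebind(co_binding_id, expected_goal_code=None):
    """Build a bind message that re-attaches to an existing coprotocol
    binding by carrying the original binding id in co_binding_id
    (illustrative helper; field names follow the messages above)."""
    msg = {
        "@id": str(uuid.uuid4()),
        "@type": "https://didcomm.org/coprotocol/1.0/bind",
        "co_binding_id": co_binding_id,
    }
    if expected_goal_code is not None:
        # Optional check that the binding still matches the expected goal.
        msg["goal_code"] = expected_goal_code
    return msg

rebind = build_rebind("4d116a88-1314-4af5-9b3c-848456b8b3dd",
                      "aries.buy.make-payment")
```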

    "},{"location":"features/0482-coprotocol-protocol/#reference","title":"Reference","text":""},{"location":"features/0482-coprotocol-protocol/#bind","title":"bind","text":"
    {\n    \"@id\": \"4d116a88-1314-4af5-9b3c-848456b8b3dd\",\n    \"@type\": \"https://didcomm.org/coprotocol/1.0/bind\",\n    // I'd like to be bound to a coprotocol that achieves this goal.\n    \"goal_code\": \"aries.buy.make-payment\",\n    \"co_binding_id\": \n    // What is the intent about who plays which roles?\n    \"cast\": [\n        // Recipient of the bind message (id = null) should be payee.\n        {\"role\": \"payee\", \"id\": null},\n        // The payer will be did:peer:abc123.\n        {\"role\": \"payer\", \"id\": \"did:peer:abc123\" }\n    ],\n    // Optional and preferably omitted as it creates tight coupling;\n    // constrains bound coprotocol to just those that have a PIURI\n    // matching this wildcarded expression. \n    \"piuri_pat\": \"*/pay*\",\n    // If multiple matches are found, tells how to sort them to pick\n    // best match. \n    \"prefer\": [\n        // First prefer to bind a protocol that's often successful.\n        { \"attribute\": \"success_ratio\", \"direction\": \"d\" },\n        // Tie break by binding a protocol that's been run recently.\n        { \"attribute\": \"last_run_date\", \"direction\": \"d\" },\n        // Tie break by binding a protocol that's newer.\n        { \"attribute\": \"release_date\", \"direction\": \"d\" }\n        // Tie break by selecting protocols already running (false\n        // sorts before true).\n        { \"attribute\": \"running\", \"direction\": \"d\" }\n    ]\n}\n
    "},{"location":"features/0482-coprotocol-protocol/#attach","title":"attach","text":"
    {\n    \"@id\": \"b3dd4d11-6a88-9b3c-4af5-848456b81314\",\n    \"@type\": \"https://didcomm.org/coprotocol/1.0/attach\",\n    \"~thread\": { \"pthid\": \"4d116a88-1314-4af5-9b3c-848456b8b3dd\"},\n    // This is the best match.\n    \"piuri\": \"https://didcomm.org/pay-with-venmo/1.3\",\n    // Optional. Tells how long the caller has to take the next\n    // step binding will be held in an\n    // inactive state before being abandoned.\n    \"~timing.expires_time\": \"2020-06-23T18:42:07.124\"\n}\n
    "},{"location":"features/0482-coprotocol-protocol/#collateral","title":"Collateral","text":"

    This section is optional. It could be used to reference files, code, relevant standards, oracles, test suites, or other artifacts that would be useful to an implementer. In general, collateral should be checked in with the RFC.

    "},{"location":"features/0482-coprotocol-protocol/#drawbacks","title":"Drawbacks","text":"

    Why should we not do this?

    "},{"location":"features/0482-coprotocol-protocol/#rationale-and-alternatives","title":"Rationale and alternatives","text":""},{"location":"features/0482-coprotocol-protocol/#prior-art","title":"Prior art","text":"

    Discuss prior art, both the good and the bad, in relation to this proposal. A few examples of what this can include are:

    This section is intended to encourage you as an author to think about the lessons from other implementers, provide readers of your proposal with a fuller picture. If there is no prior art, that is fine - your ideas are interesting to us whether they are brand new or if they are an adaptation from other communities.

    Note that while precedent set by other communities is some motivation, it does not on its own motivate an enhancement proposal here. Please also take into consideration that Aries sometimes intentionally diverges from common identity features.

    "},{"location":"features/0482-coprotocol-protocol/#unresolved-questions","title":"Unresolved questions","text":""},{"location":"features/0482-coprotocol-protocol/#implementations","title":"Implementations","text":"

    NOTE: This section should remain in the RFC as is on first release. Remove this note and leave the rest of the text as is. Template text in all other sections should be replaced.

    The following lists the implementations (if any) of this RFC. Please do a pull request to add your implementation. If the implementation is open source, include a link to the repo or to the implementation within the repo. Please be consistent in the \"Name\" field so that a mechanical processing of the RFCs can generate a list of all RFCs supported by an Aries implementation.

    Implementation Notes may need to include a link to test results.

    Name / Link Implementation Notes"},{"location":"features/0496-transition-to-oob-and-did-exchange/","title":"Aries RFC 0496: Transition to the Out of Band and DID Exchange Protocols","text":""},{"location":"features/0496-transition-to-oob-and-did-exchange/#summary","title":"Summary","text":"

    The Aries community has agreed to transition from using the invitation messages in RFC 0160 Connections and RFC 0023 DID Exchange to using the plaintext invitation message in RFC 0434 Out of Band and from using RFC 0160 to RFC 0023 for establishing agent-to-agent connections. As well, the community has agreed to transition from using RFC 0056 Service Decorator to execute connection-less instances of the RFC 0037 Present Proof protocol to using the out-of-band invitation message.

    This RFC follows the guidance in RFC 0345 about community-coordinated updates to (try to) ensure that independently deployed, interoperable agents remain interoperable throughout this transition.

    The transition from the old to new messages will occur in four steps:

    "},{"location":"features/0496-transition-to-oob-and-did-exchange/#step-1-out-of-band-messages","title":"Step 1 Out-of-Band Messages","text":"

    The definition of Step 1 has been deliberately defined to limit the impact of the changes on existing code bases. An implementation may be able to do as little as convert an incoming out-of-band protocol message into its \"current format\" equivalent and process the message, thus deferring larger changes to the message handling code. The following examples show the equivalence between out-of-band and current messages and the constraints on the out-of-band invitations used in Step 2.

    "},{"location":"features/0496-transition-to-oob-and-did-exchange/#connection-invitationinline-diddoc-service-entry","title":"Connection Invitation\u2014Inline DIDDoc Service Entry","text":"

    The following is the out-of-band invitation message equivalent to an RFC 0160 Connections invitation message that may be used in Step 2.

    {\n  \"@type\": \"https://didcomm.org/out-of-band/1.0/invitation\",\n  \"@id\": \"1234-1234-1234-1234\",\n  \"label\": \"Faber College\",\n  \"goal_code\": \"establish-connection\",\n  \"goal\": \"To establish a connection\",\n  \"handshake_protocols\": [\"did:sov:BzCbsNYhMrjHiqZDTUASHg;spec/connections/1.0/invitation\"],\n  \"service\": [\n      {\n        \"id\": \"#inline\"\n        \"type\": \"did-communication\",\n        \"recipientKeys\": [\"did:key:z6MkpTHR8VNsBxYAAWHut2Geadd9jSwuBV8xRoAnwWsdvktH\"],\n        \"routingKeys\": [],\n        \"serviceEndpoint\": \"https://example.com:5000\"\n      }\n  ]\n}\n

    The constraints on this form of the out-of-band invitation sent during Step 2 are:

    This out-of-band message can be transformed to the following RFC 0160 Connection invitation message.

    {\n  \"@type\": \"did:sov:BzCbsNYhMrjHiqZDTUASHg;spec/connections/1.0/invitation\",\n  \"@id\": \"1234-1234-1234-1234\",\n  \"label\": \"Faber College\",\n  \"recipientKeys\": [\"6MkpTHR8VNsBxYAAWHut2Geadd9jSwuBV8xRoAnwWsdvktH\"],\n  \"serviceEndpoint\": \"https://example.com:5000\",\n  \"routingKeys\": []\n}\n

    Note the use of did:key in the out-of-band message and the \"naked\" public key in the connection message. Ideally, full support for did:key will be added during Step 1. However, if there is not time for an agent builder to add full support, the transformation can be accomplished using simple text transformations between the did:key format and the (only) public key format used in current Aries agents.
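A sketch of both transformations (the helper names are assumptions; the did:key handling is the simple text transformation described above, not a full multibase/multicodec decode, which full did:key support would require):

```python
def didkey_to_naked(key):
    """Strip the did:key prefix to obtain the naked public key form shown
    in the examples above (simple text transformation only)."""
    prefix = "did:key:z"
    return key[len(prefix):] if key.startswith(prefix) else key

def oob_to_connection_invitation(oob):
    """Transform an inline-service out-of-band invitation into its RFC 0160
    connection invitation equivalent (illustrative helper)."""
    svc = oob["service"][0]
    return {
        "@type": "did:sov:BzCbsNYhMrjHiqZDTUASHg;spec/connections/1.0/invitation",
        "@id": oob["@id"],
        "label": oob["label"],
        "recipientKeys": [didkey_to_naked(k) for k in svc["recipientKeys"]],
        "serviceEndpoint": svc["serviceEndpoint"],
        "routingKeys": [didkey_to_naked(k) for k in svc.get("routingKeys", [])],
    }
```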

    "},{"location":"features/0496-transition-to-oob-and-did-exchange/#connection-invitationdid-service-entry","title":"Connection Invitation\u2014DID Service Entry","text":"

    If the out-of-band message service item is a single DID, the resulting transformed message differs accordingly. For example, this out-of-band invitation message:

    {\n  \"@type\": \"https://didcomm.org/out-of-band/%VER/invitation\",\n  \"@id\": \"<id used for context as pthid>\",\n  \"label\": \"Faber College\",\n  \"goal_code\": \"issue-vc\",\n  \"goal\": \"To issue a Faber College Graduate credential\",\n  \"handshake_protocols\": [\"https://didcomm.org/connections/1.0\"],\n  \"service\": [\"did:sov:LjgpST2rjsoxYegQDRm7EL\"]\n}\n

    is transformed to the did form of the connection invitation, as shown here:

    {\n  \"@type\": \"did:sov:BzCbsNYhMrjHiqZDTUASHg;spec/connections/1.0/invitation\",\n  \"@id\": \"1234-1234-1234-1234\",\n  \"label\": \"Faber College\",\n  \"did\": [\"did:sov:LjgpST2rjsoxYegQDRm7EL\"]\n}\n
    "},{"location":"features/0496-transition-to-oob-and-did-exchange/#connection-less-present-proof-request","title":"Connection-less Present Proof Request","text":"

    The most common connection-less form being used in production is the request-presentation message from the RFC 0037 Present Proof protocol. The out-of-band invitation for that request looks like this, using the inline form of the service entry.

    {\n  \"@type\": \"https://didcomm.org/out-of-band/%VER/invitation\",\n  \"@id\": \"1234-1234-1234-1234\",\n  \"label\": \"Faber College\",\n  \"goal_code\": \"present-proof\",\n  \"goal\": \"Request proof of some claims from verified credentials\",\n  \"request~attach\": [\n    {\n        \"@id\": \"request-0\",\n        \"mime-type\": \"application/json\",\n        \"data\": {\n            \"json\": {\n                \"@type\": \"did:sov:BzCbsNYhMrjHiqZDTUASHg;spec/present-proof/1.0/request-presentation\",\n                \"@id\": \"<uuid-request>\",\n                \"comment\": \"some comment\",\n                \"request_presentations~attach\": [\n                    {\n                        \"@id\": \"libindy-request-presentation-0\",\n                        \"mime-type\": \"application/json\",\n                        \"data\":  {\n                            \"base64\": \"<bytes for base64>\"\n                        }\n                    }\n                ]\n            }\n        }\n    }\n  ],\n  \"service\": [\n      {\n        \"id\": \"#inline\",\n        \"type\": \"did-communication\",\n        \"recipientKeys\": [\"did:key:z6MkpTHR8VNsBxYAAWHut2Geadd9jSwuBV8xRoAnwWsdvktH\"],\n        \"routingKeys\": [],\n        \"serviceEndpoint\": \"https://example.com:5000\"\n      }\n  ]\n}\n

    The constraints on this form of the out-of-band invitation sent during Step 2 are:

    This out-of-band message can be transformed to the following RFC 0037 Present Proof request-presentation message with an RFC 0056 Service Decorator item.

    {\n    \"@type\": \"did:sov:BzCbsNYhMrjHiqZDTUASHg;spec/present-proof/1.0/request-presentation\",\n    \"@id\": \"1234-1234-1234-1234\",\n    \"comment\": \"Request proof of some claims from verified credentials\",\n    \"~service\": {\n        \"recipientKeys\": [\"6MkpTHR8VNsBxYAAWHut2Geadd9jSwuBV8xRoAnwWsdvktH\"],\n        \"routingKeys\": [],\n        \"serviceEndpoint\": \"https://example.com:5000\"\n    },\n    \"request_presentations~attach\": [\n        {\n            \"@id\": \"libindy-request-presentation-0\",\n            \"mime-type\": \"application/json\",\n            \"data\":  {\n                \"base64\": \"<bytes for base64>\"\n            }\n        }\n    ]\n}\n

    If the DID form of the out-of-band invitation message service entry was used, the ~service item would be comparably altered.
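The connection-less transform can be sketched as follows (the helper name is an assumption; the did:key handling is the simple text transformation described earlier in this RFC):

```python
def oob_to_connectionless_request(oob):
    """Unwrap the attached request-presentation from an out-of-band
    invitation and add an RFC 0056 ~service decorator built from the
    inline service entry (illustrative helper)."""
    def naked(key):
        # Simple did:key prefix strip, as described earlier in this RFC.
        prefix = "did:key:z"
        return key[len(prefix):] if key.startswith(prefix) else key

    request = dict(oob["request~attach"][0]["data"]["json"])
    svc = oob["service"][0]
    request["~service"] = {
        "recipientKeys": [naked(k) for k in svc["recipientKeys"]],
        "routingKeys": [naked(k) for k in svc.get("routingKeys", [])],
        "serviceEndpoint": svc["serviceEndpoint"],
    }
    return request
```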

    "},{"location":"features/0496-transition-to-oob-and-did-exchange/#url-shortener-handling","title":"URL Shortener Handling","text":"

    During Step 2, URL shortening, as defined in RFC 0434 Out of Band, must be supported.

    "},{"location":"features/0496-transition-to-oob-and-did-exchange/#between-step-triggers","title":"Between Step Triggers","text":"

    The community coordination triggers between the steps above will be as follows:

    "},{"location":"features/0496-transition-to-oob-and-did-exchange/#motivation","title":"Motivation","text":"

    To enable agent builders to independently update their code bases and deployed agents to support the out-of-band protocol while maintaining interoperability.

    "},{"location":"features/0496-transition-to-oob-and-did-exchange/#tutorial","title":"Tutorial","text":"

    The general mechanism for this type of transition is documented in RFC 0345 about community-coordinated updates.

    The specific sequence of events to make this particular transition is outlined in the summary section of this RFC.

    "},{"location":"features/0496-transition-to-oob-and-did-exchange/#reference","title":"Reference","text":"

    See the summary section of this RFC for the details of this transition.

    "},{"location":"features/0496-transition-to-oob-and-did-exchange/#drawbacks","title":"Drawbacks","text":"

    None identified.

    "},{"location":"features/0496-transition-to-oob-and-did-exchange/#rationale-and-alternatives","title":"Rationale and alternatives","text":"

    This approach balances the speed of adoption with the need for independent deployment and ongoing interoperability.

    "},{"location":"features/0496-transition-to-oob-and-did-exchange/#prior-art","title":"Prior art","text":"

    The approach outlined in RFC 0345 about community-coordinated updates is a well-known pattern for using deprecation to make breaking changes in an ecosystem. That said, this is the first attempt to use this approach in Aries. Adjustments to the transition plan will be made as needed, and RFC 0345 will be updated based on lessons learned in executing this plan.

    "},{"location":"features/0496-transition-to-oob-and-did-exchange/#unresolved-questions","title":"Unresolved questions","text":""},{"location":"features/0496-transition-to-oob-and-did-exchange/#implementations","title":"Implementations","text":"

    The following table lists the status of various agent code bases and deployments with respect to Step 1 of this transition. Agent builders MUST update this table as they complete steps of the transition.

    Name / Link Implementation Notes"},{"location":"features/0509-action-menu/","title":"Aries RFC 0509: Action Menu Protocol","text":""},{"location":"features/0509-action-menu/#summary","title":"Summary","text":"

    The action-menu protocol allows one Agent to present a set of hierarchical menus and actions to another user-facing Agent in a human-friendly way. The protocol allows limited service discovery as well as simple data entry. While less flexible than HTML forms or a chat bot, it should be relatively easy to implement and provides a user interface which can be adapted for various platforms, including mobile agents.

    "},{"location":"features/0509-action-menu/#motivation","title":"Motivation","text":"

    Discovery of a peer Agent's capabilities or service offerings is currently reliant on knowledge obtained out-of-band. There is no in-band DIDComm-supported protocol for querying a peer to obtain a human-friendly menu of their capabilities or service offerings. Whilst this protocol doesn't offer ledger-wide discovery capabilities, it will allow one User Agent, connected to another, to present a navigable menu and request offered services. The protocol also provides an interface definition language to define action menu display, selection and request submission.

    "},{"location":"features/0509-action-menu/#tutorial","title":"Tutorial","text":""},{"location":"features/0509-action-menu/#name-and-version","title":"Name and Version","text":"

    action-menu, version 1.0

    "},{"location":"features/0509-action-menu/#key-concepts","title":"Key Concepts","text":"

    The action-menu protocol requires an active DIDComm connection before it can proceed. One Agent behaves as a requester in the protocol whilst the other Agent represents a responder. Conceptually the responder presents a list of actions which can be initiated by the requester. Actions are contained within a menu structure. Individual Actions may result in traversal to another menu or initiation of other Aries protocols such as a presentation request, an introduction proposal, a credential offer, an acknowledgement, or a problem report.

    The protocol can be initiated by either the requester asking for the root menu or the responder sending an unsolicited root menu. The protocol ends when the requester issues a perform operation or an internal timeout on the responder causes it to discard menu context. At any time a requester can reset the protocol by requesting the root menu from a responder.

    Whilst the protocol is defined here as uni-directional (i.e. requester to responder), both Agents may support both requester and responder roles simultaneously. Such cases would result in two instances of the action-menu protocol operating in parallel.
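The protocol flow described above can be sketched as a small requester-side state machine. This is illustrative only (the RFC defines states and events in the tables that follow; the function and dict names here are not part of the RFC):

```python
# Minimal sketch of the requester-side state machine. State and event
# names follow the "States for Requester" table; an unsolicited root
# menu re-enters the machine from the null state via receive-menu.
REQUESTER_TRANSITIONS = {
    ("null", "send-menu-request"): "awaiting-root-menu",
    ("null", "receive-menu"): "preparing-selection",  # unsolicited menu
    ("awaiting-root-menu", "receive-menu"): "preparing-selection",
    ("preparing-selection", "send-menu-request"): "awaiting-root-menu",  # reset
    ("preparing-selection", "send-perform"): "done",
}

def advance(state: str, event: str) -> str:
    """Return the next requester state, or raise if the event is illegal."""
    try:
        return REQUESTER_TRANSITIONS[(state, event)]
    except KeyError:
        raise ValueError(f"event {event!r} not allowed in state {state!r}")

state = "null"
for event in ("send-menu-request", "receive-menu", "send-perform"):
    state = advance(state, event)
print(state)  # done
```

Note that "done" is terminal only for this protocol instance; a perform action may trigger a new menu, starting a fresh instance from the null state.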

    "},{"location":"features/0509-action-menu/#roles","title":"Roles","text":"

    There are two roles in the action-menu protocol: requester and responder.

    The requester asks the responder for menu definitions, presents them to a user, and initiates subsequent action items from the menu through further requests to the responder.

    The responder presents an initial menu definition containing actionable elements to a requester and then responds to subsequent action requests from the menu.

    "},{"location":"features/0509-action-menu/#states","title":"States","text":""},{"location":"features/0509-action-menu/#states-for-requester","title":"States for Requester","text":"State\u2003\u2003\u2003\u2003\u2003\u2003\u2003\u2003 Description null No menu has been requested or received awaiting-root-menu menu-request message has been sent and awaiting root menu response preparing-selection menu message has been received and a user selection is pending done perform message has been sent and protocol has finished. Perform actions can include requesting a new menu which will re-enter the state machine with the receive-menu event from the null state."},{"location":"features/0509-action-menu/#states-for-responder","title":"States for Responder","text":"State\u2003\u2003\u2003\u2003\u2003\u2003\u2003\u2003 Description null No menu has been requested or sent preparing-root-menu menu-request message has been received and preparing menu response for root menu awaiting-selection menu message has been sent and are awaiting a perform request done perform message has been received and protocol has finished. Perform actions can include requesting a new menu which will re-enter the state machine with the send-menu event from the null state."},{"location":"features/0509-action-menu/#messages","title":"Messages","text":""},{"location":"features/0509-action-menu/#menu","title":"menu","text":"

    A requester is expected to display only one active menu per connection when action menus are employed by the responder. A newly received menu is not expected to interrupt the user, but rather to be made available for the user to inspect possible actions related to the responder.

    {\n  \"@type\": \"https://didcomm.org/action-menu/%VER/menu\",\n  \"@id\": \"5678876542344\",\n  \"title\": \"Welcome to IIWBook\",\n  \"description\": \"IIWBook facilitates connections between attendees by verifying attendance and distributing connection invitations.\",\n  \"errormsg\": \"No IIWBook names were found.\",\n  \"options\": [\n    {\n      \"name\": \"obtain-email-cred\",\n      \"title\": \"Obtain a verified email credential\",\n      \"description\": \"Connect with the BC email verification service to obtain a verified email credential\"\n    },\n    {\n      \"name\": \"verify-email-cred\",\n      \"title\": \"Verify your participation\",\n      \"description\": \"Present a verified email credential to identify yourself\"\n    },\n    {\n      \"name\": \"search-introductions\",\n      \"title\": \"Search introductions\",\n      \"description\": \"Your email address must be verified to perform a search\",\n      \"disabled\": true\n    }\n  ]\n}\n
    "},{"location":"features/0509-action-menu/#description-of-attributes","title":"Description of attributes","text":""},{"location":"features/0509-action-menu/#quick-forms","title":"Quick forms","text":"

    Menu options may define a form property, which would direct the requester user to a client-generated form when the menu option is selected. The menu title should be shown at the top of the form, followed by the form description text if defined, followed by the list of form params in sequence. The form should also include a Cancel button to return to the menu, a Submit button (with an optional custom label defined by submit-label), and optionally a Clear button to reset the parameters to their default values.

    {\n  \"@type\": \"https://didcomm.org/action-menu/%VER/menu\",\n  \"@id\": \"5678876542347\",\n  \"~thread\": {\n    \"thid\": \"5678876542344\"\n  },\n  \"title\": \"Attendance Verified\",\n  \"description\": \"\",\n  \"options\": [\n    {\n      \"name\": \"submit-invitation\",\n      \"title\": \"Submit an invitation\",\n      \"description\": \"Send an invitation for IIWBook to share with another participant\"\n    },\n    {\n      \"name\": \"search-introductions\",\n      \"title\": \"Search introductions\",\n      \"form\": {\n        \"description\": \"Enter a participant name below to perform a search.\",\n        \"params\": [\n          {\n            \"name\": \"query\",\n            \"title\": \"Participant name\",\n            \"default\": \"\",\n            \"description\": \"\",\n            \"required\": true,\n            \"type\": \"text\"\n          }\n        ],\n        \"submit-label\": \"Search\"\n      }\n    }\n  ]\n}\n

    When the form is submitted, a perform message is generated containing values entered in the form. The form block may have an empty or missing params property in which case it acts as a simple confirmation dialog.

    Each entry in the params list must define a name and title. The description is optional (it should be displayed as help text below the field) and the type defaults to \u2018text\u2019 if not provided (only the \u2018text\u2019 type is supported at this time). Parameters default to required: true if not specified. Parameters may also define a default value (used when rendering or clearing the form).
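The defaulting rules above can be captured in a small helper. This is a hypothetical normalizer (not part of the RFC); it only applies the defaults the text describes:

```python
# Hypothetical helper applying the form-param defaulting rules:
# "type" defaults to "text", "required" defaults to True, and
# "default" supplies the value used when rendering or clearing the form.
def normalize_param(param: dict) -> dict:
    if "name" not in param or "title" not in param:
        raise ValueError("each form param must define a name and a title")
    return {
        "name": param["name"],
        "title": param["title"],
        "description": param.get("description", ""),
        "type": param.get("type", "text"),
        "required": param.get("required", True),
        "default": param.get("default", ""),
    }

p = normalize_param({"name": "query", "title": "Participant name"})
print(p["type"], p["required"])  # text True
```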

    "},{"location":"features/0509-action-menu/#menu-request","title":"menu-request","text":"

    In addition to menus being pushed by the responder, the root menu can be re-requested at any time by the requester sending a menu-request.

    {\n  \"@type\": \"https://didcomm.org/action-menu/%VER/menu-request\",\n  \"@id\": \"5678876542345\"\n}\n
    "},{"location":"features/0509-action-menu/#perform","title":"perform","text":"

    When the requester's user selects a menu option, a perform message is generated. It should be attached to the same thread as the menu. The active menu should close when an option is selected.

    The response to a perform message can be any type of agent message, including another menu message, a presentation request, an introduction proposal, a credential offer, an acknowledgement, or a problem report. Whatever the message type, it should normally reference the same message thread as the perform message.

    {\n  \"@type\": \"https://didcomm.org/action-menu/%VER/perform\",\n  \"@id\": \"5678876542346\",\n  \"~thread\": {\n    \"thid\": \"5678876542344\"\n  },\n  \"name\": \"obtain-email-cred\",\n  \"params\": {}\n}\n
    "},{"location":"features/0509-action-menu/#description-of-attributes_1","title":"Description of attributes","text":""},{"location":"features/0509-action-menu/#drawbacks","title":"Drawbacks","text":"

    N/A

    "},{"location":"features/0509-action-menu/#rationale-and-alternatives","title":"Rationale and alternatives","text":"

    N/A

    "},{"location":"features/0509-action-menu/#prior-art","title":"Prior art","text":"

    There are several existing RFCs that relate to the general problem of \"Discovery\"

    "},{"location":"features/0509-action-menu/#unresolved-questions","title":"Unresolved questions","text":""},{"location":"features/0509-action-menu/#implementations","title":"Implementations","text":"

    The following lists the implementations (if any) of this RFC. Please do a pull request to add your implementation. If the implementation is open source, include a link to the repo or to the implementation within the repo. Please be consistent in the \"Name\" field so that a mechanical processing of the RFCs can generate a list of all RFCs supported by an Aries implementation.

    Name / Link Implementation Notes Aries Cloud Agent - Python MISSING test results"},{"location":"features/0510-dif-pres-exch-attach/","title":"Aries RFC 0510: Presentation-Exchange Attachment format for requesting and presenting proofs","text":""},{"location":"features/0510-dif-pres-exch-attach/#summary","title":"Summary","text":"

    This RFC registers three attachment formats for use in the present-proof V2 protocol based on the Decentralized Identity Foundation's (DIF) Presentation Exchange specification (P-E). Two of these formats define containers for a presentation-exchange request object and another options object carrying additional parameters, while the third format is just a vessel for the final presentation_submission verifiable presentation transferred from the Prover to the Verifier.

    Presentation Exchange defines a data format capable of articulating a rich set of proof requirements from Verifiers, and also provides a means of describing the formats in which Provers must submit those proofs.

    A Verifier defines their requirements in a presentation_definition containing input_descriptors that describe the credential(s) the proof(s) must be derived from, as well as a rich set of operators that place constraints on those proofs (e.g. \"must be issued from issuer X\" or \"age over X\").
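To make the constraint mechanism concrete, here is an illustrative sketch only: it resolves simple dotted JSONPaths and applies a date "minimum" filter, as in the examples later in this RFC. Real implementations use a full JSONPath engine and JSON Schema validator; none of these helper names come from the spec:

```python
from datetime import datetime

def resolve(doc, path):
    """Resolve a simple dotted JSONPath like '$.credentialSubject.birth_date'."""
    node = doc
    for key in path.lstrip("$.").split("."):
        if not isinstance(node, dict) or key not in node:
            return None
        node = node[key]
    return node

def field_satisfied(doc, field):
    """Check one constraints.fields entry: first resolvable path wins."""
    for path in field["path"]:
        value = resolve(doc, path)
        if value is None:
            continue
        flt = field.get("filter")
        if flt and flt.get("type") == "date" and "minimum" in flt:
            fmt = "%Y-%m-%d"
            return datetime.strptime(value, fmt) >= datetime.strptime(flt["minimum"], fmt)
        return True
    return False

cred = {"credentialSubject": {"birth_date": "2001-01-01"}}
field = {"path": ["$.credentialSubject.birth_date"],
         "filter": {"type": "date", "minimum": "1999-5-16"}}
print(field_satisfied(cred, field))  # True
```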

    The Verifiable Presentation format of Presentation Submissions is used as opposed to OIDC tokens or CHAPI objects. For an alternative on how to tunnel OIDC messages over DIDComm, see HTTP-Over-DIDComm. CHAPI is an alternative transport to DIDComm.

    "},{"location":"features/0510-dif-pres-exch-attach/#motivation","title":"Motivation","text":"

    The Presentation Exchange specification (P-E) possesses a rich language for expressing a Verifier's criteria.

    P-E lends itself well to several transport mediums due to its limited scope as a data format, and is easily transported over DIDComm.

    It is furthermore desirable to make use of specifications developed in an open standards body.

    "},{"location":"features/0510-dif-pres-exch-attach/#tutorial","title":"Tutorial","text":"

    Complete examples of messages are provided in the reference section.

    The Verifier sends a request-presentation to the Prover containing a presentation_definition, along with a domain and challenge the Prover must sign over in the proof.

    The Prover can optionally respond to the Verifier's request-presentation with a propose-presentation message containing \"Input Descriptors\" that describe the proofs they can provide. The content of the attachment is just the input_descriptors attribute of the presentation_definition object.

    The Prover responds with a presentation message containing a presentation_submission.

    "},{"location":"features/0510-dif-pres-exch-attach/#reference","title":"Reference","text":""},{"location":"features/0510-dif-pres-exch-attach/#propose-presentation-attachment-format","title":"propose-presentation attachment format","text":"

    Format identifier: dif/presentation-exchange/definitions@v1.0

    "},{"location":"features/0510-dif-pres-exch-attach/#examples-propose-presentation","title":"Examples: propose-presentation","text":"Complete message example
    {\n    \"@type\": \"https://didcomm.org/present-proof/%VER/propose-presentation\",\n    \"@id\": \"fce30ed1-96f8-44c9-95cf-b274288009dc\",\n    \"comment\": \"some comment\",\n    \"formats\" : [{\n        \"attach_id\" : \"143c458d-1b1c-40c7-ab85-4d16808ddf0a\",\n        \"format\" : \"dif/presentation-exchange/definitions@v1.0\"\n    }],\n    \"proposal~attach\": [{\n        \"@id\": \"143c458d-1b1c-40c7-ab85-4d16808ddf0a\",\n        \"mime-type\": \"application/json\",\n        \"data\": {\n            \"json\": {\n                \"input_descriptors\": [{\n                    \"id\": \"citizenship_input\",\n                    \"name\": \"US Passport\",\n                    \"group\": [\"A\"],\n                    \"schema\": [{\n                        \"uri\": \"hub://did:foo:123/Collections/schema.us.gov/passport.json\"\n                    }],\n                    \"constraints\": {\n                        \"fields\": [{\n                            \"path\": [\"$.credentialSubject.birth_date\", \"$.vc.credentialSubject.birth_date\", \"$.birth_date\"],\n                            \"filter\": {\n                                \"type\": \"date\",\n                                \"minimum\": \"1999-5-16\"\n                            }\n                        }]\n                    }\n                }]\n            }\n        }\n    }]\n}\n
    "},{"location":"features/0510-dif-pres-exch-attach/#request-presentation-attachment-format","title":"request-presentation attachment format","text":"

    Format identifier: dif/presentation-exchange/definitions@v1.0

    Since the format identifier defined above is the same as the one used in the propose-presentation message, it's recommended to consider both the message @type and the format to accurately understand the contents of the attachment.
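The disambiguation suggested above amounts to routing on the pair of message @type and attachment format. A minimal sketch (the handler names are hypothetical):

```python
# Route an attachment to a handler keyed on (message type suffix, format).
# Handler names are illustrative placeholders, not part of the RFC.
HANDLERS = {
    ("propose-presentation", "dif/presentation-exchange/definitions@v1.0"): "handle_proposal",
    ("request-presentation", "dif/presentation-exchange/definitions@v1.0"): "handle_request",
    ("presentation", "dif/presentation-exchange/submission@v1.0"): "handle_submission",
}

def pick_handler(msg_type: str, fmt: str) -> str:
    key = (msg_type.rsplit("/", 1)[-1], fmt)  # strip the https://didcomm.org/... prefix
    if key not in HANDLERS:
        raise ValueError(f"unsupported combination: {key}")
    return HANDLERS[key]

print(pick_handler("https://didcomm.org/present-proof/2.0/request-presentation",
                   "dif/presentation-exchange/definitions@v1.0"))  # handle_request
```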

    The content of the attachment is a JSON object containing the Verifier's presentation definition and an options object with proof options:

    {\n    \"options\": {\n        \"challenge\": \"...\",\n        \"domain\": \"...\",\n    },\n    \"presentation_definition\": {\n        // presentation definition object\n    }\n}\n
    "},{"location":"features/0510-dif-pres-exch-attach/#the-options-object","title":"The options object","text":"

    options is a container of additional parameters required for the Prover to fulfill the Verifier's request.

    Available options are:

    Name Status Description challenge RECOMMENDED (for LD proofs) Random seed provided by the Verifier for LD Proofs. domain RECOMMENDED (for LD proofs) The operational domain of the requested LD proof."},{"location":"features/0510-dif-pres-exch-attach/#examples-request-presentation","title":"Examples: request-presentation","text":"Complete message example requesting a verifiable presentation with proof type Ed25519Signature2018
    {\n    \"@type\": \"https://didcomm.org/present-proof/%VER/request-presentation\",\n    \"@id\": \"0ac534c8-98ed-4fe3-8a41-3600775e1e92\",\n    \"comment\": \"some comment\",\n    \"formats\" : [{\n        \"attach_id\" : \"ed7d9b1f-9eed-4bde-b81c-3aa7485cf947\",\n        \"format\" : \"dif/presentation-exchange/definitions@v1.0\"\n    }],\n    \"request_presentations~attach\": [{\n        \"@id\": \"ed7d9b1f-9eed-4bde-b81c-3aa7485cf947\",\n        \"mime-type\": \"application/json\",\n        \"data\":  {\n            \"json\": {\n                \"options\": {\n                    \"challenge\": \"23516943-1d79-4ebd-8981-623f036365ef\",\n                    \"domain\": \"us.gov/DriversLicense\"\n                },\n                \"presentation_definition\": {\n                    \"input_descriptors\": [{\n                        \"id\": \"citizenship_input\",\n                        \"name\": \"US Passport\",\n                        \"group\": [\"A\"],\n                        \"schema\": [{\n                            \"uri\": \"hub://did:foo:123/Collections/schema.us.gov/passport.json\"\n                        }],\n                        \"constraints\": {\n                            \"fields\": [{\n                                \"path\": [\"$.credentialSubject.birth_date\", \"$.birth_date\"],\n                                \"filter\": {\n                                    \"type\": \"date\",\n                                    \"minimum\": \"1999-5-16\"\n                                }\n                            }]\n                        }\n                    }],\n                    \"format\": {\n                        \"ldp_vp\": {\n                            \"proof_type\": [\"Ed25519Signature2018\"]\n                        }\n                    }\n                }\n            }\n        }\n    }]\n}\n
    The same example but requesting the verifiable presentation with proof type BbsBlsSignatureProof2020 instead
    {\n    \"@type\": \"https://didcomm.org/present-proof/%VER/request-presentation\",\n    \"@id\": \"0ac534c8-98ed-4fe3-8a41-3600775e1e92\",\n    \"comment\": \"some comment\",\n    \"formats\" : [{\n        \"attach_id\" : \"ed7d9b1f-9eed-4bde-b81c-3aa7485cf947\",\n        \"format\" : \"dif/presentation-exchange/definitions@v1.0\"\n    }],\n    \"request_presentations~attach\": [{\n        \"@id\": \"ed7d9b1f-9eed-4bde-b81c-3aa7485cf947\",\n        \"mime-type\": \"application/json\",\n        \"data\":  {\n            \"json\": {\n                \"options\": {\n                    \"challenge\": \"23516943-1d79-4ebd-8981-623f036365ef\",\n                    \"domain\": \"us.gov/DriversLicense\"\n                },\n                \"presentation_definition\": {\n                    \"input_descriptors\": [{\n                        \"id\": \"citizenship_input\",\n                        \"name\": \"US Passport\",\n                        \"group\": [\"A\"],\n                        \"schema\": [{\n                            \"uri\": \"hub://did:foo:123/Collections/schema.us.gov/passport.json\"\n                        }],\n                        \"constraints\": {\n                            \"fields\": [{\n                                \"path\": [\"$.credentialSubject.birth_date\", \"$.vc.credentialSubject.birth_date\", \"$.birth_date\"],\n                                \"filter\": {\n                                    \"type\": \"date\",\n                                    \"minimum\": \"1999-5-16\"\n                                }\n                            }],\n                            \"limit_disclosure\": \"required\"\n                        }\n                    }],\n                    \"format\": {\n                        \"ldp_vc\": {\n                            \"proof_type\": [\"BbsBlsSignatureProof2020\"]\n                        }\n                    }\n                }\n            }\n        }\n    }]\n}\n
    "},{"location":"features/0510-dif-pres-exch-attach/#presentation-attachment-format","title":"presentation attachment format","text":"

    Format identifier: dif/presentation-exchange/submission@v1.0

    The content of the attachment is a Presentation Submission in a standard Verifiable Presentation format containing the proofs requested.

    "},{"location":"features/0510-dif-pres-exch-attach/#examples-presentation","title":"Examples: presentation","text":"Complete message example
    {\n    \"@type\": \"https://didcomm.org/present-proof/%VER/presentation\",\n    \"@id\": \"f1ca8245-ab2d-4d9c-8d7d-94bf310314ef\",\n    \"comment\": \"some comment\",\n    \"formats\" : [{\n        \"attach_id\" : \"2a3f1c4c-623c-44e6-b159-179048c51260\",\n        \"format\" : \"dif/presentation-exchange/submission@v1.0\"\n    }],\n    \"presentations~attach\": [{\n        \"@id\": \"2a3f1c4c-623c-44e6-b159-179048c51260\",\n        \"mime-type\": \"application/ld+json\",\n        \"data\": {\n            \"json\": {\n                \"@context\": [\n                    \"https://www.w3.org/2018/credentials/v1\",\n                    \"https://identity.foundation/presentation-exchange/submission/v1\"\n                ],\n                \"type\": [\n                    \"VerifiablePresentation\",\n                    \"PresentationSubmission\"\n                ],\n                \"presentation_submission\": {\n                    \"descriptor_map\": [{\n                        \"id\": \"citizenship_input\",\n                        \"path\": \"$.verifiableCredential.[0]\"\n                    }]\n                },\n                \"verifiableCredential\": [{\n                    \"@context\": \"https://www.w3.org/2018/credentials/v1\",\n                    \"id\": \"https://eu.com/claims/DriversLicense\",\n                    \"type\": [\"EUDriversLicense\"],\n                    \"issuer\": \"did:foo:123\",\n                    \"issuanceDate\": \"2010-01-01T19:73:24Z\",\n                    \"credentialSubject\": {\n                        \"id\": \"did:example:ebfeb1f712ebc6f1c276e12ec21\",\n                        \"license\": {\n                            \"number\": \"34DGE352\",\n                            \"dob\": \"07/13/80\"\n                        }\n                    },\n                    \"proof\": {\n                        \"type\": \"RsaSignature2018\",\n                        \"created\": \"2017-06-18T21:19:10Z\",\n                    
    \"proofPurpose\": \"assertionMethod\",\n                        \"verificationMethod\": \"https://example.edu/issuers/keys/1\",\n                        \"jws\": \"...\"\n                    }\n                }],\n                \"proof\": {\n                    \"type\": \"RsaSignature2018\",\n                    \"created\": \"2018-09-14T21:19:10Z\",\n                    \"proofPurpose\": \"authentication\",\n                    \"verificationMethod\": \"did:example:ebfeb1f712ebc6f1c276e12ec21#keys-1\",\n                    \"challenge\": \"1f44d55f-f161-4938-a659-f8026467f126\",\n                    \"domain\": \"4jt78h47fh47\",\n                    \"jws\": \"...\"\n                }\n            }\n        }\n    }]\n}\n
    "},{"location":"features/0510-dif-pres-exch-attach/#supported-features-of-presentation-exchange","title":"Supported Features of Presentation-Exchange","text":"

    Level of support for Presentation-Exchange features:

    Feature Notes presentation_definition.input_descriptors.id presentation_definition.input_descriptors.name presentation_definition.input_descriptors.purpose presentation_definition.input_descriptors.schema.uri URI for the credential's schema. presentation_definition.input_descriptors.constraints.fields.path Array of JSONPath string expressions as defined in section 8. REQUIRED as per the spec. presentation_definition.input_descriptors.constraints.fields.filter JSONSchema descriptor. presentation_definition.input_descriptors.constraints.limit_disclosure preferred or required as defined in the spec and as supported by the Holder and Verifier proof mechanisms.Note that the Holder MUST have credentials with cryptographic proof suites that are capable of selective disclosure in order to respond to a request with limit_disclosure: \"required\".See RFC0593 for appropriate crypto suites. presentation_definition.input_descriptors.constraints.is_holder preferred or required as defined in the spec.Note that this feature allows the Holder to present credentials with a different subject identifier than the DID used to establish the DIDComm connection with the Verifier. presentation_definition.format For JSONLD-based credentials: ldp_vc and ldp_vp. presentation_definition.format.proof_type For JSONLD-based credentials: Ed25519Signature2018, BbsBlsSignature2020, and JsonWebSignature2020. When specifying ldp_vc, BbsBlsSignatureProof2020 may also be used."},{"location":"features/0510-dif-pres-exch-attach/#proof-formats","title":"Proof Formats","text":""},{"location":"features/0510-dif-pres-exch-attach/#constraints","title":"Constraints","text":"

    Verifiable Presentations MUST be produced and consumed using the JSON-LD syntax.

    The proof types defined below MUST be registered in the Linked Data Cryptographic Suite Registry.

    The value of any credentialSubject.id in a credential MUST be a Decentralized Identifier (DID) conforming to the DID Syntax, if present. This allows the Holder to authenticate as the credential's subject if required by the Verifier (see the is_holder property above). The Holder authenticates as the credential's subject by attaching an LD Proof on the enclosing Verifiable Presentation.
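A loose sketch of that check (the regex only approximates the DID Core ABNF; a production implementation should follow the DID Syntax grammar exactly):

```python
import re

# Approximation of the DID Syntax (did:method-name:method-specific-id).
# This is a sketch, not the full DID Core ABNF.
DID_RE = re.compile(r"^did:[a-z0-9]+:[A-Za-z0-9.\-_:%]+$")

def subject_ids_are_dids(credential: dict) -> bool:
    """True if every credentialSubject.id present is a plausible DID."""
    subject = credential.get("credentialSubject", {})
    subjects = subject if isinstance(subject, list) else [subject]
    return all(
        DID_RE.match(s["id"]) is not None
        for s in subjects if "id" in s
    )

print(subject_ids_are_dids(
    {"credentialSubject": {"id": "did:example:ebfeb1f712ebc6f1c276e12ec21"}}
))  # True
```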

    "},{"location":"features/0510-dif-pres-exch-attach/#proof-formats-on-credentials","title":"Proof Formats on Credentials","text":"

    Aries agents implementing this RFC MUST support the formats outlined in RFC0593 for proofs on Verifiable Credentials.

    "},{"location":"features/0510-dif-pres-exch-attach/#proof-formats-on-presentations","title":"Proof Formats on Presentations","text":"

    Aries agents implementing this RFC MUST support the formats outlined below for proofs on Verifiable Presentations.

    "},{"location":"features/0510-dif-pres-exch-attach/#ed25519signature2018","title":"Ed25519Signature2018","text":"

    Specification.

    Request Parameters:

    Result:

    A Verifiable Presentation of type Presentation Submission containing the credentials requested under the verifiableCredential property and a proof property of type Ed25519Signature2018.

    Example
    {\n    \"@context\": [\n        \"https://www.w3.org/2018/credentials/v1\",\n        \"https://identity.foundation/presentation-exchange/submission/v1\"\n    ],\n    \"type\": [\n        \"VerifiablePresentation\",\n        \"PresentationSubmission\"\n    ],\n    \"presentation_submission\": {\n        \"descriptor_map\": [{\n           \"id\": \"citizenship_input\",\n           \"path\": \"$.verifiableCredential.[0]\"\n        }]\n    },\n    \"verifiableCredential\": [{\n        \"@context\": \"https://www.w3.org/2018/credentials/v1\",\n        \"id\": \"https://eu.com/claims/DriversLicense\",\n        \"type\": [\n            \"EUDriversLicense\"\n        ],\n        \"issuer\": \"did:foo:123\",\n        \"issuanceDate\": \"2010-01-01T19:73:24Z\",\n        \"credentialSubject\": {\n            \"id\": \"did:example:ebfeb1f712ebc6f1c276e12ec21\",\n            \"license\": {\n            \"number\": \"34DGE352\",\n            \"dob\": \"07/13/80\"\n          }\n        },\n        \"proof\": {\n            \"type\": \"RsaSignature2018\",\n            \"created\": \"2017-06-18T21:19:10Z\",\n            \"proofPurpose\": \"assertionMethod\",\n            \"verificationMethod\": \"https://example.edu/issuers/keys/1\",\n            \"jws\": \"...\"\n        }\n    }],\n    \"proof\": {\n      \"type\": \"Ed25519Signature2018\",\n      \"proofPurpose\": \"authentication\",\n      \"created\": \"2017-09-23T20:21:34Z\",\n      \"verificationMethod\": \"did:example:123456#key1\",\n      \"challenge\": \"2bbgh3dgjg2302d-d2b3gi423d42\",\n      \"domain\": \"example.org\",\n      \"jws\": \"eyJ0eXAiOiJK...gFWFOEjXk\"\n  }\n}\n
    "},{"location":"features/0510-dif-pres-exch-attach/#bbsblssignature2020","title":"BbsBlsSignature2020","text":"

    Specification.

    Associated RFC: RFC0646.

    Request Parameters:

    * presentation_definition.format: ldp_vp
    * presentation_definition.format.proof_type: BbsBlsSignature2020
    * options.challenge: (Optional) a random string value generated by the Verifier
    * options.domain: (Optional) a string value set by the Verifier

    Result:

    A Verifiable Presentation of type Presentation Submission containing the credentials requested under the verifiableCredential property and a proof property of type BbsBlsSignature2020.

    Example
    {\n    \"@context\": [\n        \"https://www.w3.org/2018/credentials/v1\",\n        \"https://w3id.org/security/v2\",\n        \"https://w3id.org/security/bbs/v1\",\n        \"https://identity.foundation/presentation-exchange/submission/v1\"\n    ],\n    \"type\": [\n        \"VerifiablePresentation\",\n        \"PresentationSubmission\"\n    ],\n    \"presentation_submission\": {\n        \"descriptor_map\": [{\n            \"id\": \"citizenship_input\",\n            \"path\": \"$.verifiableCredential.[0]\"\n        }]\n    },\n    \"verifiableCredential\": [{\n        \"@context\": \"https://www.w3.org/2018/credentials/v1\",\n        \"id\": \"https://eu.com/claims/DriversLicense\",\n        \"type\": [\"EUDriversLicense\"],\n        \"issuer\": \"did:foo:123\",\n        \"issuanceDate\": \"2010-01-01T19:73:24Z\",\n        \"credentialSubject\": {\n            \"id\": \"did:example:ebfeb1f712ebc6f1c276e12ec21\",\n            \"license\": {\n                \"number\": \"34DGE352\",\n                \"dob\": \"07/13/80\"\n            }\n       },\n       \"proof\": {\n           \"type\": \"BbsBlsSignatureProof2020\",\n           \"created\": \"2020-04-25\",\n           \"verificationMethod\": \"did:example:489398593#test\",\n           \"proofPurpose\": \"assertionMethod\",\n           \"signature\": \"F9uMuJzNBqj4j+HPTvWjUN/MNoe6KRH0818WkvDn2Sf7kg1P17YpNyzSB+CH57AWDFunU13tL8oTBDpBhODckelTxHIaEfG0rNmqmjK6DOs0/ObksTZh7W3OTbqfD2h4C/wqqMQHSWdXXnojwyFDEg==\"\n       }\n    }],\n    \"proof\": {\n        \"type\": \"BbsBlsSignature2020\",\n        \"created\": \"2020-04-25\",\n        \"verificationMethod\": \"did:example:489398593#test\",\n        \"proofPurpose\": \"authentication\",\n        \"proofValue\": \"F9uMuJzNBqj4j+HPTvWjUN/MNoe6KRH0818WkvDn2Sf7kg1P17YpNyzSB+CH57AWDFunU13tL8oTBDpBhODckelTxHIaEfG0rNmqmjK6DOs0/ObksTZh7W3OTbqfD2h4C/wqqMQHSWdXXnojwyFDEg==\",\n        \"requiredRevealStatements\": [ 4, 5 ]\n    }\n}\n

    Note: The above example is for illustrative purposes. In particular, note that whether a Verifier requests a proof_type of BbsBlsSignature2020 has no bearing on whether the Holder is required to present credentials with proofs of type BbsBlsSignatureProof2020. The choice of proof types on the credentials is constrained by a) the available types registered in RFC0593 and b) additional constraints placed on them due to other aspects of the proof requested by the Verifier, such as requiring limited disclosure with the limit_disclosure property. In such a case, a proof type of Ed25519Signature2018 in the credentials is not appropriate whereas BbsBlsSignatureProof2020 is capable of selective disclosure.

    "},{"location":"features/0510-dif-pres-exch-attach/#jsonwebsignature2020","title":"JsonWebSignature2020","text":"

    Specification.

    Request Parameters:

    Result:

    A Verifiable Presentation of type Presentation Submission containing the credentials requested under the verifiableCredential property and a proof property of type JsonWebSignature2020.

    Example
{\n    \"@context\": [\n        \"https://www.w3.org/2018/credentials/v1\",\n        \"https://identity.foundation/presentation-exchange/submission/v1\"\n    ],\n    \"type\": [\n        \"VerifiablePresentation\",\n        \"PresentationSubmission\"\n    ],\n    \"presentation_submission\": {\n        \"descriptor_map\": [{\n           \"id\": \"citizenship_input\",\n           \"path\": \"$.verifiableCredential.[0]\"\n        }]\n    },\n    \"verifiableCredential\": [{\n        \"@context\": \"https://www.w3.org/2018/credentials/v1\",\n        \"id\": \"https://eu.com/claims/DriversLicense\",\n        \"type\": [\n            \"EUDriversLicense\"\n        ],\n        \"issuer\": \"did:foo:123\",\n        \"issuanceDate\": \"2010-01-01T19:23:24Z\",\n        \"credentialSubject\": {\n            \"id\": \"did:example:ebfeb1f712ebc6f1c276e12ec21\",\n            \"license\": {\n            \"number\": \"34DGE352\",\n            \"dob\": \"07/13/80\"\n          }\n        },\n        \"proof\": {\n            \"type\": \"RsaSignature2018\",\n            \"created\": \"2017-06-18T21:19:10Z\",\n            \"proofPurpose\": \"assertionMethod\",\n            \"verificationMethod\": \"https://example.edu/issuers/keys/1\",\n            \"jws\": \"...\"\n        }\n    }],\n    \"proof\": {\n      \"type\": \"JsonWebSignature2020\",\n      \"proofPurpose\": \"authentication\",\n      \"created\": \"2017-09-23T20:21:34Z\",\n      \"verificationMethod\": \"did:example:123456#key1\",\n      \"challenge\": \"2bbgh3dgjg2302d-d2b3gi423d42\",\n      \"domain\": \"example.org\",\n      \"jws\": \"eyJ0eXAiOiJK...gFWFOEjXk\"\n  }\n}\n

    Available JOSE key types are:

kty | crv   | signature
----|-------|----------
EC  | P-256 | ES256
EC  | P-384 | ES384"},{"location":"features/0510-dif-pres-exch-attach/#drawbacks","title":"Drawbacks","text":"

    N/A

    "},{"location":"features/0510-dif-pres-exch-attach/#rationale-and-alternatives","title":"Rationale and alternatives","text":""},{"location":"features/0510-dif-pres-exch-attach/#prior-art","title":"Prior art","text":""},{"location":"features/0510-dif-pres-exch-attach/#unresolved-questions","title":"Unresolved questions","text":"

TODO: It is assumed the Verifier will initiate the protocol with a request-presentation message if they can transmit their presentation definition via an out-of-band channel (e.g. it is published on their website), possibly delivered via an Out-of-Band invitation (see RFC0434). For now, the Prover sends propose-presentation as a response to request-presentation.

    "},{"location":"features/0510-dif-pres-exch-attach/#implementations","title":"Implementations","text":"

    The following lists the implementations (if any) of this RFC. Please do a pull request to add your implementation. If the implementation is open source, include a link to the repo or to the implementation within the repo. Please be consistent in the \"Name\" field so that a mechanical processing of the RFCs can generate a list of all RFCs supported by an Aries implementation.

    Name / Link Implementation Notes"},{"location":"features/0511-dif-cred-manifest-attach/","title":"Aries RFC 0511: Credential-Manifest Attachment format for requesting and presenting credentials","text":""},{"location":"features/0511-dif-cred-manifest-attach/#summary","title":"Summary","text":"

This RFC registers an attachment format for use in the issue-credential V2 protocol based on the Decentralized Identity Foundation's (DIF) Credential Manifest specification. Credential Manifest describes a data format that specifies the inputs an Issuer requires for issuance of a credential. It relies on the closely-related Presentation Exchange specification to describe the required inputs and the format in which the Holder submits those inputs (a verifiable presentation).

    "},{"location":"features/0511-dif-cred-manifest-attach/#motivation","title":"Motivation","text":"

The Credential Manifest specification lends itself well to several transport media due to its limited scope as a data format, and is easily transported over DIDComm.

    It is furthermore desirable to make use of specifications developed in an open standards body.

    "},{"location":"features/0511-dif-cred-manifest-attach/#tutorial","title":"Tutorial","text":"

    Complete examples of messages are provided in the reference section.

Credential Manifests MAY be acquired by the Holder via out-of-band means, such as from a well-known location on the Issuer's website. This allows the Holder to initiate the issue-credential protocol with a request-message, provided they also possess the requisite challenge and domain values. If they do not possess these values, the Issuer MAY respond with an offer-credential message.

    Otherwise the Holder MAY initiate the protocol with propose-credential in order to discover the Issuer's requirements.

    "},{"location":"features/0511-dif-cred-manifest-attach/#reference","title":"Reference","text":""},{"location":"features/0511-dif-cred-manifest-attach/#propose-credential-attachment-format","title":"propose-credential attachment format","text":"

    Format identifier: dif/credential-manifest@v1.0

The content of the attachment is the minimal form of the Issuer's credential manifest describing the credential the Holder desires. It SHOULD contain the issuer and credential properties and no more.

    Complete message example:

    {\n    \"@id\": \"8639505e-4ec5-41b9-bb31-ac6a7b800fe7\",\n    \"@type\": \"https://didcomm.org/issue-credential/%VER/propose-credential\",\n    \"comment\": \"<some comment>\",\n    \"formats\" : [{\n        \"attach_id\": \"b45ca1bc-5b3c-4672-a300-84ddf6fbbaea\",\n        \"format\": \"dif/credential-manifest@v1.0\"\n    }],\n    \"filters~attach\": [{\n        \"@id\": \"b45ca1bc-5b3c-4672-a300-84ddf6fbbaea\",\n        \"mime-type\": \"application/json\",\n        \"data\": {\n            \"json\": {\n                \"issuer\": \"did:example:123\",\n                \"credential\": {\n                    \"name\": \"Washington State Class A Commercial Driver License\",\n                    \"schema\": \"ipfs:QmPXME1oRtoT627YKaDPDQ3PwA8tdP9rWuAAweLzqSwAWT\"\n                }\n            }\n        }\n    }]\n}\n
    "},{"location":"features/0511-dif-cred-manifest-attach/#offer-credential-attachment-format","title":"offer-credential attachment format","text":"

    Format identifier: dif/credential-manifest@v1.0

The content of the attachment is a JSON object containing the Issuer's credential manifest, a challenge, and a domain. All three attributes are REQUIRED.

    Example:

    {\n    \"@id\": \"dfedaad3-bd7a-4c33-8337-fa94a547c0e2\",\n    \"@type\": \"https://didcomm.org/issue-credential/%VER/offer-credential\",\n    \"comment\": \"<some comment>\",\n    \"formats\" : [{\n        \"attach_id\" : \"76cd0d94-8eb6-4ef3-a094-af45d81e9528\",\n        \"format\" : \"dif/credential-manifest@v1.0\"\n    }],\n    \"offers~attach\": [{\n        \"@id\": \"76cd0d94-8eb6-4ef3-a094-af45d81e9528\",\n        \"mime-type\": \"application/json\",\n        \"data\": {\n            \"json\": {\n                \"challenge\": \"1f44d55f-f161-4938-a659-f8026467f126\",\n                \"domain\": \"us.gov/DriverLicense\",\n                \"credential_manifest\": {\n                    // credential manifest object\n                }\n            }\n        }\n    }]\n}\n
    "},{"location":"features/0511-dif-cred-manifest-attach/#request-credential-attachment-format","title":"request-credential attachment format","text":"

    Format identifier: dif/credential-manifest@v1.0

The content of the attachment is a JSON object that describes the credential requested and provides the inputs the Issuer requires from the Holder before proceeding with issuance:

    {\n    \"credential-manifest\": {\n        \"issuer\": \"did:example:123\",\n        \"credential\": {\n            \"name\": \"Washington State Class A Commercial Driver License\",\n            \"schema\": \"ipfs:QmPXME1oRtoT627YKaDPDQ3PwA8tdP9rWuAAweLzqSwAWT\"\n        }\n    },\n    \"presentation-submission\": {\n        // presentation submission object\n    }\n}\n

    If the Issuer's credential manifest does not include the presentation_definition attribute, and the Holder has initiated the protocol with propose-credential, then this attachment MAY be omitted entirely as the message thread provides sufficient context for this request.

    Implementors are STRONGLY discouraged from allowing BOTH credential-manifest and presentation-submission. The latter requires the Holder's knowledge of the necessary challenge and domain, both of which SHOULD provide sufficient context to the Issuer as to which credential is being requested.

The following example shows a request-credential with a presentation submission. Notice that the presentation's proof includes the challenge and domain acquired either through out-of-band means or via an offer-credential message:

{\n    \"@id\": \"cf3a9301-6d4a-430f-ae02-b4a79ddc9706\",\n    \"@type\": \"https://didcomm.org/issue-credential/%VER/request-credential\",\n    \"comment\": \"<some comment>\",\n    \"formats\": [{\n        \"attach_id\": \"7cd11894-838a-45c0-a9ec-13e2d9d125a1\",\n        \"format\": \"dif/credential-manifest@v1.0\"\n    }],\n    \"requests~attach\": [{\n        \"@id\": \"7cd11894-838a-45c0-a9ec-13e2d9d125a1\",\n        \"mime-type\": \"application/json\",\n        \"data\": {\n            \"json\": {\n                \"presentation-submission\": {\n                    \"@context\": [\n                        \"https://www.w3.org/2018/credentials/v1\",\n                        \"https://identity.foundation/presentation-exchange/submission/v1\"\n                    ],\n                    \"type\": [\n                        \"VerifiablePresentation\",\n                        \"PresentationSubmission\"\n                    ],\n                    \"presentation_submission\": {\n                        \"descriptor_map\": [{\n                            \"id\": \"citizenship_input\",\n                            \"path\": \"$.verifiableCredential.[0]\"\n                        }]\n                    },\n                    \"verifiableCredential\": [{\n                        \"@context\": \"https://www.w3.org/2018/credentials/v1\",\n                        \"id\": \"https://us.gov/claims/Passport/723c62ab-f2f0-4976-9ec1-39992e20c9b1\",\n                        \"type\": [\"USPassport\"],\n                        \"issuer\": \"did:foo:123\",\n                        \"issuanceDate\": \"2010-01-01T19:23:24Z\",\n                        \"credentialSubject\": {\n                            \"id\": \"did:example:ebfeb1f712ebc6f1c276e12ec21\",\n                            \"birth_date\": \"2000-08-14\"\n                        },\n                        \"proof\": {\n                            \"type\": \"EcdsaSecp256k1VerificationKey2019\",\n                            \"created\": \"2017-06-18T21:19:10Z\",\n                            \"proofPurpose\": \"assertionMethod\",\n                            \"verificationMethod\": \"https://example.edu/issuers/keys/1\",\n                            \"jws\": \"...\"\n                        }\n                    }],\n                    \"proof\": {\n                        \"type\": \"RsaSignature2018\",\n                        \"created\": \"2018-09-14T21:19:10Z\",\n                        \"proofPurpose\": \"authentication\",\n                        \"verificationMethod\": \"did:example:ebfeb1f712ebc6f1c276e12ec21#keys-1\",\n                        \"challenge\": \"1f44d55f-f161-4938-a659-f8026467f126\",\n                        \"domain\": \"us.gov/DriverLicense\",\n                        \"jws\": \"...\"\n                    }\n                }\n            }\n        }\n    }]\n}\n
    "},{"location":"features/0511-dif-cred-manifest-attach/#issue-credential-attachment-format","title":"issue-credential attachment format","text":"

    This specification does not register any format identifier for the issue-credential message. The Issuer SHOULD set the format to the value that corresponds to the format the credentials are issued in.

    "},{"location":"features/0511-dif-cred-manifest-attach/#drawbacks","title":"Drawbacks","text":"

    N/A

    "},{"location":"features/0511-dif-cred-manifest-attach/#rationale-and-alternatives","title":"Rationale and alternatives","text":""},{"location":"features/0511-dif-cred-manifest-attach/#prior-art","title":"Prior art","text":"

    N/A

    "},{"location":"features/0511-dif-cred-manifest-attach/#unresolved-questions","title":"Unresolved questions","text":"

    N/A

    "},{"location":"features/0511-dif-cred-manifest-attach/#implementations","title":"Implementations","text":"

    The following lists the implementations (if any) of this RFC. Please do a pull request to add your implementation. If the implementation is open source, include a link to the repo or to the implementation within the repo. Please be consistent in the \"Name\" field so that a mechanical processing of the RFCs can generate a list of all RFCs supported by an Aries implementation.

    Name / Link Implementation Notes"},{"location":"features/0557-discover-features-v2/","title":"Aries RFC 0557: Discover Features Protocol v2.x","text":""},{"location":"features/0557-discover-features-v2/#summary","title":"Summary","text":"

Describes how one agent can query another to discover which features it supports, and to what extent.

    "},{"location":"features/0557-discover-features-v2/#motivation","title":"Motivation","text":"

Though some agents will support just one feature and will be statically configured to interact with just one other party, many exciting uses of agents are more dynamic and unpredictable. When Alice and Bob meet, they won't know in advance which features are supported by one another's agents. They need a way to find out.

    "},{"location":"features/0557-discover-features-v2/#tutorial","title":"Tutorial","text":"

This is version 2.0 of the Discover Features protocol; its fully qualified PIURI is:

    https://didcomm.org/discover-features/2.0\n

    This version is conceptually similar to version 1.0 of this protocol. It differs in its ability to ask about multiple feature types, and to ask multiple questions and receive multiple answers in a single round trip.

    "},{"location":"features/0557-discover-features-v2/#roles","title":"Roles","text":"

There are two roles in the discover-features protocol: requester and responder. Normally, the requester asks the responder about the features it supports, and the responder answers. Each role uses a single message type.

It is also possible to proactively disclose features; in this case a requester receives a response without asking for it. This may eliminate some chattiness in certain use cases (e.g., where two-way connectivity is limited).

    "},{"location":"features/0557-discover-features-v2/#states","title":"States","text":"

    The state progression is very simple. In the normal case, it is simple request-response; in a proactive disclosure, it's a simple one-way notification.

    "},{"location":"features/0557-discover-features-v2/#requester","title":"Requester","text":""},{"location":"features/0557-discover-features-v2/#responder","title":"Responder","text":""},{"location":"features/0557-discover-features-v2/#messages","title":"Messages","text":""},{"location":"features/0557-discover-features-v2/#queries-message-type","title":"queries Message Type","text":"

    A discover-features/queries message looks like this:

    {\n  \"@type\": \"https://didcomm.org/discover-features/2.0/queries\",\n  \"@id\": \"yWd8wfYzhmuXX3hmLNaV5bVbAjbWaU\",\n  \"queries\": [\n    { \"feature-type\": \"protocol\", \"match\": \"https://didcomm.org/tictactoe/1.*\" },\n    { \"feature-type\": \"goal-code\", \"match\": \"aries.*\" }\n  ]\n}\n

Queries messages contain one or more query objects in the queries array. Each query essentially says, \"Please tell me what features of type X you support, where the feature identifiers match this (potentially wildcarded) string.\" This particular example asks an agent if it supports any 1.x versions of the tictactoe protocol, and if it supports any goal codes that begin with \"aries.\".

    Implementations of this protocol must recognize the following values for feature-type: protocol, goal-code, gov-fw, didcomm-version, and decorator/header. (The concept known as decorator in DIDComm v1 approximately maps to the concept known as header in DIDComm v2. The two values should be considered synonyms and must both be recognized.) Additional values of feature-type may be standardized by raising a PR against this RFC that defines the new type and increments the minor protocol version number; non-standardized values are also valid, but there is no guarantee that their semantics will be recognized.

Identifiers for feature types vary. For protocols, identifiers are PIURIs. For goal codes, identifiers are goal code values. For governance frameworks, identifiers are the URIs where the framework is published (typically the data_uri field if machine-readable). For DIDComm versions, identifiers are the URIs where DIDComm versions are developed (https://github.com/hyperledger/aries-rfcs for V1 and https://github.com/decentralized-identity/didcomm-messaging for V2; see \"Detecting DIDComm Versions\" in RFC 0044 for more details).

The match field of a query descriptor may use the * wildcard. By itself, a match with just the wildcard says, \"I'm interested in anything you want to share with me.\" But usually, this wildcard will be used to match a prefix that's a little more specific, as in the example that matches any 1.x version.
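The wildcard semantics above can be sketched with a small matcher. This is a hypothetical helper (not part of any Aries codebase), assuming * is the only wildcard and every other character is matched literally:

```python
import re

def feature_matches(match: str, feature_id: str) -> bool:
    """Check a feature id against a query's (potentially wildcarded) match string.

    `*` matches any run of characters; everything else is literal.
    """
    pattern = "^" + ".*".join(re.escape(part) for part in match.split("*")) + "$"
    return re.match(pattern, feature_id) is not None

# The example query above matches any 1.x version of tictactoe:
feature_matches("https://didcomm.org/tictactoe/1.*",
                "https://didcomm.org/tictactoe/1.0")   # True
feature_matches("https://didcomm.org/tictactoe/1.*",
                "https://didcomm.org/tictactoe/2.0")   # False
```

A bare `*` match degenerates to the pattern `^.*$`, which accepts every feature id, matching the "anything you want to share" reading above.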

Any agent may send another agent this message type at any time. Implementers of agents that intend to support dynamic relationships and rich features are strongly encouraged to implement support for this message, as it is likely to be among the first messages exchanged with a stranger.

    "},{"location":"features/0557-discover-features-v2/#disclosures-message-type","title":"disclosures Message Type","text":"

    A discover-features/disclosures message looks like this:

    {\n  \"@type\": \"https://didcomm.org/discover-features/2.0/disclosures\",\n  \"~thread\": { \"thid\": \"yWd8wfYzhmuXX3hmLNaV5bVbAjbWaU\" },\n  \"disclosures\": [\n    {\n      \"feature-type\": \"protocol\",\n      \"id\": \"https://didcomm.org/tictactoe/1.0\",\n      \"roles\": [\"player\"]\n    },\n    {\n      \"feature-type\": \"goal-code\",\n      \"id\": \"aries.sell.goods.consumer\"\n    }\n  ]\n}\n

The disclosures field is a JSON array of zero or more disclosure objects that describe a feature. Each disclosure object has a feature-type field that contains data corresponding to feature-type in a query object, and an id field that unambiguously identifies a single item of that feature type. When the item is a protocol, the disclosure object may also contain a roles array that enumerates the roles the responding agent can play in the associated protocol. Future feature types may add additional optional fields, though no other fields are being standardized with this version of the RFC.

Disclosures messages say, \"Here are some features I support (that matched your queries).\"
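As a sketch of the responder side, a disclosures reply threads back to the originating queries message via ~thread.thid. The helper name below is hypothetical, and the filtering of supported features against the incoming queries is assumed to have happened already:

```python
def build_disclosures(thid, matched):
    """Assemble a disclosures message answering the queries message `thid`.

    `matched` is a list of disclosure objects (feature-type, id, optional
    roles) already filtered against the incoming queries.
    """
    return {
        "@type": "https://didcomm.org/discover-features/2.0/disclosures",
        "~thread": {"thid": thid},
        "disclosures": matched,
    }

# Rebuilds the example message shown above:
reply = build_disclosures(
    "yWd8wfYzhmuXX3hmLNaV5bVbAjbWaU",
    [{"feature-type": "protocol",
      "id": "https://didcomm.org/tictactoe/1.0",
      "roles": ["player"]}],
)
```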

    "},{"location":"features/0557-discover-features-v2/#sparse-disclosures","title":"Sparse Disclosures","text":"

    Disclosures do not have to contain exhaustive detail. For example, the following response omits the optional roles field but may be just as useful as one that includes it:

    {\n  \"@type\": \"https://didcomm.org/discover-features/2.0/disclosures\",\n  \"~thread\": { \"thid\": \"yWd8wfYzhmuXX3hmLNaV5bVbAjbWaU\" },\n  \"disclosures\": [\n    {\"feature-type\": \"protocol\", \"id\": \"https://didcomm.org/tictactoe/1.0\"}\n  ]\n}\n

Less detail probably suffices because agents do not need to know everything about one another's implementations in order to start an interaction; usually the flow will organically reveal what's needed. For example, the outcome message in the tictactoe protocol isn't needed until the end, and is optional anyway. Alice can start a tictactoe game with Bob and will eventually see whether he has the right idea about outcome messages.

The missing roles field in this disclosure does not say, \"I support no roles in this protocol.\" It says, \"I support the protocol but I'm providing no detail about specific roles.\" Similar logic applies to any other omitted fields.

An empty disclosures array does not say, \"I support no features that match your query.\" It says, \"I'm not disclosing to you that I support any features (that match your query).\" An agent might not tell another that it supports a feature for various reasons, including: the trust that it imputes to the other party based on cumulative interactions so far, whether it's in the middle of upgrading a plugin, whether it's currently under high load, and so forth. And responses to a discover-features query are not guaranteed to be true forever; agents can be upgraded or downgraded, although they probably won't churn in their feature profiles from moment to moment.

    "},{"location":"features/0557-discover-features-v2/#privacy-considerations","title":"Privacy Considerations","text":"

    Because the wildcards in a queries message can be very inclusive, the discover-features protocol could be used to mine information suitable for agent fingerprinting, in much the same way that browser fingerprinting works. This is antithetical to the ethos of our ecosystem, and represents bad behavior. Agents should use discover-features to answer legitimate questions, and not to build detailed profiles of one another. However, fingerprinting may be attempted anyway.

    For agents that want to maintain privacy, several best practices are recommended:

    "},{"location":"features/0557-discover-features-v2/#follow-selective-disclosure","title":"Follow selective disclosure.","text":"

Only reveal supported features based on trust in the relationship. Even if you support a protocol, you may not wish to use it in every relationship. Don't tell others about features you do not plan to use with them.

    Patterns are easier to see in larger data samples. However, a pattern of ultra-minimal data is also a problem, so use good judgment about how forthcoming to be.

    "},{"location":"features/0557-discover-features-v2/#vary-the-format-of-responses","title":"Vary the format of responses.","text":"

    Sometimes, you might prettify your agent plaintext message one way, sometimes another.

    "},{"location":"features/0557-discover-features-v2/#vary-the-order-of-items-in-the-disclosures-array","title":"Vary the order of items in the disclosures array.","text":"

    If more than one key matches a query, do not always return them in alphabetical order or version order. If you do return them in order, do not always return them in ascending order.

    "},{"location":"features/0557-discover-features-v2/#consider-adding-some-spurious-details","title":"Consider adding some spurious details.","text":"

If a query could match multiple features, then occasionally you might add some made-up features as matches. If a wildcard allows multiple versions of a protocol, then sometimes you might use some made-up versions. And sometimes not. (Doing this too aggressively might reveal your agent implementation, so use sparingly.)

    "},{"location":"features/0557-discover-features-v2/#vary-how-you-query-too","title":"Vary how you query, too.","text":"

    How you ask questions may also be fingerprintable.

    "},{"location":"features/0557-discover-features-v2/#implementations","title":"Implementations","text":"

    The following lists the implementations (if any) of this RFC. Please do a pull request to add your implementation. If the implementation is open source, include a link to the repo or to the implementation within the repo. Please be consistent in the \"Name\" field so that a mechanical processing of the RFCs can generate a list of all RFCs supported by an Aries implementation.

    Name / Link Implementation Notes"},{"location":"features/0587-encryption-envelope-v2/","title":"Aries RFC 0587: Encryption Envelope v2","text":""},{"location":"features/0587-encryption-envelope-v2/#summary","title":"Summary","text":"

    This RFC proposes that we support the definition of envelopes from DIDComm Messaging.

    "},{"location":"features/0587-encryption-envelope-v2/#motivation","title":"Motivation","text":"

This RFC defines ciphersuites for envelopes such that we can achieve better compatibility with DIDComm Messaging being specified at DIF. The ciphersuites defined in this RFC are a subset of the definitions in Aries RFC 0334-jwe-envelope.

    "},{"location":"features/0587-encryption-envelope-v2/#encryption-algorithms","title":"Encryption Algorithms","text":"

    DIDComm defines both the concept of authenticated sender encryption (aka Authcrypt) and anonymous sender encryption (aka Anoncrypt). In general, Aries RFCs and protocols use Authcrypt to exchange messages. In some limited scenarios (e.g., mediator and relays), an Aries RFC or protocol may define usage of Anoncrypt.

    ECDH-1PU draft 04 defines the JWE structure for Authcrypt. ECDH-ES from RFC 7518 defines the JWE structure for Anoncrypt. The following sections summarize the supported algorithms.

    "},{"location":"features/0587-encryption-envelope-v2/#curves","title":"Curves","text":"

    DIDComm Messaging (and this RFC) requires support for X25519, P-256, and P-384.

    "},{"location":"features/0587-encryption-envelope-v2/#content-encryption-algorithms","title":"Content Encryption Algorithms","text":"

    DIDComm Messaging (and this RFC) requires support for both XC20P and A256GCM for Anoncrypt only and A256CBC-HS512 for both Authcrypt and Anoncrypt.

    "},{"location":"features/0587-encryption-envelope-v2/#key-wrapping-algorithms","title":"Key Wrapping Algorithms","text":"

    DIDComm Messaging (and this RFC) requires support for ECDH-1PU+A256KW and ECDH-ES+A256KW.

    "},{"location":"features/0587-encryption-envelope-v2/#key-ids-kid-and-skid-headers-references-in-the-did-document","title":"Key IDs kid and skid headers references in the DID document","text":"

Keys used by DIDComm envelopes MUST be sourced from the DIDs exchanged between two agents. Specifically, both the sender's and the recipients' keys MUST be retrieved from the corresponding DID document's KeyAgreement verification section as per the DID Document Keys definition.

When Alice is preparing an envelope intended for Bob, the packing process should use a key from both her and Bob's DID documents' KeyAgreement sections.

    Assuming Alice has a DID Doc with the following KeyAgreement definition (source: DID V1 Example 17):

    {\n  \"@context\": \"https://www.w3.org/ns/did/v1\",\n  \"id\": \"did:example:123456789abcdefghi\",\n  ...\n  \"keyAgreement\": [\n    // this method can be used to perform key agreement as did:...fghi\n    \"did:example:123456789abcdefghi#keys-1\",\n    // this method is *only* approved for key agreement usage, it will not\n    // be used for any other verification relationship, so its full description is\n    // embedded here rather than using only a reference\n    {\n      \"id\": \"did:example:123#zC9ByQ8aJs8vrNXyDhPHHNNMSHPcaSgNpjjsBYpMMjsTdS\",\n      \"type\": \"X25519KeyAgreementKey2019\", // external (property value)\n      \"controller\": \"did:example:123\",\n      \"publicKeyBase58\": \"9hFgmPVfmBZwRvFEyniQDBkz9LmV7gDEqytWyGZLmDXE\"\n    }\n  ],\n  ...\n}\n

    The envelope packing process should set the skid header with value did:example:123456789abcdefghi#keys-1 in the envelope's protected headers and fetch the underlying key to execute ECDH-1PU key derivation for content key wrapping.

    Assuming she also has Bob's DID document which happens to include the following KeyAgreement section:

    {\n  \"@context\": \"https://www.w3.org/ns/did/v1\",\n  \"id\": \"did:example:jklmnopqrstuvwxyz1\",\n  ...\n  \"keyAgreement\": [\n    {\n      \"id\": \"did:example:jklmnopqrstuvwxyz1#key-1\",\n      \"type\": \"X25519KeyAgreementKey2019\", // external (property value)\n      \"controller\": \"did:example:jklmnopqrstuvwxyz1\",\n      \"publicKeyBase58\": \"9hFgmPVfmBZwRvFEyniQDBkz9LmV7gDEqytWyGZLmDXE\"\n    }\n  ],\n  ...\n}\n

There should be only one entry in the recipients of the envelope, representing Bob. The corresponding kid header for this recipient MUST have did:example:jklmnopqrstuvwxyz1#key-1 as its value. The packing process MUST extract the public key bytes found in publicKeyBase58 of Bob's DID Doc KeyAgreement[0] to execute the ECDH-1PU key derivation for content key wrapping.

When Bob receives the envelope, the unpacking process on his end MUST resolve the skid protected header value using Alice's DID doc's KeyAgreement[0] in order to extract her public key. In Alice's DID Doc example above, KeyAgreement[0] is a reference id; it MUST be resolved from the main VerificationMethod[] of Alice's DID document (not shown in the example).

    Once resolved, the unpacker will then execute ECDH-1PU key derivation using this key and Bob's own recipient key found in the envelope's recipients[0] to unwrap the content encryption key.
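The resolution step above (embedded method versus reference id) can be sketched as follows. This is a hypothetical helper assuming DID documents shaped like the examples in this section:

```python
def resolve_key_agreement(did_doc, index=0):
    """Resolve a keyAgreement entry to a full verification method.

    An entry is either an embedded verification method (a dict) or a
    reference id (a string) that must be looked up in the DID document's
    verificationMethod array.
    """
    entry = did_doc["keyAgreement"][index]
    if isinstance(entry, dict):
        return entry  # embedded method, use as-is
    for vm in did_doc.get("verificationMethod", []):
        if vm["id"] == entry:  # reference id, resolve it
            return vm
    raise KeyError("keyAgreement reference %r not found" % entry)

# Bob's document embeds the method directly:
bob_doc = {
    "id": "did:example:jklmnopqrstuvwxyz1",
    "keyAgreement": [{
        "id": "did:example:jklmnopqrstuvwxyz1#key-1",
        "type": "X25519KeyAgreementKey2019",
        "controller": "did:example:jklmnopqrstuvwxyz1",
        "publicKeyBase58": "9hFgmPVfmBZwRvFEyniQDBkz9LmV7gDEqytWyGZLmDXE",
    }],
}
kid = resolve_key_agreement(bob_doc)["id"]  # value for the recipient kid header
```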

    "},{"location":"features/0587-encryption-envelope-v2/#protecting-the-skid-header","title":"Protecting the skid header","text":"

When the skid cannot be revealed in a plain-text JWE header (to avoid potentially leaking the sender's key id), the skid MAY be encrypted for each recipient. In this case, instead of having a skid protected header in the envelope, each recipient MAY include an encrypted_skid header whose value is the encryption of the skid using the ECDH-ES Z computation of the epk and the recipient's key as the encryption key.

Applications that don't require this protection MAY use the skid protected header directly without any additional recipient headers.

Applications MUST use either the skid protected header or the encrypted_skid recipients header, but not both in the same envelope.

    "},{"location":"features/0587-encryption-envelope-v2/#ecdh-1pu-key-wrapping-and-common-protected-headers","title":"ECDH-1PU key wrapping and common protected headers","text":"

When using authcrypt, the 1PU draft mandates the use of the AES_CBC_HMAC_SHA family of content encryption algorithms. To meet this requirement, JWE messages MUST use common epk, apu, apv and alg headers for all recipients. They MUST be set in the protected headers JWE section.

As per this requirement, the JWE builder must first encrypt the payload and then use the resulting authentication tag as part of the key derivation process when wrapping the cek.

To meet this requirement, the above headers must be defined as follows:

* epk: generated once for all recipients. It MUST be of the same type and curve as all recipient keys, since the kdf with the sender key must be on the same curve.
  - Example: \"epk\": {\"kty\": \"EC\",\"crv\": \"P-256\",\"x\": \"BVDo69QfyXAdl6fbK6-QBYIsxv0CsNMtuDDVpMKgDYs\",\"y\": \"G6bdoO2xblPHrKsAhef1dumrc0sChwyg7yTtTcfygHA\"}
* apu: similar to skid, this is the producer (sender) identifier; it MUST contain the skid value base64 RawURL (no padding) encoded. Note: this is base64URL(skid value).
  - Example for the skid mentioned in an earlier section above: ZGlkOmV4YW1wbGU6MTIzNDU2Nzg5YWJjZGVmZ2hpI2tleXMtMQ
* apv: this represents the recipients' kid list. The list must be alphanumerically sorted; kid values are then concatenated with a . and the final result MUST be the base64 URL (no padding) encoding of the SHA256 hash of the concatenated list.
* alg: this is the key wrapping algorithm, i.e. ECDH-1PU+A256KW.
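The apu and apv derivations described above can be sketched in a few lines. This is a minimal illustration of the two encodings only, not a complete JWE implementation:

```python
import base64
import hashlib

def b64url(data: bytes) -> str:
    """base64 RawURL: URL-safe alphabet with padding stripped."""
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def compute_apu(skid: str) -> str:
    """apu is the base64 RawURL encoding of the raw skid value."""
    return b64url(skid.encode())

def compute_apv(recipient_kids) -> str:
    """apv is the base64 RawURL encoding of the SHA256 hash of the
    alphanumerically sorted, '.'-concatenated recipient kid list."""
    joined = ".".join(sorted(recipient_kids))
    return b64url(hashlib.sha256(joined.encode()).digest())

# Reproduces the apu example given above:
compute_apu("did:example:123456789abcdefghi#keys-1")
# -> "ZGlkOmV4YW1wbGU6MTIzNDU2Nzg5YWJjZGVmZ2hpI2tleXMtMQ"
```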

    A final note about the skid header: since the 1PU draft does not require this header, authcrypt implementations MUST be able to resolve the sender kid from the apu header if skid is not set.
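The apu/apv derivations described above can be sketched as follows. This is a minimal sketch: the recipient key ids are hypothetical, but the skid value is the one from the example earlier in this section.

```python
import base64
import hashlib

def b64url_nopad(data: bytes) -> str:
    # Base64 RawURL (no padding) encoding, as required for apu and apv
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode("ascii")

# skid value from the example earlier in this section
skid = "did:example:123456789abcdefghi#keys-1"
apu = b64url_nopad(skid.encode("utf-8"))  # base64URL(skid value)

# apv: sort the recipients' kids alphanumerically, join with ".",
# then base64URL-encode (no padding) the SHA-256 hash of the concatenation.
recipient_kids = ["did:example:bob#key-2", "did:example:alice#key-1"]  # hypothetical
concatenated = ".".join(sorted(recipient_kids))
apv = b64url_nopad(hashlib.sha256(concatenated.encode("utf-8")).digest())

print(apu)  # ZGlkOmV4YW1wbGU6MTIzNDU2Nzg5YWJjZGVmZ2hpI2tleXMtMQ
```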

    "},{"location":"features/0587-encryption-envelope-v2/#media-type","title":"Media Type","text":"

    The media type associated with this envelope is application/didcomm-encrypted+json. RFC 0044 provides a general discussion of media (aka mime) types.

    The media type of the envelope MUST be set in the typ property of the JWE and the media type of the payload MUST be set in the cty property of the JWE.

    For example, following the guidelines of RFC 0044, an encrypted envelope with a plaintext DIDComm v1 payload contains the typ property with the value application/didcomm-encrypted+json and cty property with the value application/json;flavor=didcomm-msg.

    As specified in IETF RFC 7515 and referenced in IETF RFC 7516, implementations MUST also support media types that omit application/. For example, didcomm-encrypted+json and application/didcomm-encrypted+json are treated as equivalent media types.

    As discussed in RFC 0434 and RFC 0067, the accept property is used to advertise supported media types. The accept property may contain an envelope media type or a combination of the envelope media type and the content media type. In cases where the content media type is not present, the expectation is that the appropriate content media type can be inferred. For example, application/didcomm-envelope-enc indicates both Envelope v1 and DIDComm v1 and application/didcomm-encrypted+json indicates both Envelope v2 and DIDComm v2. However, some agents may choose to support Envelope v2 with a DIDComm v1 message payload.

    In case the accept property is set in both the DID service block and the out-of-band message, the out-of-band property takes precedence.
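A minimal sketch of the two rules above: media types with and without the application/ prefix are treated as equivalent, and the out-of-band accept property takes precedence over the DID service block. The function names are illustrative, not from any Aries codebase.

```python
def normalize_media_type(media_type: str) -> str:
    # "didcomm-encrypted+json" and "application/didcomm-encrypted+json"
    # are treated as equivalent media types.
    if media_type.startswith("application/"):
        return media_type
    return "application/" + media_type

def effective_accept(service_accept, oob_accept):
    # When accept is set in both the DID service block and the
    # out-of-band message, the out-of-band property takes precedence.
    chosen = oob_accept if oob_accept else service_accept
    return [normalize_media_type(mt) for mt in chosen]

print(effective_accept(["didcomm-envelope-enc"], ["didcomm-encrypted+json"]))
```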

    "},{"location":"features/0587-encryption-envelope-v2/#didcomm-v2-transition","title":"DIDComm v2 Transition","text":"

    As this RFC specifies the same envelope format as will be used in DIDComm v2, an implementor should detect whether the payload contains DIDComm v1 content or a DIDComm v2 JWM. These payloads can be distinguished based on the cty property of the JWE.

    As discussed in RFC 0044, the content type for the plaintext DIDComm v1 message is application/json;flavor=didcomm-msg. When the cty property contains application/json;flavor=didcomm-msg, the payload is treated as DIDComm v1. DIDComm Messaging will specify appropriate media types for DIDComm v2. To advertise the combination of Envelope v2 with a DIDComm v1 message, the media type is application/didcomm-encrypted+json;cty=application/json.
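The cty-based distinction can be sketched as below. Treating every non-v1 value as DIDComm v2 is an assumption here, since DIDComm Messaging defines the v2 media types.

```python
DIDCOMM_V1_CTY = "application/json;flavor=didcomm-msg"

def payload_is_didcomm_v1(cty: str) -> bool:
    # Per RFC 0044, this cty value marks a plaintext DIDComm v1 payload;
    # any other value is assumed here to indicate a DIDComm v2 JWM.
    return cty == DIDCOMM_V1_CTY

print(payload_is_didcomm_v1("application/json;flavor=didcomm-msg"))  # True
```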

    "},{"location":"features/0587-encryption-envelope-v2/#additional-aip-impacts","title":"Additional AIP impacts","text":"

    Implementors supporting an AIP sub-target that contains this RFC (e.g., DIDCOMMV2PREP) MAY choose to only support Envelope v2 without support for the original envelope declared in RFC 0019. In these cases, the accept property will not contain didcomm/aip2;env=rfc19 media type.

    "},{"location":"features/0587-encryption-envelope-v2/#drawbacks","title":"Drawbacks","text":"

    The DIDComm v2 specification is a draft. However, the aries-framework-go project has already implemented the new envelope format.

    "},{"location":"features/0587-encryption-envelope-v2/#rationale-and-alternatives","title":"Rationale and alternatives","text":"

    Our approach for Authcrypt compliance is to use the NIST approved One-Pass Unified Model for ECDH scheme described in SP 800-56A Rev. 3. The JOSE version is defined as ECDH-1PU in this IETF draft.

    Aries agents currently use the envelope described in RFC0019. This envelope uses libsodium (NaCl) encryption/decryption, which is based on the Salsa20Poly1305 algorithm.

    "},{"location":"features/0587-encryption-envelope-v2/#prior-art","title":"Prior art","text":""},{"location":"features/0587-encryption-envelope-v2/#unresolved-questions","title":"Unresolved questions","text":""},{"location":"features/0587-encryption-envelope-v2/#implementations","title":"Implementations","text":"

    The following lists the implementations (if any) of this RFC. Please do a pull request to add your implementation. If the implementation is open source, include a link to the repo or to the implementation within the repo. Please be consistent in the \"Name\" field so that a mechanical processing of the RFCs can generate a list of all RFCs supported by an Aries implementation.

    Name / Link Implementation Notes"},{"location":"features/0592-indy-attachments/","title":"Aries RFC 0592: Indy Attachment Formats for Requesting and Presenting Credentials","text":""},{"location":"features/0592-indy-attachments/#summary","title":"Summary","text":"

    This RFC registers attachment formats used with Hyperledger Indy-style ZKP-oriented credentials in Issue Credential Protocol 2.0 and Present Proof Protocol 2.0. These formats are generally considered v2 formats, as they align with the \"anoncreds2\" work in Hyperledger Ursa and are a second generation implementation. They began to be used in production in 2018 and are in active deployment in 2021.

    "},{"location":"features/0592-indy-attachments/#motivation","title":"Motivation","text":"

    Allows Indy-style credentials to be used with credential-related protocols that take pluggable formats as payloads.

    "},{"location":"features/0592-indy-attachments/#reference","title":"Reference","text":""},{"location":"features/0592-indy-attachments/#cred-filter-format","title":"cred filter format","text":"

    The potential holder uses this format to propose criteria for a potential credential for the issuer to offer.

    The identifier for this format is hlindy/cred-filter@v2.0. It is a base64-encoded version of the data structure specifying zero or more criteria from the following (non-base64-encoded) structure:

    {\n    \"schema_issuer_did\": \"<schema_issuer_did>\",\n    \"schema_name\": \"<schema_name>\",\n    \"schema_version\": \"<schema_version>\",\n    \"schema_id\": \"<schema_identifier>\",\n    \"issuer_did\": \"<issuer_did>\",\n    \"cred_def_id\": \"<credential_definition_identifier>\"\n}\n

    The potential holder may not know, and need not specify, all of these criteria. For example, the holder might only know the schema name and the (credential) issuer DID. Recall that the potential holder may specify target attribute values and MIME types in the credential preview.

    For example, the JSON (non-base64-encoded) structure might look like this:

    {\n    \"schema_issuer_did\": \"did:sov:4RW6QK2HZhHxa2tg7t1jqt\",\n    \"schema_name\": \"bcgov-mines-act-permit.bcgov-mines-permitting\",\n    \"issuer_did\": \"did:sov:4RW6QK2HZhHxa2tg7t1jqt\"\n}\n

    A complete propose-credential message from the Issue Credential protocol 2.0 embeds this format at /filters~attach/data/base64:

    {\n    \"@id\": \"<uuid of propose message>\",\n    \"@type\": \"https://didcomm.org/issue-credential/%VER/propose-credential\",\n    \"comment\": \"<some comment>\",\n    \"formats\" : [{\n        \"attach_id\": \"<attach@id value>\",\n        \"format\": \"hlindy/cred-filter@v2.0\"\n    }],\n    \"filters~attach\": [{\n        \"@id\": \"<attach@id value>\",\n        \"mime-type\": \"application/json\",\n        \"data\": {\n            \"base64\": \"ewogICAgInNjaGVtYV9pc3N1ZXJfZGlkIjogImRpZDpzb3Y... (clipped)... LMkhaaEh4YTJ0Zzd0MWpxdCIKfQ==\"\n        }\n    }]\n}\n
    "},{"location":"features/0592-indy-attachments/#cred-abstract-format","title":"cred abstract format","text":"

    This format is used to clarify the structure and semantics (but not the concrete data values) of a potential credential, in offers sent from issuer to potential holder.

    The identifier for this format is hlindy/cred-abstract@v2.0. It is a base64-encoded version of the data returned from indy_issuer_create_credential_offer().

    The JSON (non-base64-encoded) structure might look like this:

    {\n    \"schema_id\": \"4RW6QK2HZhHxa2tg7t1jqt:2:bcgov-mines-act-permit.bcgov-mines-permitting:0.2.0\",\n    \"cred_def_id\": \"4RW6QK2HZhHxa2tg7t1jqt:3:CL:58160:default\",\n    \"nonce\": \"57a62300-fbe2-4f08-ace0-6c329c5210e1\",\n    \"key_correctness_proof\" : <key_correctness_proof>\n}\n

    A complete offer-credential message from the Issue Credential protocol 2.0 embeds this format at /offers~attach/data/base64:

    {\n    \"@type\": \"https://didcomm.org/issue-credential/%VER/offer-credential\",\n    \"@id\": \"<uuid of offer message>\",\n    \"replacement_id\": \"<issuer unique id>\",\n    \"comment\": \"<some comment>\",\n    \"credential_preview\": <json-ld object>,\n    \"formats\" : [\n        {\n            \"attach_id\" : \"<attach@id value>\",\n            \"format\": \"hlindy/cred-abstract@v2.0\"\n        }\n    ],\n    \"offers~attach\": [\n        {\n            \"@id\": \"<attach@id value>\",\n            \"mime-type\": \"application/json\",\n            \"data\": {\n                \"base64\": \"ewogICAgInNjaGVtYV9pZCI6ICI0Ulc2UUsySFpoS... (clipped)... jb3JyZWN0bmVzc19wcm9vZj4KfQ==\"\n            }\n        }\n    ]\n}\n

    The same structure can be embedded at /offers~attach/data/base64 in an offer-credential message.

    "},{"location":"features/0592-indy-attachments/#cred-request-format","title":"cred request format","text":"

    This format is used to formally request a credential. It differs from the credential abstract above in that it contains a cryptographic commitment to a link secret; an issuer can therefore use it to bind a concrete instance of an issued credential to the appropriate holder. (In contrast, the credential abstract describes the schema and cred def, but not enough information to actually issue to a specific holder.)

    The identifier for this format is hlindy/cred-req@v2.0. It is a base64-encoded version of the data returned from indy_prover_create_credential_req().

    The JSON (non-base64-encoded) structure might look like this:

    {\n    \"prover_did\" : \"did:sov:abcxyz123\",\n    \"cred_def_id\" : \"4RW6QK2HZhHxa2tg7t1jqt:3:CL:58160:default\",\n    // Fields below can depend on Cred Def type\n    \"blinded_ms\" : <blinded_master_secret>,\n    \"blinded_ms_correctness_proof\" : <blinded_ms_correctness_proof>,\n    \"nonce\": \"fbe22300-57a6-4f08-ace0-9c5210e16c32\"\n}\n

    A complete request-credential message from the Issue Credential protocol 2.0 embeds this format at /requests~attach/data/base64:

    {\n    \"@id\": \"cf3a9301-6d4a-430f-ae02-b4a79ddc9706\",\n    \"@type\": \"https://didcomm.org/issue-credential/%VER/request-credential\",\n    \"comment\": \"<some comment>\",\n    \"formats\": [{\n        \"attach_id\": \"7cd11894-838a-45c0-a9ec-13e2d9d125a1\",\n        \"format\": \"hlindy/cred-req@v2.0\"\n    }],\n    \"requests~attach\": [{\n        \"@id\": \"7cd11894-838a-45c0-a9ec-13e2d9d125a1\",\n        \"mime-type\": \"application/json\",\n        \"data\": {\n            \"base64\": \"ewogICAgInByb3Zlcl9kaWQiIDogImRpZDpzb3Y6YWJjeHl.. (clipped)... DAtNTdhNi00ZjA4LWFjZTAtOWM1MjEwZTE2YzMyIgp9\"\n        }\n    }]\n}\n
    "},{"location":"features/0592-indy-attachments/#credential-format","title":"credential format","text":"

    A concrete, issued Indy credential may be transmitted over many protocols, but is specifically expected as the final message in Issuance Protocol 2.0. The identifier for its format is hlindy/cred@v2.0.

    This is a credential that's designed to be held but not shared directly. It is stored in the holder's wallet and used to derive a novel ZKP or W3C-compatible verifiable presentation just in time for each sharing of credential material.

    The encoded values of the credential MUST follow the encoding algorithm as described in Encoding Claims.

    This is the format emitted by libindy's indy_issuer_create_credential() function. It is JSON-based and might look like this:

    {\n    \"schema_id\": \"4RW6QK2HZhHxa2tg7t1jqt:2:bcgov-mines-act-permit.bcgov-mines-permitting:0.2.0\",\n    \"cred_def_id\": \"4RW6QK2HZhHxa2tg7t1jqt:3:CL:58160:default\",\n    \"rev_reg_id\", \"EyN78DDGHyok8qw6W96UBY:4:EyN78DDGHyok8qw6W96UBY:3:CL:56389:CardossierOrgPerson:CL_ACCUM:1-1000\",\n    \"values\": {\n        \"attr1\" : {\"raw\": \"value1\", \"encoded\": \"value1_as_int\" },\n        \"attr2\" : {\"raw\": \"value2\", \"encoded\": \"value2_as_int\" }\n    },\n    // Fields below can depend on Cred Def type\n    \"signature\": <signature>,\n    \"signature_correctness_proof\": <signature_correctness_proof>\n    \"rev_reg\": <revocation registry state>\n    \"witness\": <witness>\n}\n

    An exhaustive description of the format is out of scope here; it is more completely documented in white papers, source code, and other Indy materials.

    "},{"location":"features/0592-indy-attachments/#proof-request-format","title":"proof request format","text":"

    This format is used to formally request a verifiable presentation (proof) derived from an Indy-style ZKP-oriented credential. It can also be used by a holder to propose a presentation.

    The identifier for this format is hlindy/proof-req@v2.0. It is a base64-encoded version of the data returned from indy_prover_search_credentials_for_proof_req().

    Here is a sample proof request that embodies the following: \"Using a government-issued ID, disclose the credential holder\u2019s name and height, hide the credential holder\u2019s sex, get them to self-attest their phone number, and prove that their age is at least 18\":

    {\n    \"nonce\": \u201c2934823091873049823740198370q23984710239847\u201d, \n    \"name\":\"proof_req_1\",\n    \"version\":\"0.1\",\n    \"requested_attributes\":{\n        \"attr1_referent\": {\"name\":\"sex\"},\n        \"attr2_referent\": {\"name\":\"phone\"},\n        \"attr3_referent\": {\"names\": [\"name\", \"height\"], \"restrictions\": <restrictions specifying government-issued ID>}\n    },\n    \"requested_predicates\":{\n        \"predicate1_referent\":{\"name\":\"age\",\"p_type\":\">=\",\"p_value\":18}\n    }\n}\n
    "},{"location":"features/0592-indy-attachments/#proof-format","title":"proof format","text":"

    This is the format of an Indy-style ZKP. It plays the same role as a W3C-style verifiable presentation (VP) and can be mapped to one.

    The raw values encoded in the presentation SHOULD be verified against the encoded values using the encoding algorithm as described below in Encoding Claims.

    The identifier for this format is hlindy/proof@v2.0. It is a version of the (JSON-based) data emitted by libindy's indy_prover_create_proof() function. A proof that responds to the previous proof request sample looks like this:

    {\n  \"proof\":{\n    \"proofs\":[\n      {\n        \"primary_proof\":{\n          \"eq_proof\":{\n            \"revealed_attrs\":{\n              \"height\":\"175\",\n              \"name\":\"1139481716457488690172217916278103335\"\n            },\n            \"a_prime\":\"5817705...096889\",\n            \"e\":\"1270938...756380\",\n            \"v\":\"1138...39984052\",\n            \"m\":{\n              \"master_secret\":\"375275...0939395\",\n              \"sex\":\"3511483...897083518\",\n              \"age\":\"13430...63372249\"\n            },\n            \"m2\":\"1444497...2278453\"\n          },\n          \"ge_proofs\":[\n            {\n              \"u\":{\n                \"1\":\"152500...3999140\",\n                \"2\":\"147748...2005753\",\n                \"0\":\"8806...77968\",\n                \"3\":\"10403...8538260\"\n              },\n              \"r\":{\n                \"2\":\"15706...781609\",\n                \"3\":\"343...4378642\",\n                \"0\":\"59003...702140\",\n                \"DELTA\":\"9607...28201020\",\n                \"1\":\"180097...96766\"\n              },\n              \"mj\":\"134300...249\",\n              \"alpha\":\"827896...52261\",\n              \"t\":{\n                \"2\":\"7132...47794\",\n                \"3\":\"38051...27372\",\n                \"DELTA\":\"68025...508719\",\n                \"1\":\"32924...41082\",\n                \"0\":\"74906...07857\"\n              },\n              \"predicate\":{\n                \"attr_name\":\"age\",\n                \"p_type\":\"GE\",\n                \"value\":18\n              }\n            }\n          ]\n        },\n        \"non_revoc_proof\":null\n      }\n    ],\n    \"aggregated_proof\":{\n      \"c_hash\":\"108743...92564\",\n      \"c_list\":[ 6 arrays of 257 numbers between 0 and 255]\n    }\n  },\n  \"requested_proof\":{\n    \"revealed_attrs\":{\n      \"attr1_referent\":{\n        \"sub_proof_index\":0,\n        
\"raw\":\"Alex\",\n        \"encoded\":\"1139481716457488690172217916278103335\"\n      }\n    },\n    \"revealed_attr_groups\":{\n      \"attr4_referent\":{\n        \"sub_proof_index\":0,\n        \"values\":{\n          \"name\":{\n            \"raw\":\"Alex\",\n            \"encoded\":\"1139481716457488690172217916278103335\"\n          },\n          \"height\":{\n            \"raw\":\"175\",\n            \"encoded\":\"175\"\n          }\n        }\n      }\n    },\n    \"self_attested_attrs\":{\n      \"attr3_referent\":\"8-800-300\"\n    },\n    \"unrevealed_attrs\":{\n      \"attr2_referent\":{\n        \"sub_proof_index\":0\n      }\n    },\n    \"predicates\":{\n      \"predicate1_referent\":{\n        \"sub_proof_index\":0\n      }\n    }\n  },\n  \"identifiers\":[\n    {\n      \"schema_id\":\"NcYxiDXkpYi6ov5FcYDi1e:2:gvt:1.0\",\n      \"cred_def_id\":\"NcYxi...cYDi1e:2:gvt:1.0:TAG_1\",\n      \"rev_reg_id\":null,\n      \"timestamp\":null\n    }\n  ]\n}\n
    "},{"location":"features/0592-indy-attachments/#unrevealed-attributes","title":"Unrevealed Attributes","text":"

    AnonCreds supports a holder responding to a proof request with some of the requested claims included in an unrevealed_attrs array, as seen in the example above, with attr2_referent. Assuming the rest of the proof is valid, AnonCreds will indicate that a proof with unrevealed attributes has been successfully verified. It is the responsibility of the verifier to determine if the purpose of the verification has been met if some of the attributes are not revealed.

    There are at least a few valid use cases for this approach:

    "},{"location":"features/0592-indy-attachments/#encoding-claims","title":"Encoding Claims","text":"

    Claims in AnonCreds-based verifiable credentials are put into the credential in two forms, raw and encoded. raw is the actual data value, and encoded is the (possibly derived) integer value that is used in presentations. At this time, AnonCreds does not take an opinion on the method used for encoding the raw value.

    AnonCreds issuers and verifiers must agree on the encoding method so that the verifier can check that the raw value returned in a presentation corresponds to the proven encoded value. The following is the encoding algorithm that MUST be used by Issuers when creating credentials and SHOULD be verified by Verifiers receiving presentations:
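A sketch of the commonly used Aries encoding rule, stated here as an assumption from the spec text: values that are (string representations of) 32-bit integers pass through unchanged, and all other values are encoded as the big-endian integer form of the SHA-256 hash of their UTF-8 string representation.

```python
import hashlib

I32_MIN, I32_MAX = -(2**31), 2**31 - 1

def encode_claim(raw) -> str:
    # Values that are (string representations of) 32-bit integers pass through
    s = str(raw)
    try:
        if s == str(int(s)) and I32_MIN <= int(s) <= I32_MAX:
            return s
    except ValueError:
        pass
    # Everything else: SHA-256 of the UTF-8 string, read as a big-endian integer
    digest = hashlib.sha256(s.encode("utf-8")).digest()
    return str(int.from_bytes(digest, "big"))

print(encode_claim("175"))   # "175" passes through unchanged
print(encode_claim("Alex"))  # a large integer string derived from the hash
```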

    An example implementation in Python can be found here.

    A gist of test value pairs can be found here.

    "},{"location":"features/0592-indy-attachments/#notes-on-encoding-claims","title":"Notes on Encoding Claims","text":""},{"location":"features/0592-indy-attachments/#implementations","title":"Implementations","text":"

    The following lists the implementations (if any) of this RFC. Please do a pull request to add your implementation. If the implementation is open source, include a link to the repo or to the implementation within the repo. Please be consistent in the \"Name\" field so that a mechanical processing of the RFCs can generate a list of all RFCs supported by an Aries implementation.

    Name / Link Implementation Notes"},{"location":"features/0593-json-ld-cred-attach/","title":"Aries RFC 0593: JSON-LD Credential Attachment format for requesting and issuing credentials","text":""},{"location":"features/0593-json-ld-cred-attach/#summary","title":"Summary","text":"

    This RFC registers an attachment format for use in the issue-credential V2 protocol based on JSON-LD credentials with Linked Data Proofs from the VC Data Model.

    It defines a minimal set of parameters needed to create a common understanding of the verifiable credential to issue. It is based on version 1.0 of the Verifiable Credentials Data Model, which has been a W3C recommendation since 19 November 2019.

    "},{"location":"features/0593-json-ld-cred-attach/#motivation","title":"Motivation","text":"

    The Issue Credential protocol needs an attachment format to be able to exchange JSON-LD credentials with Linked Data Proofs. It is desirable to make use of specifications developed in an open standards body, such as the Credential Manifest for which the attachment format is described in RFC 0511: Credential-Manifest Attachment format. However, the Credential Manifest is not yet finished, and therefore there is a need to bridge the gap between standards.

    "},{"location":"features/0593-json-ld-cred-attach/#tutorial","title":"Tutorial","text":"

    Complete examples of messages are provided in the reference section.

    "},{"location":"features/0593-json-ld-cred-attach/#reference","title":"Reference","text":""},{"location":"features/0593-json-ld-cred-attach/#ld-proof-vc-detail-attachment-format","title":"ld-proof-vc-detail attachment format","text":"

    Format identifier: aries/ld-proof-vc-detail@v1.0

    This format is used to formally propose, offer, or request a credential. The credential property should contain the credential as it is going to be issued, without the proof and credentialStatus properties. Options for these properties are specified in the options object.

    The JSON structure might look like this:

    {\n  \"credential\": {\n    \"@context\": [\n      \"https://www.w3.org/2018/credentials/v1\",\n      \"https://www.w3.org/2018/credentials/examples/v1\"\n    ],\n    \"id\": \"urn:uuid:3978344f-8596-4c3a-a978-8fcaba3903c5\",\n    \"type\": [\"VerifiableCredential\", \"UniversityDegreeCredential\"],\n    \"issuer\": \"did:key:z6MkodKV3mnjQQMB9jhMZtKD9Sm75ajiYq51JDLuRSPZTXrr\",\n    \"issuanceDate\": \"2020-01-01T19:23:24Z\",\n    \"expirationDate\": \"2021-01-01T19:23:24Z\",\n    \"credentialSubject\": {\n      \"id\": \"did:key:z6MkpTHR8VNsBxYAAWHut2Geadd9jSwuBV8xRoAnwWsdvktH\",\n      \"degree\": {\n        \"type\": \"BachelorDegree\",\n        \"name\": \"Bachelor of Science and Arts\"\n      }\n    }\n  },\n  \"options\": {\n    \"proofPurpose\": \"assertionMethod\",\n    \"created\": \"2020-04-02T18:48:36Z\",\n    \"domain\": \"example.com\",\n    \"challenge\": \"9450a9c1-4db5-4ab9-bc0c-b7a9b2edac38\",\n    \"credentialStatus\": {\n      \"type\": \"CredentialStatusList2017\"\n    },\n    \"proofType\": \"Ed25519Signature2018\"\n  }\n}\n

    A complete request credential message from the Issue Credential protocol 2.0 might look like this:

    {\n  \"@id\": \"7293daf0-ed47-4295-8cc4-5beb513e500f\",\n  \"@type\": \"https://didcomm.org/issue-credential/%VER/request-credential\",\n  \"comment\": \"<some comment>\",\n  \"formats\": [\n    {\n      \"attach_id\": \"13a3f100-38ce-4e96-96b4-ea8f30250df9\",\n      \"format\": \"aries/ld-proof-vc-detail@v1.0\"\n    }\n  ],\n  \"requests~attach\": [\n    {\n      \"@id\": \"13a3f100-38ce-4e96-96b4-ea8f30250df9\",\n      \"mime-type\": \"application/json\",\n      \"data\": {\n        \"base64\": \"ewogICJjcmVkZW50aWFsIjogewogICAgIkBjb250...(clipped)...IkVkMjU1MTlTaWduYXR1cmUyMDE4IgogIH0KfQ==\"\n      }\n    }\n  ]\n}\n

    The format is closely related to the Verifiable Credentials HTTP API, but diverges in some places. The main differences are:

    "},{"location":"features/0593-json-ld-cred-attach/#ld-proof-vc-attachment-format","title":"ld-proof-vc attachment format","text":"

    Format identifier: aries/ld-proof-vc@v1.0

    This format is used to transmit a verifiable credential with a linked data proof. The content of the attachment is a standard JSON-LD Verifiable Credential object with a linked data proof, as defined by the Verifiable Credentials Data Model and the Linked Data Proofs specification.

    The JSON structure might look like this:

    {\n  \"@context\": [\n    \"https://www.w3.org/2018/credentials/v1\",\n    \"https://www.w3.org/2018/credentials/examples/v1\"\n  ],\n  \"id\": \"http://example.gov/credentials/3732\",\n  \"type\": [\"VerifiableCredential\", \"UniversityDegreeCredential\"],\n  \"issuer\": {\n    \"id\": \"did:web:vc.transmute.world\"\n  },\n  \"issuanceDate\": \"2020-03-10T04:24:12.164Z\",\n  \"credentialSubject\": {\n    \"id\": \"did:example:ebfeb1f712ebc6f1c276e12ec21\",\n    \"degree\": {\n      \"type\": \"BachelorDegree\",\n      \"name\": \"Bachelor of Science and Arts\"\n    }\n  },\n  \"proof\": {\n    \"type\": \"JsonWebSignature2020\",\n    \"created\": \"2020-03-21T17:51:48Z\",\n    \"verificationMethod\": \"did:web:vc.transmute.world#_Qq0UL2Fq651Q0Fjd6TvnYE-faHiOpRlPVQcY_-tA4A\",\n    \"proofPurpose\": \"assertionMethod\",\n    \"jws\": \"eyJiNjQiOmZhbHNlLCJjcml0IjpbImI2NCJdLCJhbGciOiJFZERTQSJ9..OPxskX37SK0FhmYygDk-S4csY_gNhCUgSOAaXFXDTZx86CmI5nU9xkqtLWg-f4cqkigKDdMVdtIqWAvaYx2JBA\"\n  }\n}\n

    A complete issue-credential message from the Issue Credential protocol 2.0 might look like this:

    {\n  \"@id\": \"284d3996-ba85-45d9-964b-9fd5805517b6\",\n  \"@type\": \"https://didcomm.org/issue-credential/%VER/issue-credential\",\n  \"comment\": \"<some comment>\",\n  \"formats\": [\n    {\n      \"attach_id\": \"5b38af88-d36f-4f77-bb7a-2f04ab806eb8\",\n      \"format\": \"aries/ld-proof-vc@v1.0\"\n    }\n  ],\n  \"credentials~attach\": [\n    {\n      \"@id\": \"5b38af88-d36f-4f77-bb7a-2f04ab806eb8\",\n      \"mime-type\": \"application/ld+json\",\n      \"data\": {\n        \"base64\": \"ewogICAgICAgICAgIkBjb250ZXogWwogICAgICAg...(clipped)...RNVmR0SXFXZhWXgySkJBIgAgfQogICAgICAgIH0=\"\n      }\n    }\n  ]\n}\n
    "},{"location":"features/0593-json-ld-cred-attach/#supported-proof-types","title":"Supported Proof Types","text":"

    Following are the Linked Data proof types on Verifiable Credentials that MUST be supported for compliance with this RFC. All suites listed in the following table MUST be registered in the Linked Data Cryptographic Suite Registry:

    Suite Spec Enables Selective disclosure? Enables Zero-knowledge proofs? Optional Ed25519Signature2018 Link No No No BbsBlsSignature2020** Link Yes No No JsonWebSignature2020*** Link No No Yes

    ** Note: see RFC0646 for details on how BBS+ signatures are to be produced and consumed by Aries agents.

    *** Note: P-256 and P-384 curves are supported.

    "},{"location":"features/0593-json-ld-cred-attach/#drawbacks","title":"Drawbacks","text":"

    N/A

    "},{"location":"features/0593-json-ld-cred-attach/#rationale-and-alternatives","title":"Rationale and alternatives","text":""},{"location":"features/0593-json-ld-cred-attach/#prior-art","title":"Prior art","text":"

    N/A

    "},{"location":"features/0593-json-ld-cred-attach/#unresolved-questions","title":"Unresolved questions","text":"

    N/A

    "},{"location":"features/0593-json-ld-cred-attach/#implementations","title":"Implementations","text":"

    The following lists the implementations (if any) of this RFC. Please do a pull request to add your implementation. If the implementation is open source, include a link to the repo or to the implementation within the repo. Please be consistent in the \"Name\" field so that a mechanical processing of the RFCs can generate a list of all RFCs supported by an Aries implementation.

    Name / Link Implementation Notes"},{"location":"features/0627-static-peer-dids/","title":"Aries RFC 0627: Static Peer DIDs","text":""},{"location":"features/0627-static-peer-dids/#summary","title":"Summary","text":"

    Formally documents a very crisp profile of peer DID functionality that can be referenced in Aries Interop Profiles.

    "},{"location":"features/0627-static-peer-dids/#motivation","title":"Motivation","text":"

    The Peer DID Spec includes a number of advanced features that are still evolving. However, a subset of its functionality is easy to implement and would be helpful to freeze for the purpose of Aries interop.

    "},{"location":"features/0627-static-peer-dids/#tutorial","title":"Tutorial","text":""},{"location":"features/0627-static-peer-dids/#spec-version","title":"Spec version","text":"

    The Peer DID method spec is still undergoing minor evolution. However, it is relatively stable, particularly in the simpler features.

    This Aries RFC targets the version of the spec that is dated April 2, 2021 in its rendered form, or github commit 202a913 in its source form. Note that the rendered form of the spec may update without warning, so the github commit is the better reference.

    "},{"location":"features/0627-static-peer-dids/#targeted-layers","title":"Targeted layers","text":"

    Support for peer DIDs is imagined to target configurable \"layers\" of interoperability:

    For a careful definition of what these layers entail, please see https://identity.foundation/peer-did-method-spec/#layers-of-support.

    This Aries RFC targets Layers 1 and 2. That is, code that complies with this RFC would satisfy the required behaviors for Layer 1 and for Layer 2. Note, however, that Layer 2 is broken into accepting and giving static peer DIDs. An RFC-compliant implementation may choose to implement either side, or both.

    Support for Layer 3 (dynamic peer DIDs that have updatable state and that synchronize that state using Sync Connection Protocol as documented in Aries RFC 0030) is NOT required by this RFC. However, if there is an intent to support dynamic updates in the future, use of numalgo Method 1 is encouraged, as this allows static peer DIDs to acquire new state when dynamic support is added. (See next section.)

    "},{"location":"features/0627-static-peer-dids/#targeted-generation-methods-numalgo","title":"Targeted Generation Methods (numalgo)","text":"

    Peer DIDs can use several different algorithms to generate the entropy that constitutes their numeric basis. See https://identity.foundation/peer-did-method-spec/#generation-method for details.

    This RFC targets Method 0 (inception key without doc), Method 1 (genesis doc), and Method 2 (multiple inception keys). Code that complies with this RFC, and that intends to accept static DIDs at Layer 2a, MUST accept peer DIDs that use any of these methods. Code that intends to give peer DIDs (Layer 2b) MUST give peer DIDs that use at least one of these three methods.

    "},{"location":"features/0627-static-peer-dids/#implementations","title":"Implementations","text":"

    The following lists the implementations (if any) of this RFC. Please do a pull request to add your implementation. If the implementation is open source, include a link to the repo or to the implementation within the repo. Please be consistent in the \"Name\" field so that a mechanical processing of the RFCs can generate a list of all RFCs supported by an Aries implementation.

    Implementation Notes may need to include a link to test results.

    Name / Link Implementation Notes"},{"location":"features/0641-linking-binary-objects-to-credentials/","title":"0641: Linking binary objects to credentials using hash based references","text":""},{"location":"features/0641-linking-binary-objects-to-credentials/#summary","title":"Summary","text":"

    This RFC provides a solution for issuing and presenting credentials with external binary objects, hereafter referred to as attachments. It is compatible with 0036: Issue Credential Protocol V1, 0453: Issue Credential Protocol V2, 0037: Present Proof V1 protocol and 0454: Present Proof V2 Protocol. These external attachments could consist of images, PDFs, zip files, movies, etc. Through the use of DIDComm attachments, 0017: Attachments, the data can be embedded directly into the attachment or externally hosted. In order to maintain integrity over these attachments, hashlinks are used as the checksum.

    "},{"location":"features/0641-linking-binary-objects-to-credentials/#motivation","title":"Motivation","text":"

    Many use cases, such as a rental agreement or medical data in a verifiable credential, rely on attachments, small or large. At this moment, it is possible to issue credentials with accompanying attachments. When the attachment is rather small, this will work fine. However, larger attachments cause inconsistent timing issues and are resource intensive.

    "},{"location":"features/0641-linking-binary-objects-to-credentials/#tutorial","title":"Tutorial","text":"

    It is already possible to issue and verify base64-encoded attachments in credentials. As a credential grows larger, however, this becomes increasingly impractical, because the entire payload has to be signed, which is time-consuming and resource intensive. A solution is the attachments decorator. This decorator externalizes the attachment from the credential attributes, which makes signing faster and more consistent. Note that DIDComm messages SHOULD stay small, as with transports such as SMTP or Bluetooth, as specified in 0017: Attachments. In the attachments decorator it is also possible to specify a list of URLs where the attachment can be downloaded. This list of URLs is accompanied by a sha256 tag, a checksum over the file to maintain integrity. The sha256 tag can only contain a sha256 hash; if another algorithm is preferred, the hashlink MUST be used as the checksum.
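    As a concrete illustration of the sha256 tag, the sketch below computes the checksum for an externally hosted attachment. The field layout follows 0017: Attachments; the file bytes, file name, and URL are hypothetical placeholders.

    ```python
    import hashlib

    def sha256_checksum(data: bytes) -> str:
        # Hex-encoded sha256 digest, as used in the decorator's sha256 tag.
        return hashlib.sha256(data).hexdigest()

    # Hypothetical decorator entry for an externally hosted attachment.
    file_bytes = b"...rental agreement bytes..."
    attachment = {
        "@id": "attachment-1",
        "mime-type": "application/pdf",
        "data": {
            "links": ["https://example.com/rental-agreement.pdf"],
            "sha256": sha256_checksum(file_bytes),
        },
    }
    ```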

    When issuing and verifying a credential, messages have to be sent between the holder, issuer and verifier. In order to circumvent additional complexity, such as looking at previously sent credentials for the attachment, the attachments decorator, when containing an attachment, MUST be sent at all of the following steps:

    Issue Credential V1 & V2

    1. Credential Proposal
    2. Credential Offer
    3. Credential Request
    4. Credential

    Present Proof V1 & V2

    1. Presentation Proposal
    2. Presentation Request
    3. Presentation
    "},{"location":"features/0641-linking-binary-objects-to-credentials/#linking","title":"Linking","text":"

    When a credential is issued with an attachment in the attachments decorator, be it a base64-encoded file or a hosted file, the link has to be made between the credential and the attachment. The link MUST be made with the attribute.value of the credential and the @id tag of the attachment in the attachments decorator.

    "},{"location":"features/0641-linking-binary-objects-to-credentials/#hashlink","title":"Hashlink","text":"

    A hashlink, as specified in IETF: Cryptographic Hyperlinks, is a formatted hash that has a prefix of hl: and an optional suffix of metadata. The hash in the hashlink is a multihash, which means that according to the prefix of the hash it is possible to see which hashing algorithm and encoding algorithm has been chosen. An example of a hashlink would be:

    hl:zQmcWyBPyedDzHFytTX6CAjjpvqQAyhzURziwiBKDKgqx6R

    This example shows the prefix of hl: indicating that it is a hashlink and the hash after the prefix is a multihash.

    The hashlink also allows for optional metadata, such as a list of URLs where the attachment is hosted and a MIME type. These metadata values are encoded in the CBOR data format using the algorithm specified in section 3.1.2 of IETF: Cryptographic Hyperlinks.
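    To make the structure concrete, here is a minimal sketch (not a production implementation) that builds the core hashlink from a file's bytes: a sha2-256 multihash (prefix bytes 0x12, 0x20), base58btc-encoded with the multibase prefix z, then prefixed with hl:. The optional CBOR metadata suffix is omitted.

    ```python
    import hashlib

    BASE58_ALPHABET = "123456789ABCDEFGHJKLMNPQRSTUVWXYZabcdefghijkmnopqrstuvwxyz"

    def base58btc(data: bytes) -> str:
        # Big-endian base conversion; each leading zero byte becomes '1'.
        n = int.from_bytes(data, "big")
        encoded = ""
        while n:
            n, remainder = divmod(n, 58)
            encoded = BASE58_ALPHABET[remainder] + encoded
        leading_zeros = len(data) - len(data.lstrip(b"\x00"))
        return "1" * leading_zeros + encoded

    def make_hashlink(content: bytes) -> str:
        # Multihash: 0x12 = sha2-256 code, 0x20 = 32-byte digest length.
        multihash = b"\x12\x20" + hashlib.sha256(content).digest()
        # 'z' is the multibase prefix for base58btc; 'hl:' marks a hashlink.
        return "hl:z" + base58btc(multihash)
    ```

    A sha2-256 multihash encoded this way always begins with Qm, matching the hl:zQm... example above.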

    When a holder receives a credential with hosted attachments, the holder MAY rehost these attachments in order to prevent the phone-home problem. Whether this matters is use-case specific; if the holder does not care about this issue, rehosting can be omitted, but it should be considered.

    "},{"location":"features/0641-linking-binary-objects-to-credentials/#inlined-attachments-as-a-credential-attribute","title":"Inlined Attachments as a Credential Attribute","text":"

    Attachments can be inlined in the credential attribute as a base64-encoded string. With this, there is no need for the attachment decorator. Below is an example of embedding a base64-encoded file as a string in a credential attribute.

    {\n  \"name\": \"Picture of a cat\",\n  \"mime-type\": \"image/png\",\n  \"value\": \"VGhpcyBpc ... (many bytes omitted) ... C4gSG93IG5pY2U=\"\n}\n
    "},{"location":"features/0641-linking-binary-objects-to-credentials/#attachments-inlined-in-the-attachment-decorator","title":"Attachments inlined in the Attachment Decorator","text":"

    When the attachments decorator is used to issue a credential with a binary object, a link has to be made between the credential value and the corresponding attachment. This link MUST be a hash, specifically a hashlink based on the checksum of the attachment.

    As stated in 0008: message id and threading, the @id tag of the attachment MUST NOT contain a colon and MUST NOT be longer than 64 characters. Because of this, the @id cannot contain a full hashlink and MUST instead contain the multihash, truncated to a maximum of 64 characters. When a hash is longer than 64 characters, use the first 64 characters.
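    A minimal sketch of deriving an attachment @id from a hashlink under these constraints (the helper name is illustrative; it assumes any metadata suffix is separated from the hash by a colon, as in the IETF draft):

    ```python
    def attachment_id_from_hashlink(hashlink: str) -> str:
        # Drop the 'hl:' prefix and any ':<metadata>' suffix, keeping only the
        # multibase-encoded multihash, then truncate to 64 characters.
        multihash = hashlink.split(":")[1]
        return multihash[:64]
    ```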

    {\n  \"@type\": \"https://didcomm.org/issue-credential/%VER/issue-credential\",\n  \"@id\": \"<uuid of issue message>\",\n  \"goal_code\": \"<goal-code>\",\n  \"replacement_id\": \"<issuer unique id>\",\n  \"comment\": \"<some comment>\",\n  \"formats\": [\n    {\n      \"attach_id\": \"<attach@id value>\",\n      \"format\": \"hlindy/cred@v2.0\"\n    }\n  ],\n  \"credentials~attach\": [\n    {\n      \"@id\": \"<attachment-id>\",\n      \"mime-type\": \"application/json\",\n      \"data\": {\n        \"json\": {\n          \"schema_id\": \"4RW6QK2HZhHxa2tg7t1jqt:2:catSchema:0.3.0\",\n          \"cred_def_id\": \"4RW6QK2HZhHxa2tg7t1jqt:3:CL:58161:default\",\n          \"values\": {\n            \"pictureOfACat\": {\n              \"raw\": \"hl:zQmcWyBPyedDzHFytTX6CAjjpvqQAyhzURziwiBKDKgqx6R\",\n              \"encoded\": \"hl:zQmcWyBPyedDzHFytTX6CAjjpvqQAyhzURziwiBKDKgqx6R\"\n            }\n          },\n          \"signature\": \"<signature>\",\n          \"signature_correctness_proof\": \"<signature_correctness_proof>\"\n        }\n      }\n    }\n  ],\n  \"~attach\": [\n    {\n      \"@id\": \"zQmcWyBPyedDzHFytTX6CAjjpvqQAyhzURziwiBKDKgqx6R\",\n      \"mime-type\": \"image/png\",\n      \"filename\": \"cat.png\",\n      \"byte_count\": 2181,\n      \"lastmod_time\": \"2021-04-20 19:38:07Z\",\n      \"description\": \"Cute picture of a cat\",\n      \"data\": {\n        \"base64\": \"VGhpcyBpcyBhIGNv ... (many bytes omitted) ... R0ZXIgU0hJQkEgSU5VLg==\"\n      }\n    }\n  ]\n}\n
    "},{"location":"features/0641-linking-binary-objects-to-credentials/#hosted-attachments","title":"Hosted attachments","text":"

    The last method of adding a binary object to a credential is to use the attachments decorator in combination with external hosting. In the example below the attachment is hosted at two locations. These two URLs MUST point to the same file and match the integrity check against the sha256 value. It is important to note that when an issuer hosts an attachment and issues a credential with this attachment, the holder can rehost the attachment to prevent the phone-home association.

    {\n  \"@type\": \"https://didcomm.org/issue-credential/%VER/issue-credential\",\n  \"@id\": \"<uuid of issue message>\",\n  \"goal_code\": \"<goal-code>\",\n  \"replacement_id\": \"<issuer unique id>\",\n  \"comment\": \"<some comment>\",\n  \"formats\": [\n    {\n      \"attach_id\": \"<attach@id value>\",\n      \"format\": \"hlindy/cred@v2.0\"\n    }\n  ],\n  \"credentials~attach\": [\n    {\n      \"@id\": \"<attachment-id>\",\n      \"mime-type\": \"application/json\",\n      \"data\": {\n        \"json\": {\n          \"schema_id\": \"4RW6QK2HZhHxa2tg7t1jqt:2:catSchema:0.3.0\",\n          \"cred_def_id\": \"4RW6QK2HZhHxa2tg7t1jqt:3:CL:58161:default\",\n          \"values\": {\n            \"pictureOfACat\": {\n              \"raw\": \"hl:zQmcWyBPyedDzHFytTX6CAjjpvqQAyhzURziwiBKDKgqx6R\",\n              \"encoded\": \"hl:zQmcWyBPyedDzHFytTX6CAjjpvqQAyhzURziwiBKDKgqx6R\"\n            }\n          },\n          \"signature\": \"<signature>\",\n          \"signature_correctness_proof\": \"<signature_correctness_proof>\"\n        }\n      }\n    }\n  ],\n  \"~attach\": [\n    {\n      \"@id\": \"zQmcWyBPyedDzHFytTX6CAjjpvqQAyhzURziwiBKDKgqx6R\",\n      \"mime-type\": \"application/zip\",\n      \"filename\": \"cat.zip\",\n      \"byte_count\": 218187322,\n      \"lastmod_time\": \"2021-04-20 19:38:07Z\",\n      \"description\": \"Cute pictures of multiple cats\",\n      \"data\": {\n        \"links\": [\n          \"https://drive.google.com/kitty/cats.zip\",\n          \"s3://bucket/cats.zip\"\n        ]\n      }\n    }\n  ]\n}\n
    "},{"location":"features/0641-linking-binary-objects-to-credentials/#matching","title":"Matching","text":"

    Now that a link has been made between the credential attribute and the attachment in the attachments decorator, it is possible to match the two together. When a credential is received and the value of an attribute starts with hl:, there is a linked attachment. To find the attachment linked to a credential attribute, the following steps SHOULD be done:

    1. Extract the multihash from the credential attribute value
    2. Extract the first 64 characters of this multihash
    3. Loop over the @id tag of all the attachments in the attachment decorator
    4. Compare the value of the @id tag with the multihash
    5. If the @id tag matches with the multihash, then there is a link
    6. An integrity check can be done with the original, complete hashlink
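    The matching steps above can be sketched as follows (function and variable names are illustrative; step 6, the integrity check against the full hashlink, is left out):

    ```python
    def find_linked_attachment(attribute_value, attachments):
        # Steps 1-2: extract the multihash and keep at most 64 characters.
        if not attribute_value.startswith("hl:"):
            return None  # not a linked attachment
        multihash = attribute_value.split(":")[1][:64]
        # Steps 3-5: compare each attachment's @id with the multihash.
        for attachment in attachments:
            if attachment.get("@id") == multihash:
                return attachment
        return None
    ```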
    "},{"location":"features/0641-linking-binary-objects-to-credentials/#reference","title":"Reference","text":"

    When an issuer creates a value in a credential attribute with a prefix of hl:, but there is no matching attachment, a warning SHOULD be raised.

    When DIDComm V2 is implemented, the attachment decorator will no longer contain the sha256 tag; it will be replaced by hash to allow for any algorithm. See DIDComm Messaging Attachments.

    "},{"location":"features/0641-linking-binary-objects-to-credentials/#drawbacks","title":"Drawbacks","text":""},{"location":"features/0641-linking-binary-objects-to-credentials/#rationale-and-alternatives","title":"Rationale and alternatives","text":"

    The findings that large credentials are inconsistent and resource intensive are derived from issuing and verifying credentials of 100 kilobytes to 50 megabytes in Aries Framework JavaScript and Aries Cloudagent Python.

    The Identity Foundation is currently working on confidential storage, a way to allow access to your files based on DIDs. This storage would be a sleek fix for the last drawback.

    "},{"location":"features/0641-linking-binary-objects-to-credentials/#prior-art","title":"Prior art","text":""},{"location":"features/0641-linking-binary-objects-to-credentials/#unresolved-questions","title":"Unresolved questions","text":""},{"location":"features/0641-linking-binary-objects-to-credentials/#implementations","title":"Implementations","text":"Name / Link Implementation Notes"},{"location":"features/0646-bbs-credentials/","title":"0646: W3C Credential Exchange using BBS+ Signatures","text":""},{"location":"features/0646-bbs-credentials/#summary","title":"Summary","text":"

    This RFC describes how the Hyperledger Aries community should use BBS+ Signatures that conform with the Linked-Data Proofs Specification to perform exchange of credentials that comply with the W3C Verifiable Credential specification.

    Key features include:

    This RFC sets guidelines for their safe usage and describes privacy-enabling features that should be incorporated.

    The usage of zero-knowledge proofs, selective disclosure and signature blinding are already supported using the specifications as described in this document. Support for private holder binding and privacy preserving revocation will be added in the future.

    "},{"location":"features/0646-bbs-credentials/#motivation","title":"Motivation","text":"

    Aries currently supports credential formats used by Indy (Anoncreds based on JSON) and Aries-Framework-Go. BBS+ signatures with JSON-LD Proofs provide a unified credential format that includes strong privacy-protecting anti-correlation features and wide interoperability with verifiable credentials outside the Aries ecosystem.

    "},{"location":"features/0646-bbs-credentials/#tutorial","title":"Tutorial","text":""},{"location":"features/0646-bbs-credentials/#issuing-credentials","title":"Issuing Credentials","text":"

    This section highlights the process of issuing credentials with BBS+ signatures. The first section (Creating BBS+ Credentials) covers the process of creating credentials with BBS+ signatures, while the next section (Exchanging BBS+ Credentials) focuses on the process of exchanging them.

    "},{"location":"features/0646-bbs-credentials/#creating-bbs-credentials","title":"Creating BBS+ Credentials","text":"

    The process to create verifiable credentials with BBS+ signatures is mostly covered by the VC Data Model and BBS+ LD-Proofs specifications. At the date of writing this RFC, the BBS+ LD-Proofs specification still has some unresolved issues. The issues are documented in the Issues with the BBS+ LD-Proofs specification section below.

    Aries implementations MUST use the BBS+ Signature Suite 2020 to create verifiable credentials with BBS+ signatures, identified by the BbsBlsSignature2020 proof type.

    NOTE: Once the signature suites for bound signatures (private holder binding) are defined in the BBS+ LD-Proofs spec, the use of the BbsBlsSignature2020 suite will be deprecated and superseded by the BbsBlsBoundSignature2020 signature suite. See Private Holder Binding below for more information.

    "},{"location":"features/0646-bbs-credentials/#identifiers-in-issued-credentials","title":"Identifiers in Issued Credentials","text":"

    It is important to note that due to limitations of the underlying RDF canonicalization scheme, which is used by BBS+ LD-Proofs, issued credentials SHOULD NOT have any id properties, as the value of these properties will be revealed during the RDF canonicalization process, regardless of whether or not the holder chooses to disclose them.

    Credentials can make use of other identifier properties to create selectively disclosable identifiers. An example of this is the identifier property from the Citizenship Vocabulary

    "},{"location":"features/0646-bbs-credentials/#private-holder-binding","title":"Private Holder Binding","text":"

    A private holder binding allows the holder of a credential to authenticate itself without disclosing a correlating identifier (such as a DID) to the verifier. The current BBS+ LD-Proofs specification does not describe a mechanism yet to do private holder binding, but it is expected this will be done using two new signature suites: BbsBlsBoundSignature2020 and BbsBlsBoundSignatureProof2020. Both suites feature a commitment to a private key held by the credential holder, for which they prove knowledge of when deriving proofs without ever directly revealing the private key, nor a unique identifier linked to the private key (e.g its complementary public pair).

    "},{"location":"features/0646-bbs-credentials/#usage-of-credential-schema","title":"Usage of Credential Schema","text":"

    The zero-knowledge proof section of the VC Data Model requires verifiable credentials used in zero-knowledge proof systems to include a credential definition using the credentialSchema property. Due to the nature of how BBS+ LD proofs work, it is NOT required to include the credentialSchema property. See Issue 726 in the VC Data Model.

    "},{"location":"features/0646-bbs-credentials/#example-bbs-credential","title":"Example BBS+ Credential","text":"

    Below is a complete example of a Verifiable Credential with BBS+ linked data proof.

    {\n  \"@context\": [\n    \"https://www.w3.org/2018/credentials/v1\",\n    \"https://w3id.org/citizenship/v1\",\n    \"https://w3id.org/security/bbs/v1\" // <-- BBS+ context\n  ],\n  \"id\": \"https://issuer.oidp.uscis.gov/credentials/83627465\",\n  \"type\": [\"VerifiableCredential\", \"PermanentResidentCard\"],\n  \"issuer\": \"did:example:489398593\",\n  \"identifier\": \"83627465\", // <-- `identifier` property allows for seletively disclosable id property\n  \"name\": \"Permanent Resident Card\",\n  \"description\": \"Government of Example Permanent Resident Card.\",\n  \"issuanceDate\": \"2019-12-03T12:19:52Z\",\n  \"expirationDate\": \"2029-12-03T12:19:52Z\",\n  \"credentialSubject\": {\n    \"id\": \"did:example:b34ca6cd37bbf23\",\n    \"type\": [\"PermanentResident\", \"Person\"],\n    \"givenName\": \"JOHN\",\n    \"familyName\": \"SMITH\",\n    \"gender\": \"Male\",\n    \"image\": \"data:image/png;base64,iVBORw0KGgokJggg==\",\n    \"residentSince\": \"2015-01-01\",\n    \"lprCategory\": \"C09\",\n    \"lprNumber\": \"999-999-999\",\n    \"commuterClassification\": \"C1\",\n    \"birthCountry\": \"Bahamas\",\n    \"birthDate\": \"1958-07-17\"\n  },\n  \"proof\": {\n    \"type\": \"BbsBlsSignature2020\", // <-- type must be `BbsBlsSignature2020`\n    \"created\": \"2020-10-16T23:59:31Z\",\n    \"proofPurpose\": \"assertionMethod\",\n    \"proofValue\": \"kAkloZSlK79ARnlx54tPqmQyy6G7/36xU/LZgrdVmCqqI9M0muKLxkaHNsgVDBBvYp85VT3uouLFSXPMr7Stjgq62+OCunba7bNdGfhM/FUsx9zpfRtw7jeE182CN1cZakOoSVsQz61c16zQikXM3w==\",\n    \"verificationMethod\": \"did:example:489398593#test\"\n  }\n}\n
    "},{"location":"features/0646-bbs-credentials/#exchanging-bbs-credentials","title":"Exchanging BBS+ Credentials","text":"

    While the process of creating credentials with BBS+ signatures is defined in specifications outside of Aries, the process of exchanging credentials with BBS+ signatures is defined within Aries.

    Credentials with BBS+ signatures can be exchanged by following RFC 0453: Issue Credential Protocol 2.0. The Issue Credential 2.0 protocol provides a registry of attachment formats that can be used for credential exchange. Currently, agents are expected to use the format described in RFC 0593 (see below).

    NOTE: Once Credential Manifest v1.0 is released, RFC 0593 is expected to be deprecated and replaced by an updated version of RFC 0511: Credential-Manifest Attachment format

    "},{"location":"features/0646-bbs-credentials/#0593-json-ld-credential-attachment-format","title":"0593: JSON-LD Credential Attachment format","text":"

    RFC 0593: JSON-LD Credential Attachment format for requesting and issuing credentials defines a very simple, feature-poor attachment format for issuing JSON-LD credentials.

    The only requirement for exchanging BBS+ credentials, in addition to the requirements as specified in Creating BBS+ Credentials and RFC 0593, is the options.proofType in the ld-proof-vc-detail MUST be BbsBlsSignature2020.
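    For illustration, here is a sketch of an ld-proof-vc-detail attachment body with the required proof type. The credential content is abbreviated and the identifiers are taken from the examples in this document; treat the exact layout as indicative of RFC 0593 rather than a complete message.

    ```python
    # Sketch of an RFC 0593 ld-proof-vc-detail body for BBS+ issuance.
    ld_proof_vc_detail = {
        "credential": {
            "@context": [
                "https://www.w3.org/2018/credentials/v1",
                "https://w3id.org/security/bbs/v1",
            ],
            "type": ["VerifiableCredential"],
            "issuer": "did:example:489398593",
            "issuanceDate": "2019-12-03T12:19:52Z",
            "credentialSubject": {"id": "did:example:b34ca6cd37bbf23"},
        },
        "options": {
            # MUST be BbsBlsSignature2020 when issuing BBS+ credentials.
            "proofType": "BbsBlsSignature2020",
        },
    }
    ```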

    "},{"location":"features/0646-bbs-credentials/#presenting-derived-credentials","title":"Presenting Derived Credentials","text":"

    This section highlights the process of creating and presenting derived BBS+ credentials containing a BBS+ proof of knowledge.

    "},{"location":"features/0646-bbs-credentials/#deriving-credentials","title":"Deriving Credentials","text":"

    Deriving credentials should be done according to the BBS+ Signature Proof Suite 2020

    "},{"location":"features/0646-bbs-credentials/#disclosing-required-properties","title":"Disclosing Required Properties","text":"

    A verifiable presentation MUST NOT leak information that would enable the verifier to correlate the holder across multiple verifiable presentations.

    The above section from the VC Data Model may give the impression that it is allowed to omit required properties from a derived credential if this prevents correlation. However, properties the holder chooses to reveal are in a different category from properties the holder MUST reveal. Derived credentials MUST disclose required properties, even if those properties could be used to correlate the holder.

    For example, a credential with an issuanceDate of 2017-12-05T14:27:42Z could create a correlating factor; nevertheless, omitting the property would violate the VC Data Model. Take this into account when issuing credentials.

    "},{"location":"features/0646-bbs-credentials/#transforming-blank-node-identifiers","title":"Transforming Blank Node Identifiers","text":"

    This section will be removed once Issue 10 in the LD Proof BBS+ spec is resolved.

    For the verifier to be able to verify the signature of a derived credential it should be able to deterministically normalize the credentials statements for verification. RDF Dataset Canonicalization defines a way in which to allocate identifiers for blank nodes deterministically for normalization. However, the algorithm does not guarantee that the same blank node identifiers will be allocated in the event of modifications to the graph. Because selective disclosure of signed statements modifies the graph as presented to the verifier, the blank node identifiers must be transformed into actual node identifiers when presented to the verifier.

    The BBS+ LD-Proofs specification does not define a mechanism to transform blank node identifiers into actual identifiers. Current implementations use the mechanism as described in this Issue Comment. Some reference implementations:

    "},{"location":"features/0646-bbs-credentials/#verifying-presented-derived-credentials","title":"Verifying Presented Derived Credentials","text":""},{"location":"features/0646-bbs-credentials/#transforming-back-into-blank-node-identifiers","title":"Transforming Back into Blank Node Identifiers","text":"

    This section will be removed once Issue 10 in the LD Proof BBS+ spec is resolved.

    Transforming the blank node identifiers into actual node identifiers in the derived credential means the verification data will be different from the verification data at issuance, invalidating the signature. Therefore the blank node identifier placeholders should be transformed back into blank node identifiers before verification.

    Same as with Transforming Blank Node Identifiers, current implementations use the mechanism as described in this Issue Comment. Some reference implementations:

    "},{"location":"features/0646-bbs-credentials/#exchanging-derived-credentials","title":"Exchanging Derived Credentials","text":"

    The presentation of credentials with BBS+ signatures can be exchanged by following RFC 0454: Present Proof Protocol 2.0. The Present Proof Protocol 2.0 provides a registry of attachment formats that can be used for presentation exchange. Although agents can use any attachment format they want, agents are expected to use the format as described in RFC 0510 (see below).

    "},{"location":"features/0646-bbs-credentials/#0510-presentation-exchange-attachment-format","title":"0510: Presentation-Exchange Attachment format","text":"

    RFC 0510: Presentation-Exchange Attachment format for requesting and presenting proofs defines an attachment format based on the DIF Presentation Exchange specification.

    The following part of this section describes the requirements of exchanging derived credentials using the Presentation Exchange Attachment format, in addition to the requirements as specified above and in RFC 0510.

    The Presentation Exchange MUST include the ldp_vp Claim Format Designation. In turn the proof_type property of the ldp_vp claim format designation MUST include the BbsBlsSignatureProof2020 proof type.
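    A sketch of the corresponding claim format designation inside a DIF Presentation Exchange presentation definition (surrounding presentation_definition fields omitted):

    ```python
    # Claim format designation requiring BBS+ derived proofs.
    presentation_definition_format = {
        "ldp_vp": {
            "proof_type": ["BbsBlsSignatureProof2020"],
        }
    }
    ```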

    "},{"location":"features/0646-bbs-credentials/#example-bbs-derived-credential","title":"Example BBS+ Derived Credential","text":"
    {\n  \"@context\": [\n    \"https://www.w3.org/2018/credentials/v1\",\n    \"https://w3id.org/citizenship/v1\",\n    \"https://w3id.org/security/bbs/v1\" // BBS + Context\n  ],\n  \"id\": \"https://issuer.oidp.uscis.gov/credentials/83627465\",\n  \"type\": [\"PermanentResidentCard\", \"VerifiableCredential\"],\n  \"description\": \"Government of Example Permanent Resident Card.\",\n  \"identifier\": \"83627465\",\n  \"name\": \"Permanent Resident Card\",\n  \"credentialSubject\": {\n    \"id\": \"did:example:b34ca6cd37bbf23\",\n    \"type\": [\"Person\", \"PermanentResident\"],\n    \"familyName\": \"SMITH\",\n    \"gender\": \"Male\",\n    \"givenName\": \"JOHN\"\n  },\n  \"expirationDate\": \"2029-12-03T12:19:52Z\",\n  \"issuanceDate\": \"2019-12-03T12:19:52Z\",\n  \"issuer\": \"did:example:489398593\",\n  \"proof\": {\n    \"type\": \"BbsBlsSignatureProof2020\", // <-- type must be `BbsBlsSignatureProof2020`\n    \"nonce\": \"wrmPiSRm+iBqnGBXz+/37LLYRZWirGgIORKHIkrgWVnHtb4fDe/4ZPZaZ+/RwGVJYYY=\",\n    \"proofValue\": \"ABkB/wbvt6213E9eJ+aRGbdG1IIQtx+IdAXALLNg2a5ENSGOIBxRGSoArKXwD/diieDWG6+0q8CWh7CViUqOOdEhYp/DonzmjoWbWECalE6x/qtyBeE7W9TJTXyK/yW6JKSKPz2ht4J0XLV84DZrxMF4HMrY7rFHvdE4xV7ULeC9vNmAmwYAqJfNwY94FG2erg2K2cg0AAAAdLfutjMuBO0JnrlRW6O6TheATv0xZZHP9kf1AYqPaxsYg0bq2XYzkp+tzMBq1rH3tgAAAAIDTzuPazvFHijdzuAgYg+Sg0ziF+Gw5Bz8r2cuvuSg1yKWqW1dM5GhGn6SZUpczTXuZuKGlo4cZrwbIg9wf4lBs3kQwWULRtQUXki9izmznt4Go98X/ElOguLLum4S78Gehe1ql6CXD1zS5PiDXjDzAAAACWz/sbigWpPmUqNA8YUczOuzBUvzmkpjVyL9aqf1e7rSZmN8CNa6dTGOzgKYgDGoIbSQR8EN8Ld7kpTIAdi4YvNZwEYlda/BR6oSrFCquafz7s/jeXyOYMsiVC53Zls9KEg64tG7n90XuZOyMk9RAdcxYRGligbFuG2Ap+rQ+rrELJaW7DWwFEI6cRnitZo6aS0hHmiOKKtJyA7KFbx27nBGd2y3JCvgYO6VUROQ//t3F4aRVI1U53e5N3MU+lt9GmFeL+Kv+2zV1WssScO0ZImDGDOvjDs1shnNSjIJ0RBNAo2YzhFKh3ExWd9WbiZ2/USSyomaSK4EzdTDqi2JCGdqS7IpooKSX/1Dp4K+d8HhPLGNLX4yfMoG9SnRfRQZZQ==\",\n    \"verificationMethod\": \"did:example:489398593#test\",\n    \"proofPurpose\": \"assertionMethod\",\n    \"created\": \"2020-10-16T23:59:31Z\"\n  }\n}\n
    "},{"location":"features/0646-bbs-credentials/#privacy-considerations","title":"Privacy Considerations","text":"

    Private Holder Binding is an evolution of CL Signatures Linked Secrets.

    "},{"location":"features/0646-bbs-credentials/#reference","title":"Reference","text":""},{"location":"features/0646-bbs-credentials/#interoperability-with-existing-credential-formats","title":"Interoperability with Existing Credential Formats","text":"

    We expect that many issuers will choose to shift exclusively to BBS+ credentials for the benefits described here. Accessing these benefits will require reissuing credentials that were previously in a different format.

    An issuer can issue duplicate credentials with both signature formats.

    A holder can hold both types of credentials. The holder wallet could display the two credentials as a single entry in their credential list if the data is the same (it's "enhanced" with both credential formats).

    A verifier can send a proof request for the formats that they choose to support.

    "},{"location":"features/0646-bbs-credentials/#issues-with-the-bbs-ld-proofs-specification","title":"Issues with the BBS+ LD-Proofs specification","text":""},{"location":"features/0646-bbs-credentials/#drawbacks","title":"Drawbacks","text":"

    Existing implementations of BBS+ Signatures do not support ZKP proof predicates, but it is theoretically possible to support numeric date predicates. ZKP proof predicates are considered a key feature of CL signatures, and a migration to BBS+ LD-Proofs will lose this capability. The Indy maintainers consider this a reasonable trade-off to get the other benefits of BBS+ LD-Proofs. A mechanism to support predicates can hopefully be added in future work.

    As mentioned in the Private Holder Binding section, the BBS+ LD-Proofs specification does not yet define a mechanism for private holder binding. This means implementing this RFC does not provide all the privacy-enabling features that should be incorporated until the BbsBlsBoundSignature2020 and BbsBlsBoundSignatureProof2020 signature suites are formally defined.

    "},{"location":"features/0646-bbs-credentials/#rationale-and-alternatives","title":"Rationale and alternatives","text":"

    BBS+ LD-Proofs is a reasonable evolution of CL Signatures, as it supports most of the same features (with the exception of ZKP proof predicates), while producing smaller credentials that require fewer computational resources to validate (a key requirement for mobile use cases).

    BBS+ LD-Proofs are receiving broad support across the verifiable credentials implementation community, so supporting this signature format will be strategic for interoperability and allow Aries to promote the privacy preserving capabilities such as zero knowledge proofs and private holder binding.

    "},{"location":"features/0646-bbs-credentials/#prior-art","title":"Prior art","text":"

    Indy Anoncreds used CL Signatures to meet many of the use cases currently envisioned for BBS+ LD-Proofs.

    BBS+ Signatures were originally proposed by Boneh, Boyen, and Shacham in 2004.

    The approach was improved by Au, Susilo, and Mu in 2006.

    It was then further refined by Camenisch, Drijvers, and Lehmann in section 4.3 of this paper from 2016.

    In 2019, Evernym and Sovrin proposed BBS+ Signatures as the foundation for Indy Anoncreds 2.0, which in conjunction with Rich Schemas addressed a similar set of goals and capabilities as those addressed here, but were ultimately too heavy a solution.

    In 2020, Mattr provided a draft specification for BBS+ LD-Proofs that comply with the Linked Data proof specification in the W3C Credentials Community Group. The authors acknowledged that their approach did not support two key Anoncreds features: proof predicates and link secrets.

    Aries RFC 593 describes the JSON-LD credential format.

    "},{"location":"features/0646-bbs-credentials/#unresolved-questions","title":"Unresolved questions","text":"

    See the above note in the Drawbacks Section about ZKP predicates.

    "},{"location":"features/0646-bbs-credentials/#implementations","title":"Implementations","text":"

    The following lists the implementations (if any) of this RFC. Please do a pull request to add your implementation. If the implementation is open source, include a link to the repo or to the implementation within the repo. Please be consistent in the \"Name\" field so that a mechanical processing of the RFCs can generate a list of all RFCs supported by an Aries implementation.

    Implementation Notes may need to include a link to test results.

    Name / Link Implementation Notes"},{"location":"features/0685-pickup-v2/","title":"0685: Pickup Protocol 2.0","text":""},{"location":"features/0685-pickup-v2/#summary","title":"Summary","text":"

    A protocol to facilitate an agent picking up messages held at a mediator.

    "},{"location":"features/0685-pickup-v2/#motivation","title":"Motivation","text":"

    Messages can be picked up simply by sending a message to the Mediator with a return_route decorator specified. This mechanism is implicit, and lacks some desired behavior made possible by more explicit messages.

    This protocol is the explicit companion to the implicit method of picking up messages.

    "},{"location":"features/0685-pickup-v2/#tutorial","title":"Tutorial","text":""},{"location":"features/0685-pickup-v2/#roles","title":"Roles","text":"

    Mediator - The agent that has messages waiting for pickup by the Recipient.

    Recipient - The agent who is picking up messages.

    "},{"location":"features/0685-pickup-v2/#flow","title":"Flow","text":"

    The status-request message is sent by the Recipient to the Mediator to query how many messages are pending.

    The status message is the response to status-request to communicate the state of the message queue.

    The delivery-request message is sent by the Recipient to request delivery of pending messages.

    The message-received message is sent by the Recipient to confirm receipt of delivered messages, prompting the Mediator to clear messages from the queue.

    The live-delivery-change message is used to set the state of live_delivery.
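As an illustration only (helper names and the `uuid4` ids are ours; message types and fields come from the examples in this protocol), a Recipient could assemble the messages of this flow like so:

```python
from uuid import uuid4

BASE = "https://didcomm.org/messagepickup/2.0"

def status_request(recipient_key=None):
    """Ask the Mediator how many messages are pending (optionally per key)."""
    msg = {"@id": str(uuid4()), "@type": f"{BASE}/status-request"}
    if recipient_key is not None:
        msg["recipient_key"] = recipient_key
    return msg

def delivery_request(limit, recipient_key=None):
    """Ask the Mediator to deliver up to `limit` pending messages."""
    msg = {"@id": str(uuid4()), "@type": f"{BASE}/delivery-request", "limit": limit}
    if recipient_key is not None:
        msg["recipient_key"] = recipient_key
    return msg

def messages_received(message_ids):
    """Confirm receipt so the Mediator can safely clear the queue."""
    return {"@id": str(uuid4()), "@type": f"{BASE}/messages-received",
            "message_id_list": list(message_ids)}

# Typical polling round-trip: check the status, then request what is waiting.
status = {"@type": f"{BASE}/status", "message_count": 2}  # as returned by the Mediator
if status["message_count"] > 0:
    req = delivery_request(limit=status["message_count"])
```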

    "},{"location":"features/0685-pickup-v2/#reference","title":"Reference","text":"

    Each message sent MUST use the ~transport decorator as follows, which has been adopted from RFC 0092 transport return route protocol. This has been omitted from the examples for brevity.

    ```json= \"~transport\": { \"return_route\": \"all\" }

    ## Message Types\n\n### Status Request\n\nSent by the _Recipient_ to the _Mediator_ to request a `status` message.\n#### Example:\n\n```json=\n{\n    \"@id\": \"123456781\",\n    \"@type\": \"https://didcomm.org/messagepickup/2.0/status-request\",\n    \"recipient_key\": \"<key for messages>\"\n}\n

    recipient_key is optional. When specified, the Mediator MUST only return status related to that recipient key. This allows the Recipient to discover if any messages are in the queue that were sent to a specific key. You can find more details about recipient_key and how it's managed in 0211-route-coordination.

    "},{"location":"features/0685-pickup-v2/#status","title":"Status","text":"

    Status details about waiting messages.

    "},{"location":"features/0685-pickup-v2/#example","title":"Example:","text":"

    ```json= { \"@id\": \"123456781\", \"@type\": \"https://didcomm.org/messagepickup/2.0/status\", \"recipient_key\": \"\", \"message_count\": 7, \"longest_waited_seconds\": 3600, \"newest_received_time\": \"2019-05-01 12:00:00Z\", \"oldest_received_time\": \"2019-05-01 12:00:01Z\", \"total_bytes\": 8096, \"live_delivery\": false }

    `message_count` is the only REQUIRED attribute. The others MAY be present if offered by the _Mediator_.\n\n`longest_waited_seconds` is in seconds, and is the longest delay of any message in the queue.\n\n`total_bytes` represents the total size of all messages.\n\nIf a `recipient_key` was specified in the `status-request` message, the matching value MUST be specified \nin the `recipient_key` attribute of the status message.\n\n`live_delivery` state is also indicated in the status message. \n\n> Note: due to the potential for confusing what the actual state of the message queue\n> is, a status message MUST NOT be put on the pending message queue and MUST only\n> be sent when the _Recipient_ is actively connected (HTTP request awaiting\n> response, WebSocket, etc.).\n\n### Delivery Request\n\nA request from the _Recipient_ to the _Mediator_ to have pending messages delivered. \n\n#### Examples:\n\n```json=\n{\n    \"@id\": \"123456781\",\n    \"@type\": \"https://didcomm.org/messagepickup/2.0/delivery-request\",\n    \"limit\": 10,\n    \"recipient_key\": \"<key for messages>\"\n}\n

    ```json= { \"@type\": \"https://didcomm.org/messagepickup/2.0/delivery-request\", \"limit\": 1 }

    `limit` is a REQUIRED attribute, and after receipt of this message, the _Mediator_ SHOULD deliver up to the `limit` indicated. \n\n`recipient_key` is optional. When specified, the _Mediator_ MUST only return messages sent to that recipient key.\n\nIf no messages are available to be sent, a `status` message MUST be sent immediately.\n\nDelivered messages MUST NOT be deleted until delivery is acknowledged by a `messages-received` message.\n\n### Message Delivery\n\nMessages delivered from the queue must be delivered in a batch `delivery` message as attachments. The ID of each attachment is used to confirm receipt. The ID is an opaque value, and the _Recipient_ should not infer anything from the value.\n\nThe ONLY valid type of attachment for this message is a DIDComm Message in encrypted form.\n\nThe `recipient_key` attribute is only included when responding to a `delivery-request` message that indicates a `recipient_key`.\n\n```json=\n{\n    \"@id\": \"123456781\",\n    \"~thread\": {\n        \"thid\": \"<message id of delivery-request message>\"\n      },\n    \"@type\": \"https://didcomm.org/messagepickup/2.0/delivery\",\n    \"recipient_key\": \"<key for messages>\",\n    \"~attach\": [{\n        \"@id\": \"<messageid>\",\n        \"data\": {\n            \"base64\": \"\"\n        }\n    }]\n}\n

    This method of delivery does incur an encoding cost, but is much simpler to implement and a more robust interaction.
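As a sketch (helper name is ours), a Recipient could decode the base64 attachments of a delivery message and build the corresponding messages-received acknowledgement:

```python
import base64
from uuid import uuid4

def unpack_delivery(delivery):
    """Decode each attached (still encrypted) message and collect its id.

    Returns (ids, payloads); the opaque ids are echoed back in messages-received.
    """
    ids, payloads = [], []
    for att in delivery.get("~attach", []):
        ids.append(att["@id"])  # opaque value, used only for acknowledgement
        payloads.append(base64.b64decode(att["data"]["base64"]))
    return ids, payloads

delivery = {
    "@type": "https://didcomm.org/messagepickup/2.0/delivery",
    "~attach": [{"@id": "msg-1",
                 "data": {"base64": base64.b64encode(b"<encrypted>").decode()}}],
}
ids, payloads = unpack_delivery(delivery)
ack = {"@id": str(uuid4()),
       "@type": "https://didcomm.org/messagepickup/2.0/messages-received",
       "message_id_list": ids}
```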

    "},{"location":"features/0685-pickup-v2/#messages-received","title":"Messages Received","text":"

    After receiving messages, the Recipient sends an ack message indicating which messages are safe to clear from the queue.

    "},{"location":"features/0685-pickup-v2/#example_1","title":"Example:","text":"

    ```json= { \"@type\": \"https://didcomm.org/messagepickup/2.0/messages-received\", \"message_id_list\": [\"123\",\"456\"] }

    `message_id_list` is a list of ids of each message received. The id of each message is present in the attachment descriptor of each attached message of a `delivery` message.\n\nUpon receipt of this message, the _Mediator_ knows which messages have been received, and can remove them from the collection of queued messages with confidence. The mediator SHOULD send an updated `status` message reflecting the changes to the queue.\n\n### Multiple Recipients\n\nIf a message arrives at a _Mediator_ addressed to multiple _Recipients_, the message MUST be queued for each _Recipient_ independently. If one of the addressed _Recipients_ retrieves a message and indicates it has been received, that message MUST still be held and then removed by the other addressed _Recipients_.\n\n## Live Mode\nLive mode is the practice of delivering newly arriving messages directly to a connected _Recipient_. It is disabled by default and only activated by the _Recipient_. Messages that arrive when Live Mode is off MUST be stored in the queue for retrieval as described above. If Live Mode is active, and the connection is broken, a new inbound connection starts with Live Mode disabled.\n\nMessages already in the queue are not affected by Live Mode - they must still be requested with `delivery-request` messages.\n\nLive mode MUST only be enabled when a persistent transport is used, such as WebSockets.\n\n_Recipients_ have three modes of possible operation for message delivery with various abilities and level of development complexity:\n\n1. Never activate live mode. Poll for new messages with a `status_request` message, and retrieve them when available.\n2. Retrieve all messages from queue, and then activate Live Mode. This simplifies message processing logic in the _Recipient_.\n3. Activate Live Mode immediately upon connecting to the _Mediator_. Retrieve messages from the queue as possible. 
When receiving a message delivered live, the queue may be queried for any waiting messages delivered to the same key for processing.\n\n### Live Mode Change\nLive Mode is changed with a `live-delivery-change` message.\n\n#### Example:\n\n```json=\n{\n    \"@type\": \"https://didcomm.org/messagepickup/2.0/live-delivery-change\",\n    \"live_delivery\": true\n}\n

    Upon receiving the live_delivery_change message, the Mediator MUST respond with a status message.

    If sent with live_delivery set to true on a connection incapable of live delivery, a problem_report SHOULD be sent as follows:

    json= { \"@type\": \"https://didcomm.org/notification/1.0/problem-report\", \"~thread\": { \"pthid\": \"<message id of offending live_delivery_change>\" }, \"description\": \"Connection does not support Live Delivery\" }
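As a sketch of the Mediator side (helper and variable names are ours, and the set of live-capable transports is an assumption), the rules above could look like:

```python
from uuid import uuid4

LIVE_CAPABLE = {"ws", "wss"}  # assumption: only persistent transports support Live Mode

def handle_live_delivery_change(msg, transport_scheme, session):
    """React to a live-delivery-change message (Mediator side, illustrative)."""
    if msg["live_delivery"] and transport_scheme not in LIVE_CAPABLE:
        # Live delivery requested on a connection that cannot support it.
        return {
            "@type": "https://didcomm.org/notification/1.0/problem-report",
            "~thread": {"pthid": msg["@id"]},
            "description": "Connection does not support Live Delivery",
        }
    session["live_delivery"] = msg["live_delivery"]
    # A status message MUST be sent in response to an accepted change.
    return {"@id": str(uuid4()),
            "@type": "https://didcomm.org/messagepickup/2.0/status",
            "message_count": session.get("queued", 0),
            "live_delivery": session["live_delivery"]}
```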

    "},{"location":"features/0685-pickup-v2/#prior-art","title":"Prior art","text":"

    Version 1.0 of this protocol served as the main inspiration for this version. Version 1.0 suffered from not being very explicit and from an incomplete model of message delivery signaling.

    "},{"location":"features/0685-pickup-v2/#alternatives","title":"Alternatives","text":""},{"location":"features/0685-pickup-v2/#implementations","title":"Implementations","text":"

    The following lists the implementations (if any) of this RFC. Please do a pull request to add your implementation. If the implementation is open source, include a link to the repo or to the implementation within the repo. Please be consistent in the \"Name\" field so that a mechanical processing of the RFCs can generate a list of all RFCs supported by an Aries implementation.

    Name / Link Implementation Notes"},{"location":"features/0693-credential-representation/","title":"0693: Cross-Platform Credential Representation","text":""},{"location":"features/0693-credential-representation/#summary","title":"Summary","text":"

    Aries Agent developers currently build end user products without a standard method of rendering credentials. This RFC proposes how the Aries community can reuse available open technologies to build such a rendering method.

    Key results include: - Feasibility of cross-platform rendering. - Enabling the branding of credentials.

    This RFC also enumerates the specific challenges that could be tackled next by using this method.

    "},{"location":"features/0693-credential-representation/#motivation","title":"Motivation","text":"

    The human-computer interaction between agents and their users will always gravitate around credentials. This interaction is more useful for users when the credentials' representation resembles that of their conventional (physical) counterparts.

    Achieving effortless semiotic parity with analog credentials doesn't come easily or cheaply. In fact, when reviewing new Aries-based projects, it is always the case that the rendering of credentials with any form of branding is a demanding portion of the roadmap.

    Since the work required here is never declarative, it never stops feeling sisyphean. Indeed, the cost of writing code to represent a credential remains constant over time, no matter how many times we do it.

    Imagine if we could make rendering declarative while empowering branding.

    "},{"location":"features/0693-credential-representation/#entering-svg","title":"Entering SVG","text":"

    The solution we propose is to adopt SVG as the default format to describe how to represent SSI credentials, and to introduce a convention to ensure that credential values can be embedded in the final user interface. The following images illustrate how this can work:

    "},{"location":"features/0693-credential-representation/#svg-credential-values","title":"SVG + Credential Values","text":"

    We propose a notation of the form {{credential.values.[AttributeName]}} and {{credential.names.[AttributeName]}}. This way both values and attribute names can be used in branding activities.
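A minimal sketch of resolving this notation against credential data (the regex, helper name, and credential dictionary layout are ours, for illustration):

```python
import re

# Matches {{credential.values.[Attr]}} and {{credential.names.[Attr]}}
PLACEHOLDER = re.compile(r"\{\{credential\.(values|names)\.\[(\w+)\]\}\}")

def render(svg_template, credential):
    """Substitute attribute names/values into an SVG template string."""
    def sub(match):
        kind, attr = match.group(1), match.group(2)
        return str(credential[kind].get(attr, ""))
    return PLACEHOLDER.sub(sub, svg_template)

svg = '<text>{{credential.names.[givenName]}}: {{credential.values.[givenName]}}</text>'
credential = {"names": {"givenName": "Given name"}, "values": {"givenName": "Alice"}}
rendered = render(svg, credential)  # '<text>Given name: Alice</text>'
```

Because the output is plain SVG, it can be handed directly to any native SVG renderer.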

    "},{"location":"features/0693-credential-representation/#cross-platform","title":"Cross Platform","text":"

    Since SVG is a web standard based on XML, there is no shortage of existing tools to meet brand and engineering needs right away. Indeed, any implementation can be powered by a native SVG renderer and XML parser.

    "},{"location":"features/0693-credential-representation/#future-work","title":"Future work","text":""},{"location":"features/0699-push-notifications-apns/","title":"Aries RFC 0699: Push Notifications apns Protocol 1.0","text":"

    Note: This protocol is currently written to support native push notifications for iOS via Apple Push Notification Service. For the implementation for Android (using fcm), please refer to 0734: Push Notifications fcm

    "},{"location":"features/0699-push-notifications-apns/#summary","title":"Summary","text":"

    A protocol to coordinate a push notification configuration between two agents.

    "},{"location":"features/0699-push-notifications-apns/#motivation","title":"Motivation","text":"

    This protocol would give an agent enough information to send push notifications about specific events to an iOS device. This would be of great benefit for mobile wallets, as a holder can be notified when new messages are pending at the mediator. Mobile applications, such as wallets, are often killed and can not receive messages from the mediator anymore. Push notifications would resolve this problem.

    "},{"location":"features/0699-push-notifications-apns/#tutorial","title":"Tutorial","text":""},{"location":"features/0699-push-notifications-apns/#name-and-version","title":"Name and Version","text":"

    URI: https://didcomm.org/push-notifications-apns/1.0

    Protocol Identifier: push-notifications-apns

    Version: 1.0

    Since apns only supports iOS, no -ios or -android is required as it is implicit.

    "},{"location":"features/0699-push-notifications-apns/#key-concepts","title":"Key Concepts","text":"

    When an agent would like to receive push notifications at record event changes, e.g. incoming credential offer, incoming connection request, etc., the agent could initiate the protocol by sending a message to the other agent.

    This protocol only defines how an agent would get the token which is necessary for push notifications.

    Each platform has its own protocol so that we can easily use 0031: Discover Features 1.0 and 0557: Discover Features 2.X to see which specific services are supported by the other agent.

    "},{"location":"features/0699-push-notifications-apns/#roles","title":"Roles","text":"

    notification-sender

    notification-receiver

    The notification-sender is an agent who will send the notification-receiver notifications. The notification-receiver can get and set their push notification configuration at the notification-sender.

    "},{"location":"features/0699-push-notifications-apns/#services","title":"Services","text":"

    This RFC focuses on configuring the data necessary for pushing notifications to iOS, via apns.

    In order to implement this protocol, the set-device-info and get-device-info messages MUST be implemented by the notification-sender, and the device-info message MUST be implemented by the notification-receiver.

    "},{"location":"features/0699-push-notifications-apns/#supported-services","title":"Supported Services","text":"

    The protocol currently supports the following push notification services

    "},{"location":"features/0699-push-notifications-apns/#messages","title":"Messages","text":"

    When a notification-receiver wants to receive push notifications from the notification-sender, the notification-receiver has to send the following message:

    "},{"location":"features/0699-push-notifications-apns/#set-device-info","title":"Set Device Info","text":"

    Message to set the device info using the native iOS device token for push notifications.

    {\n  \"@type\": \"https://didcomm.org/push-notifications-apns/1.0/set-device-info\",\n  \"@id\": \"<UUID>\",\n  \"device_token\": \"<DEVICE_TOKEN>\"\n}\n

    Description of the fields:

    It is important to note that the set device info message can be used to set, update and remove the device info. To set and update these values, the normal messages as stated above can be used. To remove yourself from receiving push notifications, you can send the same message where all values MUST be null. If either value is null, a problem-report MAY be sent back with missing-value.
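For illustration (the helper name and example token are ours), the same set-device-info message type covers registration, updates, and removal:

```python
from uuid import uuid4

TYPE = "https://didcomm.org/push-notifications-apns/1.0/set-device-info"

def set_device_info(device_token):
    """Build a set-device-info message; pass None to stop receiving notifications."""
    return {"@type": TYPE, "@id": str(uuid4()), "device_token": device_token}

register = set_device_info("abc123-apns-token")   # set or update the token
deregister = set_device_info(None)                # all values null => remove
```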

    "},{"location":"features/0699-push-notifications-apns/#get-device-info","title":"Get Device Info","text":"

    When a notification-receiver wants to get their push-notification configuration, they can send the following message:

    {\n  \"@type\": \"https://didcomm.org/push-notifications-apns/1.0/get-device-info\",\n  \"@id\": \"<UUID>\"\n}\n
    "},{"location":"features/0699-push-notifications-apns/#device-info","title":"Device Info","text":"

    Response to the get device info:

    {\n  \"@type\": \"https://didcomm.org/push-notifications-apns/1.0/device-info\",\n  \"device_token\": \"<DEVICE_TOKEN>\",\n  \"~thread\": {\n    \"thid\": \"<GET_DEVICE_INFO_UUID>\"\n  }\n}\n

    This message can be used by the notification-receiver to receive their device info, e.g. device_token. If the notification-sender does not have this field for that connection, a problem-report MAY be used as a response with not-registered-for-push-notifications.

    "},{"location":"features/0699-push-notifications-apns/#adopted-messages","title":"Adopted messages","text":"

    In addition, the ack message is adopted into the protocol for confirmation by the notification-sender. The ack message SHOULD be sent in response to any of the set-device-info messages.

    "},{"location":"features/0699-push-notifications-apns/#sending-push-notifications","title":"Sending Push Notifications","text":"

    When an agent wants to send a push notification to another agent, the payload of the push notifications MUST include the @type property, and COULD include the message_tag property, to indicate the message is sent by the notification-sender. Guidelines on notification messages are not defined.

    {\n  \"@type\": \"https://didcomm.org/push-notifications-apns\",\n  \"message_tag\": \"<MESSAGE_TAG>\",\n  \"message_id\": \"<MESSAGE_ID>\",\n  ...\n}\n

    Description of the fields:

    "},{"location":"features/0699-push-notifications-apns/#drawbacks","title":"Drawbacks","text":"

    Each service requires a considerable amount of domain knowledge. The RFC can be extended with new services over time.

    The @type property in the push notification payload currently doesn't indicate which agent the push notification came from. In the case of using multiple mediators, for example, this means the notification-receiver does not know which mediator to retrieve the message from.

    "},{"location":"features/0699-push-notifications-apns/#prior-art","title":"Prior art","text":""},{"location":"features/0699-push-notifications-apns/#unresolved-questions","title":"Unresolved questions","text":"

    None

    "},{"location":"features/0699-push-notifications-apns/#implementations","title":"Implementations","text":"

    The following lists the implementations (if any) of this RFC. Please do a pull request to add your implementation. If the implementation is open source, include a link to the repo or to the implementation within the repo. Please be consistent in the \"Name\" field so that a mechanical processing of the RFCs can generate a list of all RFCs supported by an Aries implementation.

    Name / Link Implementation Notes"},{"location":"features/0721-revocation-notification-v2/","title":"Aries RFC 0721: Revocation Notification 2.0","text":""},{"location":"features/0721-revocation-notification-v2/#summary","title":"Summary","text":"

    This RFC defines the message format which an issuer uses to notify a holder that a previously issued credential has been revoked.

    "},{"location":"features/0721-revocation-notification-v2/#change-log","title":"Change Log","text":""},{"location":"features/0721-revocation-notification-v2/#motivation","title":"Motivation","text":"

    We need a standard protocol for an issuer to notify a holder that a previously issued credential has been revoked.

    For example, suppose a passport agency revokes Alice's passport. The passport agency (an issuer) may want to notify Alice (a holder) that her passport has been revoked so that she knows that she will be unable to use her passport to travel.

    "},{"location":"features/0721-revocation-notification-v2/#tutorial","title":"Tutorial","text":"

    The Revocation Notification protocol is a very simple protocol consisting of a single message:

    This simple protocol allows an issuer to choose to notify a holder that a previously issued credential has been revoked.

    It is the issuer's prerogative whether or not to notify the holder that a credential has been revoked. It is not a security risk if the issuer does not notify the holder that the credential has been revoked, nor if the message is lost. The holder will still be unable to use a revoked credential without this notification.

    "},{"location":"features/0721-revocation-notification-v2/#roles","title":"Roles","text":"

    There are two parties involved in a Revocation Notification: issuer and holder. The issuer sends the revoke message to the holder.

    "},{"location":"features/0721-revocation-notification-v2/#messages","title":"Messages","text":"

    The revoke message sent by the issuer to the holder. The holder should verify that the revoke message came from the connection that was originally used to issue the credential.

    Message format:

    {\n  \"@type\": \"https://didcomm.org/revocation_notification/2.0/revoke\",\n  \"@id\": \"<uuid-revocation-notification>\",\n  \"revocation_format\": \"<revocation_format>\",\n  \"credential_id\": \"<credential_id>\",\n  \"comment\": \"Some comment\"\n}\n

    Description of fields:

    "},{"location":"features/0721-revocation-notification-v2/#revocation-credential-identification-formats","title":"Revocation Credential Identification Formats","text":"

    In order to support multiple credential revocation formats, the following dictates the format of revocation formats and their credential ids. As additional credential revocation formats are determined, their credential id formats should be added.
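Both identifier formats shown below join a revocation registry id and a credential revocation id with a `::` separator. As a sketch (helper name is ours), a holder could split a received `credential_id` like this:

```python
def parse_credential_id(credential_id):
    """Split '<revocation-registry-id>::<credential-revocation-id>'.

    Registry ids contain single colons, so split on the last '::'.
    """
    rev_reg_id, sep, cred_rev_id = credential_id.rpartition("::")
    if not sep:
        raise ValueError("not a '<rev-reg-id>::<cred-rev-id>' identifier")
    return rev_reg_id, cred_rev_id

# Example taken from the indy-anoncreds row of the table.
reg, idx = parse_credential_id(
    "AsB27X6KRrJFsqZ3unNAH6:4:AsB27X6KRrJFsqZ3unNAH6:3:cl:48187:default:CL_ACCUM:"
    "3b24a9b0-a979-41e0-9964-2292f2b1b7e9::1"
)
```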

    Revocation Format Credential Identifier Format Example indy-anoncreds <revocation-registry-id>::<credential-revocation-id> AsB27X6KRrJFsqZ3unNAH6:4:AsB27X6KRrJFsqZ3unNAH6:3:cl:48187:default:CL_ACCUM:3b24a9b0-a979-41e0-9964-2292f2b1b7e9::1 anoncreds <revocation-registry-id>::<credential-revocation-id> did:indy:sovrin:5nDyJVP1NrcPAttP3xwMB9/anoncreds/v0/REV_REG_DEF/56495/npdb/TAG1::1"},{"location":"features/0721-revocation-notification-v2/#reference","title":"Reference","text":""},{"location":"features/0721-revocation-notification-v2/#drawbacks","title":"Drawbacks","text":"

    If we later added support for more general event subscription and notification message flows, this would be redundant.

    "},{"location":"features/0721-revocation-notification-v2/#rationale-and-alternatives","title":"Rationale and alternatives","text":""},{"location":"features/0721-revocation-notification-v2/#prior-art","title":"Prior art","text":""},{"location":"features/0721-revocation-notification-v2/#unresolved-questions","title":"Unresolved questions","text":""},{"location":"features/0721-revocation-notification-v2/#implementations","title":"Implementations","text":"

    The following lists the implementations (if any) of this RFC. Please do a pull request to add your implementation. If the implementation is open source, include a link to the repo or to the implementation within the repo. Please be consistent in the \"Name\" field so that a mechanical processing of the RFCs can generate a list of all RFCs supported by an Aries implementation.

    Name / Link Implementation Notes"},{"location":"features/0728-device-binding-attachments/","title":"Aries RFC 0728 : Device Binding Attachments","text":""},{"location":"features/0728-device-binding-attachments/#summary","title":"Summary","text":"

    Extends existing present-proof protocols to allow proving control of a hardware-bound key embedded within a verifiable credential.

    "},{"location":"features/0728-device-binding-attachments/#motivation","title":"Motivation","text":"

    To enable use cases which require a high level of assurance, a verifier must reach a high degree of confidence that a verifiable credential (VC) can only be used by the person it was issued for. One way to enforce this requirement is for the issuer to additionally bind the VC to a hardware-bound public key, thereby binding the credential to the device, as discussed in the DIF Wallet Security WG. The issuance process, including the attestation of the wallet and the hardware-bound key, is out of scope for this Aries RFC. A valid presentation of the VC then requires an additional challenge which proves that the presenter is in control of the corresponding private key. Since the proof of control must be part of a legitimate presentation, it makes sense to extend all current present-proof protocols.

    Note: The focus so far has been on AnonCreds, we will also look into device binding of W3C VC, however this is currently lacking in the examples.

    Warning: This concept is primarily meant for regulated, high-security use cases. Please review the drawbacks before considering using this.

    "},{"location":"features/0728-device-binding-attachments/#tutorial","title":"Tutorial","text":"

    To prove control of a hardware-bound key, the holder must answer a challenge for one or more public keys embedded within verifiable credentials.

    "},{"location":"features/0728-device-binding-attachments/#challenge","title":"Challenge","text":"

    The following challenge object must be provided by the verifier.

    "},{"location":"features/0728-device-binding-attachments/#device-binding-challenge","title":"device-binding-challenge","text":"

    ```json= { \"@type\": \"https://didcomm.org/device-binding/%ver/device-binding-challenge\", \"@id\": \"\", \"nonce\": \"\", // recommend at least 128-bit unsigned integer \"requests\": [ { \"id\": \"libindy-request-presentation-0\", \"path\": \"$.requested_attributes.attr2_referent.names.hardwareDid\", } ] }

    Description of attributes:\n\n- `nonce` -- a nonce which has to be signed by the holder to prove control\n- `requests` -- an array of referenced presentation requests\n    - `id` -- reference to an attached presentation request of `request-presentation` message (e.g. libindy request) \n    - `path` -- JsonPath to a requested attribute which represents a public key of a hardware bound key pair - represented as did:key\n\n\nThe `device-binding-challenge` must be attached to the `request-presentations~attach` array of the `request-presentation` message defined by [RFC-0037](https://github.com/hyperledger/aries-rfcs/blob/main../../features/0037-present-proof/README.md#request-presentation) and [RFC-0454](https://github.com/hyperledger/aries-rfcs/tree/main../../features/0454-present-proof-v2#request-presentation).\n\n#### Example request-presentation messages\n\nThe following represents a request-presentation message with an attached libindy presentation request and a corresponding device-binding-challenge.\n\n**Present Proof v1**\n```json=\n{\n    \"@type\": \"https://didcomm.org/present-proof/1.0/request-presentation\",\n    \"@id\": \"<uuid-request>\",\n    \"comment\": \"some comment\",\n    \"request_presentations~attach\": [\n        {\n            \"@id\": \"libindy-request-presentation-0\",\n            \"mime-type\": \"application/json\",\n            \"data\":  {\n                \"base64\": \"<bytes for base64>\"\n            }\n        }\n    ],\n    \"device_binding~attach\": [\n        {\n            \"@id\": \"device-binding-challenge-0\",\n            \"mime-type\": \"application/json\",\n            \"data\":  {\n                \"base64\": \"<device-binding-challenge>\"\n            }\n        }\n    ]\n}\n

    Present Proof v2

    ```json= { \"@type\": \"https://didcomm.org/present-proof/2.0/request-presentation\", \"@id\": \"\", \"goal_code\": \"\", \"comment\": \"some comment\", \"will_confirm\": true, \"present_multiple\": false, \"formats\" : [ { \"attach_id\" : \"libindy-request-presentation-0\", \"format\" : \"hlindy/proof-req@v2.0\", } ], \"request_presentations~attach\": [ { \"@id\": \"libindy-request-presentation-0\", \"mime-type\": \"application/json\", \"data\": { \"base64\": \"\" } } ], \"device_binding~attach\": [ { \"@id\": \"device-binding-challenge-0\" \"mime-type\": \"application/json\", \"data\": { \"base64\": \"\" // inner object } } ] }

    ### Response\n\nThe following response must be generated by the holder of the VC.\n\n#### device-binding-response\n```json=\n{\n    \"@type\": \"https://didcomm.org/device-binding/%ver/device-binding-response\",\n    \"@id\": \"<uuid-challenge-response>\",\n    \"proofs\" : [\n        {\n            \"id\": \"libindy-presentation-0\",\n            \"path\": \"$.requested_proof.revealed_attrs.attr1_referent.raw\"\n        }\n    ]\n}\n

    Description of attributes:

    The device-binding-response must be attached to the device_binding~attach array of a presentation message defined by RFC-0037 or RFC-0454.

    "},{"location":"features/0728-device-binding-attachments/#example-presentation-messages","title":"Example presentation messages","text":"

    The following represents a presentation message with an attached libindy presentation and a corresponding device-binding-response.

    Present Proof v1

    ```json= { \"@type\": \"https://didcomm.org/present-proof/1.0/presentation\", \"@id\": \"\", \"comment\": \"some comment\", \"presentations~attach\": [ { \"@id\": \"libindy-presentation-0\", \"mime-type\": \"application/json\", \"data\": { \"base64\": \"\" } } ], \"device_binding~attach\": [ { \"@id\": \"device-binding-response-0\", \"mime-type\": \"application/json\", \"data\": { \"base64\": \"\", \"jws\": { \"header\": { \"kid\": \"didz6MkmjY8GnV5i9YTDtPETC2uUAW6ejw3nk5mXF5yci5ab7th\" }, \"protected\": \"eyJhbGciOiJFZERTQSIsImlhdCI6MTU4Mzg4... (bytes omitted)\", \"signature\": \"3dZWsuru7QAVFUCtTd0s7uc1peYEijx4eyt5... (bytes omitted)\" } } } ] }

    **Present Proof v2**\n```json=\n{\n    \"@type\": \"https://didcomm.org/present-proof/%VER/presentation\",\n    \"@id\": \"<uuid-presentation>\",\n    \"goal_code\": \"<goal-code>\",\n    \"comment\": \"some comment\",\n    \"last_presentation\": true,\n    \"formats\" : [\n        {\n            \"attach_id\" : \"libindy-presentation-0\",\n            \"format\" : \"hlindy/proof-req@v2.0\"\n        }\n    ],\n    \"presentations~attach\": [\n        {\n            \"@id\": \"libindy-presentation-0\",\n            \"mime-type\": \"application/json\",\n            \"data\": {\n                \"base64\": \"<libindy presentation>\"\n            }\n        }\n    ],\n    \"device_binding~attach\": [\n        {\n            \"@id\": \"device-binding-response-0\",\n            \"mime-type\": \"application/json\",\n            \"data\":  {\n                \"base64\": \"<device-binding-response>\",\n                \"jws\": {\n                    \"header\": {\n                        \"kid\": \"did:key:z6MkmjY8GnV5i9YTDtPETC2uUAW6ejw3nk5mXF5yci5ab7th\"\n                    },\n                    \"protected\": \"eyJhbGciOiJFZERTQSIsImlhdCI6MTU4Mzg4... (bytes omitted)\",\n                    \"signature\": \"3dZWsuru7QAVFUCtTd0s7uc1peYEijx4eyt5... (bytes omitted)\"\n                }\n            }\n        }\n    ]\n}\n
    "},{"location":"features/0728-device-binding-attachments/#reference","title":"Reference","text":""},{"location":"features/0728-device-binding-attachments/#drawbacks","title":"Drawbacks","text":"

    Including a hardware-bound public key (as an attribute) in a Verifiable Credential/AnonCred is necessary for this concept but introduces a globally unique and therefore trackable identifier. As this public key is revealed to the verifier, there is a higher risk of correlation. The Issuer must only use a hardware-bound key for a single credential, and the Wallet should enforce that the key is never reused. Additionally, the holder should ideally be informed about the increased correlation risk by the wallet UX.

    "},{"location":"features/0728-device-binding-attachments/#rationale-and-alternatives","title":"Rationale and alternatives","text":"

    The rationale behind this proposal is to formalize the way a holder wallet can prove control of a (hardware-bound) key.

    This proposal tries to extend existing protocols to reduce the implementation effort for existing solutions. It might be reasonable to include this only in a new version of the present proof protocol (e.g. present-proof v3).

    "},{"location":"features/0728-device-binding-attachments/#prior-art","title":"Prior art","text":"

    None to our knowledge.

    "},{"location":"features/0728-device-binding-attachments/#unresolved-questions","title":"Unresolved questions","text":""},{"location":"features/0728-device-binding-attachments/#implementations","title":"Implementations","text":"

    The following lists the implementations (if any) of this RFC. Please do a pull request to add your implementation. If the implementation is open source, include a link to the repo or to the implementation within the repo. Please be consistent in the \"Name\" field so that a mechanical processing of the RFCs can generate a list of all RFCs supported by an Aries implementation.

    Implementation Notes may need to include a link to test results.

    Name / Link Implementation Notes"},{"location":"features/0734-push-notifications-fcm/","title":"Aries RFC 0734: Push Notifications fcm Protocol 1.0","text":"

    Note: This protocol is currently written to support native push notifications using fcm. For the implementation for iOS (via apns), please refer to 0699: Push Notifications apns

    "},{"location":"features/0734-push-notifications-fcm/#summary","title":"Summary","text":"

    A protocol to coordinate a push notification configuration between two agents.

    "},{"location":"features/0734-push-notifications-fcm/#motivation","title":"Motivation","text":"

    This protocol would give an agent enough information to send push notifications about specific events to a device that supports fcm. This would be of great benefit for mobile wallets, as a holder can be notified when new messages are pending at the mediator. Mobile applications, such as wallets, are often killed and can no longer receive messages from the mediator. Push notifications would resolve this problem.

    "},{"location":"features/0734-push-notifications-fcm/#tutorial","title":"Tutorial","text":""},{"location":"features/0734-push-notifications-fcm/#name-and-version","title":"Name and Version","text":"

    URI: https://didcomm.org/push-notifications-fcm/1.0

    Protocol Identifier: push-notifications-fcm

    Version: 1.0

    "},{"location":"features/0734-push-notifications-fcm/#key-concepts","title":"Key Concepts","text":"

    When an agent would like to receive push notifications at record event changes, e.g. incoming credential offer, incoming connection request, etc., the agent could initiate the protocol by sending a message to the other agent.

    This protocol only defines how an agent would get the token and platform that is necessary for push notifications.

    Each platform has its own protocol so that we can easily use 0031: Discover Features 1.0 and 0557: Discover Features 2.X to see which specific services are supported by the other agent.

    "},{"location":"features/0734-push-notifications-fcm/#roles","title":"Roles","text":"

    notification-sender

    notification-receiver

    The notification-sender is an agent that sends notifications to the notification-receiver. The notification-receiver can get and set their push notification configuration at the notification-sender.

    "},{"location":"features/0734-push-notifications-fcm/#services","title":"Services","text":"

    This RFC focuses on configuring the data necessary for pushing notifications via Firebase Cloud Messaging.

    In order to implement this protocol, the set-device-info and get-device-info messages MUST be implemented by the notification-sender, and the device-info message MUST be implemented by the notification-receiver.

    "},{"location":"features/0734-push-notifications-fcm/#supported-services","title":"Supported Services","text":"

    The protocol currently supports the following push notification services

    "},{"location":"features/0734-push-notifications-fcm/#messages","title":"Messages","text":"

    When a notification-receiver wants to receive push notifications from the notification-sender, the notification-receiver has to send the following message:

    "},{"location":"features/0734-push-notifications-fcm/#set-device-info","title":"Set Device Info","text":"

    Message to set the device info using the fcm device token and device platform for push notifications.

    {\n  \"@type\": \"https://didcomm.org/push-notifications-fcm/1.0/set-device-info\",\n  \"@id\": \"<UUID>\",\n  \"device_token\": \"<DEVICE_TOKEN>\",\n  \"device_platform\": \"<DEVICE_PLATFORM>\"\n}\n

    Description of the fields:

    It is important to note that the set-device-info message can be used to set, update and remove the device info. To set and update these values, the normal messages as stated above can be used. To remove yourself from receiving push notifications, you can send the same message where all values MUST be null. If either value is null, a problem-report MAY be sent back with missing-value.
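The set/update/remove semantics above can be sketched in code. This is a minimal illustration; the helper function names are hypothetical and not part of the protocol:

```python
import uuid

SET_DEVICE_INFO = "https://didcomm.org/push-notifications-fcm/1.0/set-device-info"

def build_set_device_info(device_token, device_platform):
    """Build a set-device-info message; pass None for both values
    to stop receiving push notifications."""
    return {
        "@type": SET_DEVICE_INFO,
        "@id": str(uuid.uuid4()),
        "device_token": device_token,
        "device_platform": device_platform,
    }

def check_set_device_info(msg):
    """Return a problem-report code, or None if the message is valid.
    Per the RFC, removal sets all values to null; a message where only
    one value is null warrants a 'missing-value' problem-report."""
    token = msg.get("device_token")
    platform = msg.get("device_platform")
    if (token is None) != (platform is None):
        return "missing-value"
    return None
```

A registration message would carry a real fcm token and platform; sending `build_set_device_info(None, None)` de-registers the device.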

    "},{"location":"features/0734-push-notifications-fcm/#get-device-info","title":"Get Device Info","text":"

    When a notification-receiver wants to get their push-notification configuration, they can send the following message:

    {\n  \"@type\": \"https://didcomm.org/push-notifications-fcm/1.0/get-device-info\",\n  \"@id\": \"<UUID>\"\n}\n
    "},{"location":"features/0734-push-notifications-fcm/#device-info","title":"Device Info","text":"

    Response to the get device info:

    {\n  \"@type\": \"https://didcomm.org/push-notifications-fcm/1.0/device-info\",\n  \"device_token\": \"<DEVICE_TOKEN>\",\n  \"device_platform\": \"<DEVICE_PLATFORM>\",\n  \"~thread\": {\n    \"thid\": \"<GET_DEVICE_INFO_UUID>\"\n  }\n}\n

    This message can be used by the notification-receiver to receive their device info, e.g. device_token and device_platform. If the notification-sender does not have this field for that connection, a problem-report MAY be used as a response with not-registered-for-push-notifications.
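The request/response pairing above can be sketched as follows. This is an illustrative helper with hypothetical names; the message shapes follow the examples in this section:

```python
DEVICE_INFO = "https://didcomm.org/push-notifications-fcm/1.0/device-info"

def answer_get_device_info(request, stored):
    """Build the device-info response to a get-device-info request.
    `stored` is the (device_token, device_platform) pair the
    notification-sender holds for this connection, or None if nothing
    was registered, in which case the caller would send a
    'not-registered-for-push-notifications' problem-report instead."""
    if stored is None:
        return None
    token, platform = stored
    return {
        "@type": DEVICE_INFO,
        "device_token": token,
        "device_platform": platform,
        # Thread the response back to the request it answers.
        "~thread": {"thid": request["@id"]},
    }
```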

    "},{"location":"features/0734-push-notifications-fcm/#adopted-messages","title":"Adopted messages","text":"

    In addition, the ack message is adopted into the protocol for confirmation by the notification-sender. The ack message SHOULD be sent in response to any of the set-device-info messages.

    "},{"location":"features/0734-push-notifications-fcm/#sending-push-notifications","title":"Sending Push Notifications","text":"

    When an agent wants to send a push notification to another agent, the payload of the push notification MUST include the @type property, and MAY include the message_tags property, to indicate that the message was sent by the notification-sender. Guidelines on notification messages are not defined.

    {\n  \"@type\": \"https://didcomm.org/push-notifications-fcm\",\n  \"message_tags\": [\"<MESSAGE_TAG>\"],\n  \"message_ids\": [\"<MESSAGE_ID>\"],\n  ...\n}\n

    Description of the fields:

    "},{"location":"features/0734-push-notifications-fcm/#drawbacks","title":"Drawbacks","text":"

    Each service requires a considerable amount of domain knowledge. The RFC can be extended with new services over time.

    The @type property in the push notification payload currently doesn't indicate which agent the push notification came from. When multiple mediators are used, for example, this means the notification-receiver does not know which mediator to retrieve the message from.

    "},{"location":"features/0734-push-notifications-fcm/#prior-art","title":"Prior art","text":""},{"location":"features/0734-push-notifications-fcm/#unresolved-questions","title":"Unresolved questions","text":"

    None

    "},{"location":"features/0734-push-notifications-fcm/#implementations","title":"Implementations","text":"

    The following lists the implementations (if any) of this RFC. Please do a pull request to add your implementation. If the implementation is open source, include a link to the repo or to the implementation within the repo. Please be consistent in the \"Name\" field so that a mechanical processing of the RFCs can generate a list of all RFCs supported by an Aries implementation.

    Name / Link Implementation Notes"},{"location":"features/0748-n-wise-did-exchange/","title":"Aries RFC 0748: N-wise DID Exchange Protocol 1.0","text":""},{"location":"features/0748-n-wise-did-exchange/#summary","title":"Summary","text":"

    This RFC defines a protocol for creating and managing relationships within a group of SSI subjects. In a certain sense, this RFC is a generalization of the pairwise concept and protocols 0160-connection-protocol and 0023-did-exchange for an arbitrary number of parties (n-wise).

    "},{"location":"features/0748-n-wise-did-exchange/#motivation","title":"Motivation","text":"

    SSI subjects and the agents representing them must have a way to establish relationships with each other in a trustful manner. In the simplest case, when only two participants are involved, this goal is achieved using the 0023-did-exchange protocol by creating and securely sharing their DID Documents directly between agents. However, it is often desirable to organize an interaction involving more than two parties. The number of parties in such an interaction may change over time, and most of the agents may be mobile ones. The simplest and most frequently used example of such an interaction is a group chat in an instant messenger. The trusted nature of SSI technology also makes it possible to use group relationships for legally significant bodies, such as a board of directors, a territorial community, or a dissertation council.

    "},{"location":"features/0748-n-wise-did-exchange/#tutorial","title":"Tutorial","text":""},{"location":"features/0748-n-wise-did-exchange/#name-and-version","title":"Name and Version","text":"

    n-wise, version 1.0

    URI: https://didcomm.org/n-wise/1.0

    "},{"location":"features/0748-n-wise-did-exchange/#registry-of-n-wise-states","title":"Registry of n-wise states","text":"

    The current state of n-wise is an up-to-date list of the parties' DID Documents. In a pairwise relationship, the state is stored by the participants and updated by a direct notification of the other party. When there are more than two participants, the problem of synchronizing the state of the n-wise (i.e. consensus) arises. It should be borne in mind that the state may change occasionally: users may be added or deleted, and DID Documents may be modified (when keys are rotated or endpoints are changed).

    In principle, any trusted repository can act as a registry of n-wise states. The following options for storing the n-wise state can be distinguished:

    The concept of pluggable consensus implies choosing the most appropriate way to maintain a registry of states, depending on the needs.

    N-wise state update is performed by committing the corresponding transaction to the registry of n-wise states. To get the current n-wise state, the agent receives a list of transactions from the registry of states, verifies them and applies sequentially, starting with the genesisTx. Incorrect transactions (without a proper signature or missing the required fields) are ignored. Thus, n-wise can be considered as a replicated state machine, which is executed on each participant.
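The replay described above — fetch the transaction list, verify each transaction, apply valid ones in order starting from genesisTx, and ignore the rest — can be modeled as a simple fold over the log. This sketch reduces signature verification to a pluggable predicate and covers only a few transaction types; all names are illustrative, not normative:

```python
def replay(transactions, verify, handlers):
    """Fold a transaction log into an n-wise state. Transactions that
    fail verification or have no registered handler are ignored, as the
    RFC requires for incorrect transactions."""
    state = {"participants": {}, "meta": {}}
    for tx in transactions:
        if not verify(tx, state):
            continue  # bad signature or missing fields: skip
        handler = handlers.get(tx.get("type"))
        if handler:
            handler(state, tx)
    return state

def apply_genesis(state, tx):
    state["participants"][tx["creatorDid"]] = tx["creatorDidDoc"]
    state["meta"]["owner"] = tx["creatorDid"]

def apply_add(state, tx):
    state["participants"][tx["did"]] = tx["didDoc"]

def apply_remove(state, tx):
    state["participants"].pop(tx["did"], None)

HANDLERS = {
    "genesisTx": apply_genesis,
    "addParticipantTx": apply_add,
    "removeParticipantTx": apply_remove,
}
```

Because every participant runs the same deterministic replay over the same log, all honest agents converge on the same state — the replicated state machine the text describes.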

    The specifics of recording and receiving transactions depend on the particular method of maintaining the n-wise registry and on a particular ledger. This RFC DOES NOT DEFINE specific n-wise registry implementations.

    "},{"location":"features/0748-n-wise-did-exchange/#directly-on-the-agents-side-edge-chain","title":"Directly on the agent's side (Edge chain)","text":""},{"location":"features/0748-n-wise-did-exchange/#public-or-private-distributed-ledger","title":"Public or private distributed ledger","text":""},{"location":"features/0748-n-wise-did-exchange/#centralized-storage","title":"Centralized storage","text":""},{"location":"features/0748-n-wise-did-exchange/#roles","title":"Roles","text":""},{"location":"features/0748-n-wise-did-exchange/#user","title":"User","text":""},{"location":"features/0748-n-wise-did-exchange/#owner","title":"Owner","text":""},{"location":"features/0748-n-wise-did-exchange/#creator","title":"Creator","text":""},{"location":"features/0748-n-wise-did-exchange/#inviter","title":"Inviter","text":""},{"location":"features/0748-n-wise-did-exchange/#invitee","title":"Invitee","text":""},{"location":"features/0748-n-wise-did-exchange/#actions","title":"Actions","text":""},{"location":"features/0748-n-wise-did-exchange/#n-wise-creation","title":"N-wise creation","text":"

    The creation begins with the initialization of the n-wise registry. This RFC DOES NOT SPECIFY the procedure for n-wise registry creation. After creating the registry, the creator commits the genesisTx transaction. The creator automatically obtains the role of owner. The creator MUST generate a unique DID and DID Document for n-wise.

    "},{"location":"features/0748-n-wise-did-exchange/#invitation-of-a-new-party","title":"Invitation of a new party","text":"

    Any n-wise party can create an invitation to join n-wise. First, inviter generates a pair of public and private invitation keys according to Ed25519. The public key of the invitation is pushed to the registry using the invitationTx transaction. Then the Invitation message with the invitation private key is sent out-of-band to the invitee. The invitation key pair is unique for each invitee and can be used only once.
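The one-time invitation key lifecycle can be sketched as below. The key material here is a stand-in generated with the standard library — an assumption of this sketch; a real implementation would generate an actual Ed25519 key pair (e.g. with a library such as PyNaCl) and commit the public key in the invitationTx:

```python
import secrets

class InvitationKeys:
    """One-time invitation keys: the public key is committed to the
    n-wise registry in an invitationTx; the private key travels
    out-of-band in the Invitation message and may be used exactly once."""

    def __init__(self):
        self._pending = {}

    def new_invitation(self, key_id):
        # Stand-in key material (assumption of this sketch): a real
        # implementation generates an Ed25519 key pair and publishes
        # the public key in the invitationTx.
        private_key = secrets.token_bytes(32)
        self._pending[key_id] = private_key
        return private_key  # sent out-of-band to the invitee

    def consume(self, key_id):
        """Deactivate the key when its addParticipantTx is committed;
        any second use is rejected."""
        return self._pending.pop(key_id, None) is not None
```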

    "},{"location":"features/0748-n-wise-did-exchange/#accepting-the-invitation","title":"Accepting the invitation","text":"

    Once the Invitation is received, the invitee generates a unique DID and DID Document for the n-wise and commits an AddParticipantTx transaction to the registry. It is NOT ALLOWED to reuse a DID from other relationships.

    The process of adding a new participant is shown in the figure below

    "},{"location":"features/0748-n-wise-did-exchange/#updating-did-document","title":"Updating DID Document","text":"

    Updating the user's DID Document is required for key rotation or endpoint updates. To update the associated DID Document, the user commits the updateParticipantTx transaction to the registry.

    "},{"location":"features/0748-n-wise-did-exchange/#removing-a-party-form-n-wise","title":"Removing a party from n-wise","text":"

    Removing is performed using the removeParticipantTx transaction. The user can delete itself (the corresponding transaction is signed by the user's public key). The owner can delete any user (the corresponding transaction is signed by the owner's public key).

    "},{"location":"features/0748-n-wise-did-exchange/#updating-n-wise-meta-information","title":"Updating n-wise meta information","text":"

    Meta information can be updated by the owner using the updateMetadataTx transaction.

    "},{"location":"features/0748-n-wise-did-exchange/#transferring-the-owner-role-to-other-user","title":"Transferring the owner role to other user","text":"

    The owner can transfer control of the n-wise to another user. The old owner loses the corresponding privileges and becomes a regular user. The operation is performed using the NewOwnerTx transaction.

    "},{"location":"features/0748-n-wise-did-exchange/#notification-on-n-wise-state-update","title":"Notification on n-wise state update","text":"

    Just after committing the transaction to the n-wise registry, the participant MUST send the ledger-update-notify message to all other parties. The participant who received ledger-update-notify SHOULD fetch updates from the n-wise registry.

    "},{"location":"features/0748-n-wise-did-exchange/#didcomm-messaging-within-n-wise","title":"DIDComm messaging within n-wise","text":"

    It is allowed to exchange DIDComm messages of any type within n-wise. Whether the sender belongs to a given n-wise is determined by the sender's verkey.

    This RFC DOES NOT DEFINE a procedure of exchanging messages within n-wise. In the simplest case, this can be implemented as sending a message to each participant in turn. In case of a large number of parties, it is advisable to consider using a centralized coordinator who would be responsible for the ordering and guaranteed sending of messages from the sender to the rest of parties.

    "},{"location":"features/0748-n-wise-did-exchange/#reference","title":"Reference","text":""},{"location":"features/0748-n-wise-did-exchange/#n-wise-registry-transactions","title":"N-wise registry transactions","text":"

    N-wise state is modified using transactions in the following form

    {\n  \"type\": \"transaction type\",\n  ...\n  \"proof\": {\n    \"type\": \"JcsEd25519Signature2020\",\n    \"verificationMethod\": \"did:alice#key1\",\n    \"signatureValue\": \"...\"\n  }\n}\n
    "},{"location":"features/0748-n-wise-did-exchange/#attributes","title":"Attributes","text":""},{"location":"features/0748-n-wise-did-exchange/#genesistx","title":"GenesisTx","text":"

    The genesisTx is the mandatory initial transaction that defines the basic properties of the n-wise.

    {\n  \"type\": \"genesisTx\",\n  \"label\": \"Council\",\n  \"creatorNickname\": \"Alice\",\n  \"creatorDid\": \"did:alice\",\n  \"creatorDidDoc\": {\n    ...\n  },\n  \"ledgerType\": \"iota@1.0\",\n  \"metaInfo\": {\n    ...\n  }\n}\n
    "},{"location":"features/0748-n-wise-did-exchange/#attributes_1","title":"Attributes","text":"

    The genesisTx transaction MUST be signed by the creator's public key defined in their DID Document.

    "},{"location":"features/0748-n-wise-did-exchange/#invitationtx","title":"InvitationTx","text":"

    This transaction adds the invitation public keys to the n-wise registry.

    {\n  \"type\": \"invitationTx\",\n  \"publicKey\": [\n    {\n      \"id\": \"invitationVerkeyForBob\",\n      \"type\": \"Ed25519VerificationKey2018\",\n      \"publicKeyBase58\": \"arekhj893yh3489qh\"\n    }\n  ]\n}\n
    "},{"location":"features/0748-n-wise-did-exchange/#attributes_2","title":"Attributes","text":"

    The invitationTx transaction MUST be signed by the user's public key defined in their DID Document.

    "},{"location":"features/0748-n-wise-did-exchange/#invitation-message","title":"Invitation message","text":"

    The message is intended to invite a new participant. It is sent via an arbitrary communication channel (pairwise, QR code, e-mail, etc.).

    {\n  \"@id\": \"5678876542345\",\n  \"@type\": \"https://didcomm.org/n-wise/1.0/invitation\",\n  \"label\": \"Invitation to join n-wise\",\n  \"invitationKeyId\": \"invitationVerkeyForBob\",\n  \"invitationPrivateKeyBase58\": \"qAue25rghuFRhrue....\",\n  \"ledgerType\": \"iota@1.0\",\n  \"ledger~attach\": [\n    {\n      \"@id\": \"attachment id\",\n      \"mime-type\": \"application/json\",\n      \"data\": {\n        \"base64\": \"<bytes for base64>\"\n      }\n    }\n  ]\n}\n
    "},{"location":"features/0748-n-wise-did-exchange/#attributes_3","title":"Attributes","text":""},{"location":"features/0748-n-wise-did-exchange/#addparticipanttx","title":"AddParticipantTx","text":"

    The transaction is designed to add a new user to n-wise.

    {\n  \"type\": \"addParticipantTx\",\n  \"nickname\": \"Bob\",\n  \"did\": \"did:bob\",\n  \"didDoc\": {\n    ...\n  }\n}\n
    "},{"location":"features/0748-n-wise-did-exchange/#attributes_4","title":"Attributes","text":"

    The AddParticipantTx transaction MUST be signed by the invitation private key (invitationPrivateKeyBase58) received in the Invitation message. Once the AddParticipantTx transaction is committed, the corresponding invitation key pair is considered deactivated (no other invitations can be signed by it).

    The transaction executor MUST verify that the invitation key was indeed previously added. Execution of the transaction entails the addition of a new party to the n-wise.

    "},{"location":"features/0748-n-wise-did-exchange/#updateparticipanttx","title":"UpdateParticipantTx","text":"

    The transaction is intended to update information about the participant.

    {\n  \"type\": \"updateParticipantTx\",\n  \"did\": \"did:bob\",\n  \"nickname\": \"Updated Bob\",\n  \"didDoc\": {\n    ...\n  }\n}\n
    "},{"location":"features/0748-n-wise-did-exchange/#attributes_5","title":"Attributes","text":"

    Transaction MUST be signed by the public key of the user being updated. The specified public key MUST be defined in the previous version of the DID Document.

    Execution of the transaction entails updating information about the participant.

    "},{"location":"features/0748-n-wise-did-exchange/#removeparticipanttx","title":"RemoveParticipantTx","text":"

    The transaction is designed to remove a party from n-wise.

    {\n  \"type\": \"removeParticipantTx\",\n  \"did\": \"did:bob\"\n}\n
    "},{"location":"features/0748-n-wise-did-exchange/#attributes_6","title":"Attributes","text":"

    The execution of the transaction entails the removal of the user and their DID Document from the list of n-wise parties.

    The transaction MUST be signed by the public key of the user who is going to be removed from n-wise, or with the public key of the owner.
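The authorization rule above reduces to a one-line check (illustrative only; the function name is hypothetical):

```python
def may_remove(signer_did, target_did, owner_did):
    """A removeParticipantTx is valid when signed by the user being
    removed (self-removal) or by the owner (removal of any user)."""
    return signer_did in (target_did, owner_did)
```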

    "},{"location":"features/0748-n-wise-did-exchange/#updatemetadatatx","title":"UpdateMetadataTx","text":"

    The transaction is intended to update the meta-information about n-wise.

    {\n    \"type\": \"updateMetadataTx\",\n    \"label\": \"Updated Council\",\n    \"metaInfo\": {\n      ...\n    }\n}\n
    "},{"location":"features/0748-n-wise-did-exchange/#attributes_7","title":"Attributes","text":"

    The transaction MUST be signed by the owner's public key.

    "},{"location":"features/0748-n-wise-did-exchange/#newownertx","title":"NewOwnerTx","text":"

    The transaction is intended to transfer the owner role to another user. The old owner simultaneously becomes a regular user.

    {\n    \"type\": \"newOwnerTx\",\n    \"did\": \"did:bob\"\n}\n
    "},{"location":"features/0748-n-wise-did-exchange/#attributes_8","title":"Attributes","text":"

    The transaction MUST be signed by the owner's public key.

    "},{"location":"features/0748-n-wise-did-exchange/#ledger-update-notify","title":"ledger-update-notify","text":"

    The message is intended to notify participants about the modifications of the n-wise state.

    {\n  \"@id\": \"4287428424\",\n  \"@type\": \"https://didcomm.org/n-wise/1.0/ledger-update-notify\"\n}\n
    "},{"location":"features/0748-n-wise-did-exchange/#drawbacks","title":"Drawbacks","text":""},{"location":"features/0748-n-wise-did-exchange/#rationale-and-alternatives","title":"Rationale and alternatives","text":"

    Public DID methods use blockchain networks or other public storage for their DID Documents. Peer DID rejects the use of external storage, which is absolutely justified for a pairwise relationship, since a DID Document can be stored by the other participant. If there are more than two participants, consensus on the list of DID Documents is required. N-wise is thus a middle ground between a Peer DID (the DID Document is stored only by a partner) and a public DID (the DID Document is available to everyone on the internet). So the concept of an n-wise state registry was introduced in this RFC, and its specific implementation (consensus between participants or a third-party trusted registry) remains at the discretion of the n-wise creator. The concept of a microledger is also worth considering for the n-wise state registry.

    One more promising high-level concept for building n-wise protocols is Gossyp.

    "},{"location":"features/0748-n-wise-did-exchange/#prior-art","title":"Prior art","text":"

    The term n-wise was proposed in the Peer DID specification and previously discussed in document. However, no strict formalization of this process was proposed, nor was the need for consensus between the participants noted.

    "},{"location":"features/0748-n-wise-did-exchange/#unresolved-questions","title":"Unresolved questions","text":""},{"location":"features/0748-n-wise-did-exchange/#implementations","title":"Implementations","text":"

    The following lists the implementations (if any) of this RFC. Please do a pull request to add your implementation. If the implementation is open source, include a link to the repo or to the implementation within the repo. Please be consistent in the \"Name\" field so that a mechanical processing of the RFCs can generate a list of all RFCs supported by an Aries implementation.

    Name / Link Implementation Notes Sirius SDK Java IOTA Ledger based implementation (IOTA n-wise registry spec). See a detailed example in Jupyter notebook."},{"location":"features/0755-oca-for-aries/","title":"0755: Overlays Capture Architecture (OCA) For Aries","text":""},{"location":"features/0755-oca-for-aries/#summary","title":"Summary","text":"

    Overlays Capture Architecture (OCA) is, per the OCA specification, a \"standardized global solution for data capture and exchange.\" Given a data structure (such as a verifiable credential), OCA allows for the creation of purpose-specific overlays of information about that data structure. Each overlay provides some knowledge (human- and machine-readable) about the overall data structure or the individual attributes within it. The information in the overlays makes it possible to create useful software for capturing data, displaying it and exchanging it. While the OCA website and OCA specification can be reviewed for a detailed background of OCA and its various purposes, in this RFC we'll focus on its purpose in Aries, which is quite constrained and pragmatic--a mechanism for an issuer to provide information about a verifiable credential to allow holder and verifier software to display the credential in a human-friendly way, including using the viewer's preferred language, and the issuer's preferred branding. The image below shows an Aries mobile Wallet displaying the same credential without and with OCA overlays applied in two languages. All of the differences in the latter two screenshots from the first come from issuer-supplied OCA data.

    This RFC formalizes how Aries verifiable credential issuers can make a JSON OCA Bundle (a set of related OCA overlays about a base data structure) available to holders and verifiers that includes the following information for each type of credential they issue.

    The standard flow of data between participants is as follows:

    While the issuer providing the OCA Bundle for a credential type using the credential supplement mechanism is the typical flow (as detailed in this RFC), other flows, outside of the scope of this RFC are possible. See the rationale and alternatives section of this RFC for some examples.

    "},{"location":"features/0755-oca-for-aries/#motivation","title":"Motivation","text":"

    The core data models for verifiable credentials are more concerned with the correct cryptographic processing of the credentials than with the general processing of the attribute data and the user experience of those using credentials. An AnonCreds verifiable credential contains the bare minimum of metadata about a credential--basically, just the developer-style names for the type of credential and the attributes within it. JSON-LD-based verifiable credentials have the capacity to add more information about the attributes in a credential, but the data is not easily accessed and is provided to enable machine processing rather than to improve the user experience.

    OCA allows credential issuers to declare information about the verifiable credential types it issues to improve the handling of those credentials by holder and verifier Aries agents, and to improve the on-screen display of the credentials, through the application of issuer-specified branding elements.

    "},{"location":"features/0755-oca-for-aries/#tutorial","title":"Tutorial","text":"

    The tutorial section of this RFC defines the coordination necessary for the creation, publishing, retrieval and use of an OCA Bundle for a given type of verifiable credential.

    In this overview, we assume the use of OCA specifically for verifiable\ncredentials, and further, specifically for AnonCreds verifiable credentials. OCA\ncan also be applied to any data structure, not just verifiable\ncredentials, and to other verifiable credential models, such as those based on\nJSON-LD- or JWT-style verifiable credentials. As the Aries\ncommunity applies OCA to other styles of verifiable credential, we\nwill extend this RFC.\n
    "},{"location":"features/0755-oca-for-aries/#issuer-activities","title":"Issuer Activities","text":"

    The use of OCA as defined in this RFC begins with an issuer preparing an OCA Bundle for each type of credential they issue. An OCA Bundle is a JSON data structure consisting of the Capture Base, and some additional overlays of different types (listed in the next section).

    While an OCA Bundle can be manually maintained in an OCA Bundle JSON file, a common method of maintaining the OCA source data is to use a spreadsheet and to generate the OCA Bundle from that source. See the section of this RFC called OCA Tooling for a link to an OCA Source spreadsheet, and for information on tools available for managing the OCA Source data and generating a corresponding OCA Bundle.

    The creation of the OCA Bundle and the configuration of the issuer's Aries Framework to deliver the OCA Bundle during credential issuance should be all that a specific issuer needs to do in using OCA for Aries. An Aries Framework that supports OCA for Aries should handle the rest of the technical requirements.

    "},{"location":"features/0755-oca-for-aries/#oca-specification-overlays","title":"OCA Specification Overlays","text":"

    All OCA data is based on a Capture Base, which defines the data structure described in the overlays. For AnonCreds, the Capture Base attributes MUST be the list of attributes in the AnonCreds schema for the given credential type. The Capture Base also MUST contain:

    With the Capture Base defined, the following additional overlay types MAY be created by the Issuer and SHOULD be expected by holders and verifiers. Overlay types flagged \"multilingual\" may have multiple instances of the overlay, one for each issuer-supported language (e.g. en for English, fr for French, es for Spanish, etc.) or country-language (e.g., en-CA for Canadian English, fr-CA for Canadian French), as defined in the OCA Specification about languages.

    An OCA Bundle that contains overlay types that a holder or verifier does not expect MUST be processed, with the unexpected overlays ignored.
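Holder-side overlay selection can then be sketched as follows. The bundle shape here (an `overlays` list whose entries carry `type` and `language` fields) is assumed from the OCA bundle examples; exact field names may differ in a given OCA library:

```python
def select_overlay(bundle, overlay_type, language):
    """Pick the overlay of `overlay_type` best matching the user's
    language tag: exact country-language match first (e.g. 'fr-CA'),
    then the bare language ('fr'), then any available instance.
    Overlay types the caller never asks for are simply left alone,
    i.e. unexpected overlays are ignored as the RFC requires."""
    candidates = [o for o in bundle.get("overlays", [])
                  if o.get("type") == overlay_type]
    for want in (language, language.split("-")[0]):
        for overlay in candidates:
            if overlay.get("language") == want:
                return overlay
    return candidates[0] if candidates else None
```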

    "},{"location":"features/0755-oca-for-aries/#aries-specific-dates-in-the-oca-format-overlay","title":"Aries-Specific Dates in the OCA Format Overlay","text":"

    In AnonCreds, zero-knowledge proof (ZKP) predicates (used, for example, to prove older than a given age based on date of birth without sharing the actual date of birth) must be based on integers. In the AnonCreds/Aries community, common ways for representing dates and date/times as integers so that they can be used in ZKP predicates are the dateint and Unix Time formats, respectively.

    In an OCA for Aries OCA Bundle, dateint and Unix Time attributes MUST have the following values in the indicated overlays:

    A recipient of an OCA Bundle with the combination of overlay values referenced above for dateint and Unix Time SHOULD convert the integer attribute data into a date or date/time (respectively) and display the information as appropriate for the user. For example, a mobile app should display the data as a date or date/time based on the user's language/country setting and timezone, possibly combined with an app setting for showing the data in short, medium, long or full form.
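The conversions involved can be illustrated with Python's standard library; display formatting itself is left to the app's locale and timezone handling:

```python
from datetime import date, datetime, timezone

def dateint_to_date(value: int) -> date:
    """A dateint encodes a calendar date as the integer YYYYMMDD,
    e.g. 20230711 for 2023-07-11, so it can be used in ZKP predicates."""
    return date(value // 10000, (value // 100) % 100, value % 100)

def unix_time_to_datetime(value: int) -> datetime:
    """Unix Time is seconds since 1970-01-01T00:00:00Z."""
    return datetime.fromtimestamp(value, tz=timezone.utc)
```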

    "},{"location":"features/0755-oca-for-aries/#aries-specific-branding-overlay","title":"Aries Specific \"branding\" Overlay","text":"

    In addition to the core OCA Overlays listed earlier, Aries issuers MAY include an additional Aries-specific extension overlay, the \"branding\" overlay, that gives the issuer a way to provide a set of data elements about the branding they would like applied to a given type of credential. The branding overlay is similar to the multilanguage Meta overlay (e.g. ones for English, French and Spanish), with a specified set of name/value pairs. Holders (and verifiers) use the branding values from the issuer when rendering a credential of that type according to RFC0756 OCA for Aries Style Guide.

    An example of the use of the branding overlay is as follows, along with a definition of the name/value pair elements and a sample image of how the elements are to be used. The sample is provided only to convey the concept of the branding overlay and how it is used. Issuers, holders and verifiers should refer to RFC0756 OCA for Aries Style Guide for details on how the elements are to be provided and used in displaying credentials.

    {\n    \"type\": \"aries/overlays/branding/1.0\",\n    \"digest\": \"EBQbQEV6qSEGDzGLj1CqT4e6yzESjPimF-Swmyltw5jU\",\n    \"capture_base\": \"EKpcSmz06sJs0b4g24e0Jc7OerbJrGN2iMVEnwLYKBS8\",\n    \"logo\": \"https://raw.githubusercontent.com/hyperledger/aries-rfcs/oca4aries../../features/0755-oca-for-aries/best-bc-logo.png\",\n    \"background_image\": \"https://raw.githubusercontent.com/hyperledger/aries-rfcs/oca4aries../../features/best-bc-background-image.png\",\n    \"background_image_slice\": \"https://raw.githubusercontent.com/hyperledger/aries-rfcs/oca4aries../../features/best-bc-background-image-slice.png\",\n    \"primary_background_color\": \"#003366\",\n    \"secondary_background_color\": \"#003366\",\n    \"primary_attribute\": \"family_name\",\n    \"secondary_attribute\": \"given_names\",\n    \"issued_date_attribute\": \"\",\n    \"expiry_date_attribute\": \"expiry_date_dateint\"\n}\n

    It is deliberate that the credential branding defined in this RFC does not attempt to achieve pixel-perfect on screen rendering of the equivalent paper credential. There are two reasons for this:

    Instead, the guidance in this RFC and the RFC0756 OCA for Aries Style Guide gives the issuer a few ways to brand their credentials, and holder/verifier apps information on how to use those issuer-provided elements in a manner consistent for all issuers and all credentials.

    "},{"location":"features/0755-oca-for-aries/#oca-issuer-tools","title":"OCA Issuer Tools","text":"

    An Aries OCA Bundle can be managed as pure JSON as found in this sample OCA for Aries OCA Bundle. However, managing such multilingual content in JSON is not easy, particularly if the language translations come from team members not comfortable with working in JSON. An easier way to manage the data is to use an OCA source spreadsheet for most of the data, some in a source JSON file, and to use a converter to create the OCA Bundle JSON from the two sources. We recommend that an issuer maintain the spreadsheet file and source JSON in version control and use a pipeline action to generate the OCA Bundle when the source files are updated.

    The OCA Source Spreadsheet, an example of which is attached to this RFC, contains the following:

    The JSON Source file contains the Aries-specific Branding Overlay. Attached to this RFC is an example Branding Overlay JSON file that issuers can use to start.

    The following is how to create an OCA Source spreadsheet and from that, generate an OCA Bundle. Over time, we expect that this part of the RFC will be clarified as the tooling evolves.

    NOTE: The capture_base and digest fields in the branding overlay of the resulting OCA Bundle JSON file will not be updated to be proper self-addressing identifiers (SAIDs) as required by the OCA Specification. We are looking into how to automate the updating of those data elements.

    Scripting the generation process should be relatively simple, and our expectation is that the community will evolve the [Parser from the Human Colossus Foundation] to simplify the process further.

    Over time, we expect to see other tooling become available--notably, a tool for issuers to see what credentials will look like when their OCA Bundle is applied.

    "},{"location":"features/0755-oca-for-aries/#issuing-a-credential","title":"Issuing A Credential","text":"

    This section of the specification remains under consideration. The use of the credential supplement as currently described here is somewhat problematic for a number of reasons.

    We are currently investigating if an OCA Bundle can be published to the same VDR as holds an AnonCreds Schema or Credential Definition. We think that would overcome each of those concerns and make it easier to both publish and retrieve OCA Bundles.

    The currently preferred mechanism for an issuer to provide an OCA Bundle to a holder is during issuance: when issuing a credential using RFC0453 Issue Credential, version 2.2 or later, the issuer provides, in the credential offer message, an OCA Bundle as a credential supplement.

    The OCA Bundle attachment must be signed by the issuer so that if the holder passes the OCA Bundle on to the verifier, the verifier can be certain that the issuer provided the OCA Bundle, and that it was not created by a malicious holder.

    Issuers should be aware that to ensure that the signature on a linked OCA Bundle (using the attachment type link) remains verifiable, the content resolved by the link must not change over time. For example, an Issuer might publish their OCA Bundles in a public GitHub repository, and send a link to the OCA Bundle during issuance. In that case the Issuer is advised to send a commit-based GitHub URL, rather than a branch-based reference. The Issuer may update the OCA Bundle sent to different holders over time, but once issued, each OCA Bundle MUST remain accessible.
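    As a sketch of this advice (a hypothetical helper, not part of the RFC), an issuer's publishing pipeline could check that an OCA Bundle link is commit-pinned rather than branch-based before sending it:

```python
import re

def is_commit_pinned(url: str) -> bool:
    # A GitHub "raw" URL is commit-pinned when the path segment after the
    # repository name is a 40-character hex commit SHA, not a branch name.
    m = re.match(r"https://raw\.githubusercontent\.com/[^/]+/[^/]+/([^/]+)/", url)
    return bool(m) and re.fullmatch(r"[0-9a-f]{40}", m.group(1)) is not None
```

    A commit-pinned URL keeps the signed content stable over time, whereas a branch-based URL may change whenever the branch is updated.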

    "},{"location":"features/0755-oca-for-aries/#warning-external-attachments","title":"Warning: External Attachments","text":"

    The use of an attachment of type link for the OCA Bundle itself, or the use of external references to the images in the branding Overlay could provide malicious issuers with a mechanism for tracking the use of a holder's verifiable credential. Specifically, the issuer could:

    A holder MAY choose not to attach an OCA Bundle to a verifier if it contains any external references. Non-malicious issuers are encouraged not to use external references in their OCA Bundles and, as such, to minimize the size of the inlined images in the branding overlay.

    "},{"location":"features/0755-oca-for-aries/#holder-activities","title":"Holder Activities","text":"

    Before processing a credential and an associated OCA Bundle, the holder SHOULD determine if the issuer is known in an ecosystem and has a sufficiently positive reputation. For example, the holder might determine if the issuer is in a suitable Trust Registry or request a presentation from the issuer about their identity.

    On receipt of a credential with an OCA Bundle supplement, the holder SHOULD retrieve the OCA Bundle attachment, verify the signature is from the issuer's public DID, verify the signature, and verify that the [OCA Capture Base] is for the credential being offered or issued to the holder. If verified, the holder should associate the OCA Bundle with the credential, including the signature.
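    One way to sketch the \"capture base is for the credential\" check (an illustrative, hypothetical helper; the RFC does not prescribe this logic) is to confirm that every attribute of the offered credential appears in the OCA Capture Base's attributes map:

```python
def capture_base_matches(capture_base: dict, credential_attrs: set) -> bool:
    # An OCA Capture Base lists the attributes it describes in its
    # "attributes" map (name -> type). A minimal consistency check is
    # that every credential attribute appears in that map.
    return credential_attrs <= set(capture_base.get("attributes", {}))
```

    Signature verification against the issuer's public DID would be performed separately, with whatever cryptographic library the holder software uses.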

    The holder SHOULD take appropriate security precautions in handling the remainder of the OCA data, especially the images as they could contain a malicious payload. The security risk is comparable to a browser receiving a web page containing images.

    Holder software should be implemented to use the OCA Bundle when processing and displaying the credential as noted in the list below. Developers of holder software should be familiar with the overlays the issuer is likely to provide (see list here) and how to use them according to RFC0756 OCA for Aries Style Guide.

    A recommended tactic when adding OCA support to a holder is, when a credential is issued without an associated OCA Bundle, to generate an OCA Bundle for the credential using the information available about the type of the credential, default images, and randomly generated colors. That allows for the creation of screens that assume an OCA Bundle is available. The RFC0756 OCA for Aries Style Guide contains guidelines for doing that.

    "},{"location":"features/0755-oca-for-aries/#adding-oca-bundles-to-present-proof-messages","title":"Adding OCA Bundles to Present Proof Messages","text":"

    Once a holder has an OCA Bundle that was issued with the credential, it MAY pass the OCA Bundle to a verifier when presenting a proof that includes claims from that credential. This can be done via the present proof credential supplements approach, similar to what was used when the credential was issued to the holder. When constructing the present_proof message to hold a proof, the holder would iterate through the credentials in the proof, and if there is an issuer-supplied OCA Bundle for a credential, add the OCA Bundle as a supplement to the message. The signature from the Issuer MUST be included with the supplement.

    A holder SHOULD NOT send an OCA Bundle to a verifier if the OCA Bundle is a link, or if any of the data items in the OCA Bundle are links, as noted in the warning about external attachments in OCA Bundles.

    "},{"location":"features/0755-oca-for-aries/#verifier-activities","title":"Verifier Activities","text":"

    On receipt of a presentation with OCA Bundle supplements, the verifier SHOULD retrieve the OCA Bundle attachments, verify the signatures are from the credential issuers' public DIDs, verify the signatures, and verify that the [OCA Capture Base] is for the credentials being presented to the verifier. If verified, the verifier should associate the OCA Bundle with the source credential from the presentation.

    On receipt of a presentation with OCA Bundle supplements, the verifier MAY process the OCA Bundle attachment and verify the issuer's signature. If it verifies, the verifier should associate the OCA Bundle with the source credential from the presentation. The verifier SHOULD take appropriate security precautions in handling the data, especially the images. The verifier software should be implemented to use the OCA Bundle when processing and displaying the credential as noted in the list below.

    Developers of verifier software should be familiar with the overlays the issuer is likely to provide (see list here) and how to use them according to RFC0756 OCA for Aries Style Guide. The list of how to use the OCA Bundle as a holder applies equally to verifiers.

    "},{"location":"features/0755-oca-for-aries/#reference","title":"Reference","text":""},{"location":"features/0755-oca-for-aries/#drawbacks","title":"Drawbacks","text":""},{"location":"features/0755-oca-for-aries/#rationale-and-alternatives","title":"Rationale and alternatives","text":""},{"location":"features/0755-oca-for-aries/#prior-art","title":"Prior art","text":"

    None, as far as we are aware.

    "},{"location":"features/0755-oca-for-aries/#unresolved-questions","title":"Unresolved questions","text":""},{"location":"features/0755-oca-for-aries/#implementations","title":"Implementations","text":"

    The following lists the implementations (if any) of this RFC. Please do a pull request to add your implementation. If the implementation is open source, include a link to the repo or to the implementation within the repo. Please be consistent in the \"Name\" field so that a mechanical processing of the RFCs can generate a list of all RFCs supported by an Aries implementation.

    Implementation Notes may need to include a link to test results.

    Name / Link Implementation Notes"},{"location":"features/0756-oca-for-aries-style-guide/","title":"0756: OCA for Aries Style Guide","text":""},{"location":"features/0756-oca-for-aries-style-guide/#summary","title":"Summary","text":"

    Support for credential branding in Aries agents is provided by information provided from the issuer of a given credential type using Overlays Capture Architecture (OCA) overlays. Aries agents (software) use the issuer-provided OCA data when displaying (rendering) the issuer\u2019s credential on screens. This style guide is for issuers to know what information to include in the OCA overlays and how those elements will be used by holders and verifiers. The style guide is also for Aries holder and verifier software makers about how to use the OCA data provided from issuers for a given credential type. It is up to the software makers to use OCA data provided by the issuers as outlined in this guide.

    For more information about the use of OCA in Aries, please see RFC0755 OCA for Aries

    "},{"location":"features/0756-oca-for-aries-style-guide/#motivation","title":"Motivation","text":"

    OCA Bundles are intended to be used by ALL Aries issuers and ALL Aries holders. Some Aries verifiers might also use OCA Bundles. This Style Guide provides guidance for issuers about what overlays to populate and with what information, and guidance for holders (and verifiers) about how to use the OCA Bundle data provided by the issuers when rendering credentials on screen.

    Issuers, holders and verifiers expect other issuers, holders and verifiers to follow this Style Guide. Issuers, holders and verifiers not following this Style Guide will likely cause end users to see unpredictable and potentially \"unfriendly\" results when credentials are displayed.

    It is in the best interest of the Aries community as a whole for those writing Aries agent software to use OCA Bundles and to follow this Style Guide in displaying credentials.

    "},{"location":"features/0756-oca-for-aries-style-guide/#tutorial","title":"Tutorial","text":"

    Before reviewing this Style Guide, please review and be familiar with RFC0755 OCA for Aries. It provides the technical details about OCA, the issuer's role in creating an OCA Bundle and delivering it to holders (and optionally, from holders to verifiers), and the holder's role in extracting information from the OCA Bundle about a held credential. This Style Guide provides the details about what each participant is expected to do in creating OCA Bundles and using the data in OCA Bundles to render credentials on screen.

    "},{"location":"features/0756-oca-for-aries-style-guide/#oca-for-aries-style-guide","title":"OCA for Aries Style Guide","text":"

    A Credential User Interface (UI) pulls the following elements from an issuer-provided OCA Bundle:

    "},{"location":"features/0756-oca-for-aries-style-guide/#credential-layouts","title":"Credential Layouts","text":"

    This style guide defines three layouts for credentials: the credential list layout, the stacked list layout, and the single credential layout. Holders and verifiers SHOULD display credentials using only these layouts, using the list layouts on screens containing a list of credentials and the single credential layout on screens showing a single credential. Holders and verifiers MAY display other relevant information on the page along with one of the layouts.

    The stacked list layout is the same as the credential list layout, with the stacked credentials cut off between elements 6 and 7. Examples of the stacked layout can be seen in the Stacking section of this document. In the stacked layout, one of the credentials in the stack may be displayed using the full credential list layout.

    Credential List Layout Single Credential Layout

    Figure: Credential Layouts

    The numbered items in the layouts are as follows. In the list, the OCA data element(s) is provided first, and, where the needed data element(s) is not available through an OCA Bundle, a calculation for a fallback is defined. It is good practice to have code that populates a per credential data structure with data from the credential\u2019s OCA Bundle if available, and if not, populated by the fallbacks. That way, the credentials are displayed in the same way with or without an OCA Bundle per credential. Unless noted, all of the data elements come from the \u201cbranding\u201d overlay. Items 10 and 11 are not included in the layouts but are included to document the fallbacks for those values.

    1. logo
      • Fallback: First letter of the alias of the DIDComm connection
    2. background_image_slice if present, else secondary_background_color
      • Fallback: Black overlay at 24% opacity
    3. primary_background_color
      • Fallback: Randomly generated color
    4. Credential Status derived from revocation status and expiry date (if available)
      • Fallback: Empty
    5. Meta overlay item issuer_name
      • Fallback: Alias of the DIDComm connection
    6. Meta overlay item name
      • Fallback: The AnonCreds Credential Definition tag, unless the value is either credential or default, otherwise the AnonCreds schema_name attribute from the AnonCreds schema
    7. primary_attribute
      • Fallback: Empty
    8. secondary_attribute
      • Fallback: Empty
    9. background_image if present, else secondary_background_color
      • Fallback: Black overlay at 24% opacity (default)
    10. issued_date_attribute
      • Fallback: If tracked, the date the credential was received by the Holder, else empty.
    11. expiry_date_attribute
      • Fallback: Empty
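    The fallback behavior above can be sketched as follows; the function and structure names are illustrative, not normative, and only a few of the eleven items are shown:

```python
import random

def display_fields(branding: dict, connection_alias: str) -> dict:
    # Populate a per-credential display structure from the OCA branding
    # overlay when available, falling back as the list above describes,
    # so credentials render the same way with or without an OCA Bundle.
    return {
        # Item 1: logo, falling back to the first letter of the connection alias
        "logo": branding.get("logo") or connection_alias[:1].upper(),
        # Item 3: primary background color, falling back to a random color
        "primary_background_color": branding.get("primary_background_color")
            or "#%06x" % random.randrange(0x1000000),
        # Items 7 and 8: primary/secondary attributes, falling back to empty
        "primary_attribute": branding.get("primary_attribute", ""),
        "secondary_attribute": branding.get("secondary_attribute", ""),
    }
```
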

    Figure: Template layers

    The font color is either black or white, as determined by calculating contrast levels (following Web Content Accessibility Guidelines) against the background colors from either the OCA Bundle or the generated defaults.
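    The black-or-white font choice can be computed directly from the WCAG definitions of relative luminance and contrast ratio; this is an illustrative sketch, not a normative algorithm:

```python
def relative_luminance(hex_color: str) -> float:
    # WCAG relative luminance of an sRGB color such as "#003366".
    def channel(c: int) -> float:
        c = c / 255
        return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4
    r, g, b = (int(hex_color[i:i + 2], 16) for i in (1, 3, 5))
    return 0.2126 * channel(r) + 0.7152 * channel(g) + 0.0722 * channel(b)

def font_color(background: str) -> str:
    # Choose black or white text, whichever has the higher WCAG
    # contrast ratio (L1 + 0.05) / (L2 + 0.05) against the background.
    lum = relative_luminance(background)
    white_contrast = 1.05 / (lum + 0.05)
    black_contrast = (lum + 0.05) / 0.05
    return "#FFFFFF" if white_contrast >= black_contrast else "#000000"
```

    For example, against the dark blue #003366 from the sample branding overlay, white text yields the higher contrast ratio.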

    Figure: example of a credential with no override specifications

    "},{"location":"features/0756-oca-for-aries-style-guide/#logo-image-specifications","title":"Logo Image Specifications","text":"

    The image in the top left corner is a space for the issuer logo. This space should not be used for anything other than the issuer logo. The logo image may be masked to fit within a rounded square with varying corner radii. Thus, the logo must be a square image (aspect ratio 1:1), as noted in the table below. The background defaults to white, so logo files with a transparent background will be overlaid on a white background.

    The following are the specifications for the credential logo for issuers.

    Images should be as small as possible to balance quality and download speed. To ensure image quality on all devices, it is recommended to use vector-based file types such as SVG.

    Preferred file type: SVG, JPG, PNG with transparent background; Aspect ratio: 1:1; Recommended image size: 240x240 px; Color space: RGB"},{"location":"features/0756-oca-for-aries-style-guide/#background-image-slice-specifications","title":"Background Image Slice Specifications","text":"

    For issuers to better represent their brand, issuers may specify an image slice that will be used as outlined in the samples below. Note the use of the image in a long, narrow space and the dynamic height. The image slice will be top aligned, scaled (preserving aspect ratio) and cropped as needed to fill the space.

    Credential height is dependent on the content and can be unpredictable. Different languages (English, French, etc.) will add more length to names, OS level settings such as font changes or text enlargement will unpredictably change the height of the credential. The recommended image size below is suggested to accommodate for most situations. Note that since the image is top aligned, the top area of the image is certain to be displayed, while the bottom section of the image may not always be visible.
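    A minimal sketch of the top-aligned scale-and-crop behavior described above (hypothetical helper; dimensions in pixels):

```python
def visible_slice(img_w: int, img_h: int, box_w: int, box_h: int):
    # Scale the image slice to the box width (preserving aspect ratio),
    # top-align it, and crop any overflow below the box. Returns the
    # visible portion of the source image, in source pixels.
    scale = box_w / img_w
    visible_h = min(img_h, box_h / scale)
    return img_w, round(visible_h)
```

    Because the slice is top-aligned, only the top of the source image is guaranteed to be shown when the credential's dynamic height is small.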

    Figure: Examples of the image slice behavior

    Types of images best used in this area are abstract images or graphical art. Do not use images that are difficult to interpret when cropped.

    Do

    Use an abstract image that can work even when cropped unexpectedly. Don\u2019t

    Use images that are hard to interpret when cropped. Avoid words

    Figure: Background image slice Do\u2019s and Don\u2019ts

    Preferred file type: SVG, PNG, JPG; Aspect ratio: 1:10; Recommended image size: 120x1200 px; Color space: RGB"},{"location":"features/0756-oca-for-aries-style-guide/#background-image-specifications","title":"Background Image Specifications","text":"

    The background image is to give issuers more opportunities to represent their brand and is used in some credential display screens. Avoid text in the background image.

    Do

    Use an image that represents your brand. Don\u2019t

    Use this image as a marketing platform. Avoid the use of text.

    Figure: Background image Do\u2019s and Don\u2019ts

    Preferred file type: SVG, PNG, JPG; Aspect ratio: 3:1; Recommended image size: 1080x360 px; Color space: RGB"},{"location":"features/0756-oca-for-aries-style-guide/#credential-status","title":"Credential Status","text":"

    To reduce visual clutter, the issued date (if present), expiry date (if present), and revocation status (if applicable) may be represented by an icon at the top right corner when:

    Figure: An example demonstrating how the revocation date, expiry date or issued date may be represented.

    The interpretation of the issued date, expiry date and revocation status may be dependent on the holder software, such as a wallet. For example, the specific icons used may vary by wallet, or the full status data may be printed over the credential.

    "},{"location":"features/0756-oca-for-aries-style-guide/#credential-name-and-issuer-name-guidelines","title":"Credential name and Issuer name guidelines","text":"

    Issuers should be mindful of the length of text on the credential as lengthy text will dynamically change the height of the credential. Expansive credentials risk reducing the number of fully visible credentials in a list.

    Figure: An example demonstrating how lengthy credentials can limit the number of visible credentials.

    Be mindful of other factors that may increase the length of text and hence, the height of the credential such as translated languages or the font size configured at the OS level.

    Figure: Examples showing the treatment of lengthy names

    "},{"location":"features/0756-oca-for-aries-style-guide/#primary-and-secondary-attribute-guidelines","title":"Primary and Secondary Attribute Guidelines","text":"

    If issuers expect people to hold multiples of their credentials of the same type, they may want to specify a primary and secondary attribute to display on the card face.

    Note that wallet builders or holders may limit the visibility of the primary and secondary attributes on the card face to mitigate privacy concerns. Issuers can expect that these attributes may be fully visible, redacted, or hidden.

    To limit personal information from being displayed on a card face, only specify what is absolutely necessary for wallet holders to differentiate between credentials of the same type. Do not display private information such as medical related attributes.

    Do

    Use attributes that help users identify their credentials. Always consider if a primary and secondary attribute is absolutely necessary. Don\u2019t

    Display attributes that contain private information.

    Figure: Primary/secondary attribute Do\u2019s and Don\u2019ts

    "},{"location":"features/0756-oca-for-aries-style-guide/#non-production-watermark","title":"Non-production watermark","text":"

    To identify non-production credentials, issuers can add a watermark to their credentials. The watermark is a simple line of text that can be customized depending on the issuer's needs. The line of text will also appear as a prefix to the credential name. The line of text should be succinct to ensure legibility. This watermark is not intended to be used for any purpose other than to mark non-production credentials. Ensure proper localization of the watermark is present in all languages.

    Examples of watermark text include:

    Do

    Use succinct words to describe the type of issued credential. This ensures legibility and does not increase the size of the credential unnecessarily. Don\u2019t

    Use long words or words that do not describe non-production credentials."},{"location":"features/0756-oca-for-aries-style-guide/#credential-resizing","title":"Credential resizing","text":"

    Credential size depends on the content of the credential and the size of the device. Text areas are resized according to the width.

    Figure: Treatment of the credential template on different devices

    Figure: An example of credential on different devices

    "},{"location":"features/0756-oca-for-aries-style-guide/#stacking","title":"Stacking","text":"

    Credentials may be stacked to overlap each other to increase the number of visible credentials in the viewport. The header remains unchanged. The issuer name, logo and credential name will always be visible but the primary and secondary attributes and the image slice will be obscured.

    Figure: An example of stacked credentials with default and enlarged text.

    "},{"location":"features/0756-oca-for-aries-style-guide/#accessibility","title":"Accessibility","text":"

    The alt-tags for the logo and background images come from the multilingual OCA Meta Overlay for the issuer name and credential type name.

    "},{"location":"features/0756-oca-for-aries-style-guide/#more-variations","title":"More Variations","text":"

    To view more credential variations using this template, view the Adobe XD file.

    "},{"location":"features/0756-oca-for-aries-style-guide/#drawbacks","title":"Drawbacks","text":"

    Defining and requesting adherence to a style guide is a lofty goal. With so many independent issuers, holders and verifiers using Aries, it is a challenge to get everyone to agree on a single way to display credentials for users. However, the alternative of everyone \"doing their own thing\", perhaps in small groups, will result in a poor experience for users, and be frustrating to both issuers trying to convey their brand, and holders (and verifiers) trying to create a beautiful experience for their users.

    "},{"location":"features/0756-oca-for-aries-style-guide/#rationale-and-alternatives","title":"Rationale and alternatives","text":"

    In coming up with this Style Guide, we considered how much control to give issuers, ultimately deciding that giving them too much control (e.g., pixel-precise layout of their credential) creates a usage/privacy risk (people using their credentials by showing them on screen, with all private data showing), is technically extremely difficult given the variations of holder devices, and is likely to result in a very poor user experience.

    A user experience group in Canada came up with the core design, and the Aries Working Group reviewed and approved of the Style Guide.

    "},{"location":"features/0756-oca-for-aries-style-guide/#prior-art","title":"Prior art","text":"

    The basic concept of giving issuers a small set of parameters that they can control in branding their data is used in many applications and communities. Relevant to the credential use case is the application of this concept in the Apple Wallet and Google Wallet. Core to this is setting the expectations of all of the participants about how their data will be used, and how to use the data provided. The Aries holder (and verifier) case differs from that of the Apple Wallet and Google Wallet in that there is not just one holder using the data from many issuers to render it on screen, but many holders that are expected to adhere to this Style Guide.

    "},{"location":"features/0756-oca-for-aries-style-guide/#unresolved-questions","title":"Unresolved questions","text":""},{"location":"features/0756-oca-for-aries-style-guide/#implementations","title":"Implementations","text":"

    The following lists the implementations (if any) of this RFC. Please do a pull request to add your implementation. If the implementation is open source, include a link to the repo or to the implementation within the repo. Please be consistent in the \"Name\" field so that a mechanical processing of the RFCs can generate a list of all RFCs supported by an Aries implementation.

    Implementation Notes may need to include a link to test results.

    Name / Link Implementation Notes"},{"location":"features/0771-anoncreds-attachments/","title":"Aries RFC 0771: AnonCreds Attachment Formats for Requesting and Presenting Credentials","text":""},{"location":"features/0771-anoncreds-attachments/#summary","title":"Summary","text":"

    This RFC registers attachment formats used with Hyperledger AnonCreds ZKP-oriented credentials in the Issue Credential Protocol 2.0 and Present Proof Protocol 2.0. If not specified otherwise, this follows the rules as defined in the AnonCreds Specification.

    "},{"location":"features/0771-anoncreds-attachments/#motivation","title":"Motivation","text":"

    Allows AnonCreds credentials to be used with credential-related protocols that take pluggable formats as payloads.

    "},{"location":"features/0771-anoncreds-attachments/#reference","title":"Reference","text":""},{"location":"features/0771-anoncreds-attachments/#credential-filter-format","title":"Credential Filter format","text":"

    The potential holder uses this format to propose criteria for a potential credential for the issuer to offer. The format defined here is not part of the AnonCreds spec, but is a Hyperledger Aries-specific message.

    The identifier for this format is anoncreds/credential-filter@v1.0. The data structure allows specifying zero or more criteria from the following structure:

    {\n  \"schema_issuer_id\": \"<schema_issuer_id>\",\n  \"schema_name\": \"<schema_name>\",\n  \"schema_version\": \"<schema_version>\",\n  \"schema_id\": \"<schema_identifier>\",\n  \"issuer_id\": \"<issuer_id>\",\n  \"cred_def_id\": \"<credential_definition_identifier>\"\n}\n

    The potential holder may not know, and need not specify, all of these criteria. For example, the holder might only know the schema name and the (credential) issuer id. Recall that the potential holder may specify target attribute values and MIME types in the credential preview.

    For example, the JSON structure might look like this:

    {\n  \"schema_issuer_id\": \"did:sov:4RW6QK2HZhHxa2tg7t1jqt\",\n  \"schema_name\": \"bcgov-mines-act-permit.bcgov-mines-permitting\",\n  \"issuer_id\": \"did:sov:4RW6QK2HZhHxa2tg7t1jqt\"\n}\n

    A complete propose-credential message from the Issue Credential protocol 2.0 embeds this format as an attachment in the filters~attach array:

    {\n  \"@id\": \"<uuid of propose message>\",\n  \"@type\": \"https://didcomm.org/issue-credential/%VER/propose-credential\",\n  \"comment\": \"<some comment>\",\n  \"formats\": [\n    {\n      \"attach_id\": \"<attach@id value>\",\n      \"format\": \"anoncreds/credential-filter@v1.0\"\n    }\n  ],\n  \"filters~attach\": [\n    {\n      \"@id\": \"<attach@id value>\",\n      \"mime-type\": \"application/json\",\n      \"data\": {\n        \"base64\": \"ewogICAgInNjaGVtYV9pc3N1ZXJfZGlkIjogImRpZDpzb3Y... (clipped)... LMkhaaEh4YTJ0Zzd0MWpxdCIKfQ==\"\n      }\n    }\n  ]\n}\n
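    As an illustration of how the base64 payload of such an attachment is produced (hypothetical helper name; the message structure follows RFC0453 Issue Credential 2.0):

```python
import base64
import json

def make_filter_attachment(filter_criteria: dict, attach_id: str) -> dict:
    # Encode the credential-filter JSON as a base64 attachment entry
    # suitable for the filters~attach array of a propose-credential message.
    payload = json.dumps(filter_criteria).encode()
    return {
        "@id": attach_id,
        "mime-type": "application/json",
        "data": {"base64": base64.b64encode(payload).decode()},
    }
```

    The corresponding entry in the formats array would reference the same attach_id with format anoncreds/credential-filter@v1.0.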
    "},{"location":"features/0771-anoncreds-attachments/#credential-offer-format","title":"Credential Offer format","text":"

    This format is used to clarify the structure and semantics (but not the concrete data values) of a potential credential, in offers sent from issuer to potential holder.

    The identifier for this format is anoncreds/credential-offer@v1.0. It must follow the structure of a Credential Offer as defined in the AnonCreds specification.

    The JSON structure might look like this:

    {\n    \"schema_id\": \"4RW6QK2HZhHxa2tg7t1jqt:2:bcgov-mines-act-permit.bcgov-mines-permitting:0.2.0\",\n    \"cred_def_id\": \"4RW6QK2HZhHxa2tg7t1jqt:3:CL:58160:default\",\n    \"nonce\": \"57a62300-fbe2-4f08-ace0-6c329c5210e1\",\n    \"key_correctness_proof\" : <key_correctness_proof>\n}\n

    A complete offer-credential message from the Issue Credential protocol 2.0 embeds this format as an attachment in the offers~attach array:

    {\n    \"@type\": \"https://didcomm.org/issue-credential/%VER/offer-credential\",\n    \"@id\": \"<uuid of offer message>\",\n    \"replacement_id\": \"<issuer unique id>\",\n    \"comment\": \"<some comment>\",\n    \"credential_preview\": <json-ld object>,\n    \"formats\" : [\n        {\n            \"attach_id\" : \"<attach@id value>\",\n            \"format\": \"anoncreds/credential-offer@v1.0\"\n        }\n    ],\n    \"offers~attach\": [\n        {\n            \"@id\": \"<attach@id value>\",\n            \"mime-type\": \"application/json\",\n            \"data\": {\n                \"base64\": \"ewogICAgInNjaGVtYV9pZCI6ICI0Ulc2UUsySFpoS... (clipped)... jb3JyZWN0bmVzc19wcm9vZj4KfQ==\"\n            }\n        }\n    ]\n}\n
    "},{"location":"features/0771-anoncreds-attachments/#credential-request-format","title":"Credential Request format","text":"

    This format is used to formally request a credential. It differs from the Credential Offer above in that it contains a cryptographic commitment to a link secret; an issuer can therefore use it to bind a concrete instance of an issued credential to the appropriate holder. (In contrast, the credential offer describes the schema and cred definition, but not enough information to actually issue to a specific holder.)

    The identifier for this format is anoncreds/credential-request@v1.0. It must follow the structure of a Credential Request as defined in the AnonCreds specification.

    The JSON structure might look like this:

    {\n    \"entropy\" : \"e7bc23ad-1ac8-4dbc-92dd-292ec80c7b77\",\n    \"cred_def_id\" : \"4RW6QK2HZhHxa2tg7t1jqt:3:CL:58160:default\",\n    // Fields below can depend on Cred Def type\n    \"blinded_ms\" : <blinded_master_secret>,\n    \"blinded_ms_correctness_proof\" : <blinded_ms_correctness_proof>,\n    \"nonce\": \"fbe22300-57a6-4f08-ace0-9c5210e16c32\"\n}\n

    A complete request-credential message from the Issue Credential protocol 2.0 embeds this format as an attachment in the requests~attach array:

    {\n  \"@id\": \"cf3a9301-6d4a-430f-ae02-b4a79ddc9706\",\n  \"@type\": \"https://didcomm.org/issue-credential/%VER/request-credential\",\n  \"comment\": \"<some comment>\",\n  \"formats\": [\n    {\n      \"attach_id\": \"7cd11894-838a-45c0-a9ec-13e2d9d125a1\",\n      \"format\": \"anoncreds/credential-request@v1.0\"\n    }\n  ],\n  \"requests~attach\": [\n    {\n      \"@id\": \"7cd11894-838a-45c0-a9ec-13e2d9d125a1\",\n      \"mime-type\": \"application/json\",\n      \"data\": {\n        \"base64\": \"ewogICAgInByb3Zlcl9kaWQiIDogImRpZDpzb3Y6YWJjeHl.. (clipped)... DAtNTdhNi00ZjA4LWFjZTAtOWM1MjEwZTE2YzMyIgp9\"\n      }\n    }\n  ]\n}\n
    "},{"location":"features/0771-anoncreds-attachments/#credential-format","title":"Credential format","text":"

A concrete, issued AnonCreds credential may be transmitted over many protocols, but is specifically expected as the final message in the Issue Credential protocol 2.0. The identifier for this format is anoncreds/credential@v1.0.

    This is a credential that's designed to be held but not shared directly. It is stored in the holder's wallet and used to derive a novel ZKP or W3C-compatible verifiable presentation just in time for each sharing of credential material.

    The encoded values of the credential MUST follow the encoding algorithm as described in Encoding Attribute Data. It must follow the structure of a Credential as defined in the AnonCreds specification.

    The JSON structure might look like this:

{\n    \"schema_id\": \"4RW6QK2HZhHxa2tg7t1jqt:2:bcgov-mines-act-permit.bcgov-mines-permitting:0.2.0\",\n    \"cred_def_id\": \"4RW6QK2HZhHxa2tg7t1jqt:3:CL:58160:default\",\n    \"rev_reg_id\": \"EyN78DDGHyok8qw6W96UBY:4:EyN78DDGHyok8qw6W96UBY:3:CL:56389:CardossierOrgPerson:CL_ACCUM:1-1000\",\n    \"values\": {\n        \"attr1\" : {\"raw\": \"value1\", \"encoded\": \"value1_as_int\" },\n        \"attr2\" : {\"raw\": \"value2\", \"encoded\": \"value2_as_int\" }\n    },\n    // Fields below can depend on Cred Def type\n    \"signature\": <signature>,\n    \"signature_correctness_proof\": <signature_correctness_proof>,\n    \"rev_reg\": <revocation registry state>,\n    \"witness\": <witness>\n}\n
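The "encoded" values above come from the encoding algorithm referenced in Encoding Attribute Data. A sketch follows, assuming the commonly used convention (32-bit integers and digit strings within that range pass through unchanged; everything else becomes the SHA-256 digest of the UTF-8 string, rendered as a decimal integer); consult the normative text for the exact rules:

```python
import hashlib

I32_BOUND = 2 ** 31  # signed 32-bit integer range

def encode_attribute(value) -> str:
    """Encode a raw attribute value for AnonCreds signing (hedged sketch)."""
    if isinstance(value, bool):
        value = int(value)  # assumption: booleans treated as 0/1
    if isinstance(value, int) and -I32_BOUND <= value < I32_BOUND:
        return str(value)  # in-range integers pass through
    raw = str(value)
    try:
        if -I32_BOUND <= int(raw) < I32_BOUND:
            return raw  # digit strings within 32-bit range also pass through
    except ValueError:
        pass
    # Everything else: SHA-256 of the UTF-8 string, as a decimal integer string.
    digest = hashlib.sha256(raw.encode("utf-8")).digest()
    return str(int.from_bytes(digest, "big"))
```

This is why, in presentation examples, an attribute like height has matching raw and encoded values ("175"/"175") while a name string encodes to a large integer.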

    An exhaustive description of the format is out of scope here; it is more completely documented in the AnonCreds Specification.

    "},{"location":"features/0771-anoncreds-attachments/#proof-request-format","title":"Proof Request format","text":"

This format is used to formally request a verifiable presentation (proof) derived from an AnonCreds-style ZKP-oriented credential.

The format can also be used to propose a presentation; in that case, the nonce field MUST NOT be provided. The nonce field is required when the proof request is used to request a proof.

The identifier for this format is anoncreds/proof-request@v1.0. It must follow the structure of a Proof Request as defined in the AnonCreds specification.

    Here is a sample proof request that embodies the following: \"Using a government-issued ID, disclose the credential holder\u2019s name and height, hide the credential holder\u2019s sex, get them to self-attest their phone number, and prove that their age is at least 18\":

    {\n    \"nonce\": \"2934823091873049823740198370q23984710239847\",\n    \"name\":\"proof_req_1\",\n    \"version\":\"0.1\",\n    \"requested_attributes\":{\n        \"attr1_referent\": {\"name\":\"sex\"},\n        \"attr2_referent\": {\"name\":\"phone\"},\n        \"attr3_referent\": {\"names\": [\"name\", \"height\"], \"restrictions\": <restrictions specifying government-issued ID>}\n    },\n    \"requested_predicates\":{\n        \"predicate1_referent\":{\"name\":\"age\",\"p_type\":\">=\",\"p_value\":18}\n    }\n}\n
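The nonce rule above (the field is absent when proposing a presentation, required when requesting one) can be enforced with a small validation sketch; validate_proof_request is a hypothetical helper, not defined by this RFC:

```python
def validate_proof_request(body: dict, is_proposal: bool) -> None:
    """Enforce the nonce rule: a proposal MUST NOT carry a nonce,
    a proof request MUST carry one. Illustrative sketch only."""
    has_nonce = "nonce" in body
    if is_proposal and has_nonce:
        raise ValueError("presentation proposal MUST NOT include a nonce")
    if not is_proposal and not has_nonce:
        raise ValueError("proof request MUST include a nonce")
```

A verifier would run this before sending a request; a prover could run it on a received message before building a presentation.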
    "},{"location":"features/0771-anoncreds-attachments/#proof-format","title":"Proof format","text":"

This is the format of an AnonCreds-style ZKP. The raw values encoded in the presentation MUST be verified against the encoded values using the encoding algorithm as described in Encoding Attribute Data.

    The identifier for this format is anoncreds/proof@v1.0. It must follow the structure of a Presentation as defined in the AnonCreds specification.

    A proof that responds to the previous proof request sample looks like this:

    {\n  \"proof\":{\n    \"proofs\":[\n      {\n        \"primary_proof\":{\n          \"eq_proof\":{\n            \"revealed_attrs\":{\n              \"height\":\"175\",\n              \"name\":\"1139481716457488690172217916278103335\"\n            },\n            \"a_prime\":\"5817705...096889\",\n            \"e\":\"1270938...756380\",\n            \"v\":\"1138...39984052\",\n            \"m\":{\n              \"master_secret\":\"375275...0939395\",\n              \"sex\":\"3511483...897083518\",\n              \"age\":\"13430...63372249\"\n            },\n            \"m2\":\"1444497...2278453\"\n          },\n          \"ge_proofs\":[\n            {\n              \"u\":{\n                \"1\":\"152500...3999140\",\n                \"2\":\"147748...2005753\",\n                \"0\":\"8806...77968\",\n                \"3\":\"10403...8538260\"\n              },\n              \"r\":{\n                \"2\":\"15706...781609\",\n                \"3\":\"343...4378642\",\n                \"0\":\"59003...702140\",\n                \"DELTA\":\"9607...28201020\",\n                \"1\":\"180097...96766\"\n              },\n              \"mj\":\"134300...249\",\n              \"alpha\":\"827896...52261\",\n              \"t\":{\n                \"2\":\"7132...47794\",\n                \"3\":\"38051...27372\",\n                \"DELTA\":\"68025...508719\",\n                \"1\":\"32924...41082\",\n                \"0\":\"74906...07857\"\n              },\n              \"predicate\":{\n                \"attr_name\":\"age\",\n                \"p_type\":\"GE\",\n                \"value\":18\n              }\n            }\n          ]\n        },\n        \"non_revoc_proof\":null\n      }\n    ],\n    \"aggregated_proof\":{\n      \"c_hash\":\"108743...92564\",\n      \"c_list\":[ 6 arrays of 257 numbers between 0 and 255]\n    }\n  },\n  \"requested_proof\":{\n    \"revealed_attrs\":{\n      \"attr1_referent\":{\n        \"sub_proof_index\":0,\n        
\"raw\":\"Alex\",\n        \"encoded\":\"1139481716457488690172217916278103335\"\n      }\n    },\n    \"revealed_attr_groups\":{\n      \"attr4_referent\":{\n        \"sub_proof_index\":0,\n        \"values\":{\n          \"name\":{\n            \"raw\":\"Alex\",\n            \"encoded\":\"1139481716457488690172217916278103335\"\n          },\n          \"height\":{\n            \"raw\":\"175\",\n            \"encoded\":\"175\"\n          }\n        }\n      }\n    },\n    \"self_attested_attrs\":{\n      \"attr3_referent\":\"8-800-300\"\n    },\n    \"unrevealed_attrs\":{\n      \"attr2_referent\":{\n        \"sub_proof_index\":0\n      }\n    },\n    \"predicates\":{\n      \"predicate1_referent\":{\n        \"sub_proof_index\":0\n      }\n    }\n  },\n  \"identifiers\":[\n    {\n      \"schema_id\":\"NcYxiDXkpYi6ov5FcYDi1e:2:gvt:1.0\",\n      \"cred_def_id\":\"NcYxi...cYDi1e:2:gvt:1.0:TAG_1\",\n      \"rev_reg_id\":null,\n      \"timestamp\":null\n    }\n  ]\n}\n
    "},{"location":"features/0771-anoncreds-attachments/#implementations","title":"Implementations","text":"

    The following lists the implementations (if any) of this RFC. Please do a pull request to add your implementation. If the implementation is open source, include a link to the repo or to the implementation within the repo. Please be consistent in the \"Name\" field so that a mechanical processing of the RFCs can generate a list of all RFCs supported by an Aries implementation.

    Name / Link Implementation Notes"},{"location":"features/0780-data-urls-images/","title":"RFC 0780: Use Data URLs for Images and More in Credential Attributes","text":""},{"location":"features/0780-data-urls-images/#summary","title":"Summary","text":"

Some credentials include attributes that are not simple strings or numbers, such as images or JSON data structures. When complex data is put in an attribute, the issuer SHOULD issue the attribute as a Data URL, as defined in IETF RFC 2397 and described in this Mozilla Developer Documentation article.

    On receipt of all credentials and presentations, holders and verifiers SHOULD check all string attributes to determine if they are Data URLs. If so, they SHOULD securely process the data according to the metadata information in the Data URL, including:

    This allows, for example, an Aries Mobile Wallet to detect that a data element is an image and how it is encoded, and display it for the user as an image, not as a long (long) string of gibberish.

    "},{"location":"features/0780-data-urls-images/#motivation","title":"Motivation","text":"

    Holders and verifiers want to enable a delightful user experience when an issuer issues attributes that contain other than strings or numbers, such as an image or a JSON data structure. In such cases, the holder and verifiers need a way to know the format of the data so it can be processed appropriately and displayed usefully. While the Aries community encourages the use of the Overlays Capture Architecture specification as outlined in RFC 0755 OCA for Aries for such information, there will be times where an OCA Bundle is not available for a given credential. In the absence of an OCA Bundle, the holders and verifiers of such attributes need data type information for processing and displaying the attributes.

    "},{"location":"features/0780-data-urls-images/#tutorial","title":"Tutorial","text":"

    An issuer wants to issue a verifiable credential that contains an image, such as a photo of the holder to which the credential is issued. Issuing such an attribute is typically done by converting the image to a base64 string. This is handled by the various verifiable credential formats supported by Aries issuers. The challenge is to convey to the holder and verifiers that the attribute is not \"just another string\" that can be displayed on screen to the user. By making the attribute a Data URL, the holder and verifiers can detect the type and encoding of the attribute, process it, and display it correctly.

For example, this image (from the IETF RFC 2397 specification):

    can be issued as the attribute photo in a verifiable credential with its value a Data URL as follows:

    {\n\"photo\": \"data:image/png;base64,R0lGODdhMAAwAPAAAAAAAP///ywAAAAAMAAwAAAC8IyPqcvt3wCcDkiLc7C0qwyGHhSWpjQu5yqmCYsapyuvUUlvONmOZtfzgFzByTB10QgxOR0TqBQejhRNzOfkVJ+5YiUqrXF5Y5lKh/DeuNcP5yLWGsEbtLiOSpa/TPg7JpJHxyendzWTBfX0cxOnKPjgBzi4diinWGdkF8kjdfnycQZXZeYGejmJlZeGl9i2icVqaNVailT6F5iJ90m6mvuTS4OK05M0vDk0Q4XUtwvKOzrcd3iq9uisF81M1OIcR7lEewwcLp7tuNNkM3uNna3F2JQFo97Vriy/Xl4/f1cf5VWzXyym7PHhhx4dbgYKAAA7\"\n}\n

The syntax of a Data URL is described in IETF RFC 2397. The general form is: data:[<MIME type>][;base64],<data>

A holder or verifier receiving a credential or presentation MUST check whether each attribute is a string and, if so, whether it is a Data URL (likely by using a regular expression). If it is a Data URL, it should be securely processed accordingly.

    Aries Data URL verifiable credential attributes MUST include the <MIME type>.
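The Data URL check described above might be sketched as follows. The regular expression and the parse_data_url helper are illustrative assumptions, and percent-decoding of non-base64 payloads is omitted for brevity:

```python
import base64
import re

# Rough shape of a Data URL per IETF RFC 2397: data:[<MIME type>][;base64],<data>
DATA_URL_RE = re.compile(
    r"^data:(?P<mime>[\w.+-]+/[\w.+-]+)?"      # optional MIME type
    r"(?P<params>(?:;[\w-]+=[^;,]+)*)"          # optional parameters (e.g. charset)
    r"(?P<b64>;base64)?,(?P<data>.*)$",         # optional base64 marker, then data
    re.DOTALL,
)

def parse_data_url(value):
    """Return (mime_type, payload_bytes) if `value` is a Data URL, else None.
    Hypothetical helper; percent-decoding of non-base64 data is omitted."""
    if not isinstance(value, str):
        return None
    m = DATA_URL_RE.match(value)
    if not m:
        return None
    data = m.group("data")
    payload = base64.b64decode(data) if m.group("b64") else data.encode("utf-8")
    return m.group("mime"), payload
```

A wallet could call this on each string attribute; a non-None result with a known image MIME type signals the value should be rendered as an image rather than shown as text.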

    "},{"location":"features/0780-data-urls-images/#image-size","title":"Image Size","text":"

    A separate issue from the use of Data URLs is how large an image (or other data type) can be put into an attribute and issued as a verifiable credential. That is an issue that is dependent on the verifiable credential implementation and other factors. For AnonCreds credentials, the attribute will be treated as a string, a hash will be calculated over the string, and the resulting number will be signed--just as for any string. The size of the image does not matter. However, there may be other components in your deployment that might impact how big an attribute in a credential can be. Many in the community have successfully experimented with the use of images in credentials, so consulting others on the question might be helpful.

    For the purpose of this RFC, the amount of data in the attribute is not relevant.

    "},{"location":"features/0780-data-urls-images/#security","title":"Security","text":"

As noted in this Mozilla Developer Documentation and this Mozilla Security Blog Post about Data URLs, Data URLs are blocked from being used in the Address Bar of all major browsers. That is because Data URLs may contain HTML that can contain anything, including HTML forms that collect data from users. Since Aries holder and verifier agents are not general purpose content presentation engines (as browsers are), the use of Data URLs is less of a security risk. Regardless, holders and verifiers MUST limit their processing of attributes containing Data URLs to displaying the data, and not executing the data. Further, Aries holders and verifiers MUST stay current on dependency vulnerabilities, such as images constructed to exploit vulnerabilities in the libraries that display them.

    "},{"location":"features/0780-data-urls-images/#reference","title":"Reference","text":"

    References for implementing this RFC are:

    "},{"location":"features/0780-data-urls-images/#drawbacks","title":"Drawbacks","text":"

The Aries community is moving to the use of the Overlays Capture Architecture specification to provide a more generalized way to accomplish the same thing (understanding the meaning, format, and encoding of attributes), so this RFC duplicates a part of that capability. That said, it is easier and faster for issuers to start using, and for holders and verifiers to detect and use.

    Issuers may choose to issue Data URLs with MIME types not commonly known to Aries holder and verifier components. In such cases, the holder or verifier MUST NOT display the data.

    Even if the MIME type of the data is known to the holders and verifiers, it may not be obvious how to present the data on screen in a useful way. For example, an attribute holding a JSON data structure with an array of values may not easily be displayed.

    "},{"location":"features/0780-data-urls-images/#rationale-and-alternatives","title":"Rationale and alternatives","text":"

    We considered using the same approach as is used in RFC 0441 Present Proof Best Practices of a special suffix (_img) for the attribute name in a credential to indicate that the attribute held an image. However, that provides far less information than this approach (e.g., what type of image?), and its use is limited to images. This RFC defines a far more complete, standard, and useful approach.

As noted in the drawbacks section, this same functionality can (and should) be achieved with the broad deployment of the Overlays Capture Architecture specification and RFC 0755 OCA for Aries. However, the full deployment of RFC 0755 OCA for Aries will take some time; in the meantime, this is a \"quick and easy\" alternate solution that is useful alongside OCA for Aries.

    "},{"location":"features/0780-data-urls-images/#prior-art","title":"Prior art","text":"

In the use cases we are aware of where issuers put images and JSON structures into attributes, there was no indicator of the attribute content; the holders and verifiers were assumed to either \"know\" about the data content based on the type of credential, or they simply displayed the data as a string.

    "},{"location":"features/0780-data-urls-images/#unresolved-questions","title":"Unresolved questions","text":"

    Should this RFC define a list (or the location of a list) of MIME types that Aries issuers can use in credential attributes?

    For supported MIME types that do not have obvious display methods (such as JSON), should there be a convention for how to display the data?

    "},{"location":"features/0780-data-urls-images/#implementations","title":"Implementations","text":"

    The following lists the implementations (if any) of this RFC. Please do a pull request to add your implementation. If the implementation is open source, include a link to the repo or to the implementation within the repo. Please be consistent in the \"Name\" field so that a mechanical processing of the RFCs can generate a list of all RFCs supported by an Aries implementation.

    Implementation Notes may need to include a link to test results.

    Name / Link Implementation Notes"},{"location":"features/0793-unqualfied-dids-transition/","title":"Aries RFC 0793: Unqualified DID Transition","text":""},{"location":"features/0793-unqualfied-dids-transition/#summary","title":"Summary","text":"

Historically, Aries use of the Indy SDK's wallet included the use of 'unqualified DIDs', or DIDs without a did: prefix and method. This RFC documents the process of migrating any such DIDs still in use to fully qualified DIDs.

    This process involves the adoption of the Rotate DID protocol and algorithm 4 of the Peer DID Method, then the rotation from the unqualified DIDs to any fully qualified DID, with preference for did:peer:4.

    The adoption of these specs will further prepare the Aries community for adoption of DIDComm v2 by providing an avenue for adding DIDComm v2 compatible endpoints.

Codebases that do not use unqualified DIDs MUST still adopt DID Rotation and did:peer:4 as part of this process, even if they have no unqualified DIDs to rotate.

    This RFC follows the guidance in RFC 0345 about community-coordinated updates to (try to) ensure that independently deployed, interoperable agents remain interoperable throughout this transition.

    The transition from the unqualified to qualified DIDs will occur in four steps:

    The community coordination triggers between the steps above will be as follows:

    "},{"location":"features/0793-unqualfied-dids-transition/#motivation","title":"Motivation","text":"

    To enable agent builders to independently update their code bases and deployed agents while maintaining interoperability.

    "},{"location":"features/0793-unqualfied-dids-transition/#tutorial","title":"Tutorial","text":"

    The general mechanism for this type of transition is documented in RFC 0345 about community-coordinated updates.

    The specific sequence of events to make this particular transition is outlined in the summary section of this RFC.

    "},{"location":"features/0793-unqualfied-dids-transition/#reference","title":"Reference","text":"

    See the summary section of this RFC for the details of this transition.

    "},{"location":"features/0793-unqualfied-dids-transition/#drawbacks","title":"Drawbacks","text":"

    None identified.

    "},{"location":"features/0793-unqualfied-dids-transition/#rationale-and-alternatives","title":"Rationale and alternatives","text":"

    This approach balances the speed of adoption with the need for independent deployment and interoperability.

    "},{"location":"features/0793-unqualfied-dids-transition/#prior-art","title":"Prior art","text":"

    The approach outlined in RFC 0345 about community-coordinated updates is a well-known pattern for using deprecation to make breaking changes in an ecosystem. That said, this is the first attempt to use this approach in Aries. Adjustments to the transition plan will be made as needed, and RFC 0345 will be updated based on lessons learned in executing this plan.

    "},{"location":"features/0793-unqualfied-dids-transition/#unresolved-questions","title":"Unresolved questions","text":""},{"location":"features/0793-unqualfied-dids-transition/#implementations","title":"Implementations","text":"

    The following table lists the status of various agent code bases and deployments with respect to the steps of this transition. Agent builders MUST update this table as they complete steps of the transition.

| Name / Link | Implementation Notes |
| --- | --- |
| Aries Protocol Test Suite | No steps completed |
| Aries Framework - .NET | No steps completed |
| Trinsic.id | No steps completed |
| Aries Cloud Agent - Python | No steps completed |
| Aries Static Agent - Python | No steps completed |
| Aries Framework - Go | No steps completed |
| Connect.Me | No steps completed |
| Verity | No steps completed |
| Pico Labs | No steps completed |
| IBM | No steps completed |
| IBM Agent | No steps completed |
| Aries Cloud Agent - Pico | No steps completed |
| Aries Framework JavaScript | No steps completed |
"},{"location":"features/0794-did-rotate/","title":"Aries RFC 0794: DID Rotate 1.0","text":""},{"location":"features/0794-did-rotate/#summary","title":"Summary","text":"

    This protocol signals the change of DID in use between parties.

    This protocol is only applicable to DIDComm v1 - in DIDComm v2 use the more efficient DID Rotation header.

    "},{"location":"features/0794-did-rotate/#motivation","title":"Motivation","text":"

    This mechanism allows a party in a relationship to change the DID they use to identify themselves in that relationship. This may be used to switch DID methods, but also to switch to a new DID within the same DID method. For non-updatable DID methods, this allows updating DID Doc attributes such as service endpoints. Inspired by (but different from) the DID rotation feature of the DIDComm Messaging (DIDComm v2) spec.

    "},{"location":"features/0794-did-rotate/#implications-for-software-implementations","title":"Implications for Software Implementations","text":"

Implementations will need to consider how data related to the relationship (public keys, DIDs, and the ID for the relationship) is managed. If the relationship DIDs are used as identifiers, those identifiers may need to be updated during the rotation to maintain data integrity. For example, both parties might have to retain both the existing DID and the rotated-to DID (and their related keys), and be able to use either as identifiers for the relationship, for a period of time until the rotation is complete.

    "},{"location":"features/0794-did-rotate/#tutorial","title":"Tutorial","text":""},{"location":"features/0794-did-rotate/#name-and-version","title":"Name and Version","text":"

    DID Rotate 1.0

    URI: https://didcomm.org/did-rotate/1.0/"},{"location":"features/0794-did-rotate/#roles","title":"Roles","text":"

    rotating_party: this party is rotating the DID in use for this relationship. They send the rotate message.

observing_party: this party is notified of the DID rotation.

    "},{"location":"features/0794-did-rotate/#messages","title":"Messages","text":""},{"location":"features/0794-did-rotate/#rotate","title":"Rotate","text":"

    Message Type URI: https://didcomm.org/did-rotate/1.0/rotate

    to_did: The new DID to be used to identify the rotating_party

{\n    \"@id\": \"123456780\",\n    \"@type\": \"https://didcomm.org/did-rotate/1.0/rotate\",\n    \"to_did\": \"did:example:newdid\"\n}\n

    The rotating_party is expected to receive messages on both the existing and new DIDs and their associated keys for a reasonable period that MUST extend at least until the following ack message has been received.

    This message MUST be sent using AuthCrypt or as a signed message in order to establish the provenance of the new DID. In Aries implementations, messages sent within the context of a relationship are by default sent using AuthCrypt. Proper provenance prevents injection attacks that seek to take over a relationship. Any rotate message received without being authcrypted or signed MUST be discarded and not processed.

    DIDComm v1 uses public keys as the outer message identifiers. This means that rotation to a new DID using the same public key will not result in a change for new inbound messages. The observing_party must not assume that the new DID uses the same keys as the existing relationship.

    "},{"location":"features/0794-did-rotate/#ack","title":"Ack","text":"

    Message Type URI: https://didcomm.org/did-rotate/1.0/ack

This message has been adopted from the ack protocol (https://github.com/hyperledger/aries-rfcs/tree/main../../features/0015-acks).

    This message is still sent to the prior DID to acknowledge the receipt of the rotation. Following messages will be sent to the new DID.

To correctly process out-of-order messages, the observing_party may choose to continue receiving messages sent to the old DID for a reasonable period. This allows messages sent before the rotation, but received after it, to be processed in the case of out-of-order message delivery.

    In this message, the thid (Thread ID) MUST be included to allow the rotating_party to correlate it with the sent rotate message.

{\n    \"@id\": \"123456780\",\n    \"@type\": \"https://didcomm.org/did-rotate/1.0/ack\",\n    \"~thread\"          : {\n        \"thid\": \"<id of rotate message>\"\n    }\n}\n
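Correlating the ack to the rotate message via ~thread.thid can be sketched as follows; make_rotate_ack is a hypothetical helper and the @id scheme is illustrative (any unique identifier works):

```python
def make_rotate_ack(rotate_msg: dict) -> dict:
    """Build an ack whose ~thread.thid points at the rotate message's @id.
    Illustrative sketch; message packing and transport are out of scope."""
    return {
        "@id": "ack-" + rotate_msg["@id"],  # assumption: any unique id is acceptable
        "@type": "https://didcomm.org/did-rotate/1.0/ack",
        "~thread": {"thid": rotate_msg["@id"]},
    }
```

On receipt, the rotating_party matches ~thread.thid against the @id of the rotate message it sent, completing the rotation for that thread.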
    "},{"location":"features/0794-did-rotate/#problem-report","title":"Problem Report","text":"

    Message Type URI: https://didcomm.org/did-rotate/1.0/problem-report

This message has been adopted from the report-problem protocol (https://github.com/hyperledger/aries-rfcs/blob/main../../features/0035-report-problem/README.md).

    If the observing_party receives a rotate message with a DID that they cannot resolve, they MUST return a problem-report message.

The description code must be set to one of the following:

- e.did.unresolvable - used for a DID whose method is supported, but which will not resolve
- e.did.method_unsupported - used for a DID method for which the observing_party does not support resolution.
- e.did.doc_unsupported - used for a DID for which the observing_party does not find information sufficient for a DIDComm connection in the resolved DID Document. This would include compatible key types and a DIDComm capable service endpoint.

Upon receiving this message, the rotating_party must not complete the rotation, and must instead resolve the issue. Further rotation attempts must happen in a new thread.

{\n  \"@type\"            : \"https://didcomm.org/did-rotate/1.0/problem-report\",\n  \"@id\"              : \"an identifier that can be used to discuss this error message\",\n  \"~thread\"          : {\n        \"pthid\": \"<id of rotate message>\"\n    },\n  \"description\"      : { \"en\": \"DID Unresolvable\", \"code\": \"e.did.unresolvable\" },\n  \"problem_items\"    : [ {\"did\": \"<did_passed_in_rotate>\"} ]\n}\n
    "},{"location":"features/0794-did-rotate/#hangup","title":"Hangup","text":"

    Message Type URI: https://didcomm.org/did-rotate/1.0/hangup

    This message is sent by the rotating_party to inform the observing_party that they are done with the relationship and will no longer be responding.

    There is no response message.

    Use of this message does not require or indicate that all data has been deleted by either party, just that interaction has ceased.

    {\n    \"@id\": \"123456780\",\n    \"@type\": \"https://didcomm.org/did-rotate/1.0/hangup\"\n}\n
    "},{"location":"features/0794-did-rotate/#prior-art","title":"Prior art","text":"

    This protocol is inspired by the rotation feature of DIDComm Messaging (DIDComm v2). The implementation differs in important ways. The DIDComm v2 method is a post rotate operation: the first message sent AFTER the rotation contains the prior DID and a signature authorizing the rotation. This is efficient, but requires the use of a message header and a higher level of integration with message processing. This protocol is a pre rotate operation: notifying the other party of the new DID in advance is a less efficient but simpler approach. This was done to minimize adoption pain. The pending move to DIDComm v2 will provide the efficiency.

    "},{"location":"features/0794-did-rotate/#implementations","title":"Implementations","text":"

    The following lists the implementations (if any) of this RFC. Please do a pull request to add your implementation. If the implementation is open source, include a link to the repo or to the implementation within the repo. Please be consistent in the \"Name\" field so that a mechanical processing of the RFCs can generate a list of all RFCs supported by an Aries implementation.

    Implementation Notes may need to include a link to test results.

    Name / Link Implementation Notes"},{"location":"features/0804-didcomm-rpc/","title":"0804: DIDComm Remote Procedure Call (DRPC)","text":""},{"location":"features/0804-didcomm-rpc/#summary","title":"Summary","text":"

    The DIDComm Remote Procedure Call (DRPC) protocol enables a JSON-RPC-based request-response interaction to be carried out across a DIDComm channel. The protocol is designed to enable custom interactions between connected agents, and to allow for the rapid prototyping of experimental DIDComm protocols. An agent sends a DIDComm message to request a JSON-RPC service be invoked by another agent, and gets back the JSON-RPC-format response in subsequent DIDComm message. The protocol enables any request to be conveyed that the other agent understands. Out of scope of this protocol is how the requesting agent discovers the services available from the responding agent, and how the two agents know the semantics of the specified JSON-RPC requests and responses. By using DIDComm between the requesting and responding agents, the security and privacy benefits of DIDComm are accomplished, and the generic parameters of the requests allow for flexibility in how and where the protocol can be used.

    "},{"location":"features/0804-didcomm-rpc/#motivation","title":"Motivation","text":"

    There are several use cases that are driving the initial need for this protocol.

    "},{"location":"features/0804-didcomm-rpc/#app-attestation","title":"App Attestation","text":"

A mobile wallet needs to get an app attestation verifiable credential from the wallet publisher. To do that, the wallet and publisher need to exchange information specific to the attestation process with the Google and Apple stores. The sequence is as follows:

    The wallet and service are using instances of three protocols (two DRPC and one Issue Credential) to carry out a full business process. Each participant must have knowledge of the full business process--there is nothing inherent in the DRPC protocol about this process, or how it is being used. The DRPC protocol is included to provide a generic request-response mechanism that alleviates the need for formalizing special purpose protocols.

App attestation is a likely candidate for having its own DIDComm protocol. This use of DRPC is ideal for developing and experimenting with the necessary agent interactions before deciding whether a use-specific protocol is needed, and if so, its semantics.

    "},{"location":"features/0804-didcomm-rpc/#video-verification-service","title":"Video Verification Service","text":"

    A second example of using the DRPC protocol is to implement a custom video verification service that is used by a specific mobile wallet implementation and a proprietary backend service prior to issuing a credential to the wallet. Since the interactions are with a proprietary service, an open specification does not make sense, but the use of DIDComm is valuable. In this example, the wallet communicates over DIDComm to a Credential Issuer agent that (during verification) proxies the requests/responses to a backend (\"behind the firewall\") service. The wallet is implemented to use DRPC protocol instances to initiate the verification and receive the actions needed to carry out the steps of the verification (take picture, take video, instruct movements, etc.), sending to the Issuer agent the necessary data. The Issuer conveys the requests to the verification service and the responses back to the mobile wallet. At the end of the process, the Issuer can see the result of the process, and decide on the next actions between it and the mobile wallet, such as issuing a credential.

    Again, after using the DRPC protocol for developing and experimenting with the implementation, the creators of the protocol can decide to formalize their own custom, end-to-end protocol, or continue to use DRPC protocol instances. What is important is that by using DRPC they can begin development without any Aries framework customizations or plugins.

    "},{"location":"features/0804-didcomm-rpc/#tutorial","title":"Tutorial","text":""},{"location":"features/0804-didcomm-rpc/#name-and-version","title":"Name and Version","text":"

    This is the DRPC protocol. It is uniquely identified by the URI:

    \"https://didcomm.org/drpc/1.0\"\n
    "},{"location":"features/0804-didcomm-rpc/#key-concepts","title":"Key Concepts","text":"

    This RFC assumes that you are familiar with DID communication.

    The protocol consists of a DIDComm request message carrying an arbitrary JSON-RPC request to a responding agent, and a second message that carries the result of processing the request back to the client of the first message. The interpretation of the request, how to carry out the request, the content of the response, and the interpretation of the response, are all up to the business logic (controllers) of the participating agents. There is no discovery of remote services offered by agents--it is assumed that the two participants are aware of the DRPC capabilities of one another through some other means. For example, from the App Attestation use case, functionality to carry out the app attestation process, and the service to use it is built into the mobile wallet.

    For those unfamiliar with JSON-RPC, the tl;dr is that it is a very simple request-response protocol using JSON where the only data shared is:

    The response is likewise simple:

    An example of a simple JSON-RPC request/response pair from the specification is:

    --> {\"jsonrpc\": \"2.0\", \"method\": \"subtract\", \"params\": [42, 23], \"id\": 1}\n<-- {\"jsonrpc\": \"2.0\", \"result\": 19, \"id\": 1}\n

    A JSON-RPC request may be a batch of requests, each with a different id value, and the response is a similar array, with an entry for each of the requests.
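    The batch behavior above can be sketched as follows. This is an illustrative example only (the values and the pairing code are not part of the protocol): responses are matched to requests by id, and a request without an id (a notification) gets no entry in the batch response.

```python
# Illustrative sketch of JSON-RPC batch handling; values are made up.
batch_request = [
    {"jsonrpc": "2.0", "method": "subtract", "params": [42, 23], "id": 1},
    {"jsonrpc": "2.0", "method": "subtract", "params": [23, 42], "id": 2},
    {"jsonrpc": "2.0", "method": "log", "params": ["hello"]},  # notification: no id
]
batch_response = [
    {"jsonrpc": "2.0", "result": 19, "id": 1},
    {"jsonrpc": "2.0", "result": -19, "id": 2},
]

# Pair each response entry with its request by id; the notification has none.
by_id = {entry["id"]: entry for entry in batch_response}
```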

    JSON-RPC follows a similar \"parameters defined by the message type\" pattern as DIDComm. As a result, in this protocol we do not need to add any special handling around the params such as Base64 encoding, signing, headers and so on, as the parties interacting with the protocol by definition must have a shared understanding of the content of the params and can define any special handling needed amongst themselves.

    It is expected (although not required) that an Aries Framework receiving a DRPC message will simply pass the request from the client to its associated \"business logic\" (controller), and wait on the controller to provide the response content to be sent back to the original client. Apart from the message processing applied to all inbound and outbound messages, the Aries Framework will not perform any of the actual processing of the request.
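    As a hedged sketch of the controller's role (the handler registry and function names here are assumptions for illustration, not a real Aries framework API), the controller receives the JSON-RPC request and returns the JSON-RPC response content for the framework to send back:

```python
# Hypothetical controller-side dispatch; not a real Aries framework API.
def subtract(params):
    minuend, subtrahend = params
    return minuend - subtrahend

HANDLERS = {"subtract": subtract}  # assumed handler registry

def controller_handle(rpc: dict) -> dict:
    """Compute the JSON-RPC response content the framework sends back."""
    result = HANDLERS[rpc["method"]](rpc["params"])
    return {"jsonrpc": "2.0", "result": result, "id": rpc["id"]}
```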

    "},{"location":"features/0804-didcomm-rpc/#roles","title":"Roles","text":"

    There are two roles in the protocol, adopted from the JSON-RPC specification: client and server:

    "},{"location":"features/0804-didcomm-rpc/#states","title":"States","text":""},{"location":"features/0804-didcomm-rpc/#client-states","title":"Client States","text":"

    The client agent goes through the following states:

    The state transition table for the client is:

    State / Events Send Request Receive Response Start Transition to request-sent request-sent Transition to complete completed problem-report received Transition to abandoned abandoned"},{"location":"features/0804-didcomm-rpc/#server-states","title":"Server States","text":"

    The server agent goes through the following states:

    The state transition table for the server is:

    State / Events Receive Request Send Response or Problem Report Start Transition to request-received request-received Transition to complete completed"},{"location":"features/0804-didcomm-rpc/#messages","title":"Messages","text":"

    The following are the messages in the DRPC protocol. The response message handles all positive responses, so the ack (RFC 0015 ACKs) message is NOT adopted by this protocol. The RFC 0035 Report Problem is adopted by this protocol in the event that a request is not recognizable as a JSON-RPC message and as such, a JSON-RPC response message cannot be created. See the details below in the Problem Report Message section.

    "},{"location":"features/0804-didcomm-rpc/#request-message","title":"Request Message","text":"

    The request message is sent by the client to initiate the protocol. The message contains the JSON-RPC information necessary for the server to process the request, prepare the response, and send the response message back to the client. It is assumed the client knows what types of requests the server is prepared to receive and process. If the server does not know how to process the request, JSON-RPC has a standard error response, outlined in the response message section below. How the client and server coordinate that understanding is out of scope of this protocol.

    The request message uses the same JSON items as JSON-RPC, skipping the id in favor of the existing DIDComm @id and thread handling.

      {\n    \"@type\": \"https://didcomm.org/drpc/1.0/request\",\n    \"@id\": \"2a0ec6db-471d-42ed-84ee-f9544db9da4b\",\n    \"request\" : {\"jsonrpc\": \"2.0\", \"method\": \"subtract\", \"params\": [42, 23], \"id\": 1}\n  }\n

    The items in the message are as follows:

    Per the JSON-RPC specification, if the id field of a JSON-RPC request is omitted, the server should not respond. In this DRPC DIDComm protocol, the server is always expected to send a response, but MUST NOT include a JSON-RPC response for any JSON-RPC request for which the id is omitted. This is covered further in the response message section (below).
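    A minimal sketch of that rule (the subtract semantics are assumed purely for illustration): the server processes every request in the batch, but requests whose id is omitted are excluded from the JSON-RPC response content.

```python
def responses_for(batch):
    """Build JSON-RPC responses, skipping id-less requests (notifications)."""
    out = []
    for req in batch:
        if "id" not in req:  # notification: MUST NOT get a JSON-RPC response
            continue
        minuend, subtrahend = req["params"]  # assumed "subtract" semantics
        out.append({"jsonrpc": "2.0", "result": minuend - subtrahend, "id": req["id"]})
    return out
```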

    "},{"location":"features/0804-didcomm-rpc/#response-message","title":"Response Message","text":"

    A response message is sent by the server, following the processing of the request, to convey the output of the processing to the client. As with the request, the format is almost exactly that of a JSON-RPC response.

    If the request is unrecognizable as a JSON-RPC message such that a JSON-RPC response message cannot be generated, the server SHOULD send an RFC 0035 Report Problem message to the client.

    It is assumed the client understands what the contents of the response message means in the context of the protocol instance. How the client and server coordinate that understanding is out of scope of this protocol.

      {\n    \"@type\": \"https://didcomm.org/drpc/1.0/response\",\n    \"@id\": \"63d6f6cf-b723-4eaf-874b-ae13f3e3e5c5\",\n    \"response\": {\"jsonrpc\": \"2.0\", \"result\": 19, \"id\": 1}\n  }\n

    The items in the message are as follows:

    As with all DIDComm messages that are not the first in a protocol instance, a ~thread decorator MUST be included in the response message.

    The special handling of the \"all JSON-RPC requests are notifications\" case described above is intended to simplify the DRPC handling, making it easy to know when a DRPC protocol instance is complete. If a response message were not always required, the DRPC handler would have to inspect the request message, looking for ids, to determine when the protocol completes.

    If the server does not understand how to process a given JSON-RPC request, a response error SHOULD be returned (as per the JSON-RPC specification) with:
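    For example, the JSON-RPC 2.0 specification reserves -32601 as the standard \"Method not found\" error code. A sketch of building such a response error (the function name is illustrative only):

```python
# Sketch of a JSON-RPC error response for an unknown method, using the
# standard code -32601 ("Method not found") from the JSON-RPC 2.0 spec.
def method_not_found(request_id):
    return {
        "jsonrpc": "2.0",
        "error": {"code": -32601, "message": "Method not found"},
        "id": request_id,
    }
```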

    "},{"location":"features/0804-didcomm-rpc/#problem-report-message","title":"Problem Report Message","text":"

    An RFC 0035 Report Problem message SHOULD be sent by the server instead of a response message only if the request is unrecognizable as a JSON-RPC message. Any JSON-RPC errors MUST be provided to the client by the server via the response message, not a problem-report. The client MUST NOT respond to a response message, even if the response message is not a valid JSON-RPC response. This is because once the server sends the response, the protocol is in the completed state (from the server's perspective) and so is subject to deletion. As such, a follow up problem-report message would have an invalid thid (thread ID) and (at best) be thrown away by the server.

    "},{"location":"features/0804-didcomm-rpc/#constraints","title":"Constraints","text":"

    The primary constraint with this protocol is that the two parties using the protocol must understand one another--what JSON-RPC request(s) to use, what parameters to provide, how to process those requests, what the response means, and so on. It is not a protocol to be used between arbitrary parties, but rather one where the parties have knowledge outside of DIDComm of one another and their mutual capabilities.

    On the other hand, that constraint enables great flexibility for explicitly collaborating agents (such as a mobile wallet and the agent of its manufacturer) to accomplish request-response transactions over DIDComm without needing to define additional DIDComm protocols. More complex interactions can be accomplished by carrying out a sequence of DRPC protocol instances between agents.

    The flexibility of the DRPC protocol allows for experimenting with specific interactions between agents that could later evolve into formal, \"fit for purpose\" DIDComm protocols.

    "},{"location":"features/0804-didcomm-rpc/#reference","title":"Reference","text":""},{"location":"features/0804-didcomm-rpc/#codes-catalog","title":"Codes Catalog","text":"

    A JSON-RPC request codes catalog could be developed over time and be included in this part of the RFC. This might be an intermediate step in transitioning a given interaction implemented using DRPC into a formally specified interaction. On the other hand, simply defining a full DIDComm protocol will often be a far better approach.

    At this time, there are no codes to be cataloged.

    "},{"location":"features/0804-didcomm-rpc/#drawbacks","title":"Drawbacks","text":"

    Anything that can be done by using the DRPC protocol can be accomplished by a formally defined protocol specific to the task to be accomplished. The advantage of the DRPC protocol is that pairs of agent instances that are explicitly collaborating can use this protocol without having to first define a task-specific protocol.

    "},{"location":"features/0804-didcomm-rpc/#rationale-and-alternatives","title":"Rationale and alternatives","text":"

    We considered not supporting the notification and batch forms of the JSON-RPC specification, and decided it made sense to allow for full support of the JSON-RPC specification, including requests of those forms. That said, we also found that the concept of not having a DRPC response message in some (likely rare) cases based on the contents of the request JSON item (e.g., when all of the ids are omitted from the JSON-RPC requests) would unnecessarily complicate the DIDComm protocol instance handling about when it is complete. As a result, a DRPC response message is always required.

    This design builds on the experience of implementations of this kind of feature using RFC 0095 Basic Message and RFC 0335 HTTP Over DIDComm, and tries to apply the learnings gained from both of those implementations.

    Based on feedback to an original version of the RFC, we looked as well at using gRPC as the core of this protocol, versus JSON-RPC. Our assessment was that gRPC was a much heavier weight mechanism that required more effort between parties to define and implement what will often be a very simple request-response transaction -- at the level of defining a DIDComm protocol.

    The use of params and leaving the content and semantics of the params up to the client and server means that they can define the appropriate handling of the parameters. This eliminates the need for the protocol to define, for example, that some data needs to be Base64 encoded for transmission, or whether some values need to be cryptographically signed. Such details are left to the participants and how they are using the protocol.

    "},{"location":"features/0804-didcomm-rpc/#prior-art","title":"Prior art","text":"

    This protocol has similar goals to the RFC 0335 HTTP Over DIDComm protocol, but takes a lighter weight, more flexible approach. We expect that implementing HTTP over DIDComm using this protocol will be as easy as using RFC 0335 HTTP Over DIDComm, where the JSON-RPC request's params data structure holds the headers and body elements for the HTTP request. On the other hand, using the explicit RFC 0335 HTTP Over DIDComm might be a better choice if it is available and exactly what is needed.

    One of the example use cases for this protocol has been implemented by \"hijacking\" the RFC 0095 Basic Message protocol to carry out the needed request/response actions. This approach is less than ideal in that:

    "},{"location":"features/0804-didcomm-rpc/#unresolved-questions","title":"Unresolved questions","text":""},{"location":"features/0804-didcomm-rpc/#implementations","title":"Implementations","text":"

    The following lists the implementations (if any) of this RFC. Please do a pull request to add your implementation. If the implementation is open source, include a link to the repo or to the implementation within the repo. Please be consistent in the \"Name\" field so that a mechanical processing of the RFCs can generate a list of all RFCs supported by an Aries implementation.

    Implementation Notes may need to include a link to test results.

    Name / Link Implementation Notes"},{"location":"features/0809-w3c-data-integrity-credential-attachment/","title":"Aries RFC 0809: W3C Verifiable Credential Data Integrity Attachment format for requesting and issuing credentials","text":""},{"location":"features/0809-w3c-data-integrity-credential-attachment/#summary","title":"Summary","text":"

    This RFC registers an attachment format for use in the issue-credential V2 protocol based on W3C Verifiable Credentials with Data Integrity Proofs from the VC Data Model.

    "},{"location":"features/0809-w3c-data-integrity-credential-attachment/#motivation","title":"Motivation","text":"

    The Issue Credential protocol needs an attachment format to be able to exchange W3C verifiable credentials. It is desirable to make use of specifications developed in an open standards body, such as the Credential Manifest for which the attachment format is described in RFC 0511: Credential-Manifest Attachment format. However, the Credential Manifest is not finished and ready yet, and therefore there is a need to bridge the gap between standards.

    "},{"location":"features/0809-w3c-data-integrity-credential-attachment/#tutorial","title":"Tutorial","text":"

    Complete examples of messages are provided in the reference section.

    "},{"location":"features/0809-w3c-data-integrity-credential-attachment/#reference","title":"Reference","text":""},{"location":"features/0809-w3c-data-integrity-credential-attachment/#credential-offer-attachment-format","title":"Credential Offer Attachment Format","text":"

    Format identifier: didcomm/w3c-di-vc-offer@v0.1

    {\n  \"data_model_versions_supported\": [\"1.1\", \"2.0\"],\n  \"binding_required\": true,\n  \"binding_method\": {\n    \"anoncreds_link_secret\": {\n      \"nonce\": \"1234\",\n      \"cred_def_id\": \"did:key:z6MkwXG2WjeQnNxSoynSGYU8V9j3QzP3JSqhdmkHc6SaVWoT/credential-definition\",\n      \"key_correctness_proof\": \"<key_correctness_proof>\"\n    },\n    \"didcomm_signed_attachment\": {\n      \"algs_supported\": [\"EdDSA\"],\n      \"did_methods_supported\": [\"key\", \"web\"],\n      \"nonce\": \"1234\"\n    }\n  },\n  \"credential\": {\n    \"@context\": [\n      \"https://www.w3.org/2018/credentials/v1\",\n      \"https://w3id.org/security/data-integrity/v2\",\n      {\n        \"@vocab\": \"https://www.w3.org/ns/credentials/issuer-dependent#\"\n      }\n    ],\n    \"type\": [\"VerifiableCredential\"],\n    \"issuer\": \"did:key:z6MkwXG2WjeQnNxSoynSGYU8V9j3QzP3JSqhdmkHc6SaVWoT\",\n    \"issuanceDate\": \"2024-01-10T04:44:29.563418Z\",\n    \"credentialSubject\": {\n      \"height\": 175,\n      \"age\": 28,\n      \"name\": \"Alex\",\n      \"sex\": \"male\"\n    }\n  }\n}\n
    "},{"location":"features/0809-w3c-data-integrity-credential-attachment/#credential-offer-exceptions","title":"Credential Offer Exceptions","text":"

    To allow for validation of the credential according to the corresponding VC Data Model version, the credential in the offer MUST be conformant to the corresponding VC Data Model version, except for the exceptions listed below. This still allows the credential to be validated, knowing which deviations are possible.

    The list of exceptions is as follows:

    "},{"location":"features/0809-w3c-data-integrity-credential-attachment/#credential-request-attachment-format","title":"Credential Request Attachment Format","text":"

    Format identifier: didcomm/w3c-di-vc-request@v0.1

    This format is used to request a verifiable credential. The JSON structure might look like this:

    {\n  \"data_model_version\": \"2.0\",\n  \"binding_proof\": {\n    \"anoncreds_link_secret\": {\n      \"entropy\": \"<random-entropy>\",\n      \"cred_def_id\": \"did:key:z6MkwXG2WjeQnNxSoynSGYU8V9j3QzP3JSqhdmkHc6SaVWoT/credential-definition\",\n      \"blinded_ms\": {},\n      \"blinded_ms_corectness_proof\": {},\n      \"nonce\": \"<random-nonce>\"\n    },\n    \"didcomm_signed_attachment\": {\n      \"attachment_id\": \"<@id of the attachment>\"\n    }\n  }\n}\n
    "},{"location":"features/0809-w3c-data-integrity-credential-attachment/#credential-attachment-format","title":"Credential Attachment Format","text":"

    Format identifier: didcomm/w3c-di-vc@v0.1

    This format is used to transmit a verifiable credential. The JSON structure might look like this:

    {\n  \"credential\": {\n    // vc with proof object or array\n  }\n}\n

    It is up to the issuer to pick an appropriate cryptographic suite to sign the credential. The issuer may use the cryptographic binding material provided by the holder to select the cryptographic suite. For example, when the anoncreds_link_secret binding method is used, the issuer should use a DataIntegrityProof with the anoncredsvc-2023 cryptographic suite. When a holder provides a signed attachment as part of the binding proof using the EdDSA JWA alg, the issuer could use a DataIntegrityProof with the eddsa-rdfc-2022 cryptographic suite. However, it is not required for the cryptographic suite used for the signature on the credential to be in any way related to the cryptographic suite used for the binding proof, unless the binding method explicitly requires this (for example, the anoncreds_link_secret binding method).

    A complete issue-credential message from the Issue Credential protocol 2.0 might look like this:

    {\n  \"@id\": \"284d3996-ba85-45d9-964b-9fd5805517b6\",\n  \"@type\": \"https://didcomm.org/issue-credential/2.0/issue-credential\",\n  \"comment\": \"<some comment>\",\n  \"formats\": [\n    {\n      \"attach_id\": \"5b38af88-d36f-4f77-bb7a-2f04ab806eb8\",\n      \"format\": \"didcomm/w3c-di-vc@v0.1\"\n    }\n  ],\n  \"credentials~attach\": [\n    {\n      \"@id\": \"5b38af88-d36f-4f77-bb7a-2f04ab806eb8\",\n      \"mime-type\": \"application/json\",\n      \"data\": {\n        \"base64\": \"ewogICAgICAgICAgIkBjb250ZXogWwogICAgICAg...(clipped)...RNVmR0SXFXZhWXgySkJBIgAgfQogICAgICAgIH0=\"\n      }\n    }\n  ]\n}\n
    "},{"location":"features/0809-w3c-data-integrity-credential-attachment/#binding-methods","title":"Binding Methods","text":"

    The attachment format supports different methods to bind the credential to the receiver of the credential. In the offer message the issuer can indicate which binding methods are supported in the binding_methods object. Each key represents the id of the supported binding method.

    This section defines a set of binding methods supported by this attachment format, but other binding methods may be used. Based on the binding method, the request needs to include a binding_proof object where the key matches the key of the binding method from the offer.
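    A minimal sketch of that key-matching rule (the function and variable names are illustrative, not part of the RFC):

```python
def binding_proof_matches(offer_binding_methods: dict, request_binding_proof: dict) -> bool:
    """True when every method keyed in the request's binding_proof was offered."""
    return all(k in offer_binding_methods for k in request_binding_proof)

offer = {"anoncreds_link_secret": {}, "didcomm_signed_attachment": {}}
ok = binding_proof_matches(offer, {"didcomm_signed_attachment": {"attachment_id": "123"}})
bad = binding_proof_matches(offer, {"unknown_method": {}})
```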

    "},{"location":"features/0809-w3c-data-integrity-credential-attachment/#anoncreds-link-secret","title":"AnonCreds Link Secret","text":"

    Identifier: anoncreds_link_secret

    This binding method is intended to be used in combination with a credential containing an AnonCreds proof.

    "},{"location":"features/0809-w3c-data-integrity-credential-attachment/#binding-method-in-offer","title":"Binding Method in Offer","text":"

    The structure of the binding method in the offer MUST match the structure of the Credential Offer as defined in the AnonCreds specification, with the exclusion of the schema_id key.

    {\n  \"nonce\": \"1234\",\n  \"cred_def_id\": \"did:key:z6MkwXG2WjeQnNxSoynSGYU8V9j3QzP3JSqhdmkHc6SaVWoT/credential-definition\",\n  \"key_correctness_proof\": {\n    /* key correctness proof object */\n  }\n}\n
    "},{"location":"features/0809-w3c-data-integrity-credential-attachment/#binding-proof-in-request","title":"Binding Proof in Request","text":"

    The structure of the binding proof in the request MUST match the structure of the Credential Request as defined in the AnonCreds specification.

    {\n  \"anoncreds_link_secret\": {\n    \"entropy\": \"<random-entropy>\",\n    \"blinded_ms\": {\n      /* blinded ms object */\n    },\n    \"blinded_ms_corectness_proof\": {\n      /* blinded ms correctness proof object */\n    },\n    \"nonce\": \"<random-nonce>\"\n  }\n}\n
    "},{"location":"features/0809-w3c-data-integrity-credential-attachment/#binding-in-credential","title":"Binding in Credential","text":"

    The issued credential should be bound to the holder by including the blinded link secret in the credential as defined in the Issue Credential section of the AnonCreds specification. Credentials bound using the AnonCreds link secret binding method MUST contain a proof with proof.type value of DataIntegrityProof and cryptosuite value of anoncredsvc-2023, and conform to the AnonCreds W3C Verifiable Credential Representation.

    "},{"location":"features/0809-w3c-data-integrity-credential-attachment/#didcomm-signed-attachment","title":"DIDComm Signed Attachment","text":"

    Identifier: didcomm_signed_attachment

    This binding method leverages DIDComm signed attachments to bind a credential to a specific key and/or identifier.

    "},{"location":"features/0809-w3c-data-integrity-credential-attachment/#binding-method-in-offer_1","title":"Binding Method in Offer","text":"
    {\n  \"didcomm_signed_attachment\": {\n    \"algs_supported\": [\"EdDSA\"],\n    \"did_methods_supported\": [\"key\"],\n    \"nonce\": \"b19439b0-4dc9-4c28-b796-99d17034fb5c\"\n  }\n}\n
    "},{"location":"features/0809-w3c-data-integrity-credential-attachment/#binding-proof-in-request_1","title":"Binding Proof in Request","text":"

    The binding proof in the request points to an appended attachment containing the signed attachment.

    {\n  \"didcomm_signed_attachment\": {\n    \"attachment_id\": \"<@id of the attachment>\"\n  }\n}\n
    "},{"location":"features/0809-w3c-data-integrity-credential-attachment/#signed-attachment-content","title":"Signed Attachment Content","text":"

    The attachment MUST be signed by including a signature in the jws field of the attachment. The data MUST be a JSON document encoded in the base64 field of the attachment. The structure of the signed attachment is described below.

    JWS Payload

    {\n  \"nonce\": \"<request_nonce>\"\n}\n

    Protected Header

    {\n  \"alg\": \"EdDSA\",\n  \"kid\": \"did:key:z6MkkwiqX7BvkBbi37aNx2vJkCEYSKgHd2Jcgh4AUhi4YY1u#z6MkkwiqX7BvkBbi37aNx2vJkCEYSKgHd2Jcgh4AUhi4YY1u\"\n}\n

    A signed binding request attachment appended to a request message might look like this:

    {\n  \"@id\": \"284d3996-ba85-45d9-964b-9fd5805517b6\",\n  \"@type\": \"https://didcomm.org/issue-credential/2.0/request-credential\",\n  \"comment\": \"<some comment>\",\n  \"formats\": [\n    {\n      \"attach_id\": \"5b38af88-d36f-4f77-bb7a-2f04ab806eb8\",\n      \"format\": \"didcomm/w3c-di-vc-request@v0.1\"\n    }\n  ],\n  \"~attach\": [\n    {\n      \"@id\": \"123\",\n      \"mime-type\": \"application/json\",\n      \"data\": {\n        \"base64\": \"<base64-encoded-json-attachment-content>\",\n        \"jws\": {\n          \"protected\": \"eyJhbGciOiJFZERTQSIsImlhdCI6MTU4Mzg4... (bytes omitted)\",\n          \"signature\": \"3dZWsuru7QAVFUCtTd0s7uc1peYEijx4eyt5... (bytes omitted)\"\n        }\n      }\n    }\n  ],\n  \"credentials~attach\": [\n    {\n      \"@id\": \"5b38af88-d36f-4f77-bb7a-2f04ab806eb8\",\n      \"mime-type\": \"application/json\",\n      \"data\": {\n        \"base64\": \"ewogICAgICAgICAgIkBjb250ZXogWwogICAgICAg...(clipped)...RNVmR0SXFXZhWXgySkJBIgAgfQogICAgICAgIH0=\"\n      }\n    }\n  ]\n}\n
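    A hedged sketch of assembling the signed binding attachment. The signing step is a placeholder (a real wallet would produce an EdDSA JWS over the base64url-encoded protected header and payload with the key referenced by kid), and the kid value shown is a made-up placeholder, not a real DID.

```python
import base64
import json

def b64url(data: bytes) -> str:
    """Unpadded base64url, as used in JWS serialization."""
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

payload = {"nonce": "b19439b0-4dc9-4c28-b796-99d17034fb5c"}
protected = {"alg": "EdDSA", "kid": "did:key:<holder-key>#<holder-key>"}  # placeholder kid

# JWS signing input is "<b64url(protected)>.<b64url(payload)>".
signing_input = b64url(json.dumps(protected).encode()) + "." + b64url(json.dumps(payload).encode())

attachment = {
    "@id": "123",
    "mime-type": "application/json",
    "data": {
        "base64": b64url(json.dumps(payload).encode()),
        "jws": {
            "protected": b64url(json.dumps(protected).encode()),
            "signature": "<EdDSA signature over signing_input>",  # placeholder
        },
    },
}
```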
    "},{"location":"features/0809-w3c-data-integrity-credential-attachment/#binding-in-credential_1","title":"Binding in Credential","text":"

    The issued credential should be bound to the holder by including the DID in the credential as credentialSubject.id or holder.

    "},{"location":"features/0809-w3c-data-integrity-credential-attachment/#drawbacks","title":"Drawbacks","text":""},{"location":"features/0809-w3c-data-integrity-credential-attachment/#rationale-and-alternatives","title":"Rationale and alternatives","text":"

    RFC 0593: JSON-LD Credential Attachment, W3C VC API allows issuance of credentials using only linked data signatures, while RFC 0592: Indy Attachment supports issuance of AnonCreds credentials. This attachment format aims to support issuance of both previous attachment formats (while for AnonCreds it now being in the W3C model), as well as supporting additional ../../features such as issuance W3C JWT VCs, credentials with multiple proofs, and cryptographic binding of the credential to the holder.

    "},{"location":"features/0809-w3c-data-integrity-credential-attachment/#prior-art","title":"Prior art","text":"

    The attachment format in this RFC is heavily inspired by RFC 0593: JSON-LD Credential Attachment, W3C VC API and OpenID for Verifiable Credential Issuance.

    "},{"location":"features/0809-w3c-data-integrity-credential-attachment/#unresolved-questions","title":"Unresolved questions","text":""}]} \ No newline at end of file +{"config":{"lang":["en"],"separator":"[\\s\\-]+","pipeline":["stopWordFilter"]},"docs":[{"location":"","title":"Welcome","text":"

    This repo holds Requests for Comment (RFCs) for the Aries project. They describe important topics (not minor details) that we want to standardize across the Aries ecosystem.

    If you are here to learn about Aries, we recommend you use the RFC Index for a current listing of all RFCs and their statuses.

    There are 2 types of Aries RFCs:

    RFCs are for developers building on Aries. They don't provide guidance on how Aries components implement features internally; individual Aries repos have design docs for that. Each Aries RFC includes an \"implementations\" section and all RFCs with a status greater than Proposed should have at least one listed implementation.

    "},{"location":"#rfc-lifecycle","title":"RFC Lifecycle","text":"

    RFCs go through a standard lifecycle.

    "},{"location":"#proposed","title":"PROPOSED","text":"

    To propose an RFC, use these instructions to raise a PR against the repo. Proposed RFCs are considered a \"work in progress\", even after they are merged. In other words, they haven't been endorsed by the community yet, but they seem like reasonable ideas worth exploring.

    "},{"location":"#demonstrated","title":"DEMONSTRATED","text":"

    Demonstrated RFCs have one or more implementations available, listed in the \"Implementations\" section of the RFC document. As with the PROPOSED status, demonstrated RFCs haven't been endorsed by the community, but the ideas put forth have been more thoroughly explored through the implementation(s). The demonstrated status is an optional step in the lifecycle. For protocol-related RFCs, work on protocol tests SHOULD begin in the test suite repo by the time this status is assigned.

    "},{"location":"#accepted","title":"ACCEPTED","text":"

    To get an RFC accepted, build consensus for your RFC on chat and in community meetings. If your RFC is a feature that's protocol- or decorator-related, it MUST have reasonable tests in the test suite repo, it MUST list the test suite in the protocol RFC's Implementations section, at least one other implementation must have passed the relevant portions of the test suite, and all implementations listed in this section of the RFC MUST hyperlink to their test results. An accepted RFC is incubating on a standards track; the community has decided to polish it and is exploring or pursuing implementation.

    "},{"location":"#adopted","title":"ADOPTED","text":"

    To get an RFC adopted, socialize and implement. An RFC gets this status once it has significant momentum--when implementations accumulate, or when the mental model it advocates has begun to permeate our discourse. In other words, adoption is acknowledgment of a de facto standard.

    To refine an RFC, propose changes to it through additional PRs. Typically these changes are driven by experience that accumulates during or after adoption. Minor refinements that just improve clarity can happen inline with lightweight review. Status is still ADOPTED.

    "},{"location":"#stalled","title":"STALLED","text":"

    An RFC is stalled when a proposed RFC makes no progress towards implementation such that it is extremely unlikely it will ever move forward. The stalled state differs from retired in that it is an RFC that has never been implemented or superseded. Like the retired state, it is (likely) an end state and the RFC will not proceed further. Such an RFC remains in the repository on the off chance it will ring a chord with others, be returned to the proposed state, and continue to evolve.

    "},{"location":"#retired","title":"RETIRED","text":"

    An RFC is retired when it is withdrawn from community consideration by its authors, when implementation seems permanently stalled, or when significant refinements require a superseding document. If a retired RFC has been superseded, its Superseded By field should contain a link to the newer spec, and the newer spec's Supersedes field should contain a link to the older spec. Permalinks are not broken.

    "},{"location":"#changing-an-rfc-status","title":"Changing an RFC Status","text":"

    See notes about this in Contributing.

    "},{"location":"#about","title":"About","text":""},{"location":"#license","title":"License","text":"

    This repository is licensed under an Apache 2 License. It is protected by a Developer Certificate of Origin on every commit. This means that any contributions you make must be licensed in an Apache-2-compatible way, and must be free from patent encumbrances or additional terms and conditions. By raising a PR, you certify that this is the case for your contribution.

    For more instructions about contributing, see Contributing.

    "},{"location":"#acknowledgement","title":"Acknowledgement","text":"

    The structure and a lot of the initial language of this repository was borrowed from Indy HIPEs, which borrowed it from Rust RFC. Their good work has made the setup of this repository much quicker and better than it otherwise would have been. If you are not familiar with the Rust community, you should check them out.

    "},{"location":"0000-template-protocol/","title":"Aries RFC 0000: Your Protocol 0.9","text":""},{"location":"0000-template-protocol/#summary","title":"Summary","text":"

    One paragraph explanation of the feature.

    If the RFC you are proposing is NOT a protocol, please use this template as a starting point.

    When completing this template and before submitting as a PR, please remove the template text in sections (other than Implementations). The implementations section should remain as is.

    "},{"location":"0000-template-protocol/#motivation","title":"Motivation","text":"

    Why are we doing this? What use cases does it support? What is the expected outcome?

    "},{"location":"0000-template-protocol/#tutorial","title":"Tutorial","text":""},{"location":"0000-template-protocol/#name-and-version","title":"Name and Version","text":"


    Specify the official name of the protocol and its version, e.g., \"My Protocol 0.9\".

Protocol names are often either lower_snake_case or kebab-case. The non-version components of the protocol name are matched exactly.

    URI: https://didcomm.org/lets_do_lunch/<version>/<messageType>

    Message types and protocols are identified with special URIs that match certain conventions. See Message Type and Protocol Identifier URIs for more details.

    The version of a protocol is declared carefully. See Semver Rules for Protocols for details.
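As a rough sketch of the convention above, a message type URI can be split into its component parts. This helper and the sample `proposal` message type are illustrative only, not part of any real protocol:

```python
# Sketch: decomposing a message type URI of the form
# https://didcomm.org/<protocol>/<version>/<messageType>.
# "lets_do_lunch" is the example protocol name used above;
# "proposal" is a hypothetical message type.

def parse_message_type_uri(uri: str):
    """Split a message type URI into (doc URI, protocol, version, message type)."""
    doc_uri, protocol, version, message_type = uri.rsplit("/", 3)
    return doc_uri, protocol, version, message_type

parts = parse_message_type_uri("https://didcomm.org/lets_do_lunch/1.0/proposal")
```

Splitting on the last three slashes keeps the document URI intact even though it contains slashes of its own.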

    "},{"location":"0000-template-protocol/#key-concepts","title":"Key Concepts","text":"

    This is short--a paragraph or two. It defines terms and describes the flow of the interaction at a very high level. Key preconditions should be noted (e.g., \"You can't issue a credential until you have completed the connection protocol first\"), as well as ways the protocol can start and end, and what can go wrong. The section might also talk about timing constraints and other assumptions. After reading this section, a developer should know what problem your protocol solves, and should have a rough idea of how the protocol works in its simpler variants.

    "},{"location":"0000-template-protocol/#roles","title":"Roles","text":"

    See this note for definitions of the terms \"role\", \"participant\", and \"party\".

    Provides a formal name to each role in the protocol, says who and how many can play each role, and describes constraints associated with those roles (e.g., \"You can only issue a credential if you have a DID on the public ledger\"). The issue of qualification for roles can also be explored (e.g., \"The holder of the credential must be known to the issuer\").

    The formal names for each role are important because they are used when agents discover one another's capabilities; an agent doesn't just claim that it supports a protocol; it makes a claim about which roles in the protocol it supports. An agent that supports credential issuance and an agent that supports credential holding may have very different features, but they both use the credential-issuance protocol. By convention, role names use lower-kebab-case and are compared case-sensitively.

    "},{"location":"0000-template-protocol/#states","title":"States","text":"

    This section lists the possible states that exist for each role. It also enumerates the events (often but not always messages) that can occur, including errors, and what should happen to state as a result. A formal representation of this information is provided in a state machine matrix. It lists events as columns, and states as rows; a cell answers the question, \"If I am in state X (=row), and event Y (=column) occurs, what happens to my state?\" The Tic Tac Toe example is typical.
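One minimal way to express such a matrix in code is a lookup table keyed by (state, event). The states and events below are hypothetical, loosely modeled on a simple request/response flow, and are not taken from any actual RFC:

```python
# Hypothetical state machine matrix for one role, as a dict:
# (current_state, event) -> next_state. A missing key means the
# event is invalid in that state, which a real agent might report
# via a problem-report message.
TRANSITIONS = {
    ("null", "send_request"): "request-sent",
    ("request-sent", "receive_response"): "complete",
    ("request-sent", "receive_problem_report"): "abandoned",
}

def next_state(state: str, event: str) -> str:
    try:
        return TRANSITIONS[(state, event)]
    except KeyError:
        raise ValueError(f"event {event!r} is not valid in state {state!r}")
```

Because every legal (state, event) pair must appear as a key, this form shares the matrix's main virtue: it forces an exhaustive analysis of every possible event.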

    Choreography Diagrams from BPMN are good artifacts here, as are PUML sequence diagrams and UML-style state machine diagrams. The matrix form is nice because it forces an exhaustive analysis of every possible event. The diagram styles are often simpler to create and consume, and the PUML and BPMN forms have the virtue that they can support line-by-line diffs when checked in with source code. However, they don't offer an easy way to see if all possible flows have been considered; what they may NOT describe isn't obvious. This--and the freedom from fancy tools--is why the matrix form is used in many early RFCs. We leave it up to the community to settle on whether it wants to strongly recommend specific diagram types.

The formal names for each state are important, as they are used in acks and problem-reports. For example, a problem-report message declares which state the sender arrived at because of the problem. This helps other participants to react to errors with confidence. Formal state names are also used in the agent test suite, in log messages, and so forth.

    By convention, state names use lower-kebab-case. They are compared case-sensitively.

    State management in protocols is a deep topic. For more information, please see State Details and State Machines.

    "},{"location":"0000-template-protocol/#messages","title":"Messages","text":"

This section describes each message in the protocol. It should also note the names and versions of messages from other message families that are adopted by the protocol (e.g., an ack or a problem-report). Typically this section is written as a narrative, showing each message type in the context of an end-to-end sample interaction. Not all possible fields need appear; an exhaustive catalog is saved for the \"Reference\" section.

    Sample messages that are presented in the narrative should also be checked in next to the markdown of the RFC, in DIDComm Plaintext format.

The message element of a message type URI is typically lower_camel_case or lower-kebab-case, matching the style of the protocol. JSON items in messages are lower_camel_case, and inconsistency in the application of a style within a message is frowned upon by the community.

    "},{"location":"0000-template-protocol/#adopted-messages","title":"Adopted Messages","text":"

Many protocols should use general-purpose messages such as ack and problem-report at certain points in an interaction. This reuse is strongly encouraged because it helps us avoid defining redundant message types--and the code to handle them--over and over again (see DRY principle).

    However, using messages with generic values of @type (e.g., \"@type\": \"https://didcomm.org/notification/1.0/ack\") introduces a challenge for agents as they route messages to their internal routines for handling. We expect internal handlers to be organized around protocols, since a protocol is a discrete unit of business value as well as a unit of testing in our agent test suite. Early work on agents has gravitated towards pluggable, routable protocols as a unit of code encapsulation and dependency as well. Thus the natural routing question inside an agent, when it sees a message, is \"Which protocol handler should I route this message to, based on its @type?\" A generic ack can't be routed this way.

    Therefore, we allow a protocol to adopt messages into its namespace. This works very much like python's from module import symbol syntax. It changes the @type attribute of the adopted message. Suppose a rendezvous protocol is identified by the URI https://didcomm.org/rendezvous/2.0, and its definition announces that it has adopted generic 1.x ack messages. When such ack messages are sent, the @type should now use the alias defined inside the namespace of the rendezvous protocol:
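A minimal before-and-after sketch of that @type change, expressed as plaintext message bodies (the `status` field is illustrative):

```python
# Sketch: a generic ack's @type, versus the same ack adopted into the
# namespace of the hypothetical rendezvous protocol described above.
generic_ack = {
    "@type": "https://didcomm.org/notification/1.0/ack",
    "status": "OK",  # illustrative body field
}

adopted_ack = {
    "@type": "https://didcomm.org/rendezvous/2.0/ack",  # now routable to the rendezvous handler
    "status": "OK",
}
```

Note that only the protocol portion of the URI changes; the message name (`ack`) is unchanged, so an agent can route the adopted message to its rendezvous protocol handler by @type alone.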

    Adoption should be declared in an \"Adopted\" subsection of \"Messages\". When adoption is specified, it should include a minimum adopted version of the adopted message type: \"This protocol adopts ack with version >= 1.4\". All versions of the adopted message that share the same major number should be compatible, given the semver rules that apply to protocols.

    "},{"location":"0000-template-protocol/#constraints","title":"Constraints","text":"

    Many protocols have constraints that help parties build trust. For example, in buying a house, the protocol includes such things as commission paid to realtors to guarantee their incentives, title insurance, earnest money, and a phase of the process where a home inspection takes place. If you are documenting a protocol that has attributes like these, explain them here. If not, the section can be omitted.

    "},{"location":"0000-template-protocol/#reference","title":"Reference","text":"

    All of the sections of reference are optional. If none are needed, the \"Reference\" section can be deleted.

    "},{"location":"0000-template-protocol/#messages-details","title":"Messages Details","text":"

    Unless the \"Messages\" section under \"Tutorial\" covered everything that needs to be known about all message fields, this is where the data type, validation rules, and semantics of each field in each message type are details. Enumerating possible values, or providing ABNF or regexes is encouraged. Following conventions such as those for date- and time-related fields can save a lot of time here.

    Each message type should be associated with one or more roles in the protocol. That is, it should be clear which roles can send and receive which message types.

    If the \"Tutorial\" section covers everything about the messages, this section should be deleted.

    "},{"location":"0000-template-protocol/#examples","title":"Examples","text":"

    This section is optional. It can be used to show alternate flows through the protocol.

    "},{"location":"0000-template-protocol/#collateral","title":"Collateral","text":"

    This section is optional. It could be used to reference files, code, relevant standards, oracles, test suites, or other artifacts that would be useful to an implementer. In general, collateral should be checked in with the RFC.

    "},{"location":"0000-template-protocol/#localization","title":"Localization","text":"

    If communication in the protocol involves humans, then localization of message content may be relevant. Default settings for localization of all messages in the protocol can be specified in an l10n.json file described here and checked in with the RFC. See \"Decorators at Message Type Scope\" in the Localization RFC.

    "},{"location":"0000-template-protocol/#codes-catalog","title":"Codes Catalog","text":"

    If the protocol has a formally defined catalog of codes (e.g., for errors or for statuses), define them in this section. See \"Message Codes and Catalogs\" in the Localization RFC.

    "},{"location":"0000-template-protocol/#drawbacks","title":"Drawbacks","text":"

    Why should we not do this?

    "},{"location":"0000-template-protocol/#rationale-and-alternatives","title":"Rationale and alternatives","text":""},{"location":"0000-template-protocol/#prior-art","title":"Prior art","text":"

    Discuss prior art, both the good and the bad, in relation to this proposal. A few examples of what this can include are:

This section is intended to encourage you as an author to think about the lessons learned from other implementers and to provide readers of your proposal with a fuller picture. If there is no prior art, that is fine--your ideas are interesting to us whether they are brand new or an adaptation from other communities.

    Note that while precedent set by other communities is some motivation, it does not on its own motivate an enhancement proposal here. Please also take into consideration that Aries sometimes intentionally diverges from common identity features.

    "},{"location":"0000-template-protocol/#unresolved-questions","title":"Unresolved questions","text":""},{"location":"0000-template-protocol/#implementations","title":"Implementations","text":"

    NOTE: This section should remain in the RFC as is on first release. Remove this note and leave the rest of the text as is. Template text in all other sections should be removed before submitting your Pull Request.

    The following lists the implementations (if any) of this RFC. Please do a pull request to add your implementation. If the implementation is open source, include a link to the repo or to the implementation within the repo. Please be consistent in the \"Name\" field so that a mechanical processing of the RFCs can generate a list of all RFCs supported by an Aries implementation.

    Implementation Notes may need to include a link to test results.

    Name / Link Implementation Notes"},{"location":"0000-template/","title":"Title (Ex. 0000: RFC Topic)","text":""},{"location":"0000-template/#summary","title":"Summary","text":"

    One paragraph explanation of the feature.

    NOTE: If you are creating a protocol RFC, please use this template instead.

    "},{"location":"0000-template/#motivation","title":"Motivation","text":"

    Why are we doing this? What use cases does it support? What is the expected outcome?

    "},{"location":"0000-template/#tutorial","title":"Tutorial","text":"

    Explain the proposal as if it were already implemented and you were teaching it to another Aries contributor or Aries consumer. That generally means:

    Some enhancement proposals may be more aimed at contributors (e.g. for consensus internals); others may be more aimed at consumers.

    "},{"location":"0000-template/#reference","title":"Reference","text":"

    Provide guidance for implementers, procedures to inform testing, interface definitions, formal function prototypes, error codes, diagrams, and other technical details that might be looked up. Strive to guarantee that:

    "},{"location":"0000-template/#drawbacks","title":"Drawbacks","text":"

    Why should we not do this?

    "},{"location":"0000-template/#rationale-and-alternatives","title":"Rationale and alternatives","text":""},{"location":"0000-template/#prior-art","title":"Prior art","text":"

    Discuss prior art, both the good and the bad, in relation to this proposal. A few examples of what this can include are:

This section is intended to encourage you as an author to think about the lessons learned from other implementers and to provide readers of your proposal with a fuller picture. If there is no prior art, that is fine--your ideas are interesting to us whether they are brand new or an adaptation from other communities.

    Note that while precedent set by other communities is some motivation, it does not on its own motivate an enhancement proposal here. Please also take into consideration that Aries sometimes intentionally diverges from common identity features.

    "},{"location":"0000-template/#unresolved-questions","title":"Unresolved questions","text":""},{"location":"0000-template/#implementations","title":"Implementations","text":"

    The following lists the implementations (if any) of this RFC. Please do a pull request to add your implementation. If the implementation is open source, include a link to the repo or to the implementation within the repo. Please be consistent in the \"Name\" field so that a mechanical processing of the RFCs can generate a list of all RFCs supported by an Aries implementation.

    Implementation Notes may need to include a link to test results.

    Name / Link Implementation Notes"},{"location":"LICENSE/","title":"License","text":"
                                 Apache License\n                       Version 2.0, January 2004\n                    http://www.apache.org/licenses/\n

    TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION

    1. Definitions.

      \"License\" shall mean the terms and conditions for use, reproduction, and distribution as defined by Sections 1 through 9 of this document.

      \"Licensor\" shall mean the copyright owner or entity authorized by the copyright owner that is granting the License.

      \"Legal Entity\" shall mean the union of the acting entity and all other entities that control, are controlled by, or are under common control with that entity. For the purposes of this definition, \"control\" means (i) the power, direct or indirect, to cause the direction or management of such entity, whether by contract or otherwise, or (ii) ownership of fifty percent (50%) or more of the outstanding shares, or (iii) beneficial ownership of such entity.

      \"You\" (or \"Your\") shall mean an individual or Legal Entity exercising permissions granted by this License.

      \"Source\" form shall mean the preferred form for making modifications, including but not limited to software source code, documentation source, and configuration files.

      \"Object\" form shall mean any form resulting from mechanical transformation or translation of a Source form, including but not limited to compiled object code, generated documentation, and conversions to other media types.

      \"Work\" shall mean the work of authorship, whether in Source or Object form, made available under the License, as indicated by a copyright notice that is included in or attached to the work (an example is provided in the Appendix below).

      \"Derivative Works\" shall mean any work, whether in Source or Object form, that is based on (or derived from) the Work and for which the editorial revisions, annotations, elaborations, or other modifications represent, as a whole, an original work of authorship. For the purposes of this License, Derivative Works shall not include works that remain separable from, or merely link (or bind by name) to the interfaces of, the Work and Derivative Works thereof.

      \"Contribution\" shall mean any work of authorship, including the original version of the Work and any modifications or additions to that Work or Derivative Works thereof, that is intentionally submitted to Licensor for inclusion in the Work by the copyright owner or by an individual or Legal Entity authorized to submit on behalf of the copyright owner. For the purposes of this definition, \"submitted\" means any form of electronic, verbal, or written communication sent to the Licensor or its representatives, including but not limited to communication on electronic mailing lists, source code control systems, and issue tracking systems that are managed by, or on behalf of, the Licensor for the purpose of discussing and improving the Work, but excluding communication that is conspicuously marked or otherwise designated in writing by the copyright owner as \"Not a Contribution.\"

      \"Contributor\" shall mean Licensor and any individual or Legal Entity on behalf of whom a Contribution has been received by Licensor and subsequently incorporated within the Work.

    2. Grant of Copyright License. Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable copyright license to reproduce, prepare Derivative Works of, publicly display, publicly perform, sublicense, and distribute the Work and such Derivative Works in Source or Object form.

    3. Grant of Patent License. Subject to the terms and conditions of this License, each Contributor hereby grants to You a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable (except as stated in this section) patent license to make, have made, use, offer to sell, sell, import, and otherwise transfer the Work, where such license applies only to those patent claims licensable by such Contributor that are necessarily infringed by their Contribution(s) alone or by combination of their Contribution(s) with the Work to which such Contribution(s) was submitted. If You institute patent litigation against any entity (including a cross-claim or counterclaim in a lawsuit) alleging that the Work or a Contribution incorporated within the Work constitutes direct or contributory patent infringement, then any patent licenses granted to You under this License for that Work shall terminate as of the date such litigation is filed.

    4. Redistribution. You may reproduce and distribute copies of the Work or Derivative Works thereof in any medium, with or without modifications, and in Source or Object form, provided that You meet the following conditions:

      (a) You must give any other recipients of the Work or Derivative Works a copy of this License; and

      (b) You must cause any modified files to carry prominent notices stating that You changed the files; and

(c) You must retain, in the Source form of any Derivative Works that You distribute, all copyright, patent, trademark, and attribution notices from the Source form of the Work, excluding those notices that do not pertain to any part of the Derivative Works; and

      (d) If the Work includes a \"NOTICE\" text file as part of its distribution, then any Derivative Works that You distribute must include a readable copy of the attribution notices contained within such NOTICE file, excluding those notices that do not pertain to any part of the Derivative Works, in at least one of the following places: within a NOTICE text file distributed as part of the Derivative Works; within the Source form or documentation, if provided along with the Derivative Works; or, within a display generated by the Derivative Works, if and wherever such third-party notices normally appear. The contents of the NOTICE file are for informational purposes only and do not modify the License. You may add Your own attribution notices within Derivative Works that You distribute, alongside or as an addendum to the NOTICE text from the Work, provided that such additional attribution notices cannot be construed as modifying the License.

      You may add Your own copyright statement to Your modifications and may provide additional or different license terms and conditions for use, reproduction, or distribution of Your modifications, or for any such Derivative Works as a whole, provided Your use, reproduction, and distribution of the Work otherwise complies with the conditions stated in this License.

    5. Submission of Contributions. Unless You explicitly state otherwise, any Contribution intentionally submitted for inclusion in the Work by You to the Licensor shall be under the terms and conditions of this License, without any additional terms or conditions. Notwithstanding the above, nothing herein shall supersede or modify the terms of any separate license agreement you may have executed with Licensor regarding such Contributions.

    6. Trademarks. This License does not grant permission to use the trade names, trademarks, service marks, or product names of the Licensor, except as required for reasonable and customary use in describing the origin of the Work and reproducing the content of the NOTICE file.

    7. Disclaimer of Warranty. Unless required by applicable law or agreed to in writing, Licensor provides the Work (and each Contributor provides its Contributions) on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied, including, without limitation, any warranties or conditions of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A PARTICULAR PURPOSE. You are solely responsible for determining the appropriateness of using or redistributing the Work and assume any risks associated with Your exercise of permissions under this License.

    8. Limitation of Liability. In no event and under no legal theory, whether in tort (including negligence), contract, or otherwise, unless required by applicable law (such as deliberate and grossly negligent acts) or agreed to in writing, shall any Contributor be liable to You for damages, including any direct, indirect, special, incidental, or consequential damages of any character arising as a result of this License or out of the use or inability to use the Work (including but not limited to damages for loss of goodwill, work stoppage, computer failure or malfunction, or any and all other commercial damages or losses), even if such Contributor has been advised of the possibility of such damages.

    9. Accepting Warranty or Additional Liability. While redistributing the Work or Derivative Works thereof, You may choose to offer, and charge a fee for, acceptance of support, warranty, indemnity, or other liability obligations and/or rights consistent with this License. However, in accepting such obligations, You may act only on Your own behalf and on Your sole responsibility, not on behalf of any other Contributor, and only if You agree to indemnify, defend, and hold each Contributor harmless for any liability incurred by, or claims asserted against, such Contributor by reason of your accepting any such warranty or additional liability.

    END OF TERMS AND CONDITIONS

    APPENDIX: How to apply the Apache License to your work.

      To apply the Apache License to your work, attach the following\n  boilerplate notice, with the fields enclosed by brackets \"[]\"\n  replaced with your own identifying information. (Don't include\n  the brackets!)  The text should be enclosed in the appropriate\n  comment syntax for the file format. We also recommend that a\n  file or class name and description of purpose be included on the\n  same \"printed page\" as the copyright notice for easier\n  identification within third-party archives.\n

    Copyright [yyyy] [name of copyright owner]

    Licensed under the Apache License, Version 2.0 (the \"License\"); you may not use this file except in compliance with the License. You may obtain a copy of the License at

       http://www.apache.org/licenses/LICENSE-2.0\n

    Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.

    "},{"location":"MAINTAINERS/","title":"Maintainers","text":""},{"location":"MAINTAINERS/#active-maintainers","title":"Active Maintainers","text":"Name Github LFID Daniel Hardman dhh1128 George Aristy llorllale Nathan George nage Stephen Curran swcurran Drummond Reed talltree Sam Curren TelegramSam"},{"location":"MAINTAINERS/#emeritus-maintainers","title":"Emeritus Maintainers","text":"Name Github LFID"},{"location":"MAINTAINERS/#becoming-a-maintainer","title":"Becoming a Maintainer","text":"

The Aries community welcomes contributions. Contributors may progress to become a maintainer. To become a maintainer, the following steps occur, roughly in order.

    "},{"location":"MAINTAINERS/#removing-maintainers","title":"Removing Maintainers","text":"

    Being a maintainer is not a status symbol or a title to be maintained indefinitely. It will occasionally be necessary and appropriate to move a maintainer to emeritus status. This can occur in the following situations:

As with adding a maintainer, the record and governance process for moving a maintainer to emeritus status is recorded in the GitHub PR making that change.

    Returning to active status from emeritus status uses the same steps as adding a new maintainer. Note that the emeritus maintainer already has the 5 required significant changes as there is no contribution time horizon for those.

    "},{"location":"RFCindex/","title":"Aries RFCs by Status","text":""},{"location":"RFCindex/#adopted","title":"ADOPTED","text":""},{"location":"RFCindex/#accepted","title":"ACCEPTED","text":""},{"location":"RFCindex/#demonstrated","title":"DEMONSTRATED","text":""},{"location":"RFCindex/#proposed","title":"PROPOSED","text":""},{"location":"RFCindex/#stalled","title":"STALLED","text":""},{"location":"RFCindex/#retired","title":"RETIRED","text":"

    (This file is machine-generated; see code/generate_index.py.)

    "},{"location":"SECURITY/","title":"Hyperledger Security Policy","text":""},{"location":"SECURITY/#reporting-a-security-bug","title":"Reporting a Security Bug","text":"

If you think you have discovered a security issue in any of the Hyperledger projects, we'd love to hear from you. We take all security bugs seriously; if a report is confirmed upon investigation, we will patch the issue within a reasonable amount of time, release a public security bulletin discussing the impact, and credit the discoverer.

    There are two ways to report a security bug. The easiest is to email a description of the flaw and any related information (e.g. reproduction steps, version) to security at hyperledger dot org.

    The other way is to file a confidential security bug in our JIRA bug tracking system. Be sure to set the \u201cSecurity Level\u201d to \u201cSecurity issue\u201d.

    The process by which the Hyperledger Security Team handles security bugs is documented further in our Defect Response page on our wiki.

    "},{"location":"contributing/","title":"Contributing","text":""},{"location":"contributing/#contributing","title":"Contributing","text":""},{"location":"contributing/#do-you-need-an-rfc","title":"Do you need an RFC?","text":"

    Use an RFC to advocate substantial changes to the Aries ecosystem, where those changes need to be understood by developers who use Aries. Minor changes are not RFC-worthy, and changes that are internal in nature, invisible to those consuming Aries, should be documented elsewhere.

    "},{"location":"contributing/#preparation","title":"Preparation","text":"

    Before writing an RFC, consider exploring the idea on the aries chat channel, on community calls (see the Hyperledger Community Calendar), or on aries@lists.hyperledger.org. Encouraging feedback from maintainers is a good sign that you're on the right track.

    "},{"location":"contributing/#how-to-propose-an-rfc","title":"How to propose an RFC","text":"

    Make sure that all of your commits satisfy the DCO requirements of the repo and conform to the license restrictions noted below.

    The RFC Maintainers will check to see if the process has been followed, and request any process changes before merging the PR.

    When the PR is merged, your RFC is now formally in the PROPOSED state.

    "},{"location":"contributing/#changing-an-rfc-status","title":"Changing an RFC Status","text":"

    The lifecycle of an RFC is driven by the author or current champion of the RFC. To move an RFC along in the lifecycle, submit a PR with the following characteristics:

    "},{"location":"contributing/#how-to-get-an-rfc-demonstrated","title":"How to get an RFC demonstrated","text":"

    If your RFC is a feature, it's common (though not strictly required) for it to go to a DEMONSTRATED state next. Write some code that embodies the concepts in the RFC. Publish the code. Then submit a PR that adds your early implementation to the Implementations section, and that changes the status to DEMONSTRATED. These PRs should be accepted immediately, as long as all unit tests pass.

    "},{"location":"contributing/#how-to-get-an-rfc-accepted","title":"How to get an RFC accepted","text":"

    After your RFC is merged and officially acquires the PROPOSED status, the RFC will receive feedback from the larger community, and the author should be prepared to revise it. Updates may be made via pull request, and those changes will be merged as long as the process is followed.

    When you believe that the RFC is mature enough (feedback is somewhat resolved, consensus is emerging, and implementation against it makes sense), submit a PR that changes the status to ACCEPTED. The status change PR will remain open until the maintainers agree on the status change.

    NOTE: contributors who used the Indy HIPE process prior to May 2019 should see the acceptance process substantially simplified under this approach. The bar for acceptance is not perfect consensus and all issues resolved; it's just general agreement that a doc is \"close enough\" that it makes sense to put it on a standards track where it can be improved as implementation teaches us what to tweak.

    "},{"location":"contributing/#how-to-get-an-rfc-adopted","title":"How to get an RFC adopted","text":"

    An accepted RFC is a standards-track document. It becomes an acknowledged standard when there is evidence that the community is deriving meaningful value from it. So:

    When you believe an RFC is a de facto standard, raise a PR that changes the status to ADOPTED. If the community is friendly to the idea, the doc will enter a two-week \"Final Comment Period\" (FCP), after which there will be a vote on disposition.

    "},{"location":"contributing/#intellectual-property","title":"Intellectual Property","text":"

    This repository is licensed under an Apache 2 License. It is protected by a Developer Certificate of Origin on every commit. This means that any contributions you make must be licensed in an Apache-2-compatible way, and must be free from patent encumbrances or additional terms and conditions. By raising a PR, you certify that this is the case for your contribution.

    "},{"location":"contributing/#signing-off-commits-dco","title":"Signing off commits (DCO)","text":"

    If you are here because you forgot to sign off your commits, fear not. Check out how to sign off previous commits.

    We use developer certificate of origin (DCO) in all Hyperledger repositories, so to get your pull requests accepted, you must certify your commits by signing off on each commit.

    "},{"location":"contributing/#signing-off-your-current-commit","title":"Signing off your current commit","text":"

    The -s flag signs off the commit message with your name and email.

    "},{"location":"contributing/#how-to-sign-off-previous-commits","title":"How to Sign Off Previous Commits","text":"
    1. Use $ git log to see which commits need to be signed off. Any commits missing a line with Signed-off-by: Example Author <author.email@example.com> need to be re-signed.
    2. Go into interactive rebase mode using $ git rebase -i HEAD~X where X is the number of commits up to the most current commit you would like to see.
    3. You will see a list of the commits in a text file. On the line after each commit you need to sign off, add exec git commit --amend --no-edit -s with the lowercase -s adding a text signature in the commit body. Example that signs both commits:
    pick 12345 commit message\nexec git commit --amend --no-edit -s\npick 67890 commit message\nexec git commit --amend --no-edit -s\n
    4. If you need to re-sign a bunch of previous commits at once, find the earliest commit missing the sign off line using $ git log, and use the HASH of the commit before it in this command:

       $ git rebase --exec 'git commit --amend --no-edit -n -s' -i HASH.\n
      This will sign off every commit from most recent to right before the HASH.

    5. You will probably need to do a force push ($ git push -f) if you had previously pushed unsigned commits to remote.

    "},{"location":"github-issues/","title":"Submitting Issues","text":""},{"location":"github-issues/#github-issues","title":"Github Issues","text":"

    RFCs that are not on the brink of changing status are discussed through Github Issues. We generally use Issues to discuss changes that are controversial, and PRs to propose changes that are vetted. This keeps the PR backlog small.

    Any community member can open an issue; specify the RFC number in the issue title so the relationship is clear. For example, to open an issue on RFC 0025, an appropriate title for the issue might be:

    RFC 0025: Need better diagram in Reference section\n

    When the community feels that it's reasonable to suggest a formal status change for an RFC, best efforts are made to resolve all open issues against it. Then a PR is raised against the RFC's main README.md, where the status field in the header is updated. Discussion about the status change typically takes place in the comment stream for the PR, with issues being reserved for non-status-change topics.

    "},{"location":"tags/","title":"Tags on RFCs","text":"

    We categorize RFCs with tags to enrich searches. The meaning of tags is given below.

    "},{"location":"tags/#protocol","title":"protocol","text":"

    Defines one or more protocols that explain how messages are passed to accomplish a stateful interaction.

    "},{"location":"tags/#decorator","title":"decorator","text":"

    Defines one or more decorators that act as mixins to DIDComm messages. Decorators can be added to many different message types without explicitly declaring them in message schemas.

    "},{"location":"tags/#feature","title":"feature","text":"

    Defines a specific, concrete feature that agents might support.

    "},{"location":"tags/#concept","title":"concept","text":"

    Defines a general aspect of the Aries mental model, or a pattern that manifests in many different features.

    "},{"location":"tags/#community-update","title":"community-update","text":"

    An RFC that tracks a community-coordinated update, as described in RFC 0345. Such updates enable independently deployed, interoperable agents to remain interoperable throughout the transition.

    "},{"location":"tags/#credentials","title":"credentials","text":"

    Relates to verifiable credentials.

    "},{"location":"tags/#rich-schemas","title":"rich-schemas","text":"

    Relates to next-generation schemas, such as those used by https://schema.org, as used in verifiable credentials.

    "},{"location":"tags/#test-anomaly","title":"test-anomaly","text":"

    Violates some aspect of our policy on writing tests for protocols before allowing their status to progress beyond DEMONSTRATED. RFCs should only carry this tag temporarily, to grandfather something where test improvements are happening in the background. When this tag is applied to an RFC, unit tests run by our CI/CD pipeline will emit a warning rather than an error about missing tests, IFF each implementation that lacks tests formats its notes about test results like this:

    name of impl | [MISSING test results](/tags.md#test-anomaly)\n
    "},{"location":"aip2/0003-protocols/","title":"Aries RFC 0003: Protocols","text":""},{"location":"aip2/0003-protocols/#summary","title":"Summary","text":"

    Defines peer-to-peer application-level protocols in the context of interactions among agent-like things, and shows how they should be designed and documented.

    "},{"location":"aip2/0003-protocols/#table-of-contents","title":"Table of Contents","text":""},{"location":"aip2/0003-protocols/#motivation","title":"Motivation","text":"

    APIs in the style of Swagger are familiar to nearly all developers, and it's a common assumption that we should use them to solve the problems at hand in the decentralized identity space. However, to truly decentralize, we must think about interactions at a higher level of generalization. Protocols can model all APIs, but not the other way around. This matters. We need to explain why.

    We also need to show how a protocol is defined, so the analog to defining a Swagger API is demystified.

    "},{"location":"aip2/0003-protocols/#tutorial","title":"Tutorial","text":""},{"location":"aip2/0003-protocols/#what-is-a-protocol","title":"What is a Protocol?","text":"

    A protocol is a recipe for a stateful interaction. Protocols are all around us, and are so ordinary that we take them for granted. Each of the following interactions is stateful, and has conventions that constitute a sort of \"recipe\":

    In the context of decentralized identity, protocols manifest at many different levels of the stack: at the lowest levels of networking, in cryptographic algorithms like Diffie Hellman, in the management of DIDs, in the conventions of DIDComm, and in higher-level interactions that solve problems for people with only minimal interest in the technology they're using. However, this RFC focuses on the last of these layers, where use cases and personas are transformed into features with obvious social value like:

    When \"protocol\" is used in an Aries context without any qualifying adjective, it is referencing a recipe for a high-level interaction like these. Lower-level protocols are usually described more specifically and possibly with other verbiage: \"cryptographic algorithms\", \"DID management procedures\", \"DIDComm conventions\", \"transports\", and so forth. This helps us focus \"protocol\" on the place where application developers that consume Aries do most of the work that creates value.

    "},{"location":"aip2/0003-protocols/#relationship-to-apis","title":"Relationship to APIs","text":"

    The familiar world of web APIs is a world of protocols, but it comes with constraints antithetical to decentralized identity:

    Protocols impose none of these constraints. Web APIs can easily be modeled as protocols where the transport is HTTP and the payload is a message, and the Aries community actively does this. We are not opposed to APIs. We just want to describe and standardize the higher-level abstraction so we don't have a web solution and a Bluetooth solution that have diverged for no good reason.

    "},{"location":"aip2/0003-protocols/#decentralized","title":"Decentralized","text":"

    As used in the agent/DIDComm world, protocols are decentralized. This means there is not an overseer for the protocol, guaranteeing information flow, enforcing behaviors, and ensuring a coherent view. It is a subtle but important divergence from API-centric approaches, where a server holds state against which all other parties (clients) operate. Instead, all parties are peers, and they interact by mutual consent and with a (hopefully) shared understanding of the rules and goals. Protocols are like a dance\u2014not one that's choreographed or directed, but one where the parties make dynamic decisions and react to them.

    "},{"location":"aip2/0003-protocols/#types-of-protocols","title":"Types of Protocols","text":"

    The simplest protocol style is notification. This style involves two parties, but it is one-way: the notifier emits a message, and the protocol ends when the notified receives it. The basic message protocol uses this style.

    Slightly more complex is the request-response protocol style. This style involves two parties, with the requester making the first move, and the responder completing the interaction. The Discover Features Protocol uses this style. Note that with protocols as Aries models them (and unlike an HTTP request), the request-response messages are asynchronous.

    However, more complex protocols exist. The Introduce Protocol involves three parties, not two. The issue credential protocol includes up to six message types (including ack and problem_report), two of which (proposal and offer) can be used to interactively negotiate details of the elements of the subsequent messages in the protocol.

    See this subsection for definitions of the terms \"role\", \"participant\", and \"party\".

    "},{"location":"aip2/0003-protocols/#agent-design","title":"Agent Design","text":"

    Protocols are the key unit of interoperable extensibility in agents and agent-like things. To add a new interoperable feature to an agent, give it the ability to handle a new protocol.

    When agents receive messages, they map the messages to a protocol handler and possibly to an interaction state that was previously persisted. This is the analog to routes, route handlers, and sessions in web APIs, and could actually be implemented as such if the transport for the protocol is HTTP. The protocol handler is code that knows the rules of a particular protocol; the interaction state tracks progress through an interaction. For more information, see the agents explainer\u2014RFC 0004 and the DIDComm explainer\u2014RFC 0005.

    "},{"location":"aip2/0003-protocols/#composable","title":"Composable","text":"

    Protocols are composable--meaning that you can build complex ones from simple ones. The protocol for asking someone to repeat their last sentence can be part of the protocol for ordering food at a restaurant. It's common to ask a potential driver's license holder to prove their street address before issuing the license. In protocol terms, this is nicely modeled as the present proof protocol being invoked in the middle of an issue credential protocol.

    When we run one protocol inside another, we call the inner protocol a subprotocol, and the outer protocol a superprotocol. A given protocol may be a subprotocol in some contexts, and a standalone protocol in others. In some contexts, a protocol may be a subprotocol from one perspective, and a superprotocol from another (as when protocols are nested at least 3 deep).

    Commonly, protocols wait for subprotocols to complete, and then they continue. A good example of this is mentioned above\u2014starting an issue credential flow, but requiring the potential issuer and/or the potential holder to prove something to one another before completing the process.

    In other cases, a protocol B is not \"contained\" inside protocol A. Rather, A triggers B, then continues in parallel, without waiting for B to complete. This coprotocol relationship is analogous to the relationship between coroutines in computer science. In the Introduce Protocol, the final step is to begin a connection protocol between the two introducees--but the introduction coprotocol completes when the connect coprotocol starts, not when it completes.

    "},{"location":"aip2/0003-protocols/#message-types","title":"Message Types","text":"

    A protocol includes a number of message types that enable the execution of an instance of a protocol. Collectively, the message types of a protocol become the skeleton of its interface. Most of the message types are defined with the protocol, but several key message types, notably acks and problem reports, are defined in separate RFCs and adopted into a protocol. This ensures that the structure of such messages is standardized, while still being used in the context of the protocol that adopts them.

    "},{"location":"aip2/0003-protocols/#handling-unrecognized-items-in-messages","title":"Handling Unrecognized Items in Messages","text":"

    In the semver section of this document there is discussion of the handling of mismatches in minor versions supported and received. Notably, a recipient that supports a given minor version of a protocol less than that of a received protocol message should ignore any unrecognized fields in the message. Such handling of unrecognized data items applies more generally than just minor version mismatches. A recipient of a message from a supported major version of a protocol should ignore any unrecognized items in a received message, even if the supported and received minor versions are the same. When items from the message are ignored, the recipient may want to send a warning problem-report message with code fields-ignored.
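    A minimal sketch of this \"ignore unrecognized items\" handling, assuming a hypothetical set of known fields (none of these field names or values come from a real Aries message schema):

    ```python
    # Hypothetical schema for one message type; real schemas come from the
    # protocol definition, not from this sketch.
    KNOWN_FIELDS = {"@id", "@type", "comment"}

    def split_unrecognized(message: dict) -> tuple[dict, list]:
        """Separate recognized fields from fields that must be ignored."""
        recognized = {k: v for k, v in message.items() if k in KNOWN_FIELDS}
        ignored = sorted(k for k in message if k not in KNOWN_FIELDS)
        return recognized, ignored

    body, ignored = split_unrecognized({
        "@id": "123",
        "@type": "https://didcomm.org/example/1.0/msg",
        "comment": "hi",
        "new_field": True,
    })
    # If `ignored` is non-empty, the recipient may send a warning
    # problem-report with code "fields-ignored".
    ```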

    "},{"location":"aip2/0003-protocols/#ingredients","title":"Ingredients","text":"

    A protocol has the following ingredients:

    "},{"location":"aip2/0003-protocols/#how-to-define-a-protocol","title":"How to Define a Protocol","text":"

    To define a protocol, write an RFC. Specific instructions for protocol RFCs, and a discussion about the theory behind detailed protocol concepts, are given in the instructions for protocol RFCs and in the protocol RFC template.

    The tictactoe protocol is attached to this RFC as an example.

    "},{"location":"aip2/0003-protocols/#security-considerations","title":"Security Considerations","text":""},{"location":"aip2/0003-protocols/#replay-attacks","title":"Replay Attacks","text":"

    It should be noted that when defining a protocol that has domain-specific requirements around preventing replay attacks, an @id property SHOULD be required. Given that an @id field is most commonly set to a UUID, it should provide randomness comparable to that of a nonce in preventing replay attacks. However, this means that care will be needed in processing the @id field to make sure its value has not been used before. In some cases, nonces require being unpredictable as well. In this case, greater review should be taken as to how the @id field should be used in the domain-specific protocol. In the event that the @id field is not adequate for preventing replay attacks, it's recommended that an additional nonce field be required by the domain-specific protocol specification.
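    A sketch of @id-based replay detection under these assumptions (the helper name is illustrative, not a prescribed implementation; a production agent would persist the seen set and eventually expire entries):

    ```python
    # Track @id values already processed so a replayed message is detected.
    seen_ids: set[str] = set()

    def is_fresh(message: dict) -> bool:
        """Return True the first time an @id is seen; False for replays
        or for messages with no @id at all."""
        msg_id = message.get("@id")
        if not msg_id or msg_id in seen_ids:
            return False
        seen_ids.add(msg_id)
        return True
    ```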

    "},{"location":"aip2/0003-protocols/#reference","title":"Reference","text":""},{"location":"aip2/0003-protocols/#message-type-and-protocol-identifier-uris","title":"Message Type and Protocol Identifier URIs","text":"

    Message types and protocols are identified with URIs that match certain conventions.

    "},{"location":"aip2/0003-protocols/#mturi","title":"MTURI","text":"

    A message type URI (MTURI) identifies message types unambiguously. Standardizing its format is important because it is parsed by agents that will map messages to handlers--basically, code will look at this string and say, \"Do I have something that can handle this message type inside protocol X version Y?\"

    When this analysis happens, strings should be compared for byte-wise equality in all segments except version. This means that case, Unicode normalization, and punctuation differences all matter. It is thus best practice to avoid protocol and message names that differ only in subtle, easy-to-mistake ways.

    Comparison of the version segment of an MTURI or PIURI should follow semver rules and is discussed in the semver section of this document.

    The URI MUST be composed as follows:

    message-type-uri  = doc-uri delim protocol-name\n    \"/\" protocol-version \"/\" message-type-name\ndelim             = \"?\" / \"/\" / \"&\" / \":\" / \";\" / \"=\"\nprotocol-name     = identifier\nprotocol-version  = semver\nmessage-type-name = identifier\nidentifier        = alpha *(*(alphanum / \"_\" / \"-\" / \".\") alphanum)\n

    It can be loosely matched and parsed with the following regex:

        (.*?)([a-z0-9._-]+)/(\\d[^/]*)/([a-z0-9._-]+)$\n

    A match will have capture groups of (1) = doc-uri, (2) = protocol-name, (3) = protocol-version, and (4) = message-type-name.
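    To illustrate, the loose matcher can be applied with Python's re module (the MTURI below is a hypothetical example, not an officially registered message type):

    ```python
    import re

    # The loose MTURI matcher from this RFC.
    MTURI_RE = re.compile(r"(.*?)([a-z0-9._-]+)/(\d[^/]*)/([a-z0-9._-]+)$")

    match = MTURI_RE.match("https://didcomm.org/notification/1.0/ack")
    doc_uri, protocol_name, protocol_version, message_type_name = match.groups()
    # doc_uri == "https://didcomm.org/", protocol_name == "notification",
    # protocol_version == "1.0", message_type_name == "ack"
    ```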

    The goals of this URI are, in descending priority:

    The doc-uri portion is any URI that exposes documentation about protocols. A developer should be able to browse to that URI and use human intelligence to look up the named and versioned protocol. Optionally and preferably, the full URI may produce a page of documentation about the specific message type, with no human mediation involved.

    "},{"location":"aip2/0003-protocols/#piuri","title":"PIURI","text":"

    A shorter URI that follows the same conventions but lacks the message-type-name portion is called a protocol identifier URI (PIURI).

    protocol-identifier-uri  = doc-uri delim protocol-name\n    \"/\" semver\n

    Its loose matcher regex is:

        (.*?)([a-z0-9._-]+)/(\\d[^/]*)/?$\n
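    As with the MTURI, the PIURI matcher can be exercised in Python (the example URI is hypothetical):

    ```python
    import re

    # The loose PIURI matcher from this RFC; note there is no
    # message-type-name capture group.
    PIURI_RE = re.compile(r"(.*?)([a-z0-9._-]+)/(\d[^/]*)/?$")

    doc_uri, protocol_name, protocol_version = PIURI_RE.match(
        "https://didcomm.org/notification/1.0"
    ).groups()
    ```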

    The following are examples of valid MTURIs and PIURIs:

    "},{"location":"aip2/0003-protocols/#semver-rules-for-protocols","title":"Semver Rules for Protocols","text":"

    Semver rules apply to protocols, with the version of a protocol expressed in the semver portion of its identifying URI. The \"ingredients\" of a protocol combine to form a public API in the semver sense. Core Aries protocols specify only major and minor elements in a version; the patch component is not used. Non-core protocols may choose to use the patch element.

    The major and minor versions of protocols match semver semantics:

    Within a given major version of a protocol, an agent should:

    This leads to the following received message handling rules:

    Note: The deprecation of the \"warning\" problem-reports in cases of minor version mismatches is because the recipient of the response can detect the mismatch by looking at the PIURI, making the \"warning\" unnecessary, and because the problem-report message may be received after (and definitely at a different time than) the response message, and so the warning is of very little value to the recipient. Recipients should still be aware that minor version mismatch warning problem-report messages may be received and handle them appropriately, likely by quietly ignoring them.

    As documented in the semver documentation, these requirements are not applied when major version 0 is used. In that case, minor version increments are considered breaking.

    Agents may support multiple major versions and select which major version to use when initiating an instance of the protocol.

    An agent should reject messages from unsupported protocols, or from unsupported major versions of supported protocols, with a problem-report message with code version-not-supported. Agents that receive such a problem-report message may use the Discover Features Protocol to resolve the mismatch.

    "},{"location":"aip2/0003-protocols/#semver-examples","title":"Semver Examples","text":""},{"location":"aip2/0003-protocols/#initiator","title":"Initiator","text":"

    Unless Alice's agent (the initiator of a protocol) knows from prior history that it should do something different, it should begin a protocol using the highest version number that it supports. For example, if A.1 supports versions 2.0 through 2.2 of protocol X, it should use 2.2 as the version in the message type of its first message.

    "},{"location":"aip2/0003-protocols/#recipient-rules","title":"Recipient Rules","text":"

    Agents for Bob (the recipient) should reject messages from protocols with major versions different from those they support. For major version 0, they should also reject protocols with minor versions they don't support, since semver stipulates that features are not stable before 1.0. For example, if B.1 supports only versions 2.0 and 2.1 of protocol X, it should reject any messages from versions 3.x, 1.x, or 0.x. In most cases, rejecting a message means sending a problem-report that the message is unsupported. The code field in such messages should be version-not-supported. Agents that receive such a problem-report can then use the Discover Features Protocol to resolve version problems.

    Recipient agents should accept messages that differ from their own supported version of a protocol only in the patch, prerelease, and/or build fields, whether these differences make the message earlier or later than the version the recipient prefers. These messages will be robustly compatible.

    For major version >= 1, recipients should also accept messages that differ only in that the message's minor version is earlier than their own preference. In such a case, the recipient should degrade gracefully to use the earlier version of the protocol. If the earlier version lacks important features, the recipient may optionally choose to send, in addition to a response, a problem-report with code version-with-degraded-features.

    If a recipient supports protocol X version 1.0, it should tentatively accept messages with later minor versions (e.g., 1.2). Message types that differ only in minor version are guaranteed to be compatible for the feature set of the earlier version. That is, a 1.0-capable agent can support 1.0 features using a 1.2 message, though of course it will lose any features that 1.2 added. Thus, accepting such a message could have two possible outcomes:

    1. The message at version 1.2 might look and behave exactly like it did at version 1.0, in which case the message will process without any trouble.

    2. The message might contain some fields that are unrecognized and need to be ignored.

    In case 2, it is best practice for the recipient to send a problem-report that is a warning, not an error, announcing that some fields could not be processed (code = fields-ignored-due-to-version-mismatch). Such a message is in addition to any response that the protocol demands of the recipient.

    If the recipient of a protocol's initial message generates a response, the response should use the latest major.minor protocol version that both parties support and know about. Generally, all messages after the first use only major.minor.
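    The received message handling rules above can be sketched as follows (the function name and return values are illustrative, not from this RFC; patch, prerelease, and build segments are omitted for brevity):

    ```python
    def handle_version(supported: str, received: str) -> str:
        """Decide how to treat a received message, given plain
        "major.minor" version strings."""
        s_major, s_minor = (int(x) for x in supported.split("."))
        r_major, r_minor = (int(x) for x in received.split("."))
        if s_major != r_major:
            return "reject"                # problem-report: version-not-supported
        if s_major == 0 and s_minor != r_minor:
            return "reject"                # pre-1.0 minor bumps are breaking
        if r_minor < s_minor:
            return "accept-degraded"       # degrade gracefully to earlier minor
        if r_minor > s_minor:
            return "accept-ignore-fields"  # process, ignoring unknown fields
        return "accept"
    ```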

    "},{"location":"aip2/0003-protocols/#state-details-and-state-machines","title":"State Details and State Machines","text":"

    While some protocols have only one sequence of states to manage, in most protocols the different roles perceive the interaction differently. The sequence of states for each role needs to be described with care in the RFC.

    "},{"location":"aip2/0003-protocols/#state-machines","title":"State Machines","text":"

    By convention, protocol state and sequence rules are described using the concept of state machines, and we encourage developers who implement protocols to build them that way.

    Among other benefits, this helps with error handling: when one agent sends a problem-report message to another, the message can make it crystal clear which state it has fallen back to as a result of the error.

    Many developers will have encountered a formal definition of state machines as they wrote parsers or worked on other highly demanding tasks, and may worry that state machines are heavy and intimidating. But as they are used in Aries protocols, state machines are straightforward and elegant. They cleanly encapsulate logic that would otherwise be a bunch of conditionals scattered throughout agent code. The tictactoe protocol example includes a complete state machine in less than 50 lines of Python code, with tests.

    For an extended discussion of how state machines can be used, including in nested protocols, and with hooks that let custom processing happen at each point in a flow, see https://github.com/dhh1128/distributed-state-machine.
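    As a flavor of how small such a state machine can be, here is a minimal requester-side machine for a request-response style protocol (an illustrative sketch, not the tictactoe example itself; the state and event names are invented):

    ```python
    class RequesterStateMachine:
        """Requester-side states for a request-response style protocol."""

        TRANSITIONS = {
            ("start", "send-request"): "awaiting-response",
            ("awaiting-response", "receive-response"): "done",
            ("awaiting-response", "receive-problem-report"): "start",
        }

        def __init__(self):
            self.state = "start"

        def handle(self, event: str) -> str:
            key = (self.state, event)
            if key not in self.TRANSITIONS:
                # Invalid event for this state; a real agent might respond
                # with a problem-report instead of raising.
                raise ValueError(f"{event!r} not allowed in state {self.state!r}")
            self.state = self.TRANSITIONS[key]
            return self.state
    ```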

    "},{"location":"aip2/0003-protocols/#processing-points","title":"Processing Points","text":"

    A protocol definition describes key points in the flow where business logic can attach. Some of these processing points are obvious, because the protocol makes calls for decisions to be made. Others are implicit. Some examples include:

    "},{"location":"aip2/0003-protocols/#roles-participants-parties-and-controllers","title":"Roles, Participants, Parties, and Controllers","text":""},{"location":"aip2/0003-protocols/#roles","title":"Roles","text":"

    The roles in a protocol are the perspectives (responsibilities, privileges) that parties take in an interaction.

    This perspective is manifested in three general ways:

    Like parties, roles are normally known at the start of the protocol, but this is not a requirement.

    In an auction protocol, there are only two roles\u2014auctioneer and bidder\u2014even though there may be many parties involved.

    "},{"location":"aip2/0003-protocols/#participants","title":"Participants","text":"

    The participants in a protocol are the agents that send and/or receive plaintext application-level messages that embody the protocol's interaction. Alice, Bob, and Carol may each have a cloud agent, a laptop, and a phone; if they engage in an introduction protocol using phones, then the agents on their phones are the participants. If the phones talk directly over Bluetooth, this is particularly clear--but even if the phones leverage push notifications and HTTP such that cloud agents help with routing, only the phone agents are participants, because only they maintain state for the interaction underway. (The cloud agents would be facilitators, and the laptops would be bystanders). When a protocol is complete, the participant agents know about the outcome; they may need to synchronize or replicate their state before other agents of the parties are aware.

    "},{"location":"aip2/0003-protocols/#parties","title":"Parties","text":"

    The parties to a protocol are the entities directly responsible for achieving the protocol's goals. When a protocol is high-level, parties are typically people or organizations; as protocols become lower-level, parties may be specific agents tasked with detail work through delegation.

    Imagine a situation where Alice wants a vacation. She engages with a travel agent named Bob. Together, they begin an \"arrange a vacation\" protocol. Alice is responsible for expressing her parameters and proving her willingness to pay; Bob is responsible for running a bunch of subprotocols to work out the details. Alice and Bob--not the software agents they use--are parties to this high-level protocol, since they share responsibility for its goals.

    As soon as Alice has provided enough direction and hangs up the phone, Bob begins a sub-protocol with a hotel to book a room for Alice. This sub-protocol has related but different goals--it is about booking a particular hotel room, not about the vacation as a whole. We can see the difference when we consider that Bob could abandon the booking and choose a different hotel entirely, without affecting the overarching \"arrange a vacation\" protocol.

    With the change in goal, the parties have now changed, too. Bob and a hotel concierge are the ones responsible for making the \"book a hotel room\" protocol progress. Alice is an approver and indirect stakeholder, but she is not doing the work. (In RACI terms, Alice is an \"accountable\" or \"approving\" entity, but only Bob and the concierge are \"responsible\" parties.)

    Now, as part of the hotel reservation, Bob tells the concierge that the guest would like access to a waverunner to play in the ocean on day 2. The concierge engages in a sub-sub-protocol to reserve the waverunner. The goal of this sub-sub-protocol is to reserve the equipment, not to book a hotel or arrange a vacation. The parties to this sub-sub-protocol are the concierge and the person or automated system that manages waverunners.

    Often, parties are known at the start of a protocol; however, that is not a requirement. Some protocols might commence with some parties not yet known or assigned.

    For many protocols, there are only two parties, and they are in a pairwise relationship. Other protocols are more complex. An introduction involves three parties; an auction may involve many.

    Normally, the parties that are involved in a protocol also participate in the interaction, but this is not always the case. Consider a gossip protocol: two parties may be talking about a third party. In this case, the third party would not even know that the protocol was happening and would definitely not participate.

    "},{"location":"aip2/0003-protocols/#controllers","title":"Controllers","text":"

    The controllers in a protocol are entities that make decisions. They may or may not be direct parties.

    Imagine a remote chess game between Bob and Carol, conducted with software agents. The chess protocol isn't technically about how to select a wise chess move; it's about communicating the moves so parties achieve the shared goal of running a game to completion. Yet choices about moves are clearly made as the protocol unfolds. These choices are made by controllers--Bob and Carol--while the agents responsible for the work of moving the game forward wait with the protocol suspended.

    In this case, Bob and Carol could be analyzed as parties to the protocol, as well as controllers. But in other cases, the concepts are distinct. For example, in a protocol to issue credentials, the issuing institution might use an AI and/or business automation as a controller.

    "},{"location":"aip2/0003-protocols/#instructions-for-protocol-rfcs","title":"Instructions for Protocol RFCs","text":"

    A protocol RFC conforms to general RFC patterns, but includes some specific substructure.

    Please see the special protocol RFC template for details.

    "},{"location":"aip2/0003-protocols/#drawbacks","title":"Drawbacks","text":"

    This RFC creates some formalism around defining protocols. It doesn't go nearly as far as SOAP or CORBA/COM did, but it is slightly more demanding of a protocol author than the familiar world of RESTful Swagger/OpenAPI.

    The extra complexity is justified by the greater demands that agent-to-agent communications place on the protocol definition. See notes in Prior Art section for details.

    "},{"location":"aip2/0003-protocols/#rationale-and-alternatives","title":"Rationale and alternatives","text":"

    Some of the simplest DIDComm protocols could be specified in a Swagger/OpenAPI style. This would give some nice tooling. However, not all fit into that mold. It may be desirable to create conversion tools that allow Swagger interop.

    "},{"location":"aip2/0003-protocols/#prior-art","title":"Prior art","text":""},{"location":"aip2/0003-protocols/#bpmn","title":"BPMN","text":"

    BPMN (Business Process Model and Notation) is a graphical language for modeling flows of all types (plus things less like our protocols as well). BPMN is a mature standard sponsored by OMG(Object Management Group). It has a nice tool ecosystem (such as this). It also has an XML file format, so the visual diagrams have a two-way transformation to and from formal written language. And it has a code generation mode, where BPMN can be used to drive executable behavior if diagrams are sufficiently detailed and sufficiently standard. (Since BPMN supports various extensions and is often used at various levels of formality, execution is not its most common application.)

    BPMN began with a focus on centralized processes (those driven by a business entity), with diagrams organized around the goal of the point-of-view entity and what they experience in the interaction. This is somewhat different from a DIDComm protocol where any given entity may experience the goal and the scope of interaction differently; the state machine for a home inspector in the \"buy a home\" protocol is quite different, and somewhat separable, from the state machine of the buyer, and that of the title insurance company.

    BPMN 2.0 introduced the notion of a choreography, which is much closer to the concept of an A2A protocol, and which has quite an elegant and intuitive visual representation. However, even a BPMN choreography doesn't have a way to discuss interactions with decorators, adoption of generic messages, and other A2A-specific concerns. Thus, we may lean on BPMN for some diagramming tasks, but it is not a substitute for the RFC definition procedure described here.

    "},{"location":"aip2/0003-protocols/#wsdl","title":"WSDL","text":"

    WSDL (Web Services Description Language) is a web-centric evolution of earlier, RPC-style interface definition languages like IDL in all its varieties and CORBA. These technologies describe a called interface, but they don't describe the caller, and they lack a formalism for capturing state changes, especially by the caller. They are also out of favor in the programmer community at present, as being too heavy, too fragile, or poorly supported by current tools.

    "},{"location":"aip2/0003-protocols/#swagger-openapi","title":"Swagger / OpenAPI","text":"

    Swagger / OpenAPI overlaps with some of the concerns of protocol definition in agent-to-agent interactions. We like the tools and the convenience of the paradigm offered by OpenAPI, but where the two paradigms do not overlap, we have an impedance mismatch.

    Agent-to-agent protocols must support more than 2 roles, or two roles that are peers, whereas RESTful web services assume just client and server--and only the server has a documented API.

    Agent-to-agent protocols are fundamentally asynchronous, whereas RESTful web services mostly assume synchronous request-response.

    Agent-to-agent protocols have complex considerations for diffuse trust, whereas RESTful web services centralize trust in the web server.

    Agent-to-agent protocols need to support transports beyond HTTP, whereas RESTful web services do not.

    Agent-to-agent protocols are nestable, while RESTful web services don't provide any special support for that construct.

    "},{"location":"aip2/0003-protocols/#other","title":"Other","text":""},{"location":"aip2/0003-protocols/#unresolved-questions","title":"Unresolved questions","text":""},{"location":"aip2/0003-protocols/#implementations","title":"Implementations","text":"

    The following lists the implementations (if any) of this RFC. Please do a pull request to add your implementation. If the implementation is open source, include a link to the repo or to the implementation within the repo. Please be consistent in the \"Name\" field so that a mechanical processing of the RFCs can generate a list of all RFCs supported by an Aries implementation.

    Name / Link Implementation Notes Indy Cloud Agent - Python several protocols, circa Feb 2019 Aries Framework - .NET several protocols, circa Feb 2019 Streetcred.id several protocols, circa Feb 2019 Aries Cloud Agent - Python numerous protocols plus extension mechanism for pluggable protocols Aries Static Agent - Python 2 or 3 protocols Aries Framework - Go DID Exchange Connect.Me mature but proprietary protocols; community protocols in process Verity mature but proprietary protocols; community protocols in process Aries Protocol Test Suite 2 or 3 core protocols; active work to implement all that are ACCEPTED, since this tests conformance of other agents Pico Labs implemented protocols: connections, trust_ping, basicmessage, routing"},{"location":"aip2/0003-protocols/roles-participants-etc/","title":"Roles participants etc","text":""},{"location":"aip2/0003-protocols/roles-participants-etc/#roles-participants-parties-and-controllers","title":"Roles, Participants, Parties, and Controllers","text":""},{"location":"aip2/0003-protocols/roles-participants-etc/#roles","title":"Roles","text":"

    The roles in a protocol are the perspectives (responsibilities, privileges) that parties take in an interaction.

    This perspective is manifested in three general ways:

    Like parties, roles are normally known at the start of the protocol but this is not a requirement.

    In an auction protocol, there are only two roles\u2014auctioneer and bidder\u2014even though there may be many parties involved.

    "},{"location":"aip2/0003-protocols/roles-participants-etc/#participants","title":"Participants","text":"

    The participants in a protocol are the agents that send and/or receive plaintext application-level messages that embody the protocol's interaction. Alice, Bob, and Carol may each have a cloud agent, a laptop, and a phone; if they engage in an introduction protocol using phones, then the agents on their phones are the participants. If the phones talk directly over Bluetooth, this is particularly clear--but even if the phones leverage push notifications and HTTP such that cloud agents help with routing, only the phone agents are participants, because only they maintain state for the interaction underway. (The cloud agents would be facilitators, and the laptops would be bystanders). When a protocol is complete, the participant agents know about the outcome; they may need to synchronize or replicate their state before other agents of the parties are aware.

    "},{"location":"aip2/0003-protocols/roles-participants-etc/#parties","title":"Parties","text":"

    The parties to a protocol are the entities directly responsible for achieving the protocol's goals. When a protocol is high-level, parties are typically people or organizations; as protocols become lower-level, parties may be specific agents tasked with detail work through delegation.

    Imagine a situation where Alice wants a vacation. She engages with a travel agent named Bob. Together, they begin an \"arrange a vacation\" protocol. Alice is responsible for expressing her parameters and proving her willingness to pay; Bob is responsible for running a bunch of subprotocols to work out the details. Alice and Bob--not software agents they use--are parties to this high-level protocol, since they share responsibility for its goals.

    As soon as Alice has provided enough direction and hangs up the phone, Bob begins a sub-protocol with a hotel to book a room for Alice. This sub-protocol has related but different goals--it is about booking a particular hotel room, not about the vacation as a whole. We can see the difference when we consider that Bob could abandon the booking and choose a different hotel entirely, without affecting the overarching \"arrange a vacation\" protocol.

    With the change in goal, the parties have now changed, too. Bob and a hotel concierge are the ones responsible for making the \"book a hotel room\" protocol progress. Alice is an approver and indirect stakeholder, but she is not doing the work. (In RACI terms, Alice is an \"accountable\" or \"approving\" entity, but only Bob and the concierge are \"responsible\" parties.)

    Now, as part of the hotel reservation, Bob tells the concierge that the guest would like access to a waverunner to play in the ocean on day 2. The concierge engages in a sub-sub-protocol to reserve the waverunner. The goal of this sub-sub-protocol is to reserve the equipment, not to book a hotel or arrange a vacation. The parties to this sub-sub-protocol are the concierge and the person or automated system that manages waverunners.

    Often, parties are known at the start of a protocol; however, that is not a requirement. Some protocols might commence with some parties not yet known or assigned.

    For many protocols, there are only two parties, and they are in a pairwise relationship. Other protocols are more complex. An introduction involves three parties; an auction may involve many.

    Normally, the parties that are involved in a protocol also participate in the interaction, but this is not always the case. Consider a gossip protocol: two parties may be talking about a third party. In this case, the third party would not even know that the protocol was happening and would definitely not participate.

    "},{"location":"aip2/0003-protocols/roles-participants-etc/#controllers","title":"Controllers","text":"

    The controllers in a protocol are entities that make decisions. They may or may not be direct parties.

    Imagine a remote chess game between Bob and Carol, conducted with software agents. The chess protocol isn't technically about how to select a wise chess move; it's about communicating the moves so parties achieve the shared goal of running a game to completion. Yet choices about moves are clearly made as the protocol unfolds. These choices are made by controllers--Bob and Carol--while the agents responsible for the work of moving the game forward wait with the protocol suspended.

    In this case, Bob and Carol could be analyzed as parties to the protocol, as well as controllers. But in other cases, the concepts are distinct. For example, in a protocol to issue credentials, the issuing institution might use an AI and/or business automation as a controller.

    "},{"location":"aip2/0003-protocols/tictactoe/","title":"Tic Tac Toe Protocol 1.0","text":""},{"location":"aip2/0003-protocols/tictactoe/#summary","title":"Summary","text":"

    Describes a simple protocol, already familiar to most developers, as a way to demonstrate how all protocols should be documented.

    "},{"location":"aip2/0003-protocols/tictactoe/#motivation","title":"Motivation","text":"

    Playing tic-tac-toe is a good way to test whether agents are working properly, since it requires two parties to take turns and to communicate reliably about state. However, it is also pretty simple, and it has a low bar for trust (it's not dangerous to play tic-tac-toe with a malicious stranger). Thus, we expect agent tic-tac-toe to be a good way to test basic plumbing and to identify functional gaps. The game also provides a way of testing interactions with the human owners of agents, or of hooking up an agent AI.

    "},{"location":"aip2/0003-protocols/tictactoe/#tutorial","title":"Tutorial","text":"

    Tic-tac-toe is a simple game where players take turns placing Xs and Os in a 3x3 grid, attempting to capture 3 cells of the grid in a straight line.

    "},{"location":"aip2/0003-protocols/tictactoe/#name-and-version","title":"Name and Version","text":"

    This defines the tictactoe protocol, version 1.x, as identified by the following PIURI:

    did:sov:SLfEi9esrjzybysFxQZbfq;spec/tictactoe/1.0\n
    "},{"location":"aip2/0003-protocols/tictactoe/#key-concepts","title":"Key Concepts","text":"

    A tic-tac-toe game is an interaction where 2 parties take turns to make up to 9 moves. It starts when either party proposes the game, and ends when one of the parties wins, or when all cells in the grid are occupied but nobody has won (a draw).

    Note: Optionally, a Tic-Tac-Toe game can be preceded by a Coin Flip Protocol to decide who goes first. This is not a high-value enhancement, but we add it for illustration purposes. If used, the choice-id field in the initial propose message of the Coin Flip should have the value did:sov:SLfEi9esrjzybysFxQZbfq;spec/tictactoe/1.0/who-goes-first, and the caller-wins and flipper-wins fields should contain the DIDs of the two players.

    Illegal moves and moving out of turn are errors that trigger a complaint from the other player. However, they do not scuttle the interaction. A game can also be abandoned in an unfinished state by either player, for any reason. Games can last any amount of time.

    About the Key Concepts section: Here we describe the flow at a very\nhigh level. We identify preconditions, ways the protocol can start\nand end, and what can go wrong. We also talk about timing\nconstraints and other assumptions.\n
    "},{"location":"aip2/0003-protocols/tictactoe/#roles","title":"Roles","text":"

    There are two parties in a tic-tac-toe game, but only one role, player. One player places 'X' for the duration of a game; the other places 'O'. There are no special requirements about who can be a player. The parties do not need to be trusted or even known to one another, either at the outset or as the game proceeds. No prior setup is required, other than an ability to communicate.

    About the Roles section: Here we name the roles in the protocol,\nsay who and how many can play each role, and describe constraints.\nWe also explore qualifications for roles.\n
    "},{"location":"aip2/0003-protocols/tictactoe/#states","title":"States","text":"

    The states of each player in the protocol evolve according to the following state machine:

    When a player is in the my-move state, possible valid events include send move (the normal case), send outcome (if the player decides to abandon the game), and receive outcome (if the other player decides to abandon). A receive move event could conceivably occur, too-- but it would be an error on the part of the other player, and would trigger a problem-report message as described above, leaving the state unchanged.

    In the their-move state, send move is an impossible event for a properly behaving player. All 3 of the other events could occur, causing a state transition.

    In the wrap-up state, the game is over, but communication with the outcome message has not yet occurred. The logical flow is send outcome, whereupon the player transitions to the done state.

    About the States section: Here we explain which states exist for each\nrole. We also enumerate the events that can occur, including messages,\nerrors, or events triggered by surrounding context, and what should\nhappen to state as a result. In this protocol, we only have one role,\nand thus only one state machine matrix. But in many protocols, each\nrole may have a different state machine.\n
    "},{"location":"aip2/0003-protocols/tictactoe/#messages","title":"Messages","text":"

    All messages in this protocol are part of the \"tictactoe 1.0\" message family uniquely identified by this DID reference: did:sov:SLfEi9esrjzybysFxQZbfq;spec/tictactoe/1.0

    NOTE 1: All the messages defined in a protocol should follow DIDComm best practices as far as how they name fields and define their data types and semantics. NOTE 2 about the \"DID Reference\" URI that appears here: DIDs can be resolved to a DID doc that contains an endpoint, to which everything after a semicolon can be appended. Thus, if this DID is publicly registered and its DID doc gives an endpoint of http://example.com, this URI would mean that anyone can find a formal definition of the protocol at http://example.com/spec/tictactoe/1.0. It is also possible to use a traditional URI here, such as http://example.com/spec/tictactoe/1.0. If that sort of URI is used, it is best practice for it to reference immutable content, as with a link to specific commit on github: https://github.com/hyperledger/aries-rfcs/blob/ab7a04f/concepts/0003-protocols/tictactoe/README.md#messages"},{"location":"aip2/0003-protocols/tictactoe/#move-message","title":"move message","text":"

    The protocol begins when one party sends a move message to the other. It looks like this:

    @id is required here, as it establishes a message thread that will govern the rest of the game.

    me tells which mark (X or O) the sender is placing. It is required.

    moves is optional in the first message of the interaction. If missing or empty, the sender of the first message is inviting the recipient to make the first move. If it contains a move, the sender is moving first.

    Moves are strings like \"X:B2\" that match the regular expression (?i)[XO]:[A-C][1-3]. They identify a mark to be placed (\"X\" or \"O\") and a position in the 3x3 grid. The grid's columns and rows are numbered like familiar spreadsheets, with columns A, B, and C, and rows 1, 2, and 3.
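    As a quick illustration, move strings can be checked against that pattern. A minimal Python sketch (the helper name and the ^/$ anchors are additions for whole-string validation, not part of the protocol):

    ```python
    import re

    # Pattern from the protocol text: a mark (X or O), a colon, and a cell
    # in columns A-C and rows 1-3, case-insensitive; anchored here so the
    # whole string must be a move.
    MOVE_RE = re.compile(r"(?i)^[XO]:[A-C][1-3]$")

    def is_valid_move(move: str) -> bool:
        """Return True if the string is a well-formed move like 'X:B2'."""
        return bool(MOVE_RE.match(move))

    print(is_valid_move("X:B2"))   # True
    print(is_valid_move("o:c3"))   # True (the pattern is case-insensitive)
    print(is_valid_move("X:D1"))   # False (there is no column D)
    ```
    
    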

    comment is optional and probably not used much, but could be a way for players to razz one another or chat as they play. It follows the conventions of localized messages.

    Other decorators could be placed on tic-tac-toe messages, such as those to enable message timing to force players to make a move within a certain period of time.

    "},{"location":"aip2/0003-protocols/tictactoe/#subsequent-moves","title":"Subsequent Moves","text":"

    Once the initial move message has been sent, game play continues by each player taking turns sending responses, which are also move messages. With each new message the moves array inside the message grows by one, ensuring that the players agree on the current accumulated state of the game. The me field is still required and must accurately reflect the role of the message sender; it thus alternates values between X and O.

    Subsequent messages in the game use the message threading mechanism where the @id of the first move becomes the ~thread.thid for the duration of the game.
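    For instance, a second move message carrying this threading data might look roughly like the following, rendered as a Python dict; the type URI suffix, the IDs, and the field values are illustrative assumptions:

    ```python
    # Illustrative only: a subsequent move in an ongoing game thread.
    move_2 = {
        "@type": "did:sov:SLfEi9esrjzybysFxQZbfq;spec/tictactoe/1.0/move",  # assumed suffix
        "@id": "msg-2",                   # this message's own id (made up)
        "~thread": {"thid": "msg-1"},     # @id of the first move, per the threading rule
        "me": "O",                        # sender is placing O this turn
        "moves": ["X:B2", "O:A1"],        # accumulated state, grown by one move
    }
    print(move_2["~thread"]["thid"])  # msg-1
    ```
    
    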

    An evolving sequence of move messages might thus look like this, suppressing all fields except what's required:

    "},{"location":"aip2/0003-protocols/tictactoe/#messagemove-2","title":"Message/Move 2","text":"

    This is the first message in the thread that's sent by the player placing \"O\"; hence it has myindex = 0.

    "},{"location":"aip2/0003-protocols/tictactoe/#messagemove-3","title":"Message/Move 3","text":"

    This is the second message in the thread by the player placing \"X\"; hence it has myindex = 1.

    "},{"location":"aip2/0003-protocols/tictactoe/#messagemove-4","title":"Message/Move 4","text":"

    ...and so forth.

    Note that the order of the items in the moves array is NOT significant. The state of the game at any given point of time is fully captured by the moves, regardless of the order in which they were made.
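    To make this concrete, here is a hedged Python sketch showing that the board state, and a winner, can be derived from the moves array regardless of order; the helper names are illustrative:

    ```python
    # Winning lines in a 3x3 grid addressed like a spreadsheet (A1..C3).
    WINS = [
        ["A1", "B1", "C1"], ["A2", "B2", "C2"], ["A3", "B3", "C3"],  # rows 1-3
        ["A1", "A2", "A3"], ["B1", "B2", "B3"], ["C1", "C2", "C3"],  # columns A-C
        ["A1", "B2", "C3"], ["A3", "B2", "C1"],                      # diagonals
    ]

    def board_from_moves(moves):
        """Map cell -> mark, e.g. {'B2': 'X'}; move order is irrelevant."""
        board = {}
        for move in moves:
            mark, cell = move.upper().split(":")
            board[cell] = mark
        return board

    def winner(moves):
        """Return 'X' or 'O' if a straight line is captured, else None."""
        board = board_from_moves(moves)
        for line in WINS:
            marks = {board.get(cell) for cell in line}
            if len(marks) == 1 and None not in marks:
                return marks.pop()
        return None

    print(winner(["X:A1", "O:B2", "X:A2", "O:C1", "X:A3"]))  # X (column A)
    ```
    
    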

    If a player makes an illegal move or another error occurs, the other player can complain using a problem-report message, with explain.@l10n.code set to one of the values defined in the Message Catalog section (see below).

    "},{"location":"aip2/0003-protocols/tictactoe/#outcome-message","title":"outcome message","text":"

    Game play ends when one player sends a move message that manages to mark 3 cells in a row. Thereupon, it is best practice, but not strictly required, for the other player to send an acknowledgement in the form of an outcome message.

    The moves and me fields from a move message can also, optionally, be included to further document state. The winner field is required. Its value may be \"X\", \"O\", or--in the case of a draw--\"none\".

    This outcome message can also be used to document an abandoned game, in which case winner is null, and comment can be used to explain why (e.g., timeout, loss of interest).

    About the Messages section: Here we explain the message types, but\nalso which roles send which messages, what sequencing rules apply,\nand how errors may occur during the flow. The message begins with\nan announcement of the identifier and version of the message\nfamily, and also enumerates error codes to be used with problem\nreports. This protocol is simple enough that we document the\ndatatypes and validation rules for fields inline in the narrative;\nin more complex protocols, we'd move that text into the Reference\n> Messages section instead.\n
    "},{"location":"aip2/0003-protocols/tictactoe/#constraints","title":"Constraints","text":"

    Players do not have to trust one another. Messages do not have to be authcrypted, although anoncrypted messages still have to have a path back to the sender to be useful.

    About the Constraints section: Many protocols have rules\nor mechanisms that help parties build trust. For example, in buying\na house, the protocol includes such things as commission paid to\nrealtors to guarantee their incentives, title insurance, earnest\nmoney, and a phase of the process where a home inspection takes\nplace. If you are documenting a protocol that has attributes like\nthese, explain them here.\n
    "},{"location":"aip2/0003-protocols/tictactoe/#reference","title":"Reference","text":"
    About the Reference section: If the Tutorial > Messages section\nsuppresses details, we would add a Messages section here to\nexhaustively describe each field. We could also include an\nExamples section to show variations on the main flow.\n
    "},{"location":"aip2/0003-protocols/tictactoe/#collateral","title":"Collateral","text":"

    A reference implementation of the logic of a game is provided with this RFC as python 3.x code. See game.py. There is also a simple hand-coded AI that can play the game when plugged into an agent (see ai.py), and a set of unit tests that prove correctness (see test_tictactoe.py).

    A full implementation of the state machine is provided as well; see state_machine.py and test_state_machine.py.

    The game can be played interactively by running python game.py.

    "},{"location":"aip2/0003-protocols/tictactoe/#localization","title":"Localization","text":"

    The only localizable field in this message family is comment on both move and outcome messages. It contains ad hoc text supplied by the sender, instead of a value selected from an enumeration and identified by code for use with message catalogs. This means the only approach to localize move or outcome messages is to submit comment fields to an automated translation service. Because the locale of tictactoe messages is not predefined, each message must be decorated with ~l10n.locale to make automated translation possible.

    There is one other way that localization is relevant to this protocol: in error messages. Errors are communicated through the general problem-report message type rather than through a special message type that's part of the tictactoe family. However, we define a catalog of tictactoe-specific error codes below to make this protocol's specific error strings localizable.

    Thus, all instances of this message family carry localization metadata in the form of an implicit ~l10n decorator that looks like this:

    This JSON fragment is checked in next to the narrative content of this RFC as ~l10n.json, for easy machine parsing.

    Individual messages can use the ~l10n decorator to supplement or override these settings.

    For more information about localization concepts, see the RFC about localized messages.

    "},{"location":"aip2/0003-protocols/tictactoe/#message-catalog","title":"Message Catalog","text":"

    To facilitate localization of error messages, all instances of this message family assume the following catalog in their ~l10n data:

    When referencing this catalog, please be sure you have the correct version. The official, immutable URL to this version of the catalog file is:

    https://github.com/hyperledger/indy-hipe/blob/fc7a6028/text/tictactoe-protocol/catalog.json\n

    This JSON fragment is checked in next to the narrative content of this RFC as catalog.json, for easy machine parsing. The catalog currently contains localized alternatives only for English. Other language contributions would be welcome.

    For more information, see the Message Catalog section of the localization HIPE.

    "},{"location":"aip2/0003-protocols/tictactoe/#implementations","title":"Implementations","text":"

    The following lists the implementations (if any) of this RFC. Please do a pull request to add your implementation. If the implementation is open source, include a link to the repo or to the implementation within the repo. Please be consistent in the \"Name\" field so that a mechanical processing of the RFCs can generate a list of all RFCs supported by an Aries implementation.

    Name / Link Implementation Notes Verity Commercially licensed enterprise agent, SaaS or on-prem. Pico Labs Open source TicTacToe for Pico Agents"},{"location":"aip2/0004-agents/","title":"Aries RFC 0004: Agents","text":""},{"location":"aip2/0004-agents/#summary","title":"Summary","text":"

    Provide a high-level introduction to the concepts of agents in the self-sovereign identity ecosystem.

    "},{"location":"aip2/0004-agents/#tutorial","title":"Tutorial","text":"

    Managing an identity is complex. We need tools to help us.

    In the physical world, we often delegate complexity to trusted proxies that can help. We hire an accountant to do our taxes, a real estate agent to help us buy a house, and a talent agent to help us pitch an album to a recording studio.

    On the digital landscape, humans and organizations (and sometimes, things) cannot directly consume and emit bytes, store and manage data, or perform the crypto that self-sovereign identity demands. They need delegates--agents--to help. Agents are a vital dimension across which we exercise sovereignty over identity.

    "},{"location":"aip2/0004-agents/#essential-characteristics","title":"Essential Characteristics","text":"

    When we use the term \"agent\" in the SSI community, we more properly mean \"an agent of self-sovereign identity.\" This means something more specific than just a \"user agent\" or a \"software agent.\" Such an agent has three defining characteristics:

    1. It acts as a fiduciary on behalf of a single identity owner (or, for agents of things like IoT devices and pets, a single controller).
    2. It holds cryptographic keys that uniquely embody its delegated authorization.
    3. It interacts using interoperable DIDComm protocols.

    These characteristics don't tie an agent to any particular blockchain. It is possible to implement agents without any use of blockchain at all (e.g., with peer DIDs), and some efforts to do so are quite active.

    "},{"location":"aip2/0004-agents/#canonical-examples","title":"Canonical Examples","text":"

    Three types of agents are especially common:

    1. A mobile app that Alice uses to manage credentials and to connect to others is an agent for Alice.
    2. A cloud-based service that Alice uses to expose a stable endpoint where other agents can talk to her is an agent for Alice.
    3. A server run by Faber College, allowing it to issue credentials to its students, is an agent for Faber.

    Depending on your perspective, you might describe these agents in various ways. #1 can correctly be called a \"mobile\" or \"edge\" or \"rich\" agent. #2 can be called a \"cloud\" or \"routing\" agent. #3 can be called an \"on-prem\" or \"edge\" or \"advanced\" agent. See Categorizing Agents for a discussion about why multiple labels are correct.

    Agents can be other things as well. They can be big or small, complex or simple. They can interact and be packaged in various ways. They can be written in a host of programming languages. Some are more canonical than others. But all the ones we intend to interact with in the self-sovereign identity problem domain share the three essential characteristics described above.

    "},{"location":"aip2/0004-agents/#how-agents-talk","title":"How Agents Talk","text":"

    DID communication (DIDComm), and the protocols built atop it are each rich subjects unto themselves. Here, we will stay very high-level.

    Agents can use many different communication transports: HTTP(S) 1.x and 2.0, WebSockets, IRC, Bluetooth, AMQP, NFC, Signal, email, push notifications to mobile devices, ZMQ, and more. However, all A2A communication is message-based, and is secured by modern, best-practice public key cryptography. How messages flow over a transport may vary--but their security and privacy toolset, their links to the DIDs and DID Docs of identity owners, and the ways their messages are packaged and handled are standard.

    Agents connect to one another through a standard connection protocol, discover one another's endpoints and keys through standard DID Docs, discover one another's features in a standard way, and maintain relationships in a standard way. All of these points of standardization are what makes them interoperable.

    Because agents speak so many different ways, and because many of them won't have a permanent, accessible point of presence on the network, they can't all be thought of as web servers with a Swagger-compatible API for request-response. The analog to an API construct in agent-land is protocols. These are patterns for stateful interactions. They specify things like, \"If you want to negotiate a sale with an agent, send it a message of type X. It will respond with a message of type Y or type Z, or with an error message of type W. Repeat until the negotiation finishes.\" Some interesting A2A protocols include the one where two parties connect to one another to build a relationship, the one where agents discover which protocols they each support, the one where credentials are issued, and the one where proof is requested and sent. Hundreds of other protocols are being defined.

    "},{"location":"aip2/0004-agents/#how-to-get-an-agent","title":"How to Get an Agent","text":"

    As the ecosystem for self-sovereign identity matures, the average person or organization will get an agent by downloading it from the app store, installing it with their OS package manager, or subscribing to it as a service. However, the availability of quality pre-packaged agents is still limited today.

    Agent providers are emerging in the marketplace, though. Some are governments, NGOs, or educational institutions that offer agents for free; others are for-profit ventures. If you'd like suggestions about ready-to-use agent offerings, please describe your use case in #aries on chat.hyperledger.org.

    There is also intense activity in the SSI community around building custom agents and the tools and processes that enable them. A significant amount of early work occurred in the Indy Agent Community with some of those efforts materializing in the indy-agent repo on github.com and other code bases. The indy-agent repo is now deprecated but is still valuable in demonstrating the basics of agents. With the introduction of Hyperledger Aries, agent efforts are migrating from the Indy Agent community.

    Hyperledger Aries provides a number of code bases ranging from agent frameworks to tools to aid in development to ready-to-use agents.

    "},{"location":"aip2/0004-agents/#how-to-write-an-agent","title":"How to Write an Agent","text":"

    This is one of the most common questions that Aries newcomers ask. It's a challenging one to answer, because it's so open-ended. It's sort of like someone asking, \"Can you give me a recipe for dinner?\" The obvious follow-up question would be, \"What type of dinner did you have in mind?\"

    Here are some thought questions to clarify intent:

    "},{"location":"aip2/0004-agents/#general-patterns","title":"General Patterns","text":"

    We said it's hard to provide a recipe for an agent without specifics. However, the majority of agents do have two things in common: they listen to and process A2A messages, and they use a wallet to manage keys, credentials, and other sensitive material. Unless you have use cases that involve IoT, cron jobs, or web hooks, your agent is likely to fit this mold.

    The heart of such an agent is probably a message-handling loop, with pluggable protocols to give it new capabilities, and pluggable transports to let it talk in different ways. The pseudocode for its main function might look like this:

    "},{"location":"aip2/0004-agents/#pseudocode-for-main","title":"Pseudocode for main()","text":"
    1  While not done:\n2      Get next message.\n3      Verify it (decrypt, identify sender, check signature...).\n4      Look at the type of the plaintext message.\n5      Find a plugged-in protocol handler that matches that type.\n6      Give plaintext message and security metadata to handler.\n

    Line 2 can be done via standard HTTP dispatch, or by checking an email inbox, or in many other ways. Line 3 can be quite sophisticated--the sender will not be Alice, but rather one of the agents that she has authorized. Verification may involve consulting cached information and/or a blockchain where a DID and DID Doc are stored, among other things.
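    A minimal Python sketch of such a loop might look like the following. This is an illustration only, not an Aries API: the `transport`, `verify`, and `handlers` names are hypothetical stand-ins for whatever inbound channel, envelope verification, and protocol-handler registry a real agent uses.

    ```python
    def run_agent(transport, verify, handlers):
        """Toy message-handling loop mirroring the pseudocode above.

        transport: iterable yielding raw inbound messages (HTTP body, email, ...)
        verify:    decrypts/authenticates, returning (plaintext, security metadata)
        handlers:  maps a message @type string to a protocol handler callable
        """
        for raw in transport:                  # get next message
            plaintext, metadata = verify(raw)  # decrypt, identify sender
            handler = handlers.get(plaintext["@type"])  # match on message type
            if handler is None:
                continue  # a real agent would report a problem-report instead
            handler(plaintext, metadata)       # hand off to the protocol handler
    ```

    A real agent would add error handling and a problem-report response for unknown types; the shape of the loop is the point here.
    
    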

    The pseudocode for each protocol handler it loads might look like:

    "},{"location":"aip2/0004-agents/#pseudocode-for-protocol-handler","title":"Pseudocode for protocol handler","text":"
    1  Check authorization against metadata. Reject if needed.\n2  Read message header. Is it part of an ongoing interaction?\n3  If yes, load persisted state.\n4  Process the message and update interaction state.\n5  If a response is appropriate:\n6      Prepare response content.\n7      Ask my outbound comm module to package and send it.\n

    Line 4 is the workhorse. For example, if the interaction is about issuing credentials and this agent is doing the issuance, this would be where it looks up the material for the credential in internal databases, formats it appropriately, and records the fact that the credential has now been built. Line 6 might be where that credential is attached to an outgoing message for transmission to the recipient.

    The pseudocode for the outbound communication module might be:

    "},{"location":"aip2/0004-agents/#pseudocode-for-outbound","title":"Pseudocode for outbound","text":"
    1  Iterate through all pluggable transports to find best one to use\n     with the intended recipient.\n2  Figure out how to route the message over the selected transport.\n3  Serialize the message content and encrypt it appropriately.\n4  Send the message.\n

    Line 2 can be complex. It involves looking up one or more endpoints in the DID Doc of the recipient, and finding an intersection between transports they use, and transports the sender can speak. Line 3 requires the keys of the sender, which would normally be held in a wallet.
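    The transport-selection step in line 2 amounts to intersecting the recipient's advertised endpoints with the sender's supported transports. A simplified sketch (real DID Doc service entries carry more structure than these hypothetical `type`/`endpoint` dicts):

    ```python
    def select_endpoint(did_doc_services, my_transports):
        """Pick the first recipient endpoint whose transport this agent can speak.

        did_doc_services: list of {"type": ..., "endpoint": ...} service entries
        my_transports:    set of transport types this agent supports
        """
        for service in did_doc_services:
            if service["type"] in my_transports:
                return service["endpoint"]
        raise ValueError("no transport in common with recipient")
    ```
    
    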

    If you are building this sort of code using Aries technology, you will certainly want to use the Aries Agent SDK. This gives you a ready-made, highly secure wallet that can be adapted to many requirements. It also provides easy functions to serialize and encrypt. Many of the operations you need to do are demonstrated in the SDK's /doc/how-tos folder, or in its Getting Started Guide.

    "},{"location":"aip2/0004-agents/#how-to-learn-more","title":"How to Learn More","text":""},{"location":"aip2/0004-agents/#reference","title":"Reference","text":""},{"location":"aip2/0004-agents/#categorizing-agents","title":"Categorizing Agents","text":"

    Agents can be categorized in various ways, and these categories lead to terms you're likely to encounter in RFCs and other documentation. Understanding the categories will help the definitions make sense.

    "},{"location":"aip2/0004-agents/#by-trust","title":"By Trust","text":"

    A trustable agent runs in an environment that's under the direct control of its owner; the owner can trust it without incurring much risk. A semi-trustable agent runs in an environment where others besides the owner may have access, so giving it crucial secrets is less advisable. (An untrustable delegate should never be an agent, by definition, so we don't use that term.)

    Note that these distinctions highlight what is advisable, not how much trust the owner actually extends.

    "},{"location":"aip2/0004-agents/#by-location","title":"By Location","text":"

    Two related but deprecated terms are edge agent and cloud agent. You will probably hear these terms in the community or read them in docs. The problem with them is that they suggest location, but were formally defined to imply levels of trust. When they were chosen, location and levels of trust were seen as going together--you trust your edge more, and your cloud less. We've since realized that a trustable agent could exist in the cloud, if it is directly controlled by the owner, and a semi-trustable agent could be on-prem, if the owner's control is indirect. Thus we are trying to correct usage and make \"edge\" and \"cloud\" about location instead.

    "},{"location":"aip2/0004-agents/#by-platform","title":"By Platform","text":""},{"location":"aip2/0004-agents/#by-complexity","title":"By Complexity","text":"

    We can arrange agents on a continuum, from simple to complex. The simplest agents are static--they are preconfigured for a single relationship. Thin agents are somewhat fancier. Thick agents are fancier still, and rich agents exhibit the most sophistication and flexibility:

    A nice visualization of several dimensions of agent category has been built by Michael Herman:

    "},{"location":"aip2/0004-agents/#the-agent-ness-continuum","title":"The Agent-ness Continuum","text":"

    The tutorial above gives three essential characteristics of agents, and lists some canonical examples. This may make it feel like agent-ness is pretty binary. However, we've learned that reality is more fuzzy.

    Having a tight definition of an agent may not matter in all cases. However, it is important when we are trying to understand interoperability goals. We want agents to be able to interact with one another. Does that mean they must interact with every piece of software that is even marginally agent-like? Probably not.

    Some attributes that are not technically necessary in agents include:

    Agents that lack these characteristics can still be fully interoperable.

    Some interesting examples of less prototypical agents or agent-like things include:

    "},{"location":"aip2/0004-agents/#dif-hubs","title":"DIF Hubs","text":"

    A DIF Identity Hub is a construct that resembles agents in some ways, but that focuses on the data-sharing aspects of identity. Currently DIF Hubs do not use the protocols known to the Aries community, and vice versa. However, there are efforts to bridge that gap.

    "},{"location":"aip2/0004-agents/#identity-wallets","title":"Identity Wallets","text":"

    \"Identity wallet\" is a term that's carefully defined in our ecosystem, and in strict, technical usage it maps to a concept much closer to \"database\" than \"agent\". This is because it is an inert storage container, not an active interacter. However, in casual usage, it may mean the software that uses a wallet to do identity work--in which case it is definitely an agent.

    "},{"location":"aip2/0004-agents/#crypto-wallets","title":"Crypto Wallets","text":"

    Cryptocurrency wallets are quite agent-like in that they hold keys and represent a user. However, they diverge from the agent definition in that they talk proprietary protocols to blockchains, rather than A2A to other agents.

    "},{"location":"aip2/0004-agents/#uport","title":"uPort","text":"

    The uPort app is an edge agent. Here, too, there are efforts to bridge a protocol gap.

    "},{"location":"aip2/0004-agents/#learning-machine","title":"Learning Machine","text":"

    The credential issuance technology offered by Learning Machine, and the app used to share those credentials, are agents of institutions and individuals, respectively. Again, there is a protocol gap to bridge.

    "},{"location":"aip2/0004-agents/#cron-jobs","title":"Cron Jobs","text":"

    A cron job that runs once a night at Faber, scanning a database and revoking credentials that have changed status during the day, is an agent for Faber. This is true even though it doesn't listen for incoming messages (it only talks revocation protocol to the ledger). In order to talk that protocol, it must hold keys delegated by Faber, and it is surely Faber's fiduciary.

    "},{"location":"aip2/0004-agents/#operating-systems","title":"Operating Systems","text":"

    The operating system on a laptop could be described as agent-like, in that it works for a single owner and may have a keystore. However, it doesn't talk A2A to other agents--at least not yet. (OSes that service multiple users fit the definition less.)

    "},{"location":"aip2/0004-agents/#devices","title":"Devices","text":"

    A device can be thought of as an agent (e.g., Alice's phone as an edge agent). However, strictly speaking, one device might run multiple agents, so this is only casually correct.

    "},{"location":"aip2/0004-agents/#sovrin-mainnet","title":"Sovrin MainNet","text":"

    The Sovrin MainNet can be thought of as an agent for the Sovrin community (but NOT the Sovrin Foundation, which codifies the rules but leaves operation of the network to its stewards). Certainly, the blockchain holds keys, uses A2A protocols, and acts in a fiduciary capacity toward the community to further its interests. The only challenge with this perspective is that the Sovrin community has a very fuzzy identity.

    "},{"location":"aip2/0004-agents/#validators","title":"Validators","text":"

    Validator nodes on a particular blockchain are agents of the stewards that operate them.

    "},{"location":"aip2/0004-agents/#digital-assistants","title":"Digital Assistants","text":"

    Digital assistants like Alexa and Google Home are somewhat agent-like. However, the Alexa in the home of the Jones family is probably not an agent for either the Jones family or Amazon. It accepts delegated work from anybody who talks to it (instead of a single controlling identity), and all current implementations are totally antithetical to the ethos of privacy and security required by self-sovereign identity. Although it interfaces with Amazon to download data and features, it isn't Amazon's fiduciary, either. It doesn't hold keys that allow it to represent its owner. The protocols it uses are not interactions with other agents, but with non-agent entities. Perhaps agents and digital assistants will converge in the future.

    "},{"location":"aip2/0004-agents/#doorbell","title":"Doorbell","text":"

    A doorbell that emits a simple signal each time it is pressed is not an agent. It doesn't represent a fiduciary or hold keys. (However, a fancy IoT doorbell that reports to Alice's mobile agent using an A2A protocol would be an agent.)

    "},{"location":"aip2/0004-agents/#microservices","title":"Microservices","text":"

    A microservice run by AcmeCorp to integrate with its vendors is not an agent for Acme's vendors. Depending on whether it holds keys and uses A2A protocols, it may or may not be an agent for Acme.

    "},{"location":"aip2/0004-agents/#human-delegates","title":"Human Delegates","text":"

    A human delegate who proves empowerment through keys might be thought of as an agent.

    "},{"location":"aip2/0004-agents/#paper","title":"Paper","text":"

    The keys for an agent can be stored on paper. This storage basically constitutes a wallet. It isn't an agent. However, it can be thought of as playing the role of an agent in some cases when designing backup and recovery solutions.

    "},{"location":"aip2/0004-agents/#prior-art","title":"Prior art","text":""},{"location":"aip2/0004-agents/#implementations","title":"Implementations","text":"

    The following lists the implementations (if any) of this RFC. Please do a pull request to add your implementation. If the implementation is open source, include a link to the repo or to the implementation within the repo. Please be consistent in the \"Name\" field so that a mechanical processing of the RFCs can generate a list of all RFCs supported by an Aries implementation.

    Name / Link Implementation Notes Indy Cloud Agent - Python Reference agent implementation contributed by Sovrin Foundation and Community Aries Framework - .NET .NET framework for building agents of all types Streetcred.id Commercial mobile and web app built using Aries Framework for .NET Aries Cloud Agent - Python Contributed by the government of British Columbia. Aries Static Agent - Python Useful for cron jobs and other simple, automated use cases. Aries Framework - Go For building agents, hubs and other DIDComm features in GoLang. Connect.Me Free mobile app from Evernym. Installed via app store on iOS and Android. Verity Commercially licensed enterprise agent, SaaS or on-prem. Aries Protocol Test Suite Pico Labs Pico Agents protocols: connections, trust_ping, basicmessage, routing Rust Agent Rust implementation of a framework for building agents of all types"},{"location":"aip2/0005-didcomm/","title":"Aries RFC 0005: DID Communication","text":""},{"location":"aip2/0005-didcomm/#summary","title":"Summary","text":"

    Explain the basics of DID communication (DIDComm) at a high level, and link to other RFCs to promote deeper exploration.

    NOTE: The version of DIDComm collectively defined in Aries RFCs is known by the label \"DIDComm V1.\" A newer version of DIDComm (\"DIDComm V2\") is now being incubated at DIF. Many concepts are the same between the two versions, but there are some differences in the details. For information about detecting V1 versus V2, see Detecting DIDComm Versions.

    "},{"location":"aip2/0005-didcomm/#motivation","title":"Motivation","text":"

    The DID communication between agents and agent-like things is a rich subject with a lot of tribal knowledge. Newcomers to the decentralized identity ecosystem tend to bring mental models that are subtly divergent from its paradigm. When they encounter dissonance, DIDComm becomes mysterious. We need a standard high-level reference.

    "},{"location":"aip2/0005-didcomm/#tutorial","title":"Tutorial","text":"

    This discussion assumes that you have a reasonable grasp on topics like self-sovereign identity, DIDs and DID docs, and agents. If you find yourself lost, please review that material for background and starting assumptions.

    Agent-like things have to interact with one another to get work done. How they talk in general is DIDComm, the subject of this RFC. The specific interactions enabled by DIDComm--connecting and maintaining relationships, issuing credentials, providing proof, etc.--are called protocols; they are described elsewhere.

    "},{"location":"aip2/0005-didcomm/#rough-overview","title":"Rough Overview","text":"

    A typical DIDComm interaction works like this:

    Imagine Alice wants to negotiate with Bob to sell something online, and that DIDComm, not direct human communication, is involved. This means Alice's agent and Bob's agent are going to exchange a series of messages. Alice may just press a button and be unaware of details, but underneath, her agent begins by preparing a plaintext JSON message about the proposed sale. (The particulars are irrelevant here, but would be described in the spec for a \"sell something\" protocol.) It then looks up Bob's DID Doc to access two key pieces of information: * An endpoint (web, email, etc) where messages can be delivered to Bob. * The public key that Bob's agent is using in the Alice:Bob relationship. Now Alice's agent uses Bob's public key to encrypt the plaintext so that only Bob's agent can read it, adding authentication with its own private key. The agent arranges delivery to Bob. This \"arranging\" can involve various hops and intermediaries. It can be complex. Bob's agent eventually receives and decrypts the message, authenticating its origin as Alice using her public key. It prepares its response and routes it back using a reciprocal process (plaintext -> lookup endpoint and public key for Alice -> encrypt with authentication -> arrange delivery).

    That's it.

    Well, mostly. The description is pretty good, if you squint, but it does not fit all DIDComm interactions:

    Before we provide more details, let's explore what drives the design of DIDComm.

    "},{"location":"aip2/0005-didcomm/#goals-and-ramifications","title":"Goals and Ramifications","text":"

    The DIDComm design attempts to be:

    1. Secure
    2. Private
    3. Interoperable
    4. Transport-agnostic
    5. Extensible

    As a list of buzz words, this may elicit nods rather than surprise. However, several items have deep ramifications.

    Taken together, Secure and Private require that the protocol be decentralized and maximally opaque to the surveillance economy.

    Interoperable means that DIDComm should work across programming languages, blockchains, vendors, OS/platforms, networks, legal jurisdictions, geos, cryptographies, and hardware--as well as across time. That's quite a list. It means that DIDComm intends something more than just compatibility within Aries; it aims to be a future-proof lingua franca of all self-sovereign interactions.

    Transport-agnostic means that it should be possible to use DIDComm over HTTP(S) 1.x and 2.0, WebSockets, IRC, Bluetooth, AMQP, NFC, Signal, email, push notifications to mobile devices, Ham radio, multicast, snail mail, carrier pigeon, and more.

    All software design involves tradeoffs. These goals, prioritized as shown, lead down an interesting path.

    "},{"location":"aip2/0005-didcomm/#message-based-asynchronous-and-simplex","title":"Message-Based, Asynchronous, and Simplex","text":"

    The dominant paradigm in mobile and web development today is duplex request-response. You call an API with certain inputs, and you get back a response with certain outputs over the same channel, shortly thereafter. This is the world of OpenAPI (Swagger), and it has many virtues.

    Unfortunately, many agents are not good analogs to web servers. They may be mobile devices that turn off at unpredictable intervals and that lack a stable connection to the network. They may need to work peer-to-peer, when the internet is not available. They may need to interact in time frames of hours or days, not with 30-second timeouts. They may not listen over the same channel that they use to talk.

    Because of this, the fundamental paradigm for DIDComm is message-based, asynchronous, and simplex. Agent X sends a message over channel A. Sometime later, it may receive a response from Agent Y over channel B. This is much closer to an email paradigm than a web paradigm.

    On top of this foundation, it is possible to build elegant, synchronous request-response interactions. All of us have interacted with a friend who's emailing or texting us in near-realtime. However, interoperability begins with a least-common-denominator assumption that's simpler.

    "},{"location":"aip2/0005-didcomm/#message-level-security-reciprocal-authentication","title":"Message-Level Security, Reciprocal Authentication","text":"

    The security and privacy goals, and the asynchronous+simplex design decision, break familiar web assumptions in another way. Servers are commonly run by institutions, and we authenticate them with certificates. People and things are usually authenticated to servers by some sort of login process quite different from certificates, and this authentication is cached in a session object that expires. Furthermore, web security is provided at the transport level (TLS); it is not an independent attribute of the messages themselves.

    In a partially disconnected world where a comm channel is not assumed to support duplex request-response, and where the security can't be ignored as a transport problem, traditional TLS, login, and expiring sessions are impractical. Furthermore, centralized servers and certificate authorities perpetuate a power and UX imbalance between servers and clients that doesn't fit with the peer-oriented DIDComm.

    DIDComm uses public key cryptography, not certificates from some parties and passwords from others. Its security guarantees are independent of the transport over which it flows. It is sessionless (though sessions can easily be built atop it). When authentication is required, all parties do it the same way.

    "},{"location":"aip2/0005-didcomm/#reference","title":"Reference","text":"

    The following RFCs provide additional information: * 0021: DIDComm Message Anatomy * 0020: Message Types * 0011: Decorators * 0008: Message ID and Threading * 0019: Encryption Envelope * 0025: Agent Transports

    "},{"location":"aip2/0005-didcomm/#implementations","title":"Implementations","text":"

    The following lists the implementations (if any) of this RFC. Please do a pull request to add your implementation. If the implementation is open source, include a link to the repo or to the implementation within the repo. Please be consistent in the \"Name\" field so that a mechanical processing of the RFCs can generate a list of all RFCs supported by an Aries implementation.

    Name / Link Implementation Notes Indy Cloud Agent - Python Reference agent implementation contributed by Sovrin Foundation and Community Aries Framework - .NET .NET framework for building agents of all types Streetcred.id Commercial mobile and web app built using Aries Framework - .NET Aries Cloud Agent - Python Contributed by the government of British Columbia. Aries Static Agent - Python Useful for cron jobs and other simple, automated use cases. Aries Framework - Go For building agents, hubs and other DIDComm features in GoLang. Connect.Me Free mobile app from Evernym. Installed via app store on iOS and Android. Verity Commercially licensed enterprise agent, SaaS or on-prem. Aries Protocol Test Suite Pico Labs Pico Agents protocols: connections, trust_ping, basicmessage, routing"},{"location":"aip2/0008-message-id-and-threading/","title":"Aries RFC 0008: Message ID and Threading","text":""},{"location":"aip2/0008-message-id-and-threading/#summary","title":"Summary","text":"

    Definition of the message @id field and the ~thread decorator.

    "},{"location":"aip2/0008-message-id-and-threading/#motivation","title":"Motivation","text":"

    Referring to messages is useful in many interactions. A standard method of adding a message ID promotes good patterns in message families. When multiple messages are coordinated in a message flow, the threading pattern helps avoid having to re-roll the same spec for each message family that needs it.

    "},{"location":"aip2/0008-message-id-and-threading/#tutorial","title":"Tutorial","text":""},{"location":"aip2/0008-message-id-and-threading/#message-ids","title":"Message IDs","text":"

    Message IDs are specified with the @id attribute, which comes from JSON-LD. The sender of the message is responsible for creating the message ID, and any message can be identified by the combination of the sender and the message ID. Message IDs should be considered to be opaque identifiers by any recipients.
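    Since the sender mints the @id, a minimal message constructor might look like the following sketch. The `new_message` helper name is hypothetical; the RFC's examples use UUID-style identifiers, which this follows.

    ```python
    import uuid

    def new_message(msg_type: str, **body) -> dict:
        """Build a plaintext message with a fresh, sender-assigned @id.

        Recipients must treat the @id as opaque; a UUID is one common choice.
        """
        return {"@type": msg_type, "@id": str(uuid.uuid4()), **body}
    ```
    
    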

    "},{"location":"aip2/0008-message-id-and-threading/#message-id-requirements","title":"Message ID Requirements","text":""},{"location":"aip2/0008-message-id-and-threading/#example","title":"Example","text":"
    {\n    \"@type\": \"did:example:12345...;spec/example_family/1.0/example_type\",\n    \"@id\": \"98fd8d72-80f6-4419-abc2-c65ea39d0f38\",\n    \"example_attribute\": \"stuff\"\n}\n

    The following was pulled from this document written by Daniel Hardman and stored in the Sovrin Foundation's protocol repository.

    "},{"location":"aip2/0008-message-id-and-threading/#threaded-messages","title":"Threaded Messages","text":"

    Message threading will be implemented as a decorator to messages, for example:

    {\n    \"@type\": \"did:example:12345...;spec/example_family/1.0/example_type\",\n    \"@id\": \"98fd8d72-80f6-4419-abc2-c65ea39d0f38\",\n    \"~thread\": {\n        \"thid\": \"98fd8d72-80f6-4419-abc2-c65ea39d0f38\",\n        \"pthid\": \"1e513ad4-48c9-444e-9e7e-5b8b45c5e325\",\n        \"sender_order\": 3,\n        \"received_orders\": {\"did:sov:abcxyz\":1},\n        \"goal_code\": \"aries.vc.issue\"\n    },\n    \"example_attribute\": \"example_value\"\n}\n

    The ~thread decorator is generally required on any type of response, since this is what connects it with the original request.

    While not recommended, the initial message of a new protocol instance MAY have an empty ({}) ~thread item. Aries agents receiving a message with an empty ~thread item MUST gracefully handle such a message.

    "},{"location":"aip2/0008-message-id-and-threading/#thread-object","title":"Thread object","text":"

    A thread object has the following fields discussed below:

    "},{"location":"aip2/0008-message-id-and-threading/#thread-id-thid","title":"Thread ID (thid)","text":"

    Because multiple interactions can happen simultaneously, it's important to differentiate between them. This is done with a Thread ID or thid.

    If the Thread object is defined and a thid is given, the Thread ID is the value given there. But if the Thread object is not defined in a message, the Thread ID is implicitly defined as the Message ID (@id) of the given message and that message is the first message of a new thread.
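    The rule above can be sketched as a small helper (the function name is hypothetical; the fallback-to-@id behavior is what RFC 0008 specifies, and it also gracefully handles an empty ~thread item):

    ```python
    def thread_id(msg: dict) -> str:
        """Return the effective thread ID of a message.

        Use ~thread.thid when present; otherwise the message's own @id
        starts a new thread.
        """
        return msg.get("~thread", {}).get("thid") or msg["@id"]
    ```
    
    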

    "},{"location":"aip2/0008-message-id-and-threading/#sender-order-sender_order","title":"Sender Order (sender_order)","text":"

    It is desirable to know how messages within a thread should be ordered. However, it is very difficult to know with confidence the absolute ordering of events scattered across a distributed system. Alice and Bob may each send a message before receiving the other's response, but be unsure whether their message was composed before the other's. Timestamping cannot resolve this impasse. Therefore, there is no unified absolute ordering of all messages within a thread--but there is an ordering of all messages emitted by each participant.

    In a given thread, the first message from each party has a sender_order value of 0, the second message sent from each party has a sender_order value of 1, and so forth. Note that both Alice and Bob use 0 and 1, without regard to whether the other party may be known to have used them. This gives a strong ordering with respect to each party's messages, and it means that any message can be uniquely identified in an interaction by its thid, the sender DID and/or key, and the sender_order.
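    A sender's side of this numbering can be sketched as a per-thread counter (the `ThreadCounter` class is a hypothetical illustration, not an Aries API):

    ```python
    from collections import defaultdict

    class ThreadCounter:
        """Assign sender_order values as described above: each party numbers
        its own messages 0, 1, 2, ... independently within each thread."""

        def __init__(self):
            self._next = defaultdict(int)  # thid -> next sender_order for us

        def next_order(self, thid: str) -> int:
            n = self._next[thid]
            self._next[thid] += 1
            return n
    ```
    
    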

    "},{"location":"aip2/0008-message-id-and-threading/#received-orders-received_orders","title":"Received Orders (received_orders)","text":"

    In an interaction, it may be useful for the recipient of a message to know if their last message was received. A received_orders value addresses this need, and could be included as a best practice to help detect missing messages.

    In the example above, if Alice is the sender, and Bob is identified by did:sov:abcxyz, then Alice is saying, \"Here's my message with index 3 (sender_order=3), and I'm sending it in response to your message 1 (received_orders: {<bob's DID>: 1}).\" Apparently Alice has been more chatty than Bob in this exchange.

    The received_orders field is plural to acknowledge the possibility of multiple parties. In pairwise interactions, this may seem odd. However, n-wise interactions are possible (e.g., in a doctor ~ hospital ~ patient n-wise relationship). Even in pairwise, multiple agents on either side may introduce other actors. This may happen even if an interaction is designed to be 2-party (e.g., an intermediate party emits an error unexpectedly).

    In an interaction with more parties, the received_orders object has a key/value pair for each actor/sender_order, where actor is a DID or a key for an agent:

    \"received_orders\": {\"did:sov:abcxyz\":1, \"did:sov:defghi\":14}\n

    Here, the received_orders fragment makes a claim about the last sender_order that the sender observed from did:sov:abcxyz and did:sov:defghi. The sender of this fragment is presumably some other DID, implying that 3 parties are participating. Any parties unnamed in received_orders have an undefined value for received_orders. This is NOT the same as saying that they have made no observable contribution to the thread. To make that claim, use the special value -1, as in:

    \"received_orders\": {\"did:sov:abcxyz\":1, \"did:sov:defghi\":14, \"did:sov:jklmno\":-1}\n
    "},{"location":"aip2/0008-message-id-and-threading/#example_1","title":"Example","text":"

    As an example, Alice is an issuer and she offers a credential to Bob.

    "},{"location":"aip2/0008-message-id-and-threading/#nested-interactions-parent-thread-id-or-pthid","title":"Nested interactions (Parent Thread ID or pthid)","text":"

    Sometimes there are interactions that need to occur with the same party, while an existing interaction is in-flight.

    When an interaction is nested within another, the initiator of a new interaction can include a Parent Thread ID (pthid). This signals to the other party that this is a thread that is branching off of an existing interaction.

    "},{"location":"aip2/0008-message-id-and-threading/#nested-example","title":"Nested Example","text":"

    As before, Alice is an issuer and she offers a credential to Bob. This time, she wants a bit more information before she is comfortable providing a credential.

    All of the steps are the same, except the two bolded steps that are part of a nested interaction.

    "},{"location":"aip2/0008-message-id-and-threading/#implicit-threads","title":"Implicit Threads","text":"

    Threads reference a Message ID as the origin of the thread. This allows any message to be the start of a thread, even if not originally intended. Any message without an explicit ~thread attribute can be considered to have the following ~thread attribute implicitly present.

    \"~thread\": {\n    \"thid\": <same as @id of the outer message>,\n    \"sender_order\": 0\n}\n
    "},{"location":"aip2/0008-message-id-and-threading/#implicit-replies","title":"Implicit Replies","text":"

    A message that contains a ~thread block with a thid different from the outer message @id, but no sender_order is considered an implicit reply. Implicit replies have a sender_order of 0 and a received_orders of {other:0}. Implicit replies should only be used when a further message thread is not anticipated. When further messages in the thread are expected, a full regular ~thread block should be used.

    Example Message with an Implicit Reply:

    {\n    \"@id\": \"<@id of outer message>\",\n    \"~thread\": {\n        \"thid\": \"<different than @id of outer message>\"\n    }\n}\n
    Effective Message with defaults in place:
    {\n    \"@id\": \"<@id of outer message>\",\n    \"~thread\": {\n        \"thid\": \"<different than @id of outer message>\"\n        \"sender_order\": 0,\n        \"received_orders\": { \"DID of sender\":0 }\n    }\n}\n

    "},{"location":"aip2/0008-message-id-and-threading/#reference","title":"Reference","text":""},{"location":"aip2/0008-message-id-and-threading/#drawbacks","title":"Drawbacks","text":"

    Why should we not do this?

    "},{"location":"aip2/0008-message-id-and-threading/#rationale-and-alternatives","title":"Rationale and alternatives","text":""},{"location":"aip2/0008-message-id-and-threading/#prior-art","title":"Prior art","text":"

    If you're aware of relevant prior-art, please add it here.

    "},{"location":"aip2/0008-message-id-and-threading/#unresolved-questions","title":"Unresolved questions","text":""},{"location":"aip2/0008-message-id-and-threading/#implementations","title":"Implementations","text":"

    The following lists the implementations (if any) of this RFC. Please do a pull request to add your implementation. If the implementation is open source, include a link to the repo or to the implementation within the repo. Please be consistent in the \"Name\" field so that a mechanical processing of the RFCs can generate a list of all RFCs supported by an Aries implementation.

    Name / Link Implementation Notes Aries Cloud Agent - Python Contributed by the government of British Columbia. Aries Static Agent - Python Useful for cron jobs and other simple, automated use cases. Connect.Me Free mobile app from Evernym. Installed via app store on iOS and Android. Verity Commercially licensed enterprise agent, SaaS or on-prem. Aries Protocol Test Suite"},{"location":"aip2/0011-decorators/","title":"Aries RFC 0011: Decorators","text":""},{"location":"aip2/0011-decorators/#summary","title":"Summary","text":"

    Explain how decorators work in DID communication.

    "},{"location":"aip2/0011-decorators/#motivation","title":"Motivation","text":"

    Certain semantic patterns manifest over and over again in communication. For example, all communication needs the pattern of testing the type of message received. The pattern of identifying a message and referencing it later is likely to be useful in a high percentage of all protocols that are ever written. A pattern that associates messages with debugging/tracing/timing metadata is equally relevant. And so forth.

    We need a way to convey metadata that embodies these patterns, without complicating schemas, bloating core definitions, managing complicated inheritance hierarchies, or confusing one another. It needs to be elegant, powerful, and adaptable.

    "},{"location":"aip2/0011-decorators/#tutorial","title":"Tutorial","text":"

    A decorator is an optional chunk of JSON that conveys metadata. Decorators are not declared in a core schema but rather supplementary to it. Decorators add semantic content broadly relevant to messaging in general, and not so much tied to the problem domain of a specific type of interaction.

    You can think of decorators as a sort of mixin for agent-to-agent messaging. This is not a perfect analogy, but it is a good one. Decorators in DIDComm also have some overlap (but not a direct congruence) with annotations in Java, attributes in C#, and both decorators and annotations in python.

    "},{"location":"aip2/0011-decorators/#simple-example","title":"Simple Example","text":"

    Imagine we are designing a protocol and associated messages to arrange meetings between two people. We might come up with a meeting_proposal message that looks like this:

    {\n  \"@id\": \"e2987006-a18a-4544-9596-5ad0d9390c8b\",\n  \"@type\": \"did:sov:8700e296a1458aad0d93;spec/meetings/1.0/proposal\",\n  \"proposed_time\": \"2019-12-23 17:00\",\n  \"proposed_place\": \"at the cathedral, Barf\u00fcsserplatz, Basel\",\n  \"comment\": \"Let's walk through the Christmas market.\"\n}\n

    Now we tackle the meeting_proposal_response messages. Maybe we start with something exceedingly simple, like:

    {\n  \"@id\": \"d9390ce2-8ba1-4544-9596-9870065ad08a\",\n  \"@type\": \"did:sov:8700e296a1458aad0d93;spec/meetings/1.0/response\",\n  \"agree\": true,\n  \"comment\": \"See you there!\"\n}\n

    But we quickly realize that the asynchronous nature of messaging will expose a gap in our message design: if Alice receives two meeting proposals from Bob at the same time, there is nothing to bind a response back to the specific proposal it addresses.

    We could extend the schema of our response so it contains a thread that references the @id of the original proposal. This would work. However, it does not satisfy the DRY principle of software design, because when we tackle the protocol for negotiating a purchase between buyer and seller next week, we will need the same solution all over again. The result would be a proliferation of schemas that all address the same basic need for associating request and response. Worse, they might do it in different ways, cluttering the mental model for everyone and making the underlying patterns less obvious.

    What we want instead is a way to inject into any message the idea of a thread, such that we can easily associate responses with requests, errors with the messages that triggered them, and child interactions that branch off of the main one. This is the subject of the message threading RFC, and the solution is the ~thread decorator, which can be added to any response:

    {\n  \"@id\": \"d9390ce2-8ba1-4544-9596-9870065ad08a\",\n  \"@type\": \"did:sov:8700e296a1458aad0d93;spec/meetings/1.0/response\",\n  \"~thread\": {\"thid\": \"e2987006-a18a-4544-9596-5ad0d9390c8b\"},\n  \"agree\": true,\n  \"comment\": \"See you there!\"\n}\n
    This chunk of JSON is defined independent of any particular message schema, but is understood to be available in all DIDComm schemas.
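A sketch of how a receiving agent might use this decorator (the send/receive helpers are illustrative, not part of the RFC): correlate an incoming response with the pending proposal it addresses via ~thread.thid.

```python
pending = {}  # @id of a sent message -> reply handler

def send(message, on_reply):
    """Record a handler keyed by the outgoing message's @id (sketch)."""
    pending[message["@id"]] = on_reply

def receive(message):
    """Dispatch a reply to the handler for the thread it belongs to."""
    thid = message.get("~thread", {}).get("thid")
    handler = pending.pop(thid, None)
    if handler:
        handler(message)

replies = []
send({"@id": "e2987006-a18a-4544-9596-5ad0d9390c8b",
      "@type": "did:sov:8700e296a1458aad0d93;spec/meetings/1.0/proposal"},
     replies.append)
receive({"@id": "d9390ce2-8ba1-4544-9596-9870065ad08a",
         "@type": "did:sov:8700e296a1458aad0d93;spec/meetings/1.0/response",
         "~thread": {"thid": "e2987006-a18a-4544-9596-5ad0d9390c8b"},
         "agree": True})
print(replies[0]["agree"])  # True
```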

    "},{"location":"aip2/0011-decorators/#basic-conventions","title":"Basic Conventions","text":"

    Decorators are defined in RFCs that document a general pattern such as message threading RFC or message localization. The documentation for a decorator explains its semantics and offers examples.

    Decorators are recognized by name. The name must begin with the ~ character (which is reserved in DIDComm messages for decorator use), and be a short, single-line string suitable for use as a JSON attribute name.

    Decorators may be simple key:value pairs \"~foo\": \"bar\". Or they may associate a key with a more complex structure:

    \"~thread\": {\n  \"thid\": \"e2987006-a18a-4544-9596-5ad0d9390c8b\",\n  \"pthid\": \"0c8be298-45a1-48a4-5996-d0d95a397006\",\n  \"sender_order\": 0\n}\n

    Decorators should be thought of as supplementary to the problem-domain-specific fields of a message, in that they describe general communication issues relevant to a broad array of message types. Entities that handle messages should treat all unrecognized fields as valid but meaningless, and decorators are no exception. Thus, software that doesn't recognize a decorator should ignore it.

    However, this does not mean that decorators are necessarily optional. Some messages may intend something tied so tightly to a decorator's semantics that the decorator effectively becomes required. An example of this is the relationship between a general error reporting mechanism and the ~thread decorator: it's not very helpful to report errors without the context that a thread provides.

    Because decorators are general by design and intent, we don't expect namespacing to be a major concern. The community agrees on decorators that everybody will recognize, and they acquire global scope upon acceptance. Their globalness is part of their utility. Effectively, decorator names are like reserved words in a shared public language of messages.

    Namespacing is also supported, as we may discover legitimate uses. When namespaces are desired, dotted name notation is used, as in ~mynamespace.mydecoratorname. We may elaborate this topic more in the future.

    Decorators are orthogonal to JSON-LD constructs in DIDComm messages.

    "},{"location":"aip2/0011-decorators/#versioning","title":"Versioning","text":"

    We hope that community-defined decorators are very stable. However, new fields (a non-breaking change) might need to be added to complex decorators; occasionally, more significant changes might be necessary as well. Therefore, decorators do support semver-style versioning, but in a form that allows details to be ignored unless or until they become important. The rules are:

    1. As with all other aspects of DIDComm messages, unrecognized fields in decorators must be ignored.
    2. Version information can be appended to the name of a decorator, as in ~mydecorator/1. Only a major version (never minor or patch) is used, since:
      • Minor version variations should not break decorator handling code.
      • The dot character . is reserved for namespacing within field names.
      • The extra complexity is not worth the small amount of value it might add.
    3. A decorator without a version is considered to be synonymous with version 1.0, and the version-less form is preferred. This allows version numbers to be added only in the uncommon cases where they are necessary.
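The naming, namespacing, and versioning rules above can be captured in a small parser (a sketch; the function name and return shape are assumptions, not from the RFC): the ~ prefix is required, a dotted prefix is an optional namespace, and a trailing /N is an optional major version defaulting to 1.

```python
def parse_decorator(key):
    """Split a decorator key like '~ns.name/2' into its parts (sketch)."""
    assert key.startswith("~"), "decorator names must begin with ~"
    body, _, version = key[1:].partition("/")       # optional major version
    namespace, _, name = body.rpartition(".")       # optional dotted namespace
    return {
        "namespace": namespace or None,
        "name": name,
        "version": int(version) if version else 1,  # version-less means 1
    }

print(parse_decorator("~thread"))
print(parse_decorator("~mynamespace.mydecoratorname/2"))
```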
    "},{"location":"aip2/0011-decorators/#decorator-scope","title":"Decorator Scope","text":"

    A decorator may be understood to decorate (add semantics) at several different scopes. The discussion thus far has focused on message decorators, and this is by far the most important scope to understand. But there are more possibilities.

    Suppose we wanted to decorate an individual field. This can be done with a field decorator, which is a sibling field to the field it decorates. The name of the decorated field is combined with a decorator suffix, as follows:

    {\n  \"note\": \"Let's have a picnic.\",\n  \"note~l10n\": { ... }\n}\n
    In this example, taken from the localization pattern, note~l10n decorates note.
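A sketch of recognizing field decorators by this suffix convention (the helper is illustrative): any key containing ~ that does not itself start with ~ decorates the field named by the text before the ~.

```python
def field_decorators(message):
    """Map each decorated field to the decorators attached to it (sketch)."""
    pairs = {}
    for key in message:
        if "~" in key and not key.startswith("~"):  # message-level decorators start with ~
            field, _, decorator = key.partition("~")
            pairs.setdefault(field, []).append(decorator)
    return pairs

msg = {"note": "Let's have a picnic.", "note~l10n": {"locale": "en"}}
print(field_decorators(msg))  # {'note': ['l10n']}
```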

    Besides a single message or a single field, consider the following scopes as decorator targets:

    "},{"location":"aip2/0011-decorators/#reference","title":"Reference","text":"

    This section of this RFC will be kept up-to-date with a list of globally accepted decorators, and links to the RFCs that define them.

    "},{"location":"aip2/0011-decorators/#drawbacks","title":"Drawbacks","text":"

    By having fields that are meaningful yet not declared in core schemas, we run the risk that parsing and validation routines will fail to enforce details that are significant but invisible. We also accept the possibility that interop may look good on paper, but fail due to different understandings of important metadata.

    We believe this risk will take care of itself, for the most part, as real-life usage accumulates and decorators become a familiar and central part of the thinking for developers who work with agent-to-agent communication.

    "},{"location":"aip2/0011-decorators/#rationale-and-alternatives","title":"Rationale and alternatives","text":"

    There is ongoing work in the #indy-semantics channel on Rocket.Chat to explore the concept of overlays. These are layers of additional meaning that accumulate above a schema base. Decorators as described here are quite similar in intent. There are some subtle differences, though. The most interesting is that decorators as described here may be applied to things that are not schema-like (e.g., to a message family as a whole, or to a connection, not just to an individual message).

    We may be able to resolve these two worldviews, such that decorators are viewed as overlays and inherit some overlay goodness as a result. However, it is unlikely that decorators will change significantly in form or substance as a result. We thus believe the current mental model is already RFC-worthy, and represents a reasonable foundation for immediate use.

    "},{"location":"aip2/0011-decorators/#prior-art","title":"Prior art","text":"

    See references to similar features in programming languages like Java, C#, and Python, mentioned above.

    See also this series of blog posts about semantic gaps and the need to manage intent in a declarative style: [ Lacunas Everywhere, Bridging the Lacuna Humana, Introducing Marks, Mountains, Molehills, and Markedness ]

    "},{"location":"aip2/0011-decorators/#unresolved-questions","title":"Unresolved questions","text":""},{"location":"aip2/0011-decorators/#implementations","title":"Implementations","text":"

    The following lists the implementations (if any) of this RFC. Please do a pull request to add your implementation. If the implementation is open source, include a link to the repo or to the implementation within the repo. Please be consistent in the \"Name\" field so that a mechanical processing of the RFCs can generate a list of all RFCs supported by an Aries implementation.

    Name / Link Implementation Notes Aries RFCs: RFC 0008, RFC 0017, RFC 0015, RFC 0023, RFC 0043, RFC 0056, RFC 0075 many implemented RFCs depend on decorators... Indy Cloud Agent - Python message threading Aries Framework - .NET message threading Streetcred.id message threading Aries Cloud Agent - Python message threading, attachments Aries Static Agent - Python message threading Aries Framework - Go message threading Connect.Me message threading Verity message threading Aries Protocol Test Suite message threading"},{"location":"aip2/0015-acks/","title":"Aries RFC 0015: ACKs","text":""},{"location":"aip2/0015-acks/#summary","title":"Summary","text":"

    Explains how one party can send acknowledgment messages (ACKs) to confirm receipt and clarify the status of complex processes.

    "},{"location":"aip2/0015-acks/#change-log","title":"Change log","text":""},{"location":"aip2/0015-acks/#motivation","title":"Motivation","text":"

    An acknowledgment or ACK is one of the most common procedures in protocols of all types. We need a flexible, powerful, and easy way to send such messages in agent-to-agent interactions.

    "},{"location":"aip2/0015-acks/#tutorial","title":"Tutorial","text":"

    Confirming a shared understanding matters whenever independent parties interact. We buy something on Amazon; moments later, our email client chimes to tell us of a new message with subject \"Thank you for your recent order.\" We verbally accept a new job, but don't rest easy until we've also emailed the signed offer letter back to our new boss. We change a password on an online account, and get a text at our recovery phone number so both parties know the change truly originated with the account's owner.

    When formal acknowledgments are missing, we get nervous. And rightfully so; most of us have a story of a package that was lost in the mail, or a web form that didn't submit the way we expected.

    Agents interact in very complex ways. They may use multiple transport mechanisms, across varied protocols, through long stretches of time. While we usually expect messages to arrive as sent, and to be processed as expected, a vital tool in the agent communication repertoire is the receipt of acknowledgments to confirm a shared understanding.

    "},{"location":"aip2/0015-acks/#implicit-acks","title":"Implicit ACKs","text":"

    Message threading includes a lightweight, automatic sort of ACK in the form of the ~thread.received_orders field. This allows Alice to report that she has received Bob's recent message that had ~thread.sender_order = N. We expect threading to be best practice in many use cases, and we expect interactions to often happen reliably enough and quickly enough that implicit ACKs provide high value. If you are considering ACKs but are not familiar with that mechanism, make sure you understand it, first. This RFC offers a supplement, not a replacement.

    "},{"location":"aip2/0015-acks/#explicit-acks","title":"Explicit ACKs","text":"

    Despite the goodness of implicit ACKs, there are many circumstances where a reply will not happen immediately. Explicit ACKs can be vital here.

    Explicit ACKs may also be vital at the end of an interaction, when work is finished: a credential has been issued, a proof has been received, a payment has been made. In such a flow, an implicit ACK meets the needs of the party who received the final message, but the other party may want explicit closure. Otherwise they can't know with confidence about the final outcome of the flow.

    Rather than inventing a new \"interaction has been completed successfully\" message for each protocol, an all-purpose ack message type is recommended. It looks like this:

    {\n  \"@type\": \"https://didcomm.org/notification/1.0/ack\",\n  \"@id\": \"06d474e0-20d3-4cbf-bea6-6ba7e1891240\",\n  \"status\": \"OK\",\n  \"~thread\": {\n    \"thid\": \"b271c889-a306-4737-81e6-6b2f2f8062ae\",\n    \"sender_order\": 4,\n    \"received_orders\": {\"did:sov:abcxyz\": 3}\n  }\n}\n

    It may also be appropriate to send an ack at other key points in an interaction (e.g., when a key rotation notice is received).

    "},{"location":"aip2/0015-acks/#adopting-acks","title":"Adopting acks","text":"

    As discussed in 0003: Protocols, a protocol can adopt the ack message into its own namespace. This allows the type of an ack to change from: https://didcomm.org/notification/1.0/ack to something like: https://didcomm.org/otherProtocol/2.0/ack. Thus, message routing logic can see the ack as part of the other protocol, and send it to the relevant handler--but still have all the standardization of generic acks.
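One consequence of adoption is that generic ack handling can key off the final segment of the message type rather than the full protocol URI. A sketch (type URIs below follow the pattern in the text; the helper is an assumption):

```python
def is_ack(message):
    """Treat any message whose type ends in '/ack' as an ack (sketch)."""
    return message["@type"].rsplit("/", 1)[-1] == "ack"

generic = {"@type": "https://didcomm.org/notification/1.0/ack", "status": "OK"}
adopted = {"@type": "https://didcomm.org/otherProtocol/2.0/ack", "status": "OK"}
print(is_ack(generic), is_ack(adopted))  # True True
```

Routing logic can thus send the adopted ack to the other protocol's handler while reusing one piece of code to interpret its standardized fields.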

    "},{"location":"aip2/0015-acks/#ack-status","title":"ack status","text":"

    The status field in an ack tells whether the ack is final or not with respect to the message being acknowledged. It has 2 predefined values: OK (which means an outcome has occurred, and it was positive); and PENDING, which acknowledges that no outcome is yet known.

    There is not an ack status of FAIL. In the case of a protocol failure, a Report Problem message must be used to inform the other party(ies). For more details, see the next section.

    In addition, more advanced ack usage is possible. See the details in the Reference section.

    "},{"location":"aip2/0015-acks/#relationship-to-problem-report","title":"Relationship to problem-report","text":"

    Negative outcomes do not necessarily mean that something bad happened; perhaps Alice comes to hope that Bob rejects her offer to buy his house because she's found something better--and Bob does that, without any error occurring. This is not a FAIL in a problem sense; it's a FAIL in the sense that the offer to buy did not lead to the outcome Alice intended when she sent it.

    This raises the question of errors. Any time an unexpected problem arises, best practice is to report it to the sender of the message that triggered the problem. This is the subject of the problem reporting mechanism.

    A problem_report is inherently a sort of ACK. In fact, the ack message type and the problem_report message type are both members of the same notification message family. Both help a sender learn about status. Therefore, the need that an ack with a status of FAIL would otherwise serve is met by a problem_report message instead.

    However, there is some subtlety in the use of the two types of messages. Some acks may be sent before a final outcome, so a final problem_report may not be enough. As well, an ack request may be sent after a previous ack or problem_report was lost in transit. Because of these caveats, developers whose code creates or consumes acks should be thoughtful about where the two message types overlap, and where they do not. Carelessness here is likely to cause subtle, hard-to-duplicate surprises from time to time.

    "},{"location":"aip2/0015-acks/#custom-acks","title":"Custom ACKs","text":"

    This mechanism cannot address all possible ACK use cases. Some ACKs may require custom data to be sent, and some acknowledgment schemes may be more sophisticated or fine-grained than the simple settings offered here. In such cases, developers should write their own ACK message type(s) and maybe their own decorators. However, reusing the field names and conventions in this RFC may still be desirable, if there is significant overlap in the concepts.

    "},{"location":"aip2/0015-acks/#requesting-acks","title":"Requesting ACKs","text":"

    A decorator, ~please_ack, allows one agent to request an ad hoc ACK from another agent. This is described in the 0317-please-ack RFC.

    "},{"location":"aip2/0015-acks/#reference","title":"Reference","text":""},{"location":"aip2/0015-acks/#ack-message","title":"ack message","text":""},{"location":"aip2/0015-acks/#status","title":"status","text":"

    Required, values OK or PENDING. As discussed above, this tells whether the ack is final or not with respect to the message being acknowledged.

    "},{"location":"aip2/0015-acks/#threadthid","title":"~thread.thid","text":"

    Required. This links the ack back to the message that requested it.

    All other fields in an ack are present or absent per requirements of ordinary messages.

    "},{"location":"aip2/0015-acks/#drawbacks-and-alternatives","title":"Drawbacks and Alternatives","text":"

    None identified.

    "},{"location":"aip2/0015-acks/#prior-art","title":"Prior art","text":"

    See notes above about the implicit ACK mechanism in ~thread.received_orders.

    "},{"location":"aip2/0015-acks/#unresolved-questions","title":"Unresolved questions","text":""},{"location":"aip2/0015-acks/#implementations","title":"Implementations","text":"

    The following lists the implementations (if any) of this RFC. Please do a pull request to add your implementation. If the implementation is open source, include a link to the repo or to the implementation within the repo. Please be consistent in the \"Name\" field so that a mechanical processing of the RFCs can generate a list of all RFCs supported by an Aries implementation.

    Name / Link Implementation Notes RFC 0036: Issue Credential Protocol ACKs are adopted by this protocol. RFC 0037: Present Proof Protocol ACKs are adopted by this protocol. RFC 0193: Coin Flip Protocol ACKs are adopted as a subprotocol. Aries Cloud Agent - Python Contributed by the Government of British Columbia."},{"location":"aip2/0017-attachments/","title":"Aries RFC 0017: Attachments","text":""},{"location":"aip2/0017-attachments/#summary","title":"Summary","text":"

    Explains the three canonical ways to attach data to an agent message.

    "},{"location":"aip2/0017-attachments/#motivation","title":"Motivation","text":"

    DIDComm messages use a structured format with a defined schema and a small inventory of scalar data types (string, number, date, etc.). However, it will be quite common for messages to supplement formalized exchange with arbitrary data--images, documents, or types of media not yet invented.

    We need a way to \"attach\" such content to DIDComm messages. This method must be flexible, powerful, and usable without requiring new schema updates for every dynamic variation.

    "},{"location":"aip2/0017-attachments/#tutorial","title":"Tutorial","text":""},{"location":"aip2/0017-attachments/#messages-versus-data","title":"Messages versus Data","text":"

    Before explaining how to associate data with a message, it is worth pondering exactly how these two categories of information differ. It is common for newcomers to DIDComm to argue that messages are just data, and vice versa. After all, any data can be transmitted over DIDComm; doesn't that turn it into a message? And any message can be saved; doesn't that make it data?

    While it is true that messages and data are highly related, some semantic differences matter:

    Some examples:

    The line between these two concepts may not be perfectly crisp in all cases, and that is okay. It is clear enough, most of the time, to provide context for the central question of this RFC, which is:

    How do we send data along with messages?

    "},{"location":"aip2/0017-attachments/#3-ways","title":"3 Ways","text":"

    Data can be \"attached\" to DIDComm messages in 3 ways:

    1. Inlining
    2. Embedding
    3. Appending
    "},{"location":"aip2/0017-attachments/#inlining","title":"Inlining","text":"

    In inlining, data is directly assigned as the value paired with a JSON key in a DIDComm message. For example, a message about arranging a rendezvous may inline data about a location:

    This inlined data is in Google Maps pinning format. It has a meaning at rest, outside the message that conveys it, and the versioning of its structure may evolve independently of the versioning of the rendezvous protocol.

    Only JSON data can be inlined, since any other data format would break JSON format rules.

    "},{"location":"aip2/0017-attachments/#embedding","title":"Embedding","text":"

    In embedding, a JSON data structure called an attachment descriptor is assigned as the value paired with a JSON key in a DIDComm message. (Or, an array of attachment descriptors could be assigned.) By convention, the key name for such attachment fields ends with ~attach, making it a field-level decorator that can share common handling logic in agent code. The attachment descriptor structure describes the MIME type and other properties of the data, in much the same way that MIME headers and body describe and contain an attachment in an email message. Given an imaginary protocol that photographers could use to share their favorite photo with friends, the embedded data might manifest like this:

    Embedding is a less direct mechanism than inlining, because the data is no longer readable by a human inspecting the message; it is base64url-encoded instead. A benefit of this approach is that the data can be any MIME type instead of just JSON, and that the data comes with useful metadata that can facilitate saving it as a separate file.

    "},{"location":"aip2/0017-attachments/#appending","title":"Appending","text":"

    Appending is accomplished using the ~attach decorator, which can be added to any message to include arbitrary data. The decorator is an array of attachment descriptor structures (the same structure used for embedding). For example, a message that conveys evidence found at a crime scene might include the following decorator:

    "},{"location":"aip2/0017-attachments/#choosing-the-right-approach","title":"Choosing the right approach","text":"

    These methods for attaching sit along a continuum that is somewhat like the continuum between strong, statically typed languages versus dynamic, duck-typed languages in programming. The more strongly typed the attachments are, the more strongly bound the attachments are to the protocol that conveys them. Each choice has advantages and disadvantages.

    Inlined data is strongly typed; the schema for its associated message must specify the name of the data field, plus what type of data it contains. Its format is always some kind of JSON--often JSON-LD with a @type and/or @context field to provide greater clarity and some independence of versioning. Simple and small data is the best fit for inlining. As mentioned earlier, the Connection Protocol inlines a DID Doc in its connection_request and connection_response messages.

    Embedded data is still associated with a known field in the message schema, but it can have a broader set of possible formats. A credential exchange protocol might embed a credential in the final message that does credential issuance.

    Appended attachments are the most flexible but also the hardest to run through semantically sophisticated processing. They do not require any specific declaration in the schema of a message, although they can be referenced in fields defined by the schema via their nickname (see below). A protocol that needs to pass an arbitrary collection of artifacts without strong knowledge of their semantics might find this helpful, as in the example mentioned above, where scheduling a venue causes various human-usable payloads to be delivered.

    "},{"location":"aip2/0017-attachments/#ids-for-attachments","title":"IDs for attachments","text":"

    The @id field within an attachment descriptor is used to refer unambiguously to an appended (or less ideally, embedded) attachment, and works like an HTML anchor. It is resolved relative to the root @id of the message and only has to be unique within a message. For example, imagine a fictional message type that's used to apply for an art scholarship, that requires photos of art demonstrating techniques A, B, and C. We could have 3 different attachment descriptors--but what if the same work of art demonstrates both technique A and technique B? We don't want to attach the same photo twice...

    What we can do is stipulate that the datatype of A_pic, B_pic, and C_pic is an attachment reference, and that the references will point to appended attachments. A fragment of the result might look like this:

    Another example of nickname use appeared in the first example of appended attachments above, where the notes field referred to the @ids of the various attachments.

    This indirection offers several benefits:

    We could use this same technique with embedded attachments (that is, assign a nickname to an embedded attachment, and refer to that nickname in another field where attached data could be embedded), but this is not considered best practice. The reason is that it requires a field in the schema to have two possible data types--one a string that's a nickname reference, and one an attachment descriptor. Generally, we like fields to have a single datatype in a schema.
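A sketch of resolving such references (the helper and sample field names like A_pic are illustrative, following the art-scholarship example above): a field holds the @id of an appended attachment, and the descriptor is looked up in the ~attach array.

```python
def resolve_attachment(message, ref):
    """Find the appended attachment descriptor whose @id matches ref (sketch)."""
    for descriptor in message.get("~attach", []):
        if descriptor.get("@id") == ref:
            return descriptor
    return None

msg = {
    "@id": "m1",
    "A_pic": "attachment-1",
    "B_pic": "attachment-1",   # the same artwork demonstrates techniques A and B
    "C_pic": "attachment-2",
    "~attach": [
        {"@id": "attachment-1", "mime-type": "image/jpeg"},
        {"@id": "attachment-2", "mime-type": "image/png"},
    ],
}
print(resolve_attachment(msg, msg["A_pic"])["mime-type"])  # image/jpeg
```

Because both A_pic and B_pic point at the same @id, the photo is attached only once.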

    "},{"location":"aip2/0017-attachments/#content-formats","title":"Content Formats","text":"

    There are multiple ways to include content in an attachment. Only one method should be used per attachment.

    "},{"location":"aip2/0017-attachments/#base64url","title":"base64url","text":"

    This content encoding is an obvious choice for any content other than JSON. You can embed content of any type using this method. Examples are plentiful throughout the document. Note that this encoding is always base64url encoding, not plain base64, and that padding is not required. Code that reads this encoding SHOULD tolerate the presence or absence of padding and base64 versus base64url encodings equally well, but code that writes this encoding SHOULD omit the padding to guarantee alignment with encoding rules in the JOSE (JW*) family of specs.
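A sketch of the read/write tolerance rule above, using Python's standard base64 module (the helper names are illustrative): the reader accepts base64 or base64url with or without padding; the writer always emits unpadded base64url.

```python
import base64

def decode_b64_any(text):
    """Tolerant reader: accept base64 or base64url, padded or not (sketch)."""
    text = text.replace("+", "-").replace("/", "_")  # normalize to url-safe alphabet
    text += "=" * (-len(text) % 4)                   # restore any stripped padding
    return base64.urlsafe_b64decode(text)

def encode_b64url(raw):
    """Strict writer: unpadded base64url, per JOSE conventions (sketch)."""
    return base64.urlsafe_b64encode(raw).rstrip(b"=").decode()

print(encode_b64url(b"hello world"))       # aGVsbG8gd29ybGQ
print(decode_b64_any("aGVsbG8gd29ybGQ="))  # b'hello world'
```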

    "},{"location":"aip2/0017-attachments/#json","title":"json","text":"

    If you are embedding an attachment that is JSON, you can embed it directly in JSON format to make access easier, by replacing data.base64 with data.json, where the value assigned to data.json is the attached content:

    This is an overly trivial example of GeoJSON, but hopefully it illustrates the technique. In cases where there is no mime type to declare, it may be helpful to use JSON-LD's @type construct to clarify the specific flavor of JSON in the embedded attachment.

    "},{"location":"aip2/0017-attachments/#links","title":"links","text":"

    All examples discussed so far include an attachment by value--that is, the attachment's bytes are directly inlined in the message in some way. This is a useful mode of data delivery, but it is not the only mode.

    Another way that attachment data can be incorporated is by reference. For example, you can link to the content on a web server by replacing data.base64 or data.json with data.links in an attachment descriptor:

    When you provide such a link, you are creating a logical association between the message and an attachment that can be fetched separately. This makes it possible to send brief descriptors of attachments and to make the downloading of the heavy content optional (or parallelizable) for the recipient.

    The links field is plural (an array) to allow multiple locations to be offered for the same content. This allows an agent to fetch attachments using whichever mechanism(s) are best suited to its individual needs and capabilities.

    "},{"location":"aip2/0017-attachments/#supported-uri-types","title":"Supported URI Types","text":"

    The set of supported URI types in an attachment link is limited to:

    Additional URI types may be added via updates to this RFC.

    If an attachment link with an unsupported URI is received, the agent SHOULD respond with a Problem Report indicating the problem.

    An ecosystem (coordinating set of agents working in a specific business area) may agree to support other URI types within that ecosystem. As such, implementing a mechanism to easily add support for other attachment link URI types might be useful, but is not required.

    "},{"location":"aip2/0017-attachments/#signing-attachments","title":"Signing Attachments","text":"

    In some cases it may be desirable to sign an attachment in addition to or instead of signing the message as a whole. Consider a home-buying protocol; the home inspection needs to be signed even when it is removed from a messaging flow. Attachments may also be signed by a party separate from the sender of the message, or using a different signing key when the sender is performing key rotation.

    Embedded and appended attachments support signatures by the addition of a data.jws field containing a signature in JWS (RFC 7515) format with Detached Content. The payload of the JWS is the raw bytes of the attachment, appropriately base64url-encoded per JWS rules. If these raw bytes are incorporated by value in the DIDComm message, they are already base64url-encoded in data.base64 and are thus directly substitutable for the suppressed data.jws.payload field; if they are externally referenced, then the bytes must be fetched via the URI in data.links and base64url-encoded before the JWS can be fully reconstituted. Signatures over inlined JSON attachments are not currently defined as this depends upon a canonical serialization for the data.
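To sketch the reconstitution described above: the JWS signing input is the protected header and the unpadded base64url payload joined by a dot, and the payload can be taken directly from data.base64 with padding stripped (the helper name is ours; actual EdDSA verification is left to a crypto library):

```python
# Sketch only: rebuild the detached-payload signing input per JWS rules.
def jws_signing_input(protected_b64: str, payload_b64: str) -> bytes:
    # The payload must appear unpadded in the signing input.
    return f"{protected_b64}.{payload_b64.rstrip('=')}".encode("ascii")

# data.base64 substitutes directly for the suppressed payload field.
signing_input = jws_signing_input("eyJhbGciOiJFZERTQSJ9", "aGVsbG8=")
```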

    Sample JWS-signed attachment:

    {\n  \"@type\": \"https://didcomm.org/xhomebuy/1.0/home_insp\",\n  \"inspection_date\": \"2020-03-25\",\n  \"inspection_address\": \"123 Villa de Las Fuentes, Toledo, Spain\",\n  \"comment\": \"Here's that report you asked for.\",\n  \"report~attach\": {\n    \"mime-type\": \"application/pdf\",\n    \"filename\": \"Garcia-inspection-March-25.pdf\",\n    \"data\": {\n      \"base64\": \"eyJ0eXAiOiJKV1QiLA0KICJhbGciOiJIUzI1NiJ... (bytes omitted to shorten)\",\n      \"jws\": {\n        // payload: ...,  <-- omitted: refer to base64 content when validating\n        \"header\": {\n          \"kid\": \"did:key:z6MkmjY8GnV5i9YTDtPETC2uUAW6ejw3nk5mXF5yci5ab7th\"\n        },\n        \"protected\": \"eyJhbGciOiJFZERTQSIsImlhdCI6MTU4Mzg4... (bytes omitted)\",\n        \"signature\": \"3dZWsuru7QAVFUCtTd0s7uc1peYEijx4eyt5... (bytes omitted)\"\n      }\n    }\n  }\n}\n

    Here, the JWS structure inlines a public key value in did:key format within the unprotected header's kid field. It may also use a DID URL to reference a key within a resolvable DIDDoc. Supported DID URLs should specify a timestamp and/or version for the containing document.

    The JWS protected header consists of at least the following parameter indicating an Edwards curve digital signature:

    {\n  \"alg\": \"EdDSA\"\n}\n

    Additional protected and unprotected header parameters may be included in the JWS and must be ignored by implementations if not specifically supported. Any registered header parameters defined by the JWS RFC must be used according to the specification if present.

    Multiple signatures may be included using the JWS General Serialization syntax. When a single signature is present, the Flattened Serialization syntax should be preferred. Because each JWS contains an unprotected header with the signing key information, the JWS Compact Serialization cannot be supported.
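Since a receiver may see either serialization, it can normalize single-signature flattened JWS objects into the General Serialization's signatures array before processing. A sketch (the helper name and sample values are ours; the payload stays detached):

```python
# Normalize a flattened JWS (one signature) into General Serialization.
def to_general(jws: dict) -> dict:
    if "signatures" in jws:
        return jws  # already General Serialization
    keys = ("protected", "header", "signature")
    return {"signatures": [{k: jws[k] for k in keys if k in jws}]}

flat = {"protected": "eyJhbGciOiJFZERTQSJ9",
        "header": {"kid": "did:key:z6MkExampleOnly"},
        "signature": "c2lnLWJ5dGVz"}
general = to_general(flat)
```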

    "},{"location":"aip2/0017-attachments/#size-considerations","title":"Size Considerations","text":"

    DIDComm messages should be small, as a general rule. Just as it's a bad idea to send email messages with multi-GB attachments, it would be bad to send DIDComm messages with huge amounts of data inside them. Remember, a message is about advancing a protocol; usually that can be done without gigabytes or even megabytes of JSON fields. Remember as well that DIDComm messages may be sent over channels having size constraints tied to the transport--an HTTP POST or Bluetooth or NFC or AMQP payload of more than a few MB may be problematic.

    Size pressures in messaging are likely to come from attached data. A good rule of thumb might be to not make DIDComm messages bigger than email or MMS messages--whenever more data needs to be attached, use the inclusion-by-reference technique to allow the data to be fetched separately.

    "},{"location":"aip2/0017-attachments/#security-implications","title":"Security Implications","text":"

    Attachments are a notorious vector for malware and mischief with email. For this reason, agents that support attachments MUST perform input validation on attachments, and MUST NOT invoke risky actions on attachments until such validation has been performed. The status of input validation with respect to attachment data MUST be reflected in the Message Trust Context associated with the data's message.

    "},{"location":"aip2/0017-attachments/#privacy-implications","title":"Privacy Implications","text":"

    When attachments are inlined, they enjoy the same security and transmission guarantees as all agent communication. However, given the right context, a large inlined attachment may be recognizable by its size, even if it is carefully encrypted.

    If attachment content is fetched from an external source, then new complications arise. The security guarantees may change. Data streamed from a CDN may be observable in flight. URIs may be correlating. Content may not be immutable or tamper-resistant.

    However, these issues are not necessarily a problem. If a DIDComm message wants to attach a 4 GB ISO file of a linux distribution, it may be perfectly fine to do so in the clear. Downloading it is unlikely to introduce strong correlation, encryption is unnecessary, and the torrent itself prevents malicious modification.

    Code that handles attachments will need to use wise policy to decide whether attachments are presented in a form that meets its needs.

    "},{"location":"aip2/0017-attachments/#reference","title":"Reference","text":""},{"location":"aip2/0017-attachments/#attachment-descriptor-structure","title":"Attachment Descriptor structure","text":""},{"location":"aip2/0017-attachments/#drawbacks","title":"Drawbacks","text":"

    By providing 3 different choices, we impose additional complexity on agents that will receive messages. They have to handle attachments in 3 different modes.

    "},{"location":"aip2/0017-attachments/#rationale-and-alternatives","title":"Rationale and alternatives","text":"

    Originally, we only proposed the most flexible method of attaching--appending. However, feedback from the community suggested that stronger binding to schema was desirable. Inlining was independently invented, and is suggested by JSON-LD anyway. Embedding without appending eliminates some valuable features such as unnamed and undeclared ad-hoc attachments. So we ended up wanting to support all 3 modes.

    "},{"location":"aip2/0017-attachments/#prior-art","title":"Prior art","text":"

    Multipart MIME (see RFCs 822, 1341, and 2045) defines a mechanism somewhat like this. Since we are using JSON instead of email messages as the core model, we can't use these mechanisms directly. However, they are an inspiration for what we are showing here.

    "},{"location":"aip2/0017-attachments/#unresolved-questions","title":"Unresolved questions","text":""},{"location":"aip2/0017-attachments/#implementations","title":"Implementations","text":"

    The following lists the implementations (if any) of this RFC. Please do a pull request to add your implementation. If the implementation is open source, include a link to the repo or to the implementation within the repo. Please be consistent in the \"Name\" field so that a mechanical processing of the RFCs can generate a list of all RFCs supported by an Aries implementation.

    Name / Link Implementation Notes Aries Cloud Agent - Python in credential exchange Streetcred.id Commercial mobile and web app built using Aries Framework - .NET"},{"location":"aip2/0019-encryption-envelope/","title":"Aries RFC 0019: Encryption Envelope","text":""},{"location":"aip2/0019-encryption-envelope/#summary","title":"Summary","text":"

    There are two layers of messages that combine to enable interoperable self-sovereign agent-to-agent communication. At the highest level are DIDComm Plaintext Messages - messages sent between identities to accomplish some shared goal (e.g., establishing a connection, issuing a verifiable credential, sharing a chat). DIDComm Plaintext Messages are delivered via the second, lower layer of messaging - DIDComm Encrypted Envelopes. A DIDComm Encrypted Envelope is a wrapper (envelope) around a plaintext message to permit secure sending and routing. A plaintext message going from its sender to its receiver passes through many agents, and an encryption envelope is used for each hop of the journey.

    This RFC describes the DIDComm Encrypted Envelope format and the pack() and unpack() functions that implement this format.

    "},{"location":"aip2/0019-encryption-envelope/#motivation","title":"Motivation","text":"

    Encryption envelopes use a standard format built on JSON Web Encryption - RFC 7516. This format is not captive to Aries; it requires no special Aries worldview or Aries dependencies to implement. Rather, it is a general-purpose solution to the question of how to encrypt, decrypt, and route messages as they pass over any transport(s). By documenting the format here, we hope to provide a point of interoperability for developers of agents inside and outside the Aries ecosystem.

    We also document how Aries implements its support for the DIDComm Encrypted Envelope format through the pack() and unpack() functions. For developers of Aries, this is a sort of design doc; for those who want to implement the format in other tech stacks, it may be a useful reference.

    "},{"location":"aip2/0019-encryption-envelope/#tutorial","title":"Tutorial","text":""},{"location":"aip2/0019-encryption-envelope/#assumptions","title":"Assumptions","text":"

    We assume that each sending agent knows:

    The assumptions can be made because either the message is being sent to an agent within the sending agent's domain and so the sender knows the internal configuration of agents, or the message is being sent outside the sending agent's domain and interoperability requirements are in force to define the sending agent's behaviour.

    "},{"location":"aip2/0019-encryption-envelope/#example-scenario","title":"Example Scenario","text":"

    The example of Alice and Bob's sovereign domains is used for illustrative purposes in defining this RFC.

    In the diagram above:

    For the purposes of this discussion we are defining the Encryption Envelope agent message flow to be:

    1 \u2192 2 \u2192 8 \u2192 9 \u2192 3 \u2192 4

    However, that flow is just one of several that could match this configuration. What we know for sure is that:

    "},{"location":"aip2/0019-encryption-envelope/#encrypted-envelopes","title":"Encrypted Envelopes","text":"

    An encrypted envelope is used to transport any plaintext message from one agent directly to another. In our example message flow above, there are five encrypted envelopes sent, one for each hop in the flow. The process to send an encrypted envelope consists of the following steps:

    This is repeated with each hop, but the encrypted envelopes are nested, such that the plaintext is never visible until it reaches its final recipient.

    "},{"location":"aip2/0019-encryption-envelope/#implementation","title":"Implementation","text":"

    We will describe the pack and unpack algorithms, and their output, in terms of Aries' initial implementation, which may evolve over time. Other implementations could be built, but they would need to emit and consume similar inputs and outputs.

    The data structures emitted and consumed by these algorithms are described in a formal schema.

    "},{"location":"aip2/0019-encryption-envelope/#authcrypt-mode-vs-anoncrypt-mode","title":"Authcrypt mode vs. Anoncrypt mode","text":"

    When packing and unpacking are done in a way that the sender is anonymous, we say that we are in anoncrypt mode. When the sender is revealed, we are in authcrypt mode. Authcrypt mode reveals the sender to the recipient only; it is not the same as a non-repudiable signature. See the RFC about non-repudiable signatures, and this discussion about the theory of non-repudiation.

    "},{"location":"aip2/0019-encryption-envelope/#pack-message","title":"Pack Message","text":""},{"location":"aip2/0019-encryption-envelope/#pack_message-interface","title":"pack_message() interface","text":"

    packed_message = pack_message(wallet_handle, message, receiver_verkeys, sender_verkey)

    "},{"location":"aip2/0019-encryption-envelope/#pack_message-params","title":"pack_message() Params:","text":""},{"location":"aip2/0019-encryption-envelope/#pack_message-return-value-authcrypt-mode","title":"pack_message() return value (Authcrypt mode)","text":"

    This is an example of an output message encrypted for two verkeys using Authcrypt.

    {\n    \"protected\": \"eyJlbmMiOiJ4Y2hhY2hhMjBwb2x5MTMwNV9pZXRmIiwidHlwIjoiSldNLzEuMCIsImFsZyI6IkF1dGhjcnlwdCIsInJlY2lwaWVudHMiOlt7ImVuY3J5cHRlZF9rZXkiOiJMNVhEaEgxNVBtX3ZIeFNlcmFZOGVPVEc2UmZjRTJOUTNFVGVWQy03RWlEWnl6cFJKZDhGVzBhNnFlNEpmdUF6IiwiaGVhZGVyIjp7ImtpZCI6IkdKMVN6b1d6YXZRWWZOTDlYa2FKZHJRZWpmenRONFhxZHNpVjRjdDNMWEtMIiwiaXYiOiJhOEltaW5zdFhIaTU0X0otSmU1SVdsT2NOZ1N3RDlUQiIsInNlbmRlciI6ImZ0aW13aWlZUkc3clJRYlhnSjEzQzVhVEVRSXJzV0RJX2JzeERxaVdiVGxWU0tQbXc2NDE4dnozSG1NbGVsTThBdVNpS2xhTENtUkRJNHNERlNnWkljQVZYbzEzNFY4bzhsRm9WMUJkREk3ZmRLT1p6ckticUNpeEtKaz0ifX0seyJlbmNyeXB0ZWRfa2V5IjoiZUFNaUQ2R0RtT3R6UkVoSS1UVjA1X1JoaXBweThqd09BdTVELTJJZFZPSmdJOC1ON1FOU3VsWXlDb1dpRTE2WSIsImhlYWRlciI6eyJraWQiOiJIS1RBaVlNOGNFMmtLQzlLYU5NWkxZajRHUzh1V0NZTUJ4UDJpMVk5Mnp1bSIsIml2IjoiRDR0TnRIZDJyczY1RUdfQTRHQi1vMC05QmdMeERNZkgiLCJzZW5kZXIiOiJzSjdwaXU0VUR1TF9vMnBYYi1KX0pBcHhzYUZyeGlUbWdwWmpsdFdqWUZUVWlyNGI4TVdtRGR0enAwT25UZUhMSzltRnJoSDRHVkExd1Z0bm9rVUtvZ0NkTldIc2NhclFzY1FDUlBaREtyVzZib2Z0d0g4X0VZR1RMMFE9In19XX0=\",\n    \"iv\": \"ZqOrBZiA-RdFMhy2\",\n    \"ciphertext\": \"K7KxkeYGtQpbi-gNuLObS8w724mIDP7IyGV_aN5AscnGumFd-SvBhW2WRIcOyHQmYa-wJX0MSGOJgc8FYw5UOQgtPAIMbSwVgq-8rF2hIniZMgdQBKxT_jGZS06kSHDy9UEYcDOswtoLgLp8YPU7HmScKHSpwYY3vPZQzgSS_n7Oa3o_jYiRKZF0Gemamue0e2iJ9xQIOPodsxLXxkPrvvdEIM0fJFrpbeuiKpMk\",\n    \"tag\": \"kAuPl8mwb0FFVyip1omEhQ==\"\n}\n

    The base64URL encoded protected decodes to this:

    {\n    \"enc\": \"xchacha20poly1305_ietf\",\n    \"typ\": \"JWM/1.0\",\n    \"alg\": \"Authcrypt\",\n    \"recipients\": [\n        {\n            \"encrypted_key\": \"L5XDhH15Pm_vHxSeraY8eOTG6RfcE2NQ3ETeVC-7EiDZyzpRJd8FW0a6qe4JfuAz\",\n            \"header\": {\n                \"kid\": \"GJ1SzoWzavQYfNL9XkaJdrQejfztN4XqdsiV4ct3LXKL\",\n                \"iv\": \"a8IminstXHi54_J-Je5IWlOcNgSwD9TB\",\n                \"sender\": \"ftimwiiYRG7rRQbXgJ13C5aTEQIrsWDI_bsxDqiWbTlVSKPmw6418vz3HmMlelM8AuSiKlaLCmRDI4sDFSgZIcAVXo134V8o8lFoV1BdDI7fdKOZzrKbqCixKJk=\"\n            }\n        },\n        {\n            \"encrypted_key\": \"eAMiD6GDmOtzREhI-TV05_Rhippy8jwOAu5D-2IdVOJgI8-N7QNSulYyCoWiE16Y\",\n            \"header\": {\n                \"kid\": \"HKTAiYM8cE2kKC9KaNMZLYj4GS8uWCYMBxP2i1Y92zum\",\n                \"iv\": \"D4tNtHd2rs65EG_A4GB-o0-9BgLxDMfH\",\n                \"sender\": \"sJ7piu4UDuL_o2pXb-J_JApxsaFrxiTmgpZjltWjYFTUir4b8MWmDdtzp0OnTeHLK9mFrhH4GVA1wVtnokUKogCdNWHscarQscQCRPZDKrW6boftwH8_EYGTL0Q=\"\n            }\n        }\n    ]\n}\n

    "},{"location":"aip2/0019-encryption-envelope/#pack-output-format-authcrypt-mode","title":"pack output format (Authcrypt mode)","text":"
    {\n        \"protected\": \"b64URLencoded({\n            \"enc\": \"xchacha20poly1305_ietf\",\n            \"typ\": \"JWM/1.0\",\n            \"alg\": \"Authcrypt\",\n            \"recipients\": [\n                {\n                    \"encrypted_key\": base64URLencode(libsodium.crypto_box(my_key, their_vk, cek, cek_iv)),\n                    \"header\": {\n                        \"kid\": \"base58encode(recipient_verkey)\",\n                        \"sender\": base64URLencode(libsodium.crypto_box_seal(their_vk, base58encode(sender_vk))),\n                        \"iv\": base64URLencode(cek_iv)\n                    }\n                },\n            ],\n        })\",\n        \"iv\": <b64URLencode(iv)>,\n        \"ciphertext\": b64URLencode(encrypt_detached({'@type'...}, protected_value_encoded, iv, cek)),\n        \"tag\": <b64URLencode(tag)>\n    }\n
    "},{"location":"aip2/0019-encryption-envelope/#authcrypt-pack-algorithm","title":"Authcrypt pack algorithm","text":"
    1. generate a content encryption key (symmetrical encryption key)
    2. encrypt the CEK for each recipient's public key using Authcrypt (steps below)
      1. set encrypted_key value to base64URLencode(libsodium.crypto_box(my_key, their_vk, cek, cek_iv))
        • Note in this step we're encrypting the cek, so it can be decrypted by the recipient
      2. set sender value to base64URLencode(libsodium.crypto_box_seal(their_vk, sender_vk_string))
        • Note in this step we're encrypting the sender_verkey to protect sender anonymity
      3. base64URLencode(cek_iv) and set to iv value in the header
        • Note the cek_iv in the header is used for the encrypted_key, whereas iv is for the ciphertext
    3. base64URLencode the protected value
    4. encrypt the message using libsodium.crypto_aead_chacha20poly1305_ietf_encrypt_detached(message, protected_value_encoded, iv, cek); this is the ciphertext.
    5. base64URLencode the iv, ciphertext, and tag then serialize the format into the output format listed above.
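Steps 2-3 above can be sketched as the assembly and encoding of the protected value. Dummy bytes stand in for the libsodium outputs named in the algorithm, so this shows only the serialization, not the cryptography:

```python
import base64
import json

def b64url(data: bytes) -> str:
    """Unpadded base64url, as used throughout the envelope."""
    return base64.urlsafe_b64encode(data).decode("ascii").rstrip("=")

# Dummy stand-ins for what crypto_box / crypto_box_seal would produce.
recipient = {
    "encrypted_key": b64url(b"cek encrypted to recipient"),
    "header": {
        "kid": "GJ1SzoWzavQYfNL9XkaJdrQejfztN4XqdsiV4ct3LXKL",
        "sender": b64url(b"sealed sender verkey"),
        "iv": b64url(b"cek iv"),
    },
}

# Step 3: assemble and base64URLencode the protected value.
protected = b64url(json.dumps({
    "enc": "xchacha20poly1305_ietf",
    "typ": "JWM/1.0",
    "alg": "Authcrypt",
    "recipients": [recipient],
}).encode("utf-8"))
```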

    For a reference implementation, see https://github.com/hyperledger/indy-sdk/blob/master/libindy/src/commands/crypto.rs

    "},{"location":"aip2/0019-encryption-envelope/#pack_message-return-value-anoncrypt-mode","title":"pack_message() return value (Anoncrypt mode)","text":"

    This is an example of an output message encrypted for two verkeys using Anoncrypt.

    {\n    \"protected\": \"eyJlbmMiOiJ4Y2hhY2hhMjBwb2x5MTMwNV9pZXRmIiwidHlwIjoiSldNLzEuMCIsImFsZyI6IkFub25jcnlwdCIsInJlY2lwaWVudHMiOlt7ImVuY3J5cHRlZF9rZXkiOiJYQ044VjU3UTF0Z2F1TFcxemdqMVdRWlEwV0RWMFF3eUVaRk5Od0Y2RG1pSTQ5Q0s1czU4ZHNWMGRfTlpLLVNNTnFlMGlGWGdYRnZIcG9jOGt1VmlTTV9LNWxycGJNU3RqN0NSUHNrdmJTOD0iLCJoZWFkZXIiOnsia2lkIjoiR0oxU3pvV3phdlFZZk5MOVhrYUpkclFlamZ6dE40WHFkc2lWNGN0M0xYS0wifX0seyJlbmNyeXB0ZWRfa2V5IjoiaG5PZUwwWTl4T3ZjeTVvRmd0ZDFSVm05ZDczLTB1R1dOSkN0RzRsS3N3dlljV3pTbkRsaGJidmppSFVDWDVtTU5ZdWxpbGdDTUZRdmt2clJEbkpJM0U2WmpPMXFSWnVDUXY0eVQtdzZvaUE9IiwiaGVhZGVyIjp7ImtpZCI6IjJHWG11Q04ySkN4U3FNUlZmdEJITHhWSktTTDViWHl6TThEc1B6R3FRb05qIn19XX0=\",\n    \"iv\": \"M1GneQLepxfDbios\",\n    \"ciphertext\": \"iOLSKIxqn_kCZ7Xo7iKQ9rjM4DYqWIM16_vUeb1XDsmFTKjmvjR0u2mWFA48ovX5yVtUd9YKx86rDVDLs1xgz91Q4VLt9dHMOfzqv5DwmAFbbc9Q5wHhFwBvutUx5-lDZJFzoMQHlSAGFSBrvuApDXXt8fs96IJv3PsL145Qt27WLu05nxhkzUZz8lXfERHwAC8FYAjfvN8Fy2UwXTVdHqAOyI5fdKqfvykGs6fV\",\n    \"tag\": \"gL-lfmD-MnNj9Pr6TfzgLA==\"\n}\n

    The protected data decodes to this:

    {\n    \"enc\": \"xchacha20poly1305_ietf\",\n    \"typ\": \"JWM/1.0\",\n    \"alg\": \"Anoncrypt\",\n    \"recipients\": [\n        {\n            \"encrypted_key\": \"XCN8V57Q1tgauLW1zgj1WQZQ0WDV0QwyEZFNNwF6DmiI49CK5s58dsV0d_NZK-SMNqe0iFXgXFvHpoc8kuViSM_K5lrpbMStj7CRPskvbS8=\",\n            \"header\": {\n                \"kid\": \"GJ1SzoWzavQYfNL9XkaJdrQejfztN4XqdsiV4ct3LXKL\"\n            }\n        },\n        {\n            \"encrypted_key\": \"hnOeL0Y9xOvcy5oFgtd1RVm9d73-0uGWNJCtG4lKswvYcWzSnDlhbbvjiHUCX5mMNYulilgCMFQvkvrRDnJI3E6ZjO1qRZuCQv4yT-w6oiA=\",\n            \"header\": {\n                \"kid\": \"2GXmuCN2JCxSqMRVftBHLxVJKSL5bXyzM8DsPzGqQoNj\"\n            }\n        }\n    ]\n}\n
    "},{"location":"aip2/0019-encryption-envelope/#pack-output-format-anoncrypt-mode","title":"pack output format (Anoncrypt mode)","text":"
    {\n         \"protected\": \"b64URLencoded({\n            \"enc\": \"xchacha20poly1305_ietf\",\n            \"typ\": \"JWM/1.0\",\n            \"alg\": \"Anoncrypt\",\n            \"recipients\": [\n                {\n                    \"encrypted_key\": base64URLencode(libsodium.crypto_box_seal(their_vk, cek)),\n                    \"header\": {\n                        \"kid\": base58encode(recipient_verkey)\n                    }\n                },\n            ],\n         })\",\n         \"iv\": b64URLencode(iv),\n         \"ciphertext\": b64URLencode(encrypt_detached({'@type'...}, protected_value_encoded, iv, cek)),\n         \"tag\": b64URLencode(tag)\n    }\n
    "},{"location":"aip2/0019-encryption-envelope/#anoncrypt-pack-algorithm","title":"Anoncrypt pack algorithm","text":"
    1. generate a content encryption key (symmetrical encryption key)
    2. encrypt the CEK for each recipient's public key using Anoncrypt (steps below)
      1. set encrypted_key value to base64URLencode(libsodium.crypto_box_seal(their_vk, cek))
        • Note in this step we're encrypting the cek, so it can be decrypted by the recipient
    3. base64URLencode the protected value
    4. encrypt the message using libsodium.crypto_aead_chacha20poly1305_ietf_encrypt_detached(message, protected_value_encoded, iv, cek); this is the ciphertext.
    5. base64URLencode the iv, ciphertext, and tag then serialize the format into the output format listed above.

    For a reference implementation, see https://github.com/hyperledger/indy-sdk/blob/master/libindy/src/commands/crypto.rs

    "},{"location":"aip2/0019-encryption-envelope/#unpack-message","title":"Unpack Message","text":""},{"location":"aip2/0019-encryption-envelope/#unpack_message-interface","title":"unpack_message() interface","text":"

    unpacked_message = unpack_message(wallet_handle, jwe)

    "},{"location":"aip2/0019-encryption-envelope/#unpack_message-params","title":"unpack_message() Params","text":""},{"location":"aip2/0019-encryption-envelope/#unpack-algorithm","title":"Unpack Algorithm","text":"
    1. serialize the data so it can be used
      • For example, in rust-lang this has to be serialized as a struct.
    2. Lookup the kid for each recipient in the wallet to see if the wallet possesses a private key associated with the public key listed
    3. Check if a sender field is used.
      • If a sender is included use auth_decrypt to decrypt the encrypted_key by doing the following:
        1. decrypt sender verkey using libsodium.crypto_box_seal_open(my_private_key, base64URLdecode(sender))
        2. decrypt cek using libsodium.crypto_box_open(my_private_key, sender_verkey, encrypted_key, cek_iv)
        3. decrypt ciphertext using libsodium.crypto_aead_chacha20poly1305_ietf_open_detached(base64URLdecode(ciphertext_bytes), base64URLdecode(protected_data_as_bytes), base64URLdecode(nonce), cek)
        4. return message, recipient_verkey and sender_verkey following the authcrypt format listed below
      • If a sender is NOT included use anon_decrypt to decrypt the encrypted_key by doing the following:
        1. decrypt encrypted_key using libsodium.crypto_box_seal_open(my_private_key, encrypted_key)
        2. decrypt ciphertext using libsodium.crypto_aead_chacha20poly1305_ietf_open_detached(base64URLdecode(ciphertext_bytes), base64URLdecode(protected_data_as_bytes), base64URLdecode(nonce), cek)
        3. return message and recipient_verkey following the anoncrypt format listed below

    NOTE: In the unpack algorithm, the base64url decode implementation used MUST correctly decode padded and unpadded base64URL encoded data.
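The branching in steps 1-3 can be sketched as follows, using a padding-tolerant decoder and checking each recipient header for a sender field (the stub envelope below carries no real ciphertext; it exists only to show the branch):

```python
import base64
import json

def b64url_decode(s: str) -> bytes:
    """Tolerate unpadded input, as the NOTE above requires."""
    return base64.urlsafe_b64decode(s + "=" * (-len(s) % 4))

def detect_mode(envelope: dict) -> str:
    """Authcrypt iff the recipient header carries a sealed sender field."""
    protected = json.loads(b64url_decode(envelope["protected"]))
    header = protected["recipients"][0]["header"]
    return "authcrypt" if "sender" in header else "anoncrypt"

# Stub envelope: protected header only, no sender field in the header.
stub = {
    "protected": base64.urlsafe_b64encode(
        json.dumps({"recipients": [{"header": {"kid": "abc"}}]}).encode()
    ).decode("ascii").rstrip("="),
}
mode = detect_mode(stub)
```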

    For a reference unpack implementation, see https://github.com/hyperledger/indy-sdk/blob/master/libindy/src/commands/crypto.rs

    "},{"location":"aip2/0019-encryption-envelope/#unpack_message-return-values-authcrypt-mode","title":"unpack_message() return values (authcrypt mode)","text":"
    {\n    \"message\": \"{ \\\"@id\\\": \\\"123456780\\\",\\\"@type\\\":\\\"https://didcomm.org/basicmessage/1.0/message\\\",\\\"sent_time\\\": \\\"2019-01-15 18:42:01Z\\\",\\\"content\\\": \\\"Your hovercraft is full of eels.\\\"}\",\n    \"recipient_verkey\": \"HKTAiYM8cE2kKC9KaNMZLYj4GS8uWCYMBxP2i1Y92zum\",\n    \"sender_verkey\": \"DWwLsbKCRAbYtfYnQNmzfKV7ofVhMBi6T4o3d2SCxVuX\"\n}\n
    "},{"location":"aip2/0019-encryption-envelope/#unpack_message-return-values-anoncrypt-mode","title":"unpack_message() return values (anoncrypt mode)","text":"
    {\n    \"message\": \"{ \\\"@id\\\": \\\"123456780\\\",\\\"@type\\\":\\\"https://didcomm.org/basicmessage/1.0/message\\\",\\\"sent_time\\\": \\\"2019-01-15 18:42:01Z\\\",\\\"content\\\": \\\"Your hovercraft is full of eels.\\\"}\",\n    \"recipient_verkey\": \"2GXmuCN2JCxSqMRVftBHLxVJKSL5bXyzM8DsPzGqQoNj\"\n}\n
    "},{"location":"aip2/0019-encryption-envelope/#additional-notes","title":"Additional Notes","text":""},{"location":"aip2/0019-encryption-envelope/#drawbacks","title":"Drawbacks","text":"

    The current implementation of pack() is Hyperledger Aries specific. It is based on common crypto libraries (NaCl), but the wrappers are not commonly used outside of Aries. Work is currently being done to find alignment on a cross-ecosystem interoperable protocol, but this hasn't been achieved yet. That work will hopefully bridge this gap.

    "},{"location":"aip2/0019-encryption-envelope/#rationale-and-alternatives","title":"Rationale and alternatives","text":"

    As the JWE standard currently stands, it does not follow this format. We're actively working with the lead writer of the JWE spec to find alignment and are hopeful the changes needed can be added.

    We've also looked at the Messaging Layer Security (MLS) specification. This specification shows promise for adoption later on as it matures. Additionally, because MLS does not hide metadata related to the sender (sender anonymity), we would need changes to the specification before we could adopt it.

    "},{"location":"aip2/0019-encryption-envelope/#prior-art","title":"Prior art","text":"

    The JWE family of encryption methods.

    "},{"location":"aip2/0019-encryption-envelope/#unresolved-questions","title":"Unresolved questions","text":""},{"location":"aip2/0019-encryption-envelope/#implementations","title":"Implementations","text":"

    The following lists the implementations (if any) of this RFC. Please do a pull request to add your implementation. If the implementation is open source, include a link to the repo or to the implementation within the repo. Please be consistent in the \"Name\" field so that a mechanical processing of the RFCs can generate a list of all RFCs supported by an Aries implementation.

    Name / Link Implementation Notes Indy Cloud Agent - Python Reference agent implementation contributed by Sovrin Foundation and Community Aries Framework - .NET .NET framework for building agents of all types Streetcred.id Commercial mobile and web app built using Aries Framework - .NET Aries Cloud Agent - Python Contributed by the government of British Columbia. Aries Static Agent - Python Useful for cron jobs and other simple, automated use cases. Aries Framework - Go For building agents, hubs and other DIDComm features in GoLang. Aries Protocol Test Suite"},{"location":"aip2/0019-encryption-envelope/schema/","title":"Schema","text":"

    This spec follows JSON Schema draft-07.

    {\n    \"id\": \"https://github.com/hyperledger/indy-agent/wiremessage.json\",\n    \"$schema\": \"http://json-schema.org/draft-07/schema#\",\n    \"title\": \"Json Web Message format\",\n    \"type\": \"object\",\n    \"required\": [\"ciphertext\", \"iv\", \"protected\", \"tag\"],\n    \"properties\": {\n        \"protected\": {\n            \"type\": \"object\",\n            \"description\": \"Additional authenticated message data base64URL encoded, so it can be verified by the recipient using the tag\",\n            \"required\": [\"enc\", \"typ\", \"alg\", \"recipients\"],\n            \"properties\": {\n                \"enc\": {\n                    \"type\": \"string\",\n                    \"enum\": [\"xchacha20poly1305_ietf\"],\n                    \"description\": \"The authenticated encryption algorithm used to encrypt the ciphertext\"\n                },\n                \"typ\": {\n                    \"type\": \"string\",\n                    \"description\": \"The message type. Ex: JWM/1.0\"\n                },\n                \"alg\": {\n                    \"type\": \"string\",\n                    \"enum\": [\"authcrypt\", \"anoncrypt\"]\n                },\n                \"recipients\": {\n                    \"type\": \"array\",\n                    \"description\": \"A list of the recipients who the message is encrypted for\",\n                    \"items\": {\n                        \"type\": \"object\",\n                        \"required\": [\"encrypted_key\", \"header\"],\n                        \"properties\": {\n                            \"encrypted_key\": {\n                                \"type\": \"string\",\n                                \"description\": \"The key used for encrypting the ciphertext. This is also referred to as a cek\"\n                            },\n                            \"header\": {\n                                \"type\": \"object\",\n                                \"required\": [\"kid\"],\n                                \"description\": \"The recipient to whom this message will be sent\",\n                                \"properties\": {\n                                    \"kid\": {\n                                        \"type\": \"string\",\n                                        \"description\": \"base58 encoded verkey of the recipient.\"\n                                    }\n                                }\n                            }\n                        }\n                    }\n                }\n            }\n        },\n        \"iv\": {\n            \"type\": \"string\",\n            \"description\": \"base64 URL encoded nonce used to encrypt ciphertext\"\n        },\n        \"ciphertext\": {\n            \"type\": \"string\",\n            \"description\": \"base64 URL encoded authenticated encrypted message\"\n        },\n        \"tag\": {\n            \"type\": \"string\",\n            \"description\": \"Integrity checksum/tag base64URL encoded to check ciphertext, protected, and iv\"\n        }\n    }\n}\n

    "},{"location":"aip2/0020-message-types/","title":"Aries RFC 0020: Message Types","text":""},{"location":"aip2/0020-message-types/#summary","title":"Summary","text":"

    Define structure of message type strings used in agent to agent communication, describe their resolution to documentation URIs, and offer guidelines for protocol specifications.

    "},{"location":"aip2/0020-message-types/#motivation","title":"Motivation","text":"

    A clear convention to follow for agent developers is necessary for interoperability and continued progress as a community.

    "},{"location":"aip2/0020-message-types/#tutorial","title":"Tutorial","text":"

    A \"Message Type\" is a required attribute of all communications sent between parties. The message type instructs the receiving agent how to interpret the content and what content to expect as part of a given message.

    Types are specified within a message using the @type attribute:

    {\n    \"@type\": \"<message type string>\",\n    // other attributes\n}\n

    Message types are URIs that may resolve to developer documentation for the message type, as described in Protocol URIs. We recommend that message type URIs be HTTP URLs.
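
    As an illustrative decomposition (the protocol and message names here are hypothetical), a message type URI combines a doc-uri, a protocol name, a protocol version, and a message type name:

    {\n    \"@type\": \"https://didcomm.org/exampleprotocol/1.0/examplemessage\"\n}\n

    Here https://didcomm.org/ is the doc-uri, exampleprotocol/1.0 names the protocol and its version, and examplemessage is the message type name.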

    "},{"location":"aip2/0020-message-types/#aries-core-message-namespace","title":"Aries Core Message Namespace","text":"

    https://didcomm.org/ is used to namespace protocols defined by the community as \"core protocols\" or protocols that agents should minimally support.

    The didcomm.org DNS entry is currently controlled by the Decentralized Identity Foundation (DIF) based on their role in standardizing the DIDComm Messaging specification.

    "},{"location":"aip2/0020-message-types/#protocols","title":"Protocols","text":"

    Protocols provide a logical grouping for message types. These protocols, along with each type belonging to that protocol, are to be defined in future RFCs or through means appropriate to subprojects.

    "},{"location":"aip2/0020-message-types/#protocol-versioning","title":"Protocol Versioning","text":"

    Version numbering should essentially follow Semantic Versioning 2.0.0, excluding patch version number. To summarize, a change in the major protocol version number indicates a breaking change while the minor protocol version number indicates non-breaking additions.
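
    For example (using a hypothetical exampleprotocol), a non-breaking addition such as a new optional attribute would move the protocol from version 1.0 to 1.1, while a breaking change such as removing or renaming a required attribute would move it to 2.0:

    \"@type\": \"https://didcomm.org/exampleprotocol/1.0/examplemessage\"\n\"@type\": \"https://didcomm.org/exampleprotocol/1.1/examplemessage\"\n\"@type\": \"https://didcomm.org/exampleprotocol/2.0/examplemessage\"\n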

    "},{"location":"aip2/0020-message-types/#message-type-design-guidelines","title":"Message Type Design Guidelines","text":"

    These guidelines are guidelines on purpose. There will be situations where a good design will have to choose between conflicting points, or ignore all of them. The goal should always be clear and good design.

    "},{"location":"aip2/0020-message-types/#respect-reserved-attribute-names","title":"Respect Reserved Attribute Names","text":"

    Reserved attributes are prefixed with an @ sign, such as @type. Don't use this prefix for an attribute, even if use of that specific attribute is undefined.

    "},{"location":"aip2/0020-message-types/#avoid-ambiguous-attribute-names","title":"Avoid ambiguous attribute names","text":"

    Data, id, and package are often terrible names. Adjust the name to enhance meaning. For example, use message_id instead of id.

    "},{"location":"aip2/0020-message-types/#avoid-names-with-special-characters","title":"Avoid names with special characters","text":"

    Technically, attribute names can be any valid json key (except prefixed with @, as mentioned above). Practically, you should avoid using special characters, including those that need to be escaped. Underscores and dashes [_,-] are totally acceptable, but you should avoid quotation marks, punctuation, and other symbols.

    "},{"location":"aip2/0020-message-types/#use-attributes-consistently-within-a-protocol","title":"Use attributes consistently within a protocol","text":"

    Be consistent with attribute names between the different types within a protocol. Only use the same attribute name for the same data. If the attribute values are similar, but not exactly the same, adjust the name to indicate the difference.

    "},{"location":"aip2/0020-message-types/#nest-attributes-only-when-useful","title":"Nest Attributes only when useful","text":"

    Attributes do not need to be nested under a top level attribute, but can be to organize related attributes. Nesting all message attributes under one top level attribute is usually not a good idea.

    "},{"location":"aip2/0020-message-types/#design-examples","title":"Design Examples","text":""},{"location":"aip2/0020-message-types/#example-1","title":"Example 1","text":"
    {\n    \"@type\": \"did:example:00000;spec/pizzaplace/1.0/pizzaorder\",\n    \"content\": {\n        \"id\": 15,\n        \"name\": \"combo\",\n        \"prepaid?\": true,\n        \"ingredients\": [\"pepperoni\", \"bell peppers\", \"anchovies\"]\n    }\n}\n

    Suggestions: Ambiguous names, unnecessary nesting, symbols in names.

    "},{"location":"aip2/0020-message-types/#example-1-fixed","title":"Example 1 Fixed","text":"
    {\n    \"@type\": \"did:example:00000;spec/pizzaplace/1.0/pizzaorder\",\n    \"table_id\": 15,\n    \"pizza_name\": \"combo\",\n    \"prepaid\": true,\n    \"ingredients\": [\"pepperoni\", \"bell peppers\", \"anchovies\"]\n}\n
    "},{"location":"aip2/0020-message-types/#reference","title":"Reference","text":""},{"location":"aip2/0020-message-types/#implementations","title":"Implementations","text":"

    The following lists the implementations (if any) of this RFC. Please do a pull request to add your implementation. If the implementation is open source, include a link to the repo or to the implementation within the repo. Please be consistent in the \"Name\" field so that a mechanical processing of the RFCs can generate a list of all RFCs supported by an Aries implementation.

    Name / Link Implementation Notes Indy Cloud Agent - Python Reference agent implementation contributed by Sovrin Foundation and Community Aries Framework - .NET .NET framework for building agents of all types Streetcred.id Commercial mobile and web app built using Aries Framework - .NET Aries Cloud Agent - Python Contributed by the government of British Columbia. Aries Static Agent - Python Useful for cron jobs and other simple, automated use cases. Aries Framework - Go For building agents, hubs and other DIDComm features in GoLang. Connect.Me Free mobile app from Evernym. Installed via app store on iOS and Android. Verity Commercially licensed enterprise agent, SaaS or on-prem."},{"location":"aip2/0023-did-exchange/","title":"Aries RFC 0023: DID Exchange Protocol 1.0","text":""},{"location":"aip2/0023-did-exchange/#summary","title":"Summary","text":"

    This RFC describes the protocol to exchange DIDs between agents when establishing a DID based relationship.

    "},{"location":"aip2/0023-did-exchange/#motivation","title":"Motivation","text":"

    Aries agent developers want to create agents that are able to establish relationships with each other and exchange secure information using keys and endpoints in DID Documents. For this to happen there must be a clear protocol to exchange DIDs.

    "},{"location":"aip2/0023-did-exchange/#tutorial","title":"Tutorial","text":"

    We will explain how DIDs are exchanged, with the roles, states, and messages required.

    "},{"location":"aip2/0023-did-exchange/#roles","title":"Roles","text":"

    The DID Exchange Protocol uses two roles: requester and responder.

    The requester is the party that initiates this protocol after receiving an invitation message (using RFC 0434 Out of Band) or by using an implied invitation from a public DID. For example, a verifier might get the DID of the issuer of a credential they are verifying, and use information in the DIDDoc for that DID as the basis for initiating an instance of this protocol.

    Since the requester receiving an explicit invitation may not have an Aries agent, it is desirable, but not strictly required, that the sender of the invitation (who has the responder role in this protocol) have the ability to help the requester with the process and/or costs associated with acquiring an agent capable of participating in the ecosystem. For example, the sender of an invitation may often be a sponsoring institution.

    The responder, who is the sender of an explicit invitation or the publisher of a DID with an implicit invitation, must have an agent capable of interacting with other agents via DIDComm.

    In cases where both parties already possess SSI capabilities, deciding who plays the role of requester and responder might be a casual matter of whose phone is handier.

    "},{"location":"aip2/0023-did-exchange/#states","title":"States","text":""},{"location":"aip2/0023-did-exchange/#requester","title":"Requester","text":"

    The requester goes through the following states per the State Machine Tables below

    "},{"location":"aip2/0023-did-exchange/#responder","title":"Responder","text":"

    The responder goes through the following states per the State Machine Tables below

    "},{"location":"aip2/0023-did-exchange/#state-machine-tables","title":"State Machine Tables","text":"

    The following are the requester and responder state machines.

    The invitation-sent and invitation-received states are technically outside this protocol, but are useful to show in the state machine, as the invitation is the trigger to start the protocol and is referenced from the protocol as the parent thread (pthid). This is discussed in more detail below.

    The abandoned and completed states are terminal states and there is no expectation that the protocol can be continued (or even referenced) after reaching those states.

    "},{"location":"aip2/0023-did-exchange/#errors","title":"Errors","text":"

    After receiving an explicit invitation, the requester may send a problem-report to the responder using the information in the invitation to either restart the invitation process (returning to the start state) or to abandon the protocol. The problem-report may be an adopted Out of Band protocol message or an adopted DID Exchange protocol message, depending on where in the processing of the invitation the error was detected.

    During the request / response part of the protocol, there are two protocol-specific error messages possible: one for an active rejection and one for an unknown error. These errors are sent using a problem_report message type specific to the DID Exchange Protocol. These errors do not transition the protocol to the abandoned state. The following list details problem-codes that may be sent in these cases:

    request_not_accepted - The error indicates that the request message has been rejected for a reason listed in the error_report. Typical reasons include not accepting the method of the provided DID, unknown endpoint protocols, etc. The request can be resent after the appropriate corrections have been made.

    request_processing_error - This error is sent when the responder was processing the request with the intent to accept the request, but some processing error occurred. This error indicates that the request should be resent as-is.

    response_not_accepted - The error indicates that the response has been rejected for a reason listed in the error_report. Typical reasons include not accepting the method of the provided DID, unknown endpoint protocols, invalid signature, etc. The response can be resent after the appropriate corrections have been made.

    response_processing_error - This error is sent when the requester was processing the response with the intent to accept the response, but some processing error occurred. This error indicates that the response should be resent as-is.

    If other errors occur, the corresponding party may send a problem-report to inform the other party they are abandoning the protocol.

    No errors are sent in timeout situations. If the requester or responder wishes to retract the messages they sent, they record this locally and return a request_not_accepted or response_not_accepted error when the other party sends a request or response.

    "},{"location":"aip2/0023-did-exchange/#error-message-example","title":"Error Message Example","text":"
    {\n  \"@type\": \"https://didcomm.org/didexchange/1.0/problem_report\",\n  \"@id\": \"5678876542345\",\n  \"~thread\": { \"thid\": \"<@id of message related to problem>\" },\n  \"~l10n\": { \"locale\": \"en\"},\n  \"problem-code\": \"request_not_accepted\", // matches codes listed above\n  \"explain\": \"Unsupported DID method for provided DID.\"\n}\n
    "},{"location":"aip2/0023-did-exchange/#error-message-attributes","title":"Error Message Attributes","text":""},{"location":"aip2/0023-did-exchange/#flow-overview","title":"Flow Overview","text":""},{"location":"aip2/0023-did-exchange/#implicit-and-explicit-invitations","title":"Implicit and Explicit Invitations","text":"

    The DID Exchange Protocol is preceded either by knowledge of a resolvable DID (an implicit invitation), or by an out-of-band/%VER/invitation message from the Out Of Band Protocols RFC.

    The information needed to construct the request message to start the protocol comes either from the resolved DID Document, or from the service element of the handshake_protocols attribute of the invitation.

    "},{"location":"aip2/0023-did-exchange/#1-exchange-request","title":"1. Exchange Request","text":"

    The request message is used to communicate the DID document of the requester to the responder using the provisional service information present in the (implicit or explicit) invitation.

    The requester may provision a new DID according to the DID method spec. For a Peer DID, this involves creating a matching peer DID and key. The newly provisioned DID and DID Doc is presented in the request message as follows:

    "},{"location":"aip2/0023-did-exchange/#request-message-example","title":"Request Message Example","text":"
    {\n  \"@id\": \"5678876542345\",\n  \"@type\": \"https://didcomm.org/didexchange/1.0/request\",\n  \"~thread\": { \n      \"thid\": \"5678876542345\",\n      \"pthid\": \"<id of invitation>\"\n  },\n  \"label\": \"Bob\",\n  \"goal_code\": \"aries.rel.build\",\n  \"goal\": \"To create a relationship\",\n  \"did\": \"B.did@B:A\",\n  \"did_doc~attach\": {\n      \"@id\": \"d2ab6f2b-5646-4de3-8c02-762f553ab804\",\n      \"mime-type\": \"application/json\",\n      \"data\": {\n         \"base64\": \"eyJ0eXAiOiJKV1Qi... (bytes omitted)\",\n         \"jws\": {\n            \"header\": {\n               \"kid\": \"did:key:z6MkmjY8GnV5i9YTDtPETC2uUAW6ejw3nk5mXF5yci5ab7th\"\n            },\n            \"protected\": \"eyJhbGciOiJFZERTQSIsImlhdCI6MTU4Mzg4... (bytes omitted)\",\n            \"signature\": \"3dZWsuru7QAVFUCtTd0s7uc1peYEijx4eyt5... (bytes omitted)\"\n            }\n      }\n   }\n}\n
    "},{"location":"aip2/0023-did-exchange/#request-message-attributes","title":"Request Message Attributes","text":"

    The label property was intended to be declared as an optional property, but was added to the RFC as a required property. If an agent wishes to not use a label in the request, an empty string (\"\") or the set value Unspecified may be used to indicate a non-value. This approach ensures existing AIP 2.0 implementations do not break.

    "},{"location":"aip2/0023-did-exchange/#correlating-requests-to-invitations","title":"Correlating requests to invitations","text":"

    An invitation is presented in one of two forms:

    When a request responds to an explicit invitation, its ~thread.pthid MUST be equal to the @id property of the invitation as described in the out-of-band RFC.

    When a request responds to an implicit invitation, its ~thread.pthid MUST contain a DID URL that resolves to the specific service on a DID document that contains the invitation.

    "},{"location":"aip2/0023-did-exchange/#example-referencing-an-explicit-invitation","title":"Example Referencing an Explicit Invitation","text":"
    {\n  \"@id\": \"a46cdd0f-a2ca-4d12-afbf-2e78a6f1f3ef\",\n  \"@type\": \"https://didcomm.org/didexchange/1.0/request\",\n  \"~thread\": { \n      \"thid\": \"a46cdd0f-a2ca-4d12-afbf-2e78a6f1f3ef\",\n      \"pthid\": \"032fbd19-f6fd-48c5-9197-ba9a47040470\" \n  },\n  \"label\": \"Bob\",\n  \"goal_code\": \"aries.rel.build\",\n  \"goal\": \"To create a relationship\",\n  \"did\": \"B.did@B:A\",\n  \"did_doc~attach\": {\n      \"@id\": \"d2ab6f2b-5646-4de3-8c02-762f553ab804\",\n      \"mime-type\": \"application/json\",\n      \"data\": {\n         \"base64\": \"eyJ0eXAiOiJKV1Qi... (bytes omitted)\",\n         \"jws\": {\n            \"header\": {\n               \"kid\": \"did:key:z6MkmjY8GnV5i9YTDtPETC2uUAW6ejw3nk5mXF5yci5ab7th\"\n            },\n            \"protected\": \"eyJhbGciOiJFZERTQSIsImlhdCI6MTU4Mzg4... (bytes omitted)\",\n            \"signature\": \"3dZWsuru7QAVFUCtTd0s7uc1peYEijx4eyt5... (bytes omitted)\"\n            }\n      }\n   }\n}\n
    "},{"location":"aip2/0023-did-exchange/#example-referencing-an-implicit-invitation","title":"Example Referencing an Implicit Invitation","text":"
    {\n  \"@id\": \"a46cdd0f-a2ca-4d12-afbf-2e78a6f1f3ef\",\n  \"@type\": \"https://didcomm.org/didexchange/1.0/request\",\n  \"~thread\": { \n      \"thid\": \"a46cdd0f-a2ca-4d12-afbf-2e78a6f1f3ef\",\n      \"pthid\": \"did:example:21tDAKCERh95uGgKbJNHYp#didcomm\" \n  },\n  \"label\": \"Bob\",\n  \"goal_code\": \"aries.rel.build\",\n  \"goal\": \"To create a relationship\",\n  \"did\": \"B.did@B:A\",\n  \"did_doc~attach\": {\n      \"@id\": \"d2ab6f2b-5646-4de3-8c02-762f553ab804\",\n      \"mime-type\": \"application/json\",\n      \"data\": {\n         \"base64\": \"eyJ0eXAiOiJKV1Qi... (bytes omitted)\",\n         \"jws\": {\n            \"header\": {\n               \"kid\": \"did:key:z6MkmjY8GnV5i9YTDtPETC2uUAW6ejw3nk5mXF5yci5ab7th\"\n            },\n            \"protected\": \"eyJhbGciOiJFZERTQSIsImlhdCI6MTU4Mzg4... (bytes omitted)\",\n            \"signature\": \"3dZWsuru7QAVFUCtTd0s7uc1peYEijx4eyt5... (bytes omitted)\"\n            }\n      }\n   }\n}\n
    "},{"location":"aip2/0023-did-exchange/#request-transmission","title":"Request Transmission","text":"

    The request message is encoded according to the standards of the Encryption Envelope, using the recipientKeys present in the invitation.

    If the routingKeys attribute was present and non-empty in the invitation, each key must be used to wrap the message in a forward request, then encoded in an Encryption Envelope. This processing is in order of the keys in the list, with the last key in the list being the one for which the serviceEndpoint possesses the private key.
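
    As a sketch of this wrapping (assuming the routing/1.0 forward message type; the key and payload values are placeholders), the request encrypted for the recipient is carried in the msg attribute of a forward message, which is in turn encrypted to the routing key:

    {\n  \"@type\": \"https://didcomm.org/routing/1.0/forward\",\n  \"to\": \"<verkey from routingKeys>\",\n  \"msg\": \"<encryption envelope containing the request>\"\n}\n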

    The message is then transmitted to the serviceEndpoint.

    The requester is in the request-sent state. When received, the responder is in the request-received state.

    "},{"location":"aip2/0023-did-exchange/#request-processing","title":"Request processing","text":"

    After receiving the exchange request, the responder evaluates the provided DID and DID Doc according to the DID Method Spec.

    The responder should check the information presented with the keys used in the wire-level message transmission to ensure they match.

    The responder MAY look up the corresponding invitation identified in the request's ~thread.pthid to determine whether it should accept this exchange request.

    If the responder wishes to continue the exchange, they will persist the received information in their wallet. They will then either update the provisional service information to rotate the key, or provision a new DID entirely. The choice here will depend on the nature of the DID used in the invitation.

    The responder will then craft an exchange response using the newly updated or provisioned information.

    "},{"location":"aip2/0023-did-exchange/#request-errors","title":"Request Errors","text":"

    See Error Section above for message format details.

    "},{"location":"aip2/0023-did-exchange/#request-rejected","title":"Request Rejected","text":"

    Possible reasons:

    "},{"location":"aip2/0023-did-exchange/#request-processing-error","title":"Request Processing Error","text":""},{"location":"aip2/0023-did-exchange/#2-exchange-response","title":"2. Exchange Response","text":"

    The exchange response message is used to complete the exchange. This message is required in the flow, as it updates the provisional information presented in the invitation.

    "},{"location":"aip2/0023-did-exchange/#response-message-example","title":"Response Message Example","text":"
    {\n  \"@type\": \"https://didcomm.org/didexchange/1.0/response\",\n  \"@id\": \"12345678900987654321\",\n  \"~thread\": {\n    \"thid\": \"<The Thread ID is the Message ID (@id) of the first message in the thread>\"\n  },\n  \"did\": \"B.did@B:A\",\n  \"did_doc~attach\": {\n      \"@id\": \"d2ab6f2b-5646-4de3-8c02-762f553ab804\",\n      \"mime-type\": \"application/json\",\n      \"data\": {\n         \"base64\": \"eyJ0eXAiOiJKV1Qi... (bytes omitted)\",\n         \"jws\": {\n            \"header\": {\n               \"kid\": \"did:key:z6MkmjY8GnV5i9YTDtPETC2uUAW6ejw3nk5mXF5yci5ab7th\"\n            },\n            \"protected\": \"eyJhbGciOiJFZERTQSIsImlhdCI6MTU4Mzg4... (bytes omitted)\",\n            \"signature\": \"3dZWsuru7QAVFUCtTd0s7uc1peYEijx4eyt5... (bytes omitted)\"\n            }\n      }\n   }\n}\n

    The invitation's recipientKeys should be dedicated to envelope authenticated encryption throughout the exchange. These keys are usually defined in the KeyAgreement DID verification relationship.

    "},{"location":"aip2/0023-did-exchange/#response-message-attributes","title":"Response Message Attributes","text":"

    In addition to a new DID, the associated DID Doc might contain a new endpoint. This new DID and endpoint are to be used going forward in the relationship.

    "},{"location":"aip2/0023-did-exchange/#response-transmission","title":"Response Transmission","text":"

    The message should be packaged in the encrypted envelope format, using the keys from the request, and the new keys presented in the internal did doc.

    When the message is sent, the responder is in the response-sent state. On receipt, the requester is in the response-received state.

    "},{"location":"aip2/0023-did-exchange/#response-processing","title":"Response Processing","text":"

    When the requester receives the response message, they will decrypt the authenticated envelope which confirms the source's authenticity. After decryption validation, they will update their wallet with the new information, and use that information in sending the complete message.

    "},{"location":"aip2/0023-did-exchange/#response-errors","title":"Response Errors","text":"

    See Error Section above for message format details.

    "},{"location":"aip2/0023-did-exchange/#response-rejected","title":"Response Rejected","text":"

    Possible reasons:

    "},{"location":"aip2/0023-did-exchange/#response-processing-error","title":"Response Processing Error","text":""},{"location":"aip2/0023-did-exchange/#3-exchange-complete","title":"3. Exchange Complete","text":"

    The exchange complete message is used to confirm the exchange to the responder. This message is required in the flow, as it marks the exchange complete. The responder may then invoke any protocols desired based on the context expressed via the pthid in the DID Exchange protocol.

    "},{"location":"aip2/0023-did-exchange/#complete-message-example","title":"Complete Message Example","text":"
    {\n  \"@type\": \"https://didcomm.org/didexchange/1.0/complete\",\n  \"@id\": \"12345678900987654321\",\n  \"~thread\": {\n    \"thid\": \"<The Thread ID is the Message ID (@id) of the first message in the thread>\",\n    \"pthid\": \"<pthid used in request message>\"\n  }\n}\n

    The pthid is required in this message, and must be identical to the pthid used in the request message.

    After a complete message is sent, the requester is in the completed terminal state. Receipt of the message puts the responder into the completed state.

    "},{"location":"aip2/0023-did-exchange/#complete-errors","title":"Complete Errors","text":"

    See Error Section above for message format details.

    "},{"location":"aip2/0023-did-exchange/#complete-rejected","title":"Complete Rejected","text":"

    This is unlikely to occur for reasons other than an unknown processing error (covered below), so no possible reasons are listed. As experience is gained with the protocol, possible reasons may be added.

    "},{"location":"aip2/0023-did-exchange/#complete-processing-error","title":"Complete Processing Error","text":""},{"location":"aip2/0023-did-exchange/#next-steps","title":"Next Steps","text":"

    The exchange between the requester and the responder has been completed. This relationship has no trust associated with it. The next step should be to increase the trust to a sufficient level for the purpose of the relationship, such as through an exchange of proofs.

    "},{"location":"aip2/0023-did-exchange/#peer-did-maintenance","title":"Peer DID Maintenance","text":"

    When Peer DIDs are used in an exchange, it is likely that both the requester and responder will want to perform some relationship maintenance such as key rotations. Future RFC updates will add these maintenance features.

    "},{"location":"aip2/0023-did-exchange/#reference","title":"Reference","text":""},{"location":"aip2/0023-did-exchange/#drawbacks","title":"Drawbacks","text":"

    N/A at this time

    "},{"location":"aip2/0023-did-exchange/#prior-art","title":"Prior art","text":""},{"location":"aip2/0023-did-exchange/#unresolved-questions","title":"Unresolved questions","text":""},{"location":"aip2/0023-did-exchange/#implementations","title":"Implementations","text":"

    The following lists the implementations (if any) of this RFC. Please do a pull request to add your implementation. If the implementation is open source, include a link to the repo or to the implementation within the repo. Please be consistent in the \"Name\" field so that a mechanical processing of the RFCs can generate a list of all RFCs supported by an Aries implementation.

    Name / Link Implementation Notes Trinsic.id Commercial mobile and web app built using Aries Framework - .NET MISSING test results"},{"location":"aip2/0025-didcomm-transports/","title":"Aries RFC 0025: DIDComm Transports","text":""},{"location":"aip2/0025-didcomm-transports/#summary","title":"Summary","text":"

    This RFC details how different transports are to be used for Agent Messaging.

    "},{"location":"aip2/0025-didcomm-transports/#motivation","title":"Motivation","text":"

    Agent Messaging is designed to be transport independent, including message encryption and agent message format. Each transport does have unique features, and we need to standardize how the transport features are (or are not) applied.

    "},{"location":"aip2/0025-didcomm-transports/#reference","title":"Reference","text":"

    Standardized transport methods are detailed here.

    "},{"location":"aip2/0025-didcomm-transports/#https","title":"HTTP(S)","text":"

    HTTP(S) is the first and most used transport for DID Communication, and has received heavy attention.

    While it is recognized that all DIDComm messages are secured through strong encryption, making HTTPS somewhat redundant, using plain HTTP will likely cause issues with mobile clients because vendors (Apple and Google) are limiting application access to the HTTP protocol. For example, on iOS 9 or above where [ATS](https://developer.apple.com/documentation/bundleresources/information_property_list/nsapptransportsecurity) is in effect, any URLs using HTTP must have an exception hard coded in the application prior to uploading to the iTunes Store. This makes DIDComm unreliable, as the agent initiating the request provides an endpoint for communication that the mobile client must use. If the agent provides a URL using the HTTP protocol, it will likely be unusable due to low level operating system limitations.

    As a best practice, when HTTP is used in situations where a mobile client (iOS or Android) may be involved it is highly recommended to use the HTTPS protocol, specifically TLS 1.2 or above.

    Other important notes on the subject of using HTTP(S) include:

    "},{"location":"aip2/0025-didcomm-transports/#known-implementations","title":"Known Implementations","text":"

    Aries Cloud Agent - Python Aries Framework - .NET

    "},{"location":"aip2/0025-didcomm-transports/#websocket","title":"Websocket","text":"

    Websockets are an efficient way to transmit multiple messages without the overhead of individual requests.

    "},{"location":"aip2/0025-didcomm-transports/#known-implementations_1","title":"Known Implementations","text":"

    Aries Cloud Agent - Python Aries Framework - .NET

    "},{"location":"aip2/0025-didcomm-transports/#xmpp","title":"XMPP","text":"

    XMPP is an effective transport for incoming DID-Communication messages directly to mobile agents, like smartphones.

    "},{"location":"aip2/0025-didcomm-transports/#known-implementations_2","title":"Known Implementations","text":"

    XMPP is implemented in the Openfire Server open source project. Integration with DID Communication agents is work-in-progress.

    "},{"location":"aip2/0025-didcomm-transports/#other-transports","title":"Other Transports","text":"

    Other transports may be used for Agent messaging. As they are developed, this RFC should be updated with appropriate standards for the transport method. A PR should be raised against this doc to facilitate discussion of the proposed additions and/or updates. New transports should highlight the common elements of the transport (such as an HTTP response code for the HTTP transport) and how they should be applied.

    "},{"location":"aip2/0025-didcomm-transports/#message-routing","title":"Message Routing","text":"

    The transports described here are used between two agents. In the case of message routing, a message will travel across multiple agent connections. Each intermediate agent (see Mediators and Relays) may use a different transport. These transport details are not made known to the sender, who only knows the keys of Mediators and the first endpoint of the route.

    "},{"location":"aip2/0025-didcomm-transports/#message-context","title":"Message Context","text":"

    The transport used from a previous agent can be recorded in the message trust context. This is particularly true of controlled network environments, where the transport may have additional security considerations not applicable on the public internet. The transport recorded in the message context only records the last transport used, and not any previous routing steps as described in the Message Routing section of this document.

    "},{"location":"aip2/0025-didcomm-transports/#transport-testing","title":"Transport Testing","text":"

    Transports which operate on IP based networks can be tested by an Agent Test Suite through a transport adapter. Some transports may be more difficult to test in a general sense, and may need specialized testing frameworks. An agent with a transport not yet supported by any testing suites may have non-transport testing performed by use of a routing agent.

    "},{"location":"aip2/0025-didcomm-transports/#drawbacks","title":"Drawbacks","text":"

    Setting transport standards may prevent some uses of each transport method.

    "},{"location":"aip2/0025-didcomm-transports/#rationale-and-alternatives","title":"Rationale and alternatives","text":""},{"location":"aip2/0025-didcomm-transports/#prior-art","title":"Prior art","text":"

    Several agent implementations already exist that follow similar conventions.

    "},{"location":"aip2/0025-didcomm-transports/#unresolved-questions","title":"Unresolved questions","text":""},{"location":"aip2/0025-didcomm-transports/#implementations","title":"Implementations","text":"

    The following lists the implementations (if any) of this RFC. Please do a pull request to add your implementation. If the implementation is open source, include a link to the repo or to the implementation within the repo. Please be consistent in the \"Name\" field so that a mechanical processing of the RFCs can generate a list of all RFCs supported by an Aries implementation.

    Name / Link Implementation Notes"},{"location":"aip2/0035-report-problem/","title":"Aries RFC 0035: Report Problem Protocol 1.0","text":""},{"location":"aip2/0035-report-problem/#summary","title":"Summary","text":"

    Describes how to report errors and warnings in a powerful, interoperable way. All implementations of SSI agent or hub technology SHOULD implement this RFC.

    "},{"location":"aip2/0035-report-problem/#motivation","title":"Motivation","text":"

    Effective reporting of errors and warnings is difficult in any system, and particularly so in decentralized systems such as remotely collaborating agents. We need to surface problems, and their supporting context, to people who want to know about them (and perhaps separately, to people who can actually fix them). This is especially challenging when a problem is detected well after and well away from its cause, and when multiple parties may need to cooperate on a solution.

    Interoperability is perhaps more crucial with problem reporting than with any other aspect of DIDComm, since an agent written by one developer MUST be able to understand an error reported by an entirely different team. Notice how different this is from normal enterprise software development, where developers only need to worry about understanding their own errors.

The goal of this RFC is to provide agents with the tools and techniques needed to address these challenges. It makes two key contributions:

    "},{"location":"aip2/0035-report-problem/#tutorial","title":"Tutorial","text":""},{"location":"aip2/0035-report-problem/#error-vs-warning-vs-problem","title":"\"Error\" vs. \"Warning\" vs. \"Problem\"","text":"

    The distinction between \"error\" and \"warning\" is often thought of as one of severity -- errors are really bad, and warnings are only somewhat bad. This is reinforced by the way logging platforms assign numeric constants to ERROR vs. WARN log events, and by the way compilers let warnings be suppressed but refuse to ignore errors.

    However, any cybersecurity professional will tell you that warnings sometimes signal deep and scary problems that should not be ignored, and most veteran programmers can tell war stories that reinforce this wisdom. A deeper analysis of warnings reveals that what truly differentiates them from errors is not their lesser severity, but rather their greater ambiguity. Warnings are problems that require human judgment to evaluate, whereas errors are unambiguously bad.

The mechanism for reporting problems in DIDComm cannot make a simplistic assumption that all agents are configured to run with a particular verbosity or debug level. Each agent must let other agents decide for themselves, based on policy or user preference, what to do about various issues. For this reason, we use the generic term \"problem\" instead of the more specific and semantically opinionated term \"error\" (or \"warning\") to describe the general situation we're addressing. \"Problem\" includes any deviation from the so-called \"happy path\" of an interaction. This could include situations where the severity is unknown and must be evaluated by a human, as well as surprising events (e.g., a decision by a human to alter the basis for in-flight messaging by moving from one device to another).

    "},{"location":"aip2/0035-report-problem/#specific-challenges","title":"Specific Challenges","text":"

    All of the following challenges need to be addressed.

    1. Report problems to external parties interacting with us. For example, AliceCorp has to be able to tell Bob that it can\u2019t issue the credential he requested because his payment didn\u2019t go through.
    2. Report problems to other entities inside our own domain. For example, AliceCorp\u2019s agent #1 has to be able to report to AliceCorp agent #2 that it is out of disk space.
    3. Report in a way that provides human beings with useful context and guidance to troubleshoot. Most developers know of cases where error reporting was technically correct but completely useless. Bad communication about problems is one of the most common causes of UX debacles. Humans using agents will speak different languages, have differing degrees of technical competence, and have different software and hardware resources. They may lack context about what their agents are doing, such as when a DIDComm interaction occurs as a result of scheduled or policy-driven actions. This makes context and guidance crucial.
    4. Map a problem backward in time, space, and circumstances, so when it is studied, its original context is available. This is particularly difficult in DIDComm, which is transport-agnostic and inherently asynchronous, and which takes place on an inconsistently connected digital landscape.
    5. Support localization using techniques in the l10n RFC.
    6. Provide consistent, locale-independent problem codes, not just localized text, so problems can be researched in knowledge bases, on Stack Overflow, and in other internet forums, regardless of the natural language in which a message displays. This also helps meaning remain stable as wording is tweaked.
7. Provide a registry of well known problem codes that are carefully defined and localized, to maximize shared understanding. Maintaining an exhaustive list of all possible things that can go wrong with all possible agents in all possible interactions is completely unrealistic. However, it may be possible to maintain a curated subset. While we can't enumerate everything that can go wrong in a financial transaction, a code for \"insufficient funds\" might have near-universal usefulness. Compare the POSIX error inventory in errno.h.
    8. Facilitate automated problem handling by agents, not just manual handling by humans. Perfect automation may be impossible, but high levels of automation should be doable.
    9. Clarify how the problem affects an in-progress interaction. Does a failure to process payment reset the interaction to the very beginning of the protocol, or just back to the previous step, where payment was requested? This requires problems to be matched in a formal way to the state machine of a protocol underway.
    "},{"location":"aip2/0035-report-problem/#the-report-problem-protocol","title":"The report-problem protocol","text":"

    Reporting problems uses a simple one-step notification protocol. Its official PIURI is:

    https://didcomm.org/report-problem/1.0\n

    The protocol includes the standard notifier and notified roles. It defines a single message type problem-report, introduced here. It also adopts the ack message from the ACK 1.0 protocol, to accommodate the possibility that the ~please_ack decorator may be used on the notification.

    A problem-report communicates about a problem when an agent-to-agent message is possible and a recipient for the problem report is known. This covers, for example, cases where a Sender's message gets to an intended Recipient, but the Recipient is unable to process the message for some reason and wants to notify the Sender. It may also be relevant in cases where the recipient of the problem-report is not a message Sender. Of course, a reporting technique that depends on message delivery doesn't apply when the error reporter can't identify or communicate with the proper recipient.

    "},{"location":"aip2/0035-report-problem/#the-problem-report-message-type","title":"The problem-report message type","text":"

    Only description.code is required, but a maximally verbose problem-report could contain all of the following:

    {\n  \"@type\"            : \"https://didcomm.org/report-problem/1.0/problem-report\",\n  \"@id\"              : \"an identifier that can be used to discuss this error message\",\n  \"~thread\"          : \"info about the threading context in which the error occurred (if any)\",\n  \"description\"      : { \"en\": \"localized message\", \"code\": \"symbolic-name-for-error\" },\n  \"problem_items\"    : [ {\"<item descrip>\": \"value\"} ],\n  \"who_retries\"      : \"enum: you | me | both | none\",\n  \"fix_hint\"         : { \"en\": \"localized error-instance-specific hint of how to fix issue\"},\n  \"impact\"           : \"enum: message | thread | connection\",\n  \"where\"            : \"enum: you | me | other - enum: cloud | edge | wire | agency | ..\",\n  \"noticed_time\"     : \"<time>\",\n  \"tracking_uri\"     : \"\",\n  \"escalation_uri\"   : \"\"\n}\n
    "},{"location":"aip2/0035-report-problem/#field-reference","title":"Field Reference","text":"

    Some fields will be relevant and useful in many use cases, but not always. Including empty or null fields is discouraged; best practice is to include as many fields as you can fill with useful data, and to omit the others.

    @id: An identifier for this message, as described in the message threading RFC. This decorator is STRONGLY recommended, because it enables a dialog about the problem itself in a branched thread (e.g., suggest a retry, report a resolution, ask for more information).

    ~thread: A thread decorator that places the problem-report into a thread context. If the problem was triggered in the processing of a message, then the triggering message is the head of a new thread of which the problem report is the second member (~thread.sender_order = 0). In such cases, the ~thread.pthid (parent thread id) here would be the @id of the triggering message. If the problem-report is unrelated to a message, the thread decorator is mostly redundant, as ~thread.thid must equal @id.
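    The threading rules above can be sketched in code. The following is an illustrative Python fragment (the helper name make_problem_report is hypothetical, not from the RFC) that builds a problem-report whose ~thread.pthid is the @id of the triggering message:

    ```python
    import json
    import uuid

    # Hypothetical helper: build a problem-report threaded to the message that
    # triggered it, per the ~thread rules above (pthid = @id of the trigger).
    def make_problem_report(trigger_msg_id, code, en_text):
        return {
            "@type": "https://didcomm.org/report-problem/1.0/problem-report",
            "@id": str(uuid.uuid4()),
            # The triggering message heads a new thread; this report is the
            # thread's second member, with sender_order 0 for this sender.
            "~thread": {"pthid": trigger_msg_id, "sender_order": 0},
            "description": {"en": en_text, "code": code},
        }

    report = make_problem_report("1e513ad4-48c9-444e-9e7e-5b8b45c5e325",
                                 "cant-find-route",
                                 "Unable to find a route to the specified recipient.")
    print(json.dumps(report, indent=2))
    ```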

    description: Contains human-readable, localized alternative string(s) that explain the problem. It is highly recommended that the message follow the guidance in the l10n RFC, allowing the error to be searched on the web and documented formally.

    description.code: Required. Contains the code that indicates the problem being communicated. Codes are described in protocol RFCs and other relevant places. New Codes SHOULD follow the Problem Code naming convention detailed in the DIDComm v2 spec.

    problem_items: A list of one or more key/value pairs that are parameters about the problem. Some examples might be:

    All items should have in common the fact that they exemplify the problem described by the code (e.g., each is an invalid param, or each is an unresponsive URL, or each is an unrecognized crypto algorithm, etc).

    Each item in the list must be a tagged pair (a JSON {key:value}), where the key names the parameter or item, and the value is the actual problem text/number/value. For example, to report that two different endpoints listed in party B\u2019s DID Doc failed to respond when they were contacted, the code might contain \"endpoint-not-responding\", and the problem_items property might contain:

    [\n  {\"endpoint1\": \"http://agency.com/main/endpoint\"},\n  {\"endpoint2\": \"http://failover.agency.com/main/endpoint\"}\n]\n

    who_retries: Value is one of the strings \"you\", \"me\", \"both\", or \"none\". This property tells whether a problem is considered permanent and who the sender of the problem report believes should have the responsibility to resolve it by retrying. Rules about how many times to retry, who does the retry, and under what circumstances are not enforceable and not expressed in the message text. This property is thus not a strong commitment to retry--only a recommendation of who should retry, with the assumption that retries will often occur if they make sense.

    [TODO: figure out how to identify parties > 2 in n-wise interaction]

    fix_hint: Contains human-readable, localized suggestions about how to fix this instance of the problem. If present, this should be viewed as overriding general hints found in a message catalog.

    impact: A string describing the breadth of impact of the problem. An enumerated type:

    where: A string that describes where the error happened, from the perspective of the reporter, and that uses the \"you\" or \"me\" or \"other\" prefix, followed by a suffix like \"cloud\", \"edge\", \"wire\", \"agency\", etc.

    noticed_time: Standard time entry (ISO-8601 UTC with at least day precision and up to millisecond precision) of when the problem was detected.

    [TODO: should we refer to timestamps in a standard way (\"date\"? \"time\"? \"timestamp\"? \"when\"?)]

    tracking_uri: Provides a URI that allows the recipient to track the status of the error. For example, if the error is related to a service that is down, the URI could be used to monitor the status of the service, so its return to operational status could be automatically discovered.

    escalation_uri: Provides a URI where additional help on the issue can be received. For example, this might be a \"mailto\" and email address for the Help Desk associated with a currently down service.

    "},{"location":"aip2/0035-report-problem/#sample","title":"Sample","text":"
    {\n  \"@type\": \"https://didcomm.org/notification/1.0/problem-report\",\n  \"@id\": \"7c9de639-c51c-4d60-ab95-103fa613c805\",\n  \"~thread\": {\n    \"pthid\": \"1e513ad4-48c9-444e-9e7e-5b8b45c5e325\",\n    \"sender_order\": 1\n  },\n  \"~l10n\"            : {\"catalog\": \"https://didcomm.org/error-codes\"},\n  \"description\"      : \"Unable to find a route to the specified recipient.\",\n  \"description~l10n\" : {\"code\": \"cant-find-route\" },\n  \"problem_items\"    : [\n      { \"recipient\": \"did:sov:C805sNYhMrjHiqZDTUASHg\" }\n  ],\n  \"who_retries\"      : \"you\",\n  \"impact\"           : \"message\",\n  \"noticed_time\"     : \"2019-05-27 18:23:06Z\"\n}\n
    "},{"location":"aip2/0035-report-problem/#categorized-examples-of-errors-and-current-best-practice-handling","title":"Categorized Examples of Errors and (current) Best Practice Handling","text":"

    The following is a categorization of a number of examples of errors and (current) Best Practice handling for those types of errors. The new problem-report message type is used for some of these categories, but not all.

    "},{"location":"aip2/0035-report-problem/#unknown-error","title":"Unknown Error","text":"

    Errors with a known error code will be processed according to the understanding of what the code means. Support of a protocol includes support and proper processing of the error codes detailed within that protocol.

    Any unknown error code that starts with w. in the DIDComm v2 style may be considered a warning, and the flow of the active protocol SHOULD continue. All other unknown error codes SHOULD be treated as ending the active protocol.

    "},{"location":"aip2/0035-report-problem/#error-while-processing-a-received-message","title":"Error While Processing a Received Message","text":"

    An Agent Message sent by a Sender and received by its intended Recipient cannot be processed.

    "},{"location":"aip2/0035-report-problem/#examples","title":"Examples:","text":""},{"location":"aip2/0035-report-problem/#recommended-handling","title":"Recommended Handling","text":"

    The Recipient should send the Sender a problem-report Agent Message detailing the issue.

    The last example deserves an additional comment about whether there should be a response sent at all. Particularly in cases where trust in the message sender is low (e.g. when establishing the connection), an Agent may not want to send any response to a rejected message as even a negative response could reveal correlatable information. That said, if a response is provided, the problem-report message type should be used.

    "},{"location":"aip2/0035-report-problem/#error-while-routing-a-message","title":"Error While Routing A Message","text":"

    An Agent in the routing flow of getting a message from a Sender to the Agent Message Recipient cannot route the message.

    "},{"location":"aip2/0035-report-problem/#examples_1","title":"Examples:","text":""},{"location":"aip2/0035-report-problem/#recommended-handling_1","title":"Recommended Handling","text":"

    If the Sender is known to the Agent having the problem, send a problem-report Agent Message detailing at least that a blocking issue occurred, and if relevant (such as in the first example), some details about the issue. If the message is valid, and the problem is related to a lack of resources (e.g. the second issue), also send a problem-report message to an escalation point within the domain.

    Alternatively, the capabilities described in 0034: Message Tracing could be used to inform others of the fact that an issue occurred.

    "},{"location":"aip2/0035-report-problem/#messages-triggered-about-a-transaction","title":"Messages Triggered about a Transaction","text":""},{"location":"aip2/0035-report-problem/#examples_2","title":"Examples:","text":""},{"location":"aip2/0035-report-problem/#recommended-handling_2","title":"Recommended Handling","text":"

    These types of error scenarios represent a gray area between using the generic problem-report message format and using a message type that is part of the current transaction's message family. For example, \"Your credential has been revoked\" might well be included as a part of the (TBD) standard Credentials Exchange message family. The \"more information\" example might be a generic error across a number of message families (and so should trigger a problem-report), or might be specific to the ongoing thread (e.g. Credential Exchange) and so be better handled by a defined message within that thread and that message family.

    The current advice on which to use in a given scenario is to consider how the recipient will handle the message. If the handler will need to process the response in a specific way for the transaction, then a message family-specific message type should be used. If the error is cross-cutting such that a common handler can be used across transaction contexts, then a generic problem-report should be used.

    \"Current advice\" implies that as we gain more experience with Agent To Agent messaging, the recommendations could get more precise.

    "},{"location":"aip2/0035-report-problem/#messaging-channel-settings","title":"Messaging Channel Settings","text":""},{"location":"aip2/0035-report-problem/#examples_3","title":"Examples","text":""},{"location":"aip2/0035-report-problem/#recommended-handling_3","title":"Recommended Handling","text":"

    These types of messages might or might not be triggered during the receipt and processing of a message, but either way, they are unrelated to the message and are really about the communication channel between the entities. In such cases, the recommended approach is to use a (TBD) standard message family to notify and rectify the issue (e.g. change the attributes of a connection). The definition of that message family is outside the scope of this RFC.

    "},{"location":"aip2/0035-report-problem/#timeouts","title":"Timeouts","text":"

    A special generic class of errors that deserves mention is the timeout, where a Sender sends out a message and does not receive back a response in a given time. In a distributed environment such as Agent to Agent messaging, these are particularly likely - and particularly difficult to handle gracefully. The potential reasons for timeouts are numerous:

    "},{"location":"aip2/0035-report-problem/#recommended-handling_4","title":"Recommended Handling","text":"

    Appropriate timeout handling is extremely contextual, with two key parameters driving the handling - the length of the waiting period before triggering the timeout and the response to a triggered timeout.

    The time to wait for a response should be dynamic by at least type of message, and ideally learned through experience. Messages requiring human interaction should have an inherently longer timeout period than a message expected to be handled automatically. Beyond that, it would be good for Agents to track response times by message type (and perhaps other parameters) and adjust timeouts to match observed patterns.
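    The idea of learning timeouts from observed response times can be sketched as follows. This is an illustrative Python fragment, not from the RFC; the class name, the EWMA smoothing, and the safety multiplier are all assumptions about one possible implementation:

    ```python
    # Illustrative sketch: track observed response times per message type and
    # derive a timeout as a multiple of an exponentially weighted moving average.
    class TimeoutTracker:
        def __init__(self, default=30.0, alpha=0.2, factor=3.0):
            self.default = default  # seconds to use before any observations
            self.alpha = alpha      # EWMA weight given to each new sample
            self.factor = factor    # safety multiplier over the average
            self.avg = {}           # message type -> smoothed response time

        def observe(self, msg_type, seconds):
            prev = self.avg.get(msg_type)
            self.avg[msg_type] = seconds if prev is None else (
                self.alpha * seconds + (1 - self.alpha) * prev)

        def timeout_for(self, msg_type):
            avg = self.avg.get(msg_type)
            return self.default if avg is None else self.factor * avg

    t = TimeoutTracker()
    t.observe("present-proof/1.0/request-presentation", 2.0)
    t.observe("present-proof/1.0/request-presentation", 4.0)
    print(t.timeout_for("present-proof/1.0/request-presentation"))
    ```

    A message type with no history falls back to the default, matching the intuition that timeouts should start conservative and tighten with experience.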

    When a timeout is received there are three possible responses, handled automatically or based on feedback from the user:

    An automated \"wait longer\" response might be used when first interacting with a particular message type or identity, as the response cadence is learned.

    If the decision is to retry, it would be good to have support in areas covered by other RFCs. First, it would be helpful (and perhaps necessary) for the threading decorator to support the concept of retries, so that a Recipient would know when a message is a retry of an already sent message. Next, on \"forward\" message types, Agents might want to know that a message was a retry such that they can consider refreshing DIDDoc/encryption key cache before sending the message along. It could also be helpful for a retry to interact with the Tracing facility so that more information could be gathered about why messages are not getting to their destination.

    Excessive retrying can exacerbate an existing system issue. If the reason for the timeout is because there is a \"too many messages to be processed\" situation, then sending retries simply makes the problem worse. As such, a reasonable backoff strategy should be used (e.g. exponentially increasing times between retries). As well, a strategy used at Uber is to flag and handle retries differently from regular messages. The analogy with Uber is not pure - that is a single-vendor system - but the notion of flagging retries such that retry messages can be handled differently is a good approach.
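    The backoff guidance above can be sketched as a small generator. This is an illustrative Python fragment under assumed parameters (base delay, growth factor, cap); the full-jitter variant shown is one common strategy, not a requirement of this RFC:

    ```python
    import random

    # Illustrative sketch: exponentially increasing delays between retries,
    # capped, with full jitter to avoid synchronized retry storms.
    def retry_delays(base=1.0, factor=2.0, max_retries=5, max_delay=60.0):
        """Yield the delay (seconds) to wait before each retry attempt."""
        for attempt in range(max_retries):
            ceiling = min(base * (factor ** attempt), max_delay)
            # Full jitter: pick a uniform delay in [0, ceiling].
            yield random.uniform(0, ceiling)

    delays = list(retry_delays())
    print(delays)
    ```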

    "},{"location":"aip2/0035-report-problem/#caveat-problem-report-loops","title":"Caveat: Problem Report Loops","text":"

    Implementers should consider and mitigate the risk of an endless loop of error messages. For example:

    "},{"location":"aip2/0035-report-problem/#recommended-handling_5","title":"Recommended Handling","text":"

    How agents mitigate the risk of this problem is implementation specific, balancing loop-tracking overhead versus the likelihood of occurrence. For example, an agent implementation might have a counter on a connection object that is incremented when certain types of Problem Report messages are sent on that connection, and reset when any other message is sent. The agent could stop sending those types of Problem Report messages after the counter reaches a given value.
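    The counter-based mitigation described above might look like the following. This is an illustrative Python sketch, not a prescribed design; the class, method names, and threshold value are hypothetical:

    ```python
    # Illustrative sketch: a per-connection counter that is bumped for each
    # problem-report sent, reset by any other outbound message, and consulted
    # before sending another report.
    class Connection:
        MAX_CONSECUTIVE_REPORTS = 3  # policy threshold (hypothetical value)

        def __init__(self):
            self.report_count = 0

        def record_outbound(self, msg_type):
            if msg_type.endswith("/problem-report"):
                self.report_count += 1
            else:
                self.report_count = 0  # any other traffic resets the counter

        def may_send_report(self):
            return self.report_count < self.MAX_CONSECUTIVE_REPORTS

    conn = Connection()
    for _ in range(3):
        conn.record_outbound("https://didcomm.org/report-problem/1.0/problem-report")
    print(conn.may_send_report())
    ```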

    "},{"location":"aip2/0035-report-problem/#reference","title":"Reference","text":"

    TBD

    "},{"location":"aip2/0035-report-problem/#drawbacks","title":"Drawbacks","text":"

    In many cases, a specific problem-report message is necessary, so formalizing the format of the message is also preferred over leaving it to individual implementations. There is no drawback to specifying that format now.

    As experience is gained with handling distributed errors, the recommendations provided in this RFC will have to evolve.

    "},{"location":"aip2/0035-report-problem/#rationale-and-alternatives","title":"Rationale and alternatives","text":"

    The error type specification mechanism builds on the same approach used by the message type specifications. It's possible that additional capabilities could be gained by making runtime use of the error type specification - e.g. for the broader internationalization of the error messages.

    The main alternative to a formally defined error type format is leaving it to individual implementations to handle error notifications, which will not lead to an effective solution.

    "},{"location":"aip2/0035-report-problem/#prior-art","title":"Prior art","text":"

    A brief search was done for error handling in messaging systems with few useful results found. Perhaps the best was the Uber article referenced in the \"Timeout\" section above.

    "},{"location":"aip2/0035-report-problem/#unresolved-questions","title":"Unresolved questions","text":""},{"location":"aip2/0035-report-problem/#implementations","title":"Implementations","text":"

    The following lists the implementations (if any) of this RFC. Please do a pull request to add your implementation. If the implementation is open source, include a link to the repo or to the implementation within the repo. Please be consistent in the \"Name\" field so that a mechanical processing of the RFCs can generate a list of all RFCs supported by an Aries implementation.

    Name / Link Implementation Notes RFC 0036: Issue Credential Protocol The problem-report message is adopted by this protocol. MISSING test results RFC 0037: Present Proof Protocol The problem-report message is adopted by this protocol. MISSING test results Trinsic.id Commercial mobile and web app built using Aries Framework - .NET MISSING test results"},{"location":"aip2/0044-didcomm-file-and-mime-types/","title":"Aries RFC 0044: DIDComm File and MIME Types","text":""},{"location":"aip2/0044-didcomm-file-and-mime-types/#summary","title":"Summary","text":"

    Defines the media (MIME) types and file types that hold DIDComm messages in encrypted, signed, and plaintext forms. Covers DIDComm V1, plus a little of V2 to clarify how DIDComm versions are detected.

    "},{"location":"aip2/0044-didcomm-file-and-mime-types/#motivation","title":"Motivation","text":"

    Most work on DIDComm so far has assumed HTTP as a transport. However, we know that DID communication is transport-agnostic. We should be able to say the same thing no matter which channel we use.

    An incredibly important channel or transport for messages is digital files. Files can be attached to messages in email or chat, can be carried around on a thumb drive, can be backed up, can be distributed via CDN, can be replicated on distributed file systems like IPFS, can be inserted in an object store or in content-addressable storage, can be viewed and modified in editors, and support a million other uses.

    We need to define how files and attachments can contain DIDComm messages, and what the semantics of processing such files will be.

    "},{"location":"aip2/0044-didcomm-file-and-mime-types/#tutorial","title":"Tutorial","text":""},{"location":"aip2/0044-didcomm-file-and-mime-types/#media-types","title":"Media Types","text":"

    Media types are based on the conventions of RFC6838. Similar to RFC7515, the application/ prefix MAY be omitted and the recipient MUST treat media types not containing / as having the application/ prefix present.
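    The prefix rule above can be sketched as a one-line normalization. This is an illustrative Python fragment (the function name is hypothetical) showing a recipient treating a media type with no \"/\" as having the application/ prefix, per the RFC7515-style convention the text cites:

    ```python
    # Sketch: a media type value with no "/" is treated as if "application/"
    # were prepended; values that already contain "/" pass through unchanged.
    def normalize_media_type(value):
        return value if "/" in value else "application/" + value

    print(normalize_media_type("didcomm-envelope-enc"))
    print(normalize_media_type("application/jwe"))
    ```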

    "},{"location":"aip2/0044-didcomm-file-and-mime-types/#didcomm-v1-encrypted-envelope-dee","title":"DIDComm v1 Encrypted Envelope (*.dee)","text":"

    The raw bytes of an encrypted envelope may be persisted to a file without any modifications whatsoever. In such a case, the data will be encrypted and packaged such that only specific receiver(s) can process it. However, the file will contain a JOSE-style header that can be used by magic bytes algorithms to detect its type reliably.

    The file extension associated with this filetype is dee, giving a globbing pattern of *.dee; this should be read as \"STAR DOT D E E\" or as \"D E E\" files.

    The name of this file format is \"DIDComm V1 Encrypted Envelope.\" We expect people to say, \"I am looking at a DIDComm V1 Encrypted Envelope\", or \"This file is in DIDComm V1 Encrypted Envelope format\", or \"Does my editor have a DIDComm V1 Encrypted Envelope plugin?\"

    Although the format of encrypted envelopes is derived from JSON and the JWT/JWE family of specs, no useful processing of these files will take place by viewing them as JSON, and viewing them as generic JWEs will greatly constrain which semantics are applied. Therefore, the recommended MIME type for *.dee files is application/didcomm-envelope-enc, with application/jwe as a fallback, and application/json as an even less desirable fallback. (In this, we are making a choice similar to the one that views *.docx files primarily as application/msword instead of application/xml.) If format evolution takes place, the version could become a parameter as described in RFC 1341: application/didcomm-envelope-enc;v=2.

    A recipient using the media type value MUST treat it as if \u201capplication/\u201d were prepended to any \"typ\" or \"cty\" value not containing a \u2018/\u2019, in compliance with the JWE/JWS family of specs.

    The default action for DIDComm V1 Encrypted Envelopes (what happens when a user double-clicks one) should be Handle (that is, process the message as if it had just arrived by some other transport), if the software handling the message is an agent. In other types of software, the default action might be to view the file. Other useful actions might include Send, Attach (to email, chat, etc), Open with agent, and Decrypt to *.dm.

    NOTE: The analog to this content type in DIDComm v2 is called a \"DIDComm Encrypted Message.\" Its format is slightly different. For more info, see Detecting DIDComm Versions below.

    "},{"location":"aip2/0044-didcomm-file-and-mime-types/#didcomm-v1-signed-envelopes-dse","title":"DIDComm V1 Signed Envelopes (*.dse)","text":"

    When DIDComm messages are signed, the signing uses a JWS signing envelope. Often signing is unnecessary, since authenticated encryption proves the sender of the message to the recipient(s), but sometimes when non-repudiation is required, this envelope is used. It is also required when the recipient of a message is unknown, but tamper-evidence is still required, as in the case of a public invitation.

    By convention, DIDComm Signed Envelopes contain plaintext; if encryption is used in combination with signing, the DSE goes inside the DEE.

    The file extension associated with this filetype is dse, giving a globbing pattern of *.dse; this should be read as \"STAR DOT D S E\" or as \"D S E\" files.

    The name of this file format is \"DIDComm V1 Signed Envelope.\" We expect people to say, \"I am looking at a DIDComm V1 Signed Envelope\", or \"This file is in DIDComm V1 Signed Envelope format\", or \"Does my editor have a DIDComm V1 Signed Envelope plugin?\"

    As with *.dee files, the best way to handle *.dse files is to map them to a custom MIME type. The recommendation is application/didcomm-sig-env, with application/jws as a fallback, and application/json as an even less desirable fallback.

    A recipient using the media type value MUST treat it as if \u201capplication/\u201d were prepended to any \"typ\" or \"cty\" value not containing a \u2018/\u2019, in compliance with the JWE/JWS family of specs.

    The default action for DIDComm V1 Signed Envelopes (what happens when a user double-clicks one) should be Validate (that is, process the signature to see if it is valid).

    NOTE: The analog to this content type in DIDComm v2 is called a \"DIDComm Signed Message.\" Its format is slightly different. For more info, see Detecting DIDComm Versions below.

    "},{"location":"aip2/0044-didcomm-file-and-mime-types/#didcomm-v1-messages-dm","title":"DIDComm V1 Messages (*.dm)","text":"

    The plaintext representation of a DIDComm message--something like a credential offer, a proof request, a connection invitation, or anything else worthy of a DIDComm protocol--is JSON. As such, it should be editable by anything that expects JSON.

    However, all such files have some additional conventions, over and above the simple requirements of JSON. For example, key decorators have special meaning (@id, ~thread, @trace, etc). Nonces may be especially significant. The format of particular values such as DID and DID+key references is important. Therefore, we refer to these messages generically as JSON, but we also define a file format for tools that are aware of the additional semantics.

    The file extension associated with this filetype is *.dm, and should be read as \"STAR DOT D M\" or \"D M\" files. If a format evolution takes place, a subsequent version could be noted by appending a digit, as in *.dm2 for second-generation dm files.

    The name of this file format is \"DIDComm V1 Message.\" We expect people to say, \"I am looking at a DIDComm V1 Message\", or \"This file is in DIDComm V1 Message format\", or \"Does my editor have a DIDComm V1 Message plugin?\" For extra clarity, it is acceptable to add the adjective \"plaintext\", as in \"DIDComm V1 Plaintext Message.\"

    The most specific MIME type of *.dm files is application/json;flavor=didcomm-msg--or, if more generic handling is appropriate, just application/json.

    A recipient using the media type value MUST treat it as if \u201capplication/\u201d were prepended to any \"typ\" or \"cty\" value not containing a \u2018/\u2019 in compliance with the JWE/JWS family of specs.

    The default action for DIDComm V1 Messages should be to View or Validate them. Other interesting actions might be Encrypt to *.dee, Sign to *.dse, and Find definition of protocol.

    NOTE: The analog to this content type in DIDComm v2 is called a \"DIDComm Plaintext Message.\" Its format is slightly different. For more info, see Detecting DIDComm Versions below.

    As a general rule, DIDComm messages that are being sent in production use cases of DID communication should be stored in encrypted form (*.dee) at rest. There are cases where this might not be preferred, e.g., providing documentation of the format of message or during a debugging scenario using message tracing. However, these are exceptional cases. Storing meaningful *.dm files decrypted is not a security best practice, since it replaces all the privacy and security guarantees provided by the DID communication mechanism with only the ACLs and other security barriers that are offered by the container.

    "},{"location":"aip2/0044-didcomm-file-and-mime-types/#native-object-representation","title":"Native Object representation","text":"

    This is not a file format, but rather an in-memory form of a DIDComm Message using whatever object hierarchy is natural for a programming language to map to and from JSON. For example, in python, the natural Native Object format is a dict that contains properties indexed by strings. This is the representation that python's json library expects when converting to JSON, and the format it produces when converting from JSON. In Java, Native Object format might be a bean. In C++, it might be a std::map<std::string, variant>...

    There can be more than one Native Object representation for a given programming language.

    Native Object forms are never rendered directly to files; rather, they are serialized to DIDComm Plaintext Format and then persisted (likely after also encrypting to DIDComm V1 Encrypted Envelope).
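    As an illustration of the dict-based Native Object form in Python (the message fields are borrowed from the trust ping example elsewhere in this document; the round trip itself is just the standard json library):

```python
import json

# Native Object form: a plain dict indexed by strings.
message = {
    "@type": "https://didcomm.org/trust_ping/1.0/ping",
    "@id": "518be002-de8e-456e-b3d5-8fe472477a86",
}

# Serialize to DIDComm Plaintext Format before persisting
# (and, in production, before encrypting to a *.dee envelope).
plaintext = json.dumps(message)

# Deserializing recovers the same Native Object.
restored = json.loads(plaintext)
assert restored == message
```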

    "},{"location":"aip2/0044-didcomm-file-and-mime-types/#negotiating-compatibility","title":"Negotiating Compatibility","text":"

    When parties want to communicate via DIDComm, a number of mechanisms must align. These include:

    1. The type of service endpoint used by each party
    2. The key types used for encryption and/or signing
    3. The format of the encryption and/or signing envelopes
    4. The encoding of plaintext messages
    5. The protocol used to forward and route
    6. The protocol embodied in the plaintext messages

    Although DIDComm allows flexibility in each of these choices, it is not expected that a given DIDComm implementation will support many permutations. Rather, we expect a few sets of choices that commonly go together. We call a set of choices that work well together a profile. Profiles are identified by a string that matches the conventions of IANA media types, but they express choices about plaintext, encryption, signing, and routing in a single value. The following profile identifiers are defined in this version of the RFC:

    "},{"location":"aip2/0044-didcomm-file-and-mime-types/#defined-profiles","title":"Defined Profiles","text":"

    Profiles are named in the accept section of a DIDComm service endpoint and in an out-of-band message. When Alice declares that she accepts didcomm/aip2;env=rfc19, she is making a declaration about more than her own endpoint. She is saying that all publicly visible steps in an inbound route to her will use the didcomm/aip2;env=rfc19 profile, such that a sender only has to use didcomm/aip2;env=rfc19 choices to get the message from Alice's outermost mediator to Alice's edge. It is up to Alice to select and configure mediators and internal routing in such a way that this is true for the sender.

    "},{"location":"aip2/0044-didcomm-file-and-mime-types/#detecting-didcomm-versions","title":"Detecting DIDComm Versions","text":"

    Because media types differ from DIDComm V1 to V2, and because media types are easy to communicate in headers and message fields, they are a convenient way to detect which version of DIDComm applies in a given context:

    | Nature of Content | V1 | V2 |
    | --- | --- | --- |
    | encrypted | application/didcomm-envelope-enc (\"DIDComm V1 Encrypted Envelope\", *.dee) | application/didcomm-encrypted+json (\"DIDComm Encrypted Message\", *.dcem) |
    | signed | application/didcomm-sig-env (\"DIDComm V1 Signed Envelope\", *.dse) | application/didcomm-signed+json (\"DIDComm Signed Message\", *.dcsm) |
    | plaintext | application/json;flavor=didcomm-msg (\"DIDComm V1 Message\", *.dm) | application/didcomm-plain+json (\"DIDComm Plaintext Message\", *.dcpm) |
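    A hedged sketch of how an agent might apply this mapping in code (Python; the lookup table mirrors the media types listed above, but `detect_didcomm_version` is a hypothetical helper, not a defined API):

```python
# Media type -> DIDComm version, per the mapping above.
DIDCOMM_VERSION_BY_MEDIA_TYPE = {
    "application/didcomm-envelope-enc": "V1",     # DIDComm V1 Encrypted Envelope (*.dee)
    "application/didcomm-sig-env": "V1",          # DIDComm V1 Signed Envelope (*.dse)
    "application/json;flavor=didcomm-msg": "V1",  # DIDComm V1 Message (*.dm)
    "application/didcomm-encrypted+json": "V2",   # DIDComm Encrypted Message (*.dcem)
    "application/didcomm-signed+json": "V2",      # DIDComm Signed Message (*.dcsm)
    "application/didcomm-plain+json": "V2",       # DIDComm Plaintext Message (*.dcpm)
}

def detect_didcomm_version(media_type: str) -> str:
    """Return "V1" or "V2"; raises KeyError for an unknown media type."""
    return DIDCOMM_VERSION_BY_MEDIA_TYPE[media_type]
```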

    It is also recommended that agents implementing Discover Features Protocol v2 respond to queries about supported DIDComm versions using the didcomm-version feature name. This allows queries about what an agent is willing to support, whereas the media type mechanism describes what is in active use. The values that should be returned from such a query are URIs that tell where DIDComm versions are developed:

    | Version | URI |
    | --- | --- |
    | V1 | https://github.com/hyperledger/aries-rfcs |
    | V2 | https://github.com/decentralized-identity/didcomm-messaging |"},{"location":"aip2/0044-didcomm-file-and-mime-types/#what-it-means-to-implement-this-rfc","title":"What it means to \"implement\" this RFC","text":"

    For the purposes of Aries Interop Profiles, an agent \"implements\" this RFC when:

    "},{"location":"aip2/0044-didcomm-file-and-mime-types/#reference","title":"Reference","text":"

    The file extensions and MIME types described here are also accompanied by suggested graphics. Vector forms of these graphics are available.

    "},{"location":"aip2/0044-didcomm-file-and-mime-types/#implementations","title":"Implementations","text":"

    The following lists the implementations (if any) of this RFC. Please do a pull request to add your implementation. If the implementation is open source, include a link to the repo or to the implementation within the repo. Please be consistent in the \"Name\" field so that a mechanical processing of the RFCs can generate a list of all RFCs supported by an Aries implementation.

    Name / Link Implementation Notes"},{"location":"aip2/0046-mediators-and-relays/","title":"Aries RFC 0046: Mediators and Relays","text":""},{"location":"aip2/0046-mediators-and-relays/#summary","title":"Summary","text":"

    The mental model for agent-to-agent (A2A) messaging includes two important communication primitives that have a meaning unique to our ecosystem: mediator and relay.

    A mediator is a participant in agent-to-agent message delivery that must be modeled by the sender. It has its own keys and will deliver messages only after decrypting an outer envelope to reveal a forward request. Many types of mediators may exist, but two important ones should be widely understood, as they commonly manifest in DID Docs:

    1. A service that hosts many cloud agents at a single endpoint to provide herd privacy (an \"agency\") is a mediator.
    2. A cloud-based agent that routes between/among the edges of a sovereign domain is a mediator.

    A relay is an entity that passes along agent-to-agent messages, but that can be ignored when the sender considers encryption choices. It does not decrypt anything. Relays can be used to change the transport for a message (e.g., accept an HTTP POST, then turn around and emit an email; accept a Bluetooth transmission, then turn around and emit something in a message queue). Mix networks like TOR are an important type of relay.

    Read on to explore how agent-to-agent communication can model complex topologies and flows using these two primitives.

    "},{"location":"aip2/0046-mediators-and-relays/#motivation","title":"Motivation","text":"

    When we describe agent-to-agent communication, it is convenient to think of an interaction only in terms of Alice and Bob and their agents. We say things like: \"Alice's agent sends a message to Bob's agent\" -- or perhaps \"Alice's edge agent sends a message to Bob's cloud agent, which forwards it to Bob's edge agent\".

    Such statements adopt a useful level of abstraction--one that's highly recommended for most discussions. However, they make a number of simplifications. By modeling the roles of mediators and relays in routing, we can support routes that use multiple transports, routes that are not fully known (or knowable) to the sender, routes that pass through mix networks, and other advanced and powerful concepts.

    "},{"location":"aip2/0046-mediators-and-relays/#tutorial","title":"Tutorial","text":""},{"location":"aip2/0046-mediators-and-relays/#key-concepts","title":"Key Concepts","text":"

    Let's define mediators and relays by exploring how they manifest in a series of communication scenarios between Alice and Bob.

    "},{"location":"aip2/0046-mediators-and-relays/#scenario-1-base","title":"Scenario 1 (base)","text":"

    Alice and Bob are both employees of a large corporation. They work in the same office, but have never met. The office has a rule that all messages between employees must be encrypted. They use paper messages and physical delivery as the transport. Alice writes a note, encrypts it so only Bob can read it, puts it in an envelope addressed to Bob, and drops the envelope on a desk that she has been told belongs to Bob. This desk is in fact Bob's, and he later picks up the message, decrypts it, and reads it.

    In this scenario, there is no mediator, and no relay.

    "},{"location":"aip2/0046-mediators-and-relays/#scenario-2-a-gatekeeper","title":"Scenario 2: a gatekeeper","text":"

    Imagine that Bob hires an executive assistant, Carl, to filter his mail. Bob won't open any mail unless Carl looks at it and decides that it's worthy of Bob's attention.

    Alice has to change her behavior. She continues to package a message for Bob, but now she must account for Carl as well. She takes the envelope for Bob, and places it inside a new envelope addressed to Carl. Inside the outer envelope, and next to the envelope destined for Bob, Alice writes Carl an encrypted note: \"This inner envelope is for Bob. Please forward.\"

    Here, Carl is acting as a mediator. He is mostly just passing messages along. But because he is processing a message himself, and because Carl is interposed between Alice and Bob, he affects the behavior of the sender. He is a known entity in the route.

    "},{"location":"aip2/0046-mediators-and-relays/#scenario-3-transparent-indirection","title":"Scenario 3: transparent indirection","text":"

    All is the same as the base scenario (Carl has been fired), except that Bob is working from home when Alice's message lands on his desk. Bob has previously arranged with his friend Darla, who lives near him, to pick up any mail that's on his desk and drop it off at his house at the end of the work day. Darla sees Alice's note and takes it home to Bob.

    In this scenario, Darla is acting as a relay. Note that Bob arranges for Darla to do this without notifying Alice, and that Alice does not need to adjust her behavior in any way for the relay to work.

    "},{"location":"aip2/0046-mediators-and-relays/#scenario-4-more-indirection","title":"Scenario 4: more indirection","text":"

    Like scenario 3, Darla brings Bob his mail at home. However, Bob isn't at home when his mail arrives. He's had to rush out on an errand, but he's left instructions with his son, Emil, to open any work mail, take a photo of the letter, and text him the photo. Emil intends to do this, but the camera on his phone misfires, so he convinces his sister, Francis, to take the picture on her phone and email it to him. Then he texts the photo to Bob, as arranged.

    Here, Emil and Francis are also acting as relays. Note that nobody knows about the full route. Alice thinks she's delivering directly to Bob. So does Darla. Bob knows about Darla and Emil, but not about Francis.

    Note, too, how the transport is changing from physical mail to email to text.

    To the party immediately upstream (closer to the sender), a relay is indistinguishable from the next party downstream (closer to the recipient). A party anywhere in the chain can insert one or more relays upstream from themselves, as long as those relays are not upstream of another named party (sender or mediator).

    "},{"location":"aip2/0046-mediators-and-relays/#more-scenarios","title":"More Scenarios","text":"

    Mediators and relays can be combined in any order and any amount in variations on our fictional scenario. Bob could employ Carl as a mediator, and Carl could work from home and arrange delivery via George, then have his daughter Hannah run messages back to Bob's desk at work. Carl could hire his own mediator. Darla could arrange for Ivan to substitute for her when she goes on vacation. And so forth.

    "},{"location":"aip2/0046-mediators-and-relays/#more-traditional-usage","title":"More Traditional Usage","text":"

    The scenarios used above are somewhat artificial. Our most familiar agent-to-agent scenarios involve edge agents running on mobile devices and accessible through Bluetooth or push notification, and cloud agents that use electronic protocols as their transport. Let's see how relays and mediators apply there.

    "},{"location":"aip2/0046-mediators-and-relays/#scenario-5-traditional-base","title":"Scenario 5 (traditional base)","text":"

    Alice's cloud agent wants to talk to Bob's cloud agent. Bob's cloud agent is listening at http://bob.com/agent. Alice encrypts a message for Bob and posts it to that URL.

    In this scenario, we are using a direct transport with neither a mediator nor a relay.

    If you are familiar with common routing patterns and you are steeped in HTTP, you are likely objecting at this point, pointing out ways that this description diverges from best practice, including what's prescribed in other RFCs. You may be eager to explain why this is a privacy problem, for example.

    You are not wrong, exactly. But please suspend those concerns and hang with me. This is about what's theoretically possible in the mental model. Besides, I would note that virtually the same diagram could be used for a Bluetooth agent conversation:

    "},{"location":"aip2/0046-mediators-and-relays/#scenario-6-herd-hosting","title":"Scenario 6: herd hosting","text":"

    Let's tweak Scenario 5 slightly by saying that Bob's agent is one of thousands that are hosted at the same URL. Maybe the URL is now http://agents-r-us.com/inbox. Now if Alice wants to talk to Bob's cloud agent, she has to cope with a mediator. She wraps the encrypted message for Bob's cloud agent inside a forward message that's addressed to and encrypted for the agent of agents-r-us that functions as a gatekeeper.

    This scenario is one that highlights an external mediator--so-called because the mediator lives outside the sovereign domain of the final recipient.

    "},{"location":"aip2/0046-mediators-and-relays/#scenario-7-intra-domain-dispatch","title":"Scenario 7: intra-domain dispatch","text":"

    Now let's subtract agents-r-us. We're back to Bob's cloud agent listening directly at http://bob.com/agent. However, let's say that Alice has a different goal--now she wants to talk to the edge agent running on Bob's mobile device. This agent doesn't have a permanent IP address, so Bob uses his own cloud agent as a mediator. He tells Alice that his mobile device agent can only be reached via his cloud agent.

    Once again, this causes Alice to modify her behavior. Again, she wraps her encrypted message. The inner message is enclosed in an outer envelope, and the outer envelope is passed to the mediator.

    This scenario highlights an internal mediator. Internal and external mediators introduce similar features and similar constraints; the relevant difference is that internal mediators live within the sovereign domain of the recipient, and may thus be worthy of greater trust.

    "},{"location":"aip2/0046-mediators-and-relays/#scenario-8-double-mediation","title":"Scenario 8: double mediation","text":"

    Now let's combine. Bob's cloud agent is hosted at agents-r-us, AND Alice wants to reach Bob's mobile:

    This is a common pattern with HTTP-based cloud agents plus mobile edge agents, which is the most common deployment pattern we expect for many users of self-sovereign identity. Note that the properties of the agency and the routing agent are not particularly special--they are just an external and an internal mediator, respectively.

    "},{"location":"aip2/0046-mediators-and-relays/#related-concepts","title":"Related Concepts","text":""},{"location":"aip2/0046-mediators-and-relays/#routes-are-one-way-not-duplex","title":"Routes are One-Way (not duplex)","text":"

    In all of this discussion, note that we are analyzing only a flow from Alice to Bob. How Bob gets a message back to Alice is a completely separate question. Just because Carl, Darla, Emil, Francis, and Agents-R-Us may be involved in how messages flow from Alice to Bob, does not mean they are involved in the flow in the opposite direction.

    Note how this breaks the simple assumptions of pure request-response technologies like HTTP, that assume the channel in (request) is also the channel out (response). Duplex request-response can be modeled with A2A, but doing so requires support that may not always be available, plus cooperative behavior governed by the ~thread decorator.

    "},{"location":"aip2/0046-mediators-and-relays/#conventions-on-direction","title":"Conventions on Direction","text":"

    For any given one-way route, the direction of flow is always from sender to receiver. We could use many different metaphors to talk about the \"closer to sender\" and \"closer to receiver\" directions -- upstream and downstream, left and right, before and after, in and out. We've chosen to standardize on two:

    "},{"location":"aip2/0046-mediators-and-relays/#implementations","title":"Implementations","text":"

    The following lists the implementations (if any) of this RFC. Please do a pull request to add your implementation. If the implementation is open source, include a link to the repo or to the implementation within the repo. Please be consistent in the \"Name\" field so that a mechanical processing of the RFCs can generate a list of all RFCs supported by an Aries implementation.

    Name / Link Implementation Notes Connect.Me Free mobile app from Evernym. Installed via app store on iOS and Android. Verity Commercially licensed enterprise agent, SaaS or on-prem. DIDComm mediator Open source cloud-based mediator with Firebase support."},{"location":"aip2/0047-json-ld-compatibility/","title":"Aries RFC 0047: JSON-LD Compatibility","text":""},{"location":"aip2/0047-json-ld-compatibility/#summary","title":"Summary","text":"

    Explains the goals of DID Communication with respect to JSON-LD, and how Aries proposes to accomplish them.

    "},{"location":"aip2/0047-json-ld-compatibility/#motivation","title":"Motivation","text":"

    JSON-LD is a familiar body of conventions that enriches the expressive power of plain JSON. It is natural for people who arrive in the DID Communication (DIDComm) ecosystem to wonder whether we are using JSON-LD--and if so, how. We need a coherent answer that clarifies our intentions and that keeps us true to those intentions as the ecosystem evolves.

    "},{"location":"aip2/0047-json-ld-compatibility/#tutorial","title":"Tutorial","text":"

    The JSON-LD spec is a recommendation work product of the W3C RDF Working Group. Since it was formally recommended as version 1.0 in 2014, the JSON for Linking Data Community Group has taken up not-yet-standards-track work on a 1.1 update.

    JSON-LD has significant gravitas in identity circles. It gives to JSON some capabilities that are sorely needed to model the semantic web, including linking, namespacing, datatyping, signing, and a strong story for schema (partly through the use of JSON-LD on schema.org).

    However, JSON-LD also comes with some conceptual and technical baggage. It can be hard for developers to master its subtleties; it requires very flexible parsing behavior after built-in JSON support is used to deserialize; it references a family of related specs that have their own learning curve; the formality of its test suite and libraries may get in the way of a developer who just wants to read and write JSON and \"get stuff done.\"

    In addition, the problem domain of DIDComm is somewhat different from the places where JSON-LD has the most traction. The sweet spot for DIDComm is small, relatively simple JSON documents where code behavior is strongly bound to the needs of a specific interaction. DIDComm needs to work with extremely simple agents on embedded platforms. Such agents may experience full JSON-LD support as an undue burden when they don't even have a familiar desktop OS. They don't need arbitrary semantic complexity.

    If we wanted to use email technology to send a verifiable credential, we would model the credential as an attachment, not enrich the schema of raw email message bodies. DIDComm invites a similar approach.

    "},{"location":"aip2/0047-json-ld-compatibility/#goal","title":"Goal","text":"

    The DIDComm messaging effort that began in the Indy community wants to benefit from the accessibility of ordinary JSON, but leave an easy path for more sophisticated JSON-LD-driven patterns when the need arises. We therefore set for ourselves this goal:

    Be compatible with JSON-LD, such that advanced use cases can take advantage of it where it makes sense, but impose no dependencies on the mental model or the tooling of JSON-LD for the casual developer.

    "},{"location":"aip2/0047-json-ld-compatibility/#what-the-casual-developer-needs-to-know","title":"What the Casual Developer Needs to Know","text":"

    That's it.

    "},{"location":"aip2/0047-json-ld-compatibility/#details","title":"Details","text":"

    Compatibility with JSON-LD was evaluated against version 1.1 of the JSON-LD spec, current in early 2019. If material changes in the spec are forthcoming, a new analysis may be worthwhile. Our current understanding follows.

    "},{"location":"aip2/0047-json-ld-compatibility/#type","title":"@type","text":"

    The type of a DIDComm message, and its associated route or handler in dispatching code, is given by the JSON-LD @type property at the root of a message. JSON-LD requires this value to be an IRI. DIDComm DID references are fully compliant. Instances of @type on any node other than a message root have JSON-LD meaning, but no predefined relevance in DIDComm.

    "},{"location":"aip2/0047-json-ld-compatibility/#id","title":"@id","text":"

    The identifier for a DIDComm message is given by the JSON-LD @id property at the root of a message. JSON-LD requires this value to be an IRI. DIDComm message IDs are relative IRIs, and can be converted to absolute form as described in RFC 0217: Linkable Message Paths. Instances of @id on any node other than a message root have JSON-LD meaning, but no predefined relevance in DIDComm.

    "},{"location":"aip2/0047-json-ld-compatibility/#context","title":"@context","text":"

    This is JSON-LD\u2019s namespacing mechanism. It is active in DIDComm messages, but can be ignored for simple processing, in the same way namespaces in XML are often ignored for simple tasks.

    Every DIDComm message has an associated @context, but we have chosen to follow the procedure described in section 6 of the JSON-LD spec, which focuses on how ordinary JSON can be interpreted as JSON-LD by communicating @context out of band.

    DIDComm messages communicate the context out of band by specifying it in the protocol definition (e.g., RFC) for the associated message type; thus, the value of @type indirectly gives the relevant @context. In advanced use cases, @context may appear in a DIDComm message, supplementing this behavior.

    "},{"location":"aip2/0047-json-ld-compatibility/#ordering","title":"Ordering","text":"

    JSON-LD specifies that the order of items in arrays is NOT significant, and notes (correctly) that this is the opposite of the standard assumption for plain JSON. This makes sense when viewed through the lens of JSON-LD\u2019s role as a transformation of RDF.

    Since we want to violate as few assumptions as possible for a developer with general knowledge of JSON, DIDComm messages reverse this default, making arrays an ordered construct, as if all DIDComm message @contexts contained something like:

    \"each field\": { \"@container\": \"@list\"}\n
    To contravene the default, use a JSON-LD construction like this in @context:

    \"myfield\": { \"@container\": \"@set\"}\n
    "},{"location":"aip2/0047-json-ld-compatibility/#decorators","title":"Decorators","text":"

    Decorators are JSON fragments that can be included in any DIDComm message. They enter the formally defined JSON-LD namespace via a JSON-LD fragment that is automatically imputed to every DIDComm message:

    \"@context\": {\n  \"@vocab\": \"https://github.com/hyperledger/aries-rfcs/\"\n}\n

    All decorators use the reserved prefix char ~ (tilde). For more on decorators, see the Decorator RFC.
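    Because decorators share the reserved ~ prefix, they can be separated from content fields mechanically; a minimal sketch (Python; `split_decorators` is a hypothetical helper, not part of any Aries framework):

```python
def split_decorators(message: dict):
    # Decorators are the keys starting with the reserved "~" prefix;
    # everything else is message content.
    decorators = {k: v for k, v in message.items() if k.startswith("~")}
    content = {k: v for k, v in message.items() if not k.startswith("~")}
    return decorators, content
```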

    "},{"location":"aip2/0047-json-ld-compatibility/#signing","title":"Signing","text":"

    JSON-LD is associated but not strictly bound to a signing mechanism, LD-Signatures. It\u2019s a good mechanism, but it comes with some baggage: you must canonicalize, which means you must resolve every \u201cterm\u201d (key name) to its fully qualified form by expanding contexts before signing. This raises the bar for JSON-LD sophistication and library dependencies.

    The DIDComm community is not opposed to using LD Signatures for problems that need them, but has decided not to adopt the mechanism across the board. There is another signing mechanism that is far simpler, and adequate for many scenarios. We\u2019ll use whichever scheme is best suited to circumstances.

    "},{"location":"aip2/0047-json-ld-compatibility/#type-coercion","title":"Type Coercion","text":"

    DIDComm messages generally do not need this feature of JSON-LD, because there are well understood conventions around date-time datatypes, and individual RFCs that define each message type can further clarify such subtleties. However, it is available on a message-type-definition basis (not ad hoc).

    "},{"location":"aip2/0047-json-ld-compatibility/#node-references","title":"Node References","text":"

    JSON-LD lets one field reference another. See example 93 (note that the ref could have just been \u201c#me\u201d instead of the fully qualified IRI). We may need this construct at some point in DIDComm, but it is not in active use yet.

    "},{"location":"aip2/0047-json-ld-compatibility/#internationalization-and-localization","title":"Internationalization and Localization","text":"

    JSON-LD describes a mechanism for this. It has approximately the same features as the one described in Aries RFC 0043, with a few exceptions:

    Because of these misalignments, the DIDComm ecosystem plans to use its own solution to this problem.

    "},{"location":"aip2/0047-json-ld-compatibility/#additional-json-ld-constructs","title":"Additional JSON-LD Constructs","text":"

    The following JSON-LD keywords may be useful in DIDComm at some point in the future: @base, @index, @container (cf @list and @set), @nest, @value, @graph, @prefix, @reverse, @version.

    "},{"location":"aip2/0047-json-ld-compatibility/#drawbacks","title":"Drawbacks","text":"

    By attempting compatibility but only lightweight usage of JSON-LD, we are neither all-in on JSON-LD, nor all-out. This could cause confusion. We are making the bet that most developers won't need to know or care about the details; they'll simply learn that @type and @id are special, required fields on messages. Designers of protocols will need to know a bit more.

    "},{"location":"aip2/0047-json-ld-compatibility/#rationale-and-alternatives","title":"Rationale and alternatives","text":""},{"location":"aip2/0047-json-ld-compatibility/#unresolved-questions","title":"Unresolved questions","text":""},{"location":"aip2/0047-json-ld-compatibility/#implementations","title":"Implementations","text":"

    The following lists the implementations (if any) of this RFC. Please do a pull request to add your implementation. If the implementation is open source, include a link to the repo or to the implementation within the repo. Please be consistent in the \"Name\" field so that a mechanical processing of the RFCs can generate a list of all RFCs supported by an Aries implementation.

    Name / Link Implementation Notes"},{"location":"aip2/0048-trust-ping/","title":"Aries RFC 0048: Trust Ping Protocol 1.0","text":""},{"location":"aip2/0048-trust-ping/#summary","title":"Summary","text":"

    Describe a standard way for agents to test connectivity, responsiveness, and security of a pairwise channel.

    "},{"location":"aip2/0048-trust-ping/#motivation","title":"Motivation","text":"

    Agents are distributed. They are not guaranteed to be connected or running all the time. They support a variety of transports, speak a variety of protocols, and run software from many different vendors.

    This can make it very difficult to prove that two agents have a functional pairwise channel. Troubleshooting connectivity, responsiveness, and security is vital.

    "},{"location":"aip2/0048-trust-ping/#tutorial","title":"Tutorial","text":"

    This protocol is analogous to the familiar ping command in networking--but because it operates over agent-to-agent channels, it is transport agnostic and asynchronous, and it can produce insights into privacy and security that a regular ping cannot.

    "},{"location":"aip2/0048-trust-ping/#roles","title":"Roles","text":"

    There are two parties in a trust ping: the sender and the receiver. The sender initiates the trust ping. The receiver responds. If the receiver wants to do a ping of their own, they can, but this is a new interaction in which they become the sender.

    "},{"location":"aip2/0048-trust-ping/#messages","title":"Messages","text":"

    The trust ping interaction begins when sender creates a ping message like this:

    {\n  \"@type\": \"https://didcomm.org/trust_ping/1.0/ping\",\n  \"@id\": \"518be002-de8e-456e-b3d5-8fe472477a86\",\n  \"~timing\": {\n    \"out_time\": \"2018-12-15 04:29:23Z\",\n    \"expires_time\": \"2018-12-15 05:29:23Z\",\n    \"delay_milli\": 0\n  },\n  \"comment\": \"Hi. Are you listening?\",\n  \"response_requested\": true\n}\n

    Only @type and @id are required; ~timing.out_time, ~timing.expires_time, and ~timing.delay_milli are optional message timing decorators, and comment follows the conventions of localizable message fields. If present, it may be used to display a human-friendly description of the ping to a user that gives approval to respond. (Whether an agent responds to a trust ping is a decision for each agent owner to make, per policy and/or interaction with their agent.)

    The response_requested field deserves special mention. The normal expectation of a trust ping is that it elicits a response. However, it may be desirable to do a unilateral trust ping at times--communicate information without any expectation of a reaction. In this case, \"response_requested\": false may be used. This might be useful, for example, to defeat correlation between request and response (to generate noise). Or agents A and B might agree that periodically A will ping B without a response, as a way of evidencing that A is up and functional. If response_requested is false, then the receiver MUST NOT respond.

    When the message arrives at the receiver, assuming that response_requested is not false, the receiver should reply as quickly as possible with a ping_response message that looks like this:

    {\n  \"@type\": \"https://didcomm.org/trust_ping/1.0/ping_response\",\n  \"@id\": \"e002518b-456e-b3d5-de8e-7a86fe472847\",\n  \"~thread\": { \"thid\": \"518be002-de8e-456e-b3d5-8fe472477a86\" },\n  \"~timing\": { \"in_time\": \"2018-12-15 04:29:28Z\", \"out_time\": \"2018-12-15 04:31:00Z\"},\n  \"comment\": \"Hi yourself. I'm here.\"\n}\n

    Here, @type and ~thread are required, and the rest is optional.
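    The rules above can be sketched as receiver-side handler logic. This is an illustrative sketch, not a normative implementation; `send` is a hypothetical transport callback, and the field names follow the RFC examples.

```python
import uuid

def handle_ping(ping, send):
    """Handle an inbound trust ping message (dict). Returns the response sent, or None."""
    # Per the RFC: if response_requested is false, the receiver MUST NOT respond.
    if ping.get("response_requested") is False:
        return None
    response = {
        "@type": "https://didcomm.org/trust_ping/1.0/ping_response",
        "@id": str(uuid.uuid4()),
        # ~thread.thid ties the response back to the ping's @id.
        "~thread": {"thid": ping["@id"]},
    }
    send(response)
    return response
```

Note that the optional fields (~timing, comment) are simply omitted here; only @type, @id, and ~thread are populated.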

    "},{"location":"aip2/0048-trust-ping/#trust","title":"Trust","text":"

    This is the \"trust ping protocol\", not just the \"ping protocol.\" The \"trust\" in its name comes from several ../../features that the interaction gains by virtue of its use of standard agent-to-agent conventions:

    1. Messages should be associated with a message trust context that allows sender and receiver to evaluate how much trust can be placed in the channel. For example, both sender and receiver can check whether messages are encrypted with suitable algorithms and keys.

    2. Messages may be targeted at any known agent in the other party's sovereign domain, using cross-domain routing conventions, and may be encrypted and packaged to expose exactly and only the information desired, at each hop along the way. This allows two parties to evaluate the completeness of a channel and the alignment of all agents that maintain it.

    3. This interaction may be traced using the general message tracing mechanism.

    "},{"location":"aip2/0048-trust-ping/#implementations","title":"Implementations","text":"

    The following lists the implementations (if any) of this RFC. Please do a pull request to add your implementation. If the implementation is open source, include a link to the repo or to the implementation within the repo. Please be consistent in the \"Name\" field so that a mechanical processing of the RFCs can generate a list of all RFCs supported by an Aries implementation.

    Name / Link Implementation Notes Indy Cloud Agent - Python Reference agent implementation contributed by Sovrin Foundation and Community; MISSING test results Aries Framework - .NET .NET framework for building agents of all types; MISSING test results Streetcred.id Commercial mobile and web app built using Aries Framework - .NET; MISSING test results Aries Cloud Agent - Python Contributed by the government of British Columbia.; MISSING test results Aries Static Agent - Python Useful for cron jobs and other simple, automated use cases.; MISSING test results Aries Protocol Test Suite MISSING test results"},{"location":"aip2/0050-wallets/","title":"Aries RFC 0050: Wallets","text":""},{"location":"aip2/0050-wallets/#summary","title":"Summary","text":"

    Specify the external interfaces of identity wallets in the Indy ecosystem, as well as some background concepts, theory, tradeoffs, and internal implementation guidelines.

    "},{"location":"aip2/0050-wallets/#motivation","title":"Motivation","text":"

    Wallets are a familiar component metaphor that SSI has adopted from the world of cryptocurrencies. The translation isn't perfect, though; crypto wallets have only a subset of the features that an identity wallet needs. This causes problems, as coders may approach wallets in Indy with assumptions that are more narrow than our actual design target.

    Since wallets are a major vector for hacking and cybersecurity issues, casual or fuzzy wallet requirements are a recipe for frustration or disaster. Divergent and substandard implementations could undermine security more broadly. This argues for as much design guidance and implementation help as possible.

    Wallets are also a unit of identity portability--if an identity owner doesn't like how her software is working, she should be able to exercise her self- sovereignty by taking the contents of her wallet to a new service. This implies that wallets need certain types of interoperability in the ecosystem, if they are to avoid vendor lock-in.

    All of these reasons--to clarify design scope, to provide uniform high security, and to guarantee interop--suggest that we need a formal RFC to document wallet architecture.

    "},{"location":"aip2/0050-wallets/#tutorial","title":"Tutorial","text":"

    (For a slide deck that gives a simplified overview of all the content in this RFC, please see http://bit.ly/2JUcIiT. The deck also includes a link to a recorded presentation, if you prefer something verbal and interactive.)

    "},{"location":"aip2/0050-wallets/#what-is-an-identity-wallet","title":"What Is an Identity Wallet?","text":"

    Informally, an identity wallet (preferably not just \"wallet\") is a digital container for data that's needed to control a self-sovereign identity. We borrow this metaphor from physical wallets:

    Notice that we do not carry around in a physical wallet every document, key, card, photo, piece of currency, or credential that we possess. A wallet is a mechanism of convenient control, not an exhaustive repository. A wallet is portable. A wallet is worth safeguarding. Good wallets are organized so we can find things easily. A wallet has a physical location.

    What does this suggest about identity wallets?

    "},{"location":"aip2/0050-wallets/#types-of-sovereign-data","title":"Types of Sovereign Data","text":"

    Before we give a definitive answer to that question, let's take a detour for a moment to consider digital data. Actors in a self-sovereign identity ecosystem may own or control many different types of data:

    ...and much more. Different subsets of data may be worthy of different protection efforts:

    The data can also show huge variety in its size and in its richness:

    Because of the sensitivity difference, the size and richness difference, joint ownership, and different needs for access in different circumstances, we may store digital data in many different locations, with different backup regimes, different levels of security, and different cost profiles.

    "},{"location":"aip2/0050-wallets/#whats-out-of-scope","title":"What's Out of Scope","text":""},{"location":"aip2/0050-wallets/#not-a-vault","title":"Not a Vault","text":"

    This variety suggests that an identity wallet as a loose grab-bag of all our digital \"stuff\" will give us a poor design. We won't be able to make good tradeoffs that satisfy everybody; some will want rigorous, optimized search; others will want to minimize storage footprint; others will be concerned about maximizing security.

    We reserve the term vault to refer to the complex collection of all an identity owner's data:

    Note that a vault can contain an identity wallet. A vault is an important construct, and we may want to formalize its interface. But that is not the subject of this spec.

    "},{"location":"aip2/0050-wallets/#not-a-cryptocurrency-wallet","title":"Not A Cryptocurrency Wallet","text":"

    The cryptocurrency community has popularized the term \"wallet\"--and because identity wallets share with crypto wallets both high-tech crypto and a need to store secrets, it is tempting to equate these two concepts. However, an identity wallet can hold more than just cryptocurrency keys, just as a physical wallet can hold more than paper currency. Also, identity wallets may need to manage hundreds of millions of relationships (in the case of large organizations), whereas most crypto wallets manage a small number of keys:

    "},{"location":"aip2/0050-wallets/#not-a-gui","title":"Not a GUI","text":"

    As used in this spec, an identity wallet is not a visible application, but rather a data store. Although user interfaces (superb ones!) can and should be layered on top of wallets, from Indy's perspective the wallet itself consists of a container and its data; its friendly face is a separate construct. We may casually refer to an application as a \"wallet\", but what we really mean is that the application provides an interface to the underlying wallet.

    This is important because if a user changes which app manages his identity, he should be able to retain the wallet data itself. We are aiming for a better portability story than browsers offer (where if you change browsers, you may be able to export+import your bookmarks, but you have to rebuild all sessions and logins from scratch).

    "},{"location":"aip2/0050-wallets/#personas","title":"Personas","text":"

    Wallets have many stakeholders. However, three categories of wallet users are especially impactful on design decisions, so we define a persona for each.

    "},{"location":"aip2/0050-wallets/#alice-individual-identity-owner","title":"Alice (individual identity owner)","text":"

    Alice owns several devices, and she has an agent in the cloud. She has a thousand relationships--some with institutions, some with other people. She has a couple hundred credentials. She owns three different types of cryptocurrency. She doesn\u2019t issue or revoke credentials--she just uses them. She receives proofs from other entities (people and orgs). Her main tool for exercising a self-sovereign identity is an app on a mobile device.

    "},{"location":"aip2/0050-wallets/#faber-intitutional-identity-owner","title":"Faber (institutional identity owner)","text":"

    Faber College has an on-prem data center as well as many resources and processes in public and private clouds. It has relationships with a million students, alumni, staff, former staff, applicants, business partners, suppliers, and so forth. Faber issues credentials and must manage their revocation. Faber may use crypto tokens to sell and buy credentials and proofs.

    "},{"location":"aip2/0050-wallets/#the-org-book-trust-hub","title":"The Org Book (trust hub)","text":"

    The Org Book holds credentials (business licenses, articles of incorporation, health permits, etc) issued by various government agencies, about millions of other business entities. It needs to index and search credentials quickly. Its data is public. It serves as a reference for many relying parties--thus its trust hub role.

    "},{"location":"aip2/0050-wallets/#use-cases","title":"Use Cases","text":"

    The specific use cases for an identity wallet are too numerous to fully list, but we can summarize them as follows:

    As an identity owner (any of the personas above), I want to manage identity and its relationships in a way that guarantees security and privacy:

    "},{"location":"aip2/0050-wallets/#managing-secrets","title":"Managing Secrets","text":"

    Certain sensitive things require special handling. We would never expect to casually lay an Ebola Zaire sample on the counter in our bio lab; rather, it must never leave a special controlled isolation chamber.

    Cybersecurity in wallets can be greatly enhanced if we take a similar tack with high-value secrets. We prefer to generate such secrets in their final resting place, possibly using a seed if we need determinism. We only use such secrets in their safe place, instead of passing them out to untrusted parties.

    TPMs, HSMs, and so forth follow these rules. Indy\u2019s current wallet interface does, too. You can\u2019t get private keys out.

    "},{"location":"aip2/0050-wallets/#composition","title":"Composition","text":"

    The foregoing discussions about cybersecurity, the desirability of design guidance and careful implementation, and wallet data that includes but is not limited to secrets motivate the following logical organization of identity wallets in Indy:

    The world outside a wallet interfaces with the wallet through a public interface provided by indy-sdk, and implemented only once. This is the block labeled encryption, query (wallet core) in the diagram. The implementation in this layer guarantees proper encryption and secret-handling. It also provides some query features. Records (items) to be stored in a wallet are referenced by a public handle if they are secrets. This public handle might be a public key in a key pair, for example. Records that are not secrets can be returned directly across the API boundary.

    Underneath, this common wallet code in libindy is supplemented with pluggable storage--a technology that provides persistence and query features. This pluggable storage could be a file system, an object store, an RDBMS, a NoSQL DB, a graph DB, a key-value store, or almost anything similar. The pluggable storage is registered with the wallet layer by providing a series of C-callable functions (callbacks). The storage layer doesn't have to worry about encryption at all; by the time data reaches it, it is encrypted robustly, and the layer above the storage takes care of translating queries to and from encrypted form for external consumers of the wallet.
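    The division of labor can be sketched with a toy in-memory backend. This is a hypothetical sketch (method names are invented, not libindy's actual C callback signatures); the point it illustrates is that the storage layer only ever persists and matches opaque, already-encrypted bytes, and never decrypts anything.

```python
class InMemoryStorage:
    """Toy storage plugin: persists opaque blobs, never sees plaintext."""

    def __init__(self):
        self.records = {}  # id -> (type, encrypted value blob, encrypted tags)

    def add_record(self, type_, id_, value_blob, tags):
        # value_blob and tag values arrive already encrypted by the wallet core.
        self.records[id_] = (type_, value_blob, dict(tags))

    def get_record(self, type_, id_):
        t, value, tags = self.records[id_]
        assert t == type_
        # Returned exactly as persisted; the wallet core above decrypts.
        return value, tags

    def search(self, type_, encrypted_tag_query):
        # Exact matching over (encrypted) tag values; no query translation here.
        return [i for i, (t, _, tags) in self.records.items()
                if t == type_ and all(tags.get(k) == v
                                      for k, v in encrypted_tag_query.items())]
```

A real plugin would back these handlers with SQLite, an RDBMS, a key-value store, and so on, but the contract is the same: store, fetch, and match opaque data.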

    "},{"location":"aip2/0050-wallets/#tags-and-queries","title":"Tags and Queries","text":"

    Searchability in wallets is facilitated with a tagging mechanism. Each item in a wallet can be associated with zero or more tags, where a tag is a key=value pair. Items can be searched based on the tags associated with them, and tag values can be strings or numbers. With a good inventory of tags in a wallet, searching can be robust and efficient--but there is no support for joins, subqueries, and other RDBMS-like constructs, as this would constrain the type of storage plugin that could be written.

    An example of the tags on a wallet item that is a credential might be:

      item-name = \"My Driver's License\"\n  date-issued = \"2018-05-23\"\n  issuer-did = \"ABC\"\n  schema = \"DEF\"\n

    Tag names and tag values are both case-sensitive.

    Because tag values are normally encrypted, most tag values can only be tested using the $eq, $neq or $in operators (see Wallet Query Language, next). However, it is possible to force a tag to be stored in the wallet as plain text by naming it with a special prefix, ~ (tilde). This enables operators like $gt, $lt, and $like. Such tags lose their security guarantees but provide for richer queries; it is up to applications and their users to decide whether the tradeoff is appropriate.

    "},{"location":"aip2/0050-wallets/#wallet-query-language","title":"Wallet Query Language","text":"

    Wallets can be searched and filtered using a simple, JSON-based query language. We call this Wallet Query Language (WQL). WQL is designed to require no fancy parsing by storage plugins, and to be easy enough for developers to learn in just a few minutes. It is inspired by MongoDB's query syntax, and can be mapped to SQL, GraphQL, and other query languages supported by storage backends, with minimal effort.

    The formal definition of the WQL language is as follows:

    query = {subquery}\nsubquery = {subquery, ..., subquery} // means subquery AND ... AND subquery\nsubquery = $or: [{subquery},..., {subquery}] // means subquery OR ... OR subquery\nsubquery = $not: {subquery} // means NOT (subquery)\nsubquery = \"tagName\": tagValue // means tagName == tagValue\nsubquery = \"tagName\": {$neq: tagValue} // means tagName != tagValue\nsubquery = \"tagName\": {$gt: tagValue} // means tagName > tagValue\nsubquery = \"tagName\": {$gte: tagValue} // means tagName >= tagValue\nsubquery = \"tagName\": {$lt: tagValue} // means tagName < tagValue\nsubquery = \"tagName\": {$lte: tagValue} // means tagName <= tagValue\nsubquery = \"tagName\": {$like: tagValue} // means tagName LIKE tagValue\nsubquery = \"tagName\": {$in: [tagValue, ..., tagValue]} // means tagName IN (tagValue, ..., tagValue)\n
    "},{"location":"aip2/0050-wallets/#sample-wql-query-1","title":"Sample WQL Query 1","text":"

    Get all credentials where subject like \u2018Acme%\u2019 and issue_date > last week. (Note here that the name of the issue date tag begins with a tilde, telling the wallet to store its value unencrypted, which makes the $gt operator possible.)

    {\n  \"~subject\": {\"$like\": \"Acme%\"},\n  \"~issue_date\": {\"$gt\": \"2018-06-01\"}\n}\n
    "},{"location":"aip2/0050-wallets/#sample-wql-query-2","title":"Sample WQL Query 2","text":"

    Get all credentials about me where schema in (a, b, c) and issuer in (d, e, f).

    {\n  \"schema_id\": {\"$in\": [\"a\", \"b\", \"c\"]},\n  \"issuer_id\": {\"$in\": [\"d\", \"e\", \"f\"]},\n  \"holder_role\": \"self\"\n}\n
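    The grammar above can be exercised with a minimal matcher. This is an illustrative sketch only: real wallets push the (encrypted) query down to the storage plugin rather than evaluating it in application code, and the $like translation here handles only the % wildcard.

```python
import re

def wql_match(tags, query):
    """Return True if a single item's tag dict satisfies a WQL query."""
    for key, cond in query.items():
        if key == "$or":
            if not any(wql_match(tags, q) for q in cond):
                return False
        elif key == "$not":
            if wql_match(tags, cond):
                return False
        elif isinstance(cond, dict):
            # Per the grammar, each condition carries a single operator.
            op, val = next(iter(cond.items()))
            tag = tags.get(key)
            ok = {"$eq":  lambda: tag == val,
                  "$neq": lambda: tag != val,
                  "$gt":  lambda: tag is not None and tag > val,
                  "$gte": lambda: tag is not None and tag >= val,
                  "$lt":  lambda: tag is not None and tag < val,
                  "$lte": lambda: tag is not None and tag <= val,
                  "$in":  lambda: tag in val,
                  # SQL-style LIKE, % wildcard only (toy translation)
                  "$like": lambda: tag is not None and re.fullmatch(
                      re.escape(val).replace("%", ".*"), tag) is not None,
                  }[op]()
            if not ok:
                return False
        else:
            # Plain "tagName": tagValue means equality.
            if tags.get(key) != cond:
                return False
    return True
```

Running Sample Query 1 against an item tagged `~subject = "Acme Corp"`, `~issue_date = "2018-06-15"` matches; note that range and LIKE operators only make sense on ~-prefixed (plaintext) tags, since encrypted tag values support only equality-style comparison.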
    "},{"location":"aip2/0050-wallets/#encryption","title":"Encryption","text":"

    Wallets need very robust encryption. However, they must also be searchable, and the encryption must be equally strong regardless of which storage technology is used. We want to be able to hide data patterns in the encrypted data, such that an attacker cannot see common prefixes on keys, or common fragments of data in encrypted values. And we want to rotate the key that protects a wallet without having to re-encrypt all its content. This suggests that a trivial encryption scheme, where we pick a symmetric key and encrypt everything with it, is not adequate.

    Instead, wallet encryption takes the following approach:

    The 7 \"column\" keys are concatenated and encrypted with a wallet master key, then saved into the metadata of the wallet. This allows the master key to be rotated without re-encrypting all the items in the wallet.

    Today, all encryption is done using ChaCha20-Poly1305, with HMAC-SHA256. This is a solid, secure encryption algorithm, well tested and widely supported. However, we anticipate the desire to use different cipher suites, so in the future we will make the cipher suite pluggable.
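    The rotation property described above--re-wrapping one small key blob instead of re-encrypting every item--can be illustrated with a toy sketch. This is NOT real cryptography: the XOR stream cipher below is a runnable stdlib stand-in for the ChaCha20-Poly1305 the wallet actually uses, and exists only to make the structure concrete.

```python
import hashlib
import secrets

def toy_encrypt(key, data):
    """Toy XOR stream cipher keyed by `key` (illustration only, NOT secure)."""
    stream = hashlib.sha256(key).digest()
    while len(stream) < len(data):
        stream += hashlib.sha256(stream).digest()
    return bytes(a ^ b for a, b in zip(data, stream))

toy_decrypt = toy_encrypt  # XOR with the same keystream is its own inverse

# Seven concatenated "column" keys, wrapped under the master key and stored
# in the wallet metadata; individual items are encrypted with column keys.
column_keys = secrets.token_bytes(7 * 32)
master_key = secrets.token_bytes(32)
wrapped = toy_encrypt(master_key, column_keys)
item_ciphertext = toy_encrypt(column_keys[:32], b"credential data")

# Rotation: unwrap with the old master key, re-wrap with the new one.
# The items themselves are untouched.
new_master = secrets.token_bytes(32)
wrapped = toy_encrypt(new_master, toy_decrypt(master_key, wrapped))

# Items still decrypt with the unchanged column keys.
recovered_keys = toy_decrypt(new_master, wrapped)
assert toy_decrypt(recovered_keys[:32], item_ciphertext) == b"credential data"
```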

    The way the individual fields are encrypted is shown in the following diagram. Here, data is shown as if stored in a relational database with tables. Wallet storage may or may not use tables, but regardless of how the storage distributes and divides the data, the logical relationships and the encryption shown in the diagram apply.

    "},{"location":"aip2/0050-wallets/#pluggable-storage","title":"Pluggable Storage","text":"

    Although the Indy infrastructure will provide only one wallet implementation, it will allow different storage backends to be plugged in to cover different use cases. The default storage shipped with libindy will be SQLite-based and well suited to agents running on edge devices. The register_wallet_storage API endpoint will allow Indy developers to register a custom storage implementation as a set of handlers.

    A storage implementation does not need any special security features. It stores data that was already encrypted by libindy (or data that needs no encryption/protection, in the case of unencrypted tag values). It searches data in whatever form it is persisted, without any translation. It returns data as persisted, and lets the common wallet infrastructure in libindy decrypt it before returning it to the user.

    "},{"location":"aip2/0050-wallets/#secure-enclaves","title":"Secure Enclaves","text":"

    Secure enclaves are purpose-built to generate, manage, and securely store cryptographic material. Enclaves can be either specially designed hardware (e.g. HSM, TPM) or trusted execution environments (TEE) that isolate code and data from operating systems (e.g. Intel SGX, AMD SEV, ARM TrustZone). Enclaves can replace common cryptographic operations that wallets perform (e.g. encryption, signing). Some secrets cannot be stored in wallets, such as the key that encrypts the wallet itself or keys that are backed up; nor can these be stored directly in enclaves, since keys stored in enclaves cannot be extracted. Enclaves can still protect such secrets via a mechanism called wrapping.

    "},{"location":"aip2/0050-wallets/#enclave-wrapping","title":"Enclave Wrapping","text":"

    Suppose I have a secret, X, that needs maximum protection. However, I can\u2019t store X in my secure enclave because I need to use it for operations that the enclave can\u2019t do for me; I need direct access. So how do I extend enclave protections to encompass my secret?

    I ask the secure enclave to generate a key, Y, that will be used to protect X. Y is called a wrapping key. I give X to the secure enclave and ask that it be encrypted with wrapping key Y. The enclave returns X\u2019 (ciphertext of X, now called a wrapped secret), which I can leave on disk with confidence; it cannot be decrypted to X without involving the secure enclave. Later, when I want to decrypt, I give wrapped secret X\u2019 to the secure enclave and ask it to give me back X by decrypting with wrapping key Y.

    You could ask whether this really increases security. If you can get into the enclave, you can wrap or unwrap at will.

    The answer is that an unwrapped secret is protected by only one thing--whatever ACLs exist on the filesystem or storage where it resides. A wrapped secret is protected by two things--the ACLs and the enclave. OS access may breach either one, but pulling a hard drive out of a device will not breach the enclave.
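    The wrap/unwrap flow can be sketched with a simulated enclave. The class and method names are hypothetical, and the XOR cipher is a runnable stdlib stand-in for real enclave cryptography; the property illustrated is that the wrapping key Y never leaves the enclave object, while the wrapped secret X' is safe to leave on ordinary storage.

```python
import hashlib
import secrets

class SimulatedEnclave:
    """Toy model of an enclave: holds Y internally, exposes only wrap/unwrap."""

    def __init__(self):
        self._wrapping_key = secrets.token_bytes(32)  # Y: never exported

    def _keystream(self, n):
        s = hashlib.sha256(self._wrapping_key).digest()
        while len(s) < n:
            s += hashlib.sha256(s).digest()
        return s

    def wrap(self, secret):
        # X -> X' (toy XOR cipher, illustration only, NOT secure)
        return bytes(a ^ b for a, b in zip(secret, self._keystream(len(secret))))

    unwrap = wrap  # XOR with the same keystream is its own inverse

enclave = SimulatedEnclave()
x = b"wallet master key material"   # the secret X
x_prime = enclave.wrap(x)           # X': can sit on disk with confidence
# Decryption requires going back through the enclave.
assert enclave.unwrap(x_prime) == x
```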

    "},{"location":"aip2/0050-wallets/#paper-wallets","title":"Paper Wallets","text":"

    It is possible to persist wallet data to physical paper (or, for that matter, to etched metal or other physical media) instead of a digital container. Such data has attractive storage properties (e.g., may survive natural disasters, power outages, and other challenges that would destroy digital data). Of course, by leaving the digital realm, the data loses its accessibility over standard APIs.

    We anticipate that paper wallets will play a role in backup and recovery, and possibly in enabling SSI usage by populations that lack easy access to smartphones or the internet. Our wallet design should be friendly to such usage, but physical persistence of data is beyond the scope of Indy's plugin storage model and thus not explored further in this RFC.

    "},{"location":"aip2/0050-wallets/#backup-and-recovery","title":"Backup and Recovery","text":"

    Wallets need a backup and recovery feature, and also a way to export data and import it. Indy's wallet API includes an export function and an import function that may be helpful in such use cases. Today, the export is unfiltered--all data is exported. The import is also all-or-nothing and must be to an empty wallet; it is not possible to import selectively or to update existing records during import.

    A future version of import and export may add filtering, overwrite, and progress callbacks. It may also allow supporting or auxiliary data (other than what the wallet directly persists) to be associated with the export/import payload.

    For technical details on how export and import work, please see the internal design docs.

    "},{"location":"aip2/0050-wallets/#reference","title":"Reference","text":""},{"location":"aip2/0050-wallets/#rationale-and-alternatives","title":"Rationale and alternatives","text":"

    We could implement wallets purely as they are already built in the cryptocurrency world. This would give us great security (except for crypto wallets that are cloud based), and perhaps moderately good usability.

    However, it would also mean we could not store credentials in wallets. Indy would then need an alternate mechanism to scan some sort of container when trying to satisfy a proof request. And it would mean that a person's identity would not be portable via a single container; rather, if you wanted to take your identity to a new place, you'd have to copy all crypto keys in your crypto wallet, plus copy all your credentials using some other mechanism. It would also fragment the places where you could maintain an audit trail of your SSI activities.

    "},{"location":"aip2/0050-wallets/#prior-art","title":"Prior art","text":"

    See comment about crypto wallets, above.

    "},{"location":"aip2/0050-wallets/#unresolved-questions","title":"Unresolved questions","text":""},{"location":"aip2/0050-wallets/#implementations","title":"Implementations","text":"

    The following lists the implementations (if any) of this RFC. Please do a pull request to add your implementation. If the implementation is open source, include a link to the repo or to the implementation within the repo. Please be consistent in the \"Name\" field so that a mechanical processing of the RFCs can generate a list of all RFCs supported by an Aries implementation.

    Name / Link Implementation Notes Indy SDK Most agents that implement wallets get their wallet support from Indy SDK. These are not listed separately."},{"location":"aip2/0092-transport-return-route/","title":"Aries RFC 0092: Transports Return Route","text":""},{"location":"aip2/0092-transport-return-route/#summary","title":"Summary","text":"

    Agents can indicate that an inbound message transmission may also be used as a return route for messages. This allows for transports of increased efficiency as well as agents without an inbound route.

    "},{"location":"aip2/0092-transport-return-route/#motivation","title":"Motivation","text":"

    Inbound HTTP and WebSocket connections are used only for receiving messages by default. Return messages are sent using their own outbound connections. Including a decorator allows the receiving agent to know that using the inbound connection as a return route is acceptable. This allows two-way communication with agents that may not have an inbound route available. Agents without an inbound route include mobile agents and agents that use a client (and not a server) for communication.

    This decorator is intended to facilitate message communication between a client based agent (an agent that can only operate as a client, not a server) and the server based agents they communicate directly with. Use on messages that will be forwarded is not allowed.

    "},{"location":"aip2/0092-transport-return-route/#tutorial","title":"Tutorial","text":"

    When you send a message through a connection, you can use the ~transport decorator on the message and specify return_route. The value of return_route is discussed in the Reference section of this document.

    {\n    \"~transport\": {\n        \"return_route\": \"all\"\n    }\n}\n
    "},{"location":"aip2/0092-transport-return-route/#reference","title":"Reference","text":"

    The ~transport decorator should be processed after unpacking and prior to routing the message to a message handler.

    For HTTP transports, the presence of this message decorator indicates that the receiving agent MAY hold onto the connection and use it to return messages as designated. HTTP transports can receive at most one message at a time; WebSocket transports are capable of receiving multiple messages.

    Compliance with this indicator is optional for agents generally, but required for agents wishing to connect with client based agents.
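    The processing order described above (handle ~transport after unpacking, before routing) can be sketched as follows. This is an illustrative sketch: `connection.hold_open` is a hypothetical stand-in for a framework's connection API, and the value set ("none", "all", "thread") follows this RFC's return_route values.

```python
def process_transport_decorator(unpacked_msg, connection):
    """Inspect ~transport on an unpacked message and decide whether to hold
    the inbound connection open as a return route. Returns the route value."""
    transport = unpacked_msg.get("~transport", {})
    route = transport.get("return_route", "none")
    if route in ("all", "thread"):
        # Keep the inbound HTTP/WebSocket connection open and queue outbound
        # messages for this sender on it (hypothetical framework call).
        connection.hold_open(return_route=route)
    return route
```

A server-based agent may ignore the decorator entirely (compliance is optional in general), but an agent that serves client-based agents needs logic like this to reach them at all.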

    "},{"location":"aip2/0092-transport-return-route/#drawbacks","title":"Drawbacks","text":""},{"location":"aip2/0092-transport-return-route/#rationale-and-alternatives","title":"Rationale and alternatives","text":""},{"location":"aip2/0092-transport-return-route/#prior-art","title":"Prior art","text":"

    The Decorators RFC describes the scope of decorators. Transport isn't one of the scopes listed.

    "},{"location":"aip2/0092-transport-return-route/#implementations","title":"Implementations","text":"

    The following lists the implementations (if any) of this RFC. Please do a pull request to add your implementation. If the implementation is open source, include a link to the repo or to the implementation within the repo. Please be consistent in the \"Name\" field so that a mechanical processing of the RFCs can generate a list of all RFCs supported by an Aries implementation.

    Name / Link Implementation Notes Aries Cloud Agent - Python Contributed by the government of British Columbia. Aries Protocol Test Suite Used in Tests"},{"location":"aip2/0094-cross-domain-messaging/","title":"Aries RFC 0094: Cross-Domain Messaging","text":""},{"location":"aip2/0094-cross-domain-messaging/#summary","title":"Summary","text":"

    There are two layers of messages that combine to enable interoperable self-sovereign identity DIDcomm (formerly called Agent-to-Agent) communication. At the highest level are Agent Messages - messages sent between Identities to accomplish some shared goal. For example, establishing a connection between identities, issuing a Verifiable Credential from an Issuer to a Holder or even the simple delivery of a text Instant Message from one person to another. Agent Messages are delivered via the second, lower layer of messaging - encryption envelopes. An encryption envelope is a wrapper (envelope) around an Agent Message to enable the secure delivery of a message from one Agent directly to another Agent. An Agent Message going from its Sender to its Receiver may be passed through a number of Agents, and an encryption envelope is used for each hop of the journey.

    This RFC addresses Cross Domain messaging to enable interoperability. This is one of a series of related RFCs that address interoperability, including DIDDoc Conventions, Agent Messages and Encryption Envelope. Those RFCs should be considered together in understanding DIDcomm messaging.

    In order to send a message from one Identity to another, the sending Identity must know something about the Receiver's domain - the Receiver's configuration of Agents. This RFC outlines how a domain MUST present itself to enable the Sender to know enough to be able to send a message to an Agent in the domain. In support of that, a DIDcomm protocol (currently consisting of just one Message Type) is introduced to route messages through a network of Agents in both the Sender and Receiver's domain. This RFC provides the specification of the \"Forward\" Agent Message Type - an envelope that indicates the destination of a message without revealing anything about the message.

    The goal of this RFC is to define the rules that domains MUST follow to enable the delivery of Agent messages from a Sending Agent to a Receiver Agent in a secure and privacy-preserving manner.

    "},{"location":"aip2/0094-cross-domain-messaging/#motivation","title":"Motivation","text":"

    The purpose of this RFC and its related RFCs is to define a layered messaging protocol such that we can ignore the delivery of messages as we discuss the much richer Agent Messaging types and interactions. That is, we can assume that there is no need to include in an Agent message anything about how to route the message to the Receiver - it just magically happens. Alice (via her App Agent) sends a message to Bob, and (because of implementations based on this series of RFCs) we can ignore how the actual message got to Bob's App Agent.

    Put another way - these RFCs are about envelopes. They define a way to put a message - any message - into an envelope, put it into an outbound mailbox and have it magically appear in the Receiver's inbound mailbox in a secure and privacy-preserving manner. Once we have that, we can focus on letters and not how letters are sent.

    Most importantly for Agent to Agent interoperability, this RFC clearly defines the assumptions necessary to deliver a message from one domain to another - e.g. what exactly does Alice have to know about Bob's domain to send Bob a message?

    "},{"location":"aip2/0094-cross-domain-messaging/#tutorial","title":"Tutorial","text":""},{"location":"aip2/0094-cross-domain-messaging/#core-messaging-goals","title":"Core Messaging Goals","text":"

    These are vital design goals for this RFC:

    1. Sender Encapsulation: We SHOULD minimize what the Receiver has to know about the domain (routing tree or agent infrastructure) of the Sender in order for them to communicate.
    2. Receiver Encapsulation: We SHOULD minimize what the Sender has to know about the domain (routing tree or agent infrastructure) of the Receiver in order for them to communicate.
    3. Independent Keys: Private signing keys SHOULD NOT be shared between agents; each agent SHOULD be separately identifiable for accounting and authorization/revocation purposes.
    4. Need To Know Information Sharing: Information made available to intermediary agents between the Sender and Receiver SHOULD be minimized to what is needed to perform the agent's role in the process.
    "},{"location":"aip2/0094-cross-domain-messaging/#assumptions","title":"Assumptions","text":"

    The following are assumptions upon which this RFC is predicated.

    "},{"location":"aip2/0094-cross-domain-messaging/#terminology","title":"Terminology","text":"

    The following terms are used in this RFC with the following meanings:

    "},{"location":"aip2/0094-cross-domain-messaging/#diddoc","title":"DIDDoc","text":"

    The term \"DIDDoc\" is used in this RFC as it is defined in the DID Specification:

    A DID can be resolved to get its corresponding DIDDoc by any Agent that needs access to the DIDDoc. This is true whether talking about a DID on a Public Ledger, or a pairwise DID (using the did:peer method) persisted only to the parties of the relationship. In the case of pairwise DIDs, it's the (implementation specific) domain's responsibility to ensure such resolution is available to all Agents requiring it within the domain.

    "},{"location":"aip2/0094-cross-domain-messaging/#messages-are-private","title":"Messages are Private","text":"

    Agent Messages sent from a Sender to a Receiver SHOULD be private. That is, the Sender SHOULD encrypt the message with a public key for the Receiver. Any agent in between the Sender and Receiver will know only to whom the message is intended (by DID and possibly keyname within the DID), not anything about the message.

    "},{"location":"aip2/0094-cross-domain-messaging/#the-sender-knows-the-receiver","title":"The Sender Knows The Receiver","text":"

    This RFC assumes that the Sender knows the Receiver's DID and, within the DIDDoc for that DID, the keyname to use for the Receiver's Agent. How the Sender knows the DID and keyname to send the message is not defined within this RFC - that is a higher level concern.

    The Receiver's DID MAY be a public or pairwise DID, and MAY be on a Public Ledger or only shared between the parties of the relationship.

    "},{"location":"aip2/0094-cross-domain-messaging/#example-domain-and-diddoc","title":"Example: Domain and DIDDoc","text":"

    The following is an example of an arbitrary pair of domains that will be helpful in defining the requirements in this RFC.

    In the diagram above:

    "},{"location":"aip2/0094-cross-domain-messaging/#bobs-did-for-his-relationship-with-alice","title":"Bob's DID for his Relationship with Alice","text":"

    Bob\u2019s domain has 3 devices he uses for processing messages - two phones (4 and 5) and a cloud-based agent (6). However, in Bob's relationship with Alice, he ONLY uses one phone (4) and the cloud-based agent (6). Thus the key for device 5 is left out of the DIDDoc (see below).

    Note that the keyname for the Routing Agent (3) is called \"routing\". This is an example of the kind of convention needed to allow the Sender's agents to know the keys for Agents with a designated role in the receiving domain - as defined in the DIDDoc Conventions RFC.

    {\n  \"@context\": \"https://w3id.org/did/v1\",\n  \"id\": \"did:sov:1234abcd\",\n  \"publicKey\": [\n    {\"id\": \"routing\", \"type\": \"RsaVerificationKey2018\",  \"owner\": \"did:sov:1234abcd\",\"publicKeyPem\": \"-----BEGIN PUBLIC X\u2026\"},\n    {\"id\": \"4\", \"type\": \"RsaVerificationKey2018\",  \"owner\": \"did:sov:1234abcd\",\"publicKeyPem\": \"-----BEGIN PUBLIC 9\u2026\"},\n    {\"id\": \"6\", \"type\": \"RsaVerificationKey2018\",  \"owner\": \"did:sov:1234abcd\",\"publicKeyPem\": \"-----BEGIN PUBLIC A\u2026\"}\n  ],\n  \"authentication\": [\n    {\"type\": \"RsaSignatureAuthentication2018\", \"publicKey\": \"did:sov:1234abcd#4\"}\n  ],\n  \"service\": [\n    {\n      \"id\": \"did:example:123456789abcdefghi;did-communication\",\n      \"type\": \"did-communication\",\n      \"priority\" : 0,\n      \"recipientKeys\" : [ \"did:example:1234abcd#4\" ],\n      \"routingKeys\" : [ \"did:example:1234abcd#3\" ],\n      \"serviceEndpoint\" : \"did:example:xd45fr567794lrzti67;did-communication\"\n    }\n  ]\n}\n

    For the purposes of this discussion we are defining the message flow to be:

    1 \u2192 2 \u2192 8 \u2192 9 \u2192 3 \u2192 4

    However, that flow is arbitrary and only one hop is actually required:

    "},{"location":"aip2/0094-cross-domain-messaging/#encryption-envelopes","title":"Encryption Envelopes","text":"

    An encryption envelope is used to transport any Agent Message from one Agent directly to another. In our example message flow above, there are five encryption envelopes sent, one for each hop in the flow. The separate Encryption Envelope RFC covers those details.

    "},{"location":"aip2/0094-cross-domain-messaging/#agent-message-format","title":"Agent Message Format","text":"

    An Agent Message defines the format of messages processed by Agents. Details about the general form of Agent Messages can be found in the Agent Messages RFC.

    This RFC specifies (below) the \"Forward\" message type, a part of the \"Routing\" family of Agent Messages.

    "},{"location":"aip2/0094-cross-domain-messaging/#did-diddoc-and-routing","title":"DID, DIDDoc and Routing","text":"

A DID owned by the Receiver is resolvable by the Sender as a DIDDoc using either a Public Ledger or using pairwise DIDs based on the did:peer method. The related DIDComm DIDDoc Conventions RFC defines the required contents of a DIDDoc created by the receiving entity. Notably, the DIDDoc given to the Sender by the Receiver specifies the required routing of the message through an optional set of mediators.

    "},{"location":"aip2/0094-cross-domain-messaging/#cross-domain-interoperability","title":"Cross Domain Interoperability","text":"

    A key goal for interoperability is that we want other domains to know just enough about the configuration of a domain to which they are delivering a message, but no more. The following walks through those minimum requirements.

    "},{"location":"aip2/0094-cross-domain-messaging/#required-the-did-and-diddoc","title":"Required: The DID and DIDDoc","text":"

    As noted above, the Sender of an Agent to Agent Message has the DID of the Receiver, and knows the key(s) from the DIDDoc to use for the Receiver's Agent(s).

    Example: Alice wants to send a message from her phone (1) to Bob's phone (4). She has Bob's B:did@A:B, the DID/DIDDoc Bob created and gave to Alice to use for their relationship. Alice created A:did@A:B and gave that to Bob, but we don't need to use that in this example. The content of the DIDDoc for B:did@A:B is presented above.

    "},{"location":"aip2/0094-cross-domain-messaging/#required-end-to-end-encryption-of-the-agent-message","title":"Required: End-to-End encryption of the Agent Message","text":"

    The Agent Message from the Sender SHOULD be hidden from all Agents other than the Receiver. Thus, it SHOULD be encrypted with the public key of the Receiver. Based on our assumptions, the Sender can get the public key of the Receiver agent because they know the DID#keyname string, can resolve the DID to the DIDDoc and find the public key associated with DID#keyname in the DIDDoc. In our example above, that is the key associated with \"did:sov:1234abcd#4\".

    Most Sender-to-Receiver messages will be sent between parties that have shared pairwise DIDs (using the did:peer method). When that is true, the Sender will (usually) AuthCrypt the message. If that is not the case, or for some other reason the Sender does not want to AuthCrypt the message, AnonCrypt will be used. In either case, the Indy-SDK pack() function handles the encryption.

If there are mediators specified in the DID service endpoint for the Receiver agent, the Sender MUST wrap the message for the Receiver in a 'Forward' message for each mediator. It is assumed that the Receiver can determine the from DID based on the to DID (or the sender's verkey) using their pairwise relationship.

    {\n  \"@type\" : \"https://didcomm.org/routing/1.0/forward\",\n  \"@id\": \"54ad1a63-29bd-4a59-abed-1c5b1026e6fd\",\n  \"to\"   : \"did:sov:1234abcd#4\",\n  \"msg\"  : { json object from <pack(AgentMessage,valueOf(did:sov:1234abcd#4), privKey(A.did@A:B#1))> }\n}\n
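The layering described above can be sketched in Python. This is illustrative only: `wrap_in_forwards` and the plain-dict messages are hypothetical names, the stand-in `packed_msg` represents the output of a pack()-style encryption call (encryption of each layer for its mediator is elided), and the iteration order assumes the first listed routing key is the mediator closest to the Receiver:

```python
import uuid

def wrap_in_forwards(packed_msg: dict, recipient_key: str, routing_keys: list) -> dict:
    """Wrap an already-encrypted message in one 'forward' per mediator.

    packed_msg stands in for the output of a pack()-style function; in a
    real agent each layer would itself be encrypted for the mediator that
    will open it.
    """
    message = packed_msg
    to = recipient_key
    # The innermost forward is addressed to the recipient's key; each
    # additional routing key adds another layer around it.
    for key in routing_keys:
        message = {
            "@type": "https://didcomm.org/routing/1.0/forward",
            "@id": str(uuid.uuid4()),
            "to": to,
            "msg": message,  # would be pack(message, key) in a real agent
        }
        to = key
    return message
```

With one routing key this produces exactly one forward addressed to the recipient's key; with two, the outer forward is addressed to the inner mediator's key, matching the hop-by-hop unwrapping described below.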

    Notes

The bullet above about the unpack() function returning the signer's public key deserves some additional attention. The Receiver of the message knows from the \"to\" field the DID to which the message was sent. From that, the Receiver is expected to be able to determine the DID of the Sender, and from that, access the Sender's DIDDoc. However, knowing the DIDDoc is not enough to know from whom the message was sent - which key was used to send the message, and hence, which Agent controls the Sending private key. This information MUST be made known to the Receiver (from unpack()) when AuthCrypt is used so that the Receiver knows which key was used to send the message and can, for example, use that key in responding to the arriving Message.

The Sender can now send the Forward Agent Message on its way via the first of the encryption envelopes. In our example, the Sender sends the Agent Message to 2 (in the Sender's domain), who in turn sends it to 8. That, of course, is arbitrary - the Sender's Domain could have any configuration of Agents for outbound messages. The Agent Message above is passed unchanged, with each Agent able to see the @type, to and msg fields as described above. This continues until the outer forward message gets to the Receiver's first mediator or the Receiver's agent (if there are no mediators). Each agent decrypts the received encryption envelope and either forwards it (if a mediator) or processes it (if the Receiver Agent). Per the Encryption Envelope RFC, between Agents the Agent Message is pack()'d and unpack()'d as appropriate or required.

    The diagram below shows an example use of the forward messages to encrypt the message all the way to the Receiver with two mediators in between - a shared domain endpoint (aka https://agents-r-us.com) and a routing agent owned by the receiving entity.

    "},{"location":"aip2/0094-cross-domain-messaging/#required-cross-domain-encryption","title":"Required: Cross Domain Encryption","text":"

While within a domain the Agents MAY choose to use encryption or not when sending messages from Agent to Agent, encryption MUST be used when sending a message into the Receiver's domain. The endpoint agent unpack()'s the encryption envelope and processes the message - usually a forward. Note that within a domain, the agents may use arbitrary relays for messages, unknown to the sender. How the agents within the domain know where to send the message is implementation specific - likely some sort of dynamic DID-to-Agent routing table. If the path to the receiving agent includes mediators, the message must go through those mediators in order (for example, through 3 in our example) as the message being forwarded has been encrypted for the mediators.

    "},{"location":"aip2/0094-cross-domain-messaging/#required-mediators-process-forward-messages","title":"Required: Mediators Process Forward Messages","text":"

When a mediator (eventually) receives the message, it determines it is the target of the (current) outer forward Agent Message and so decrypts the message's msg value to reveal the inner \"Forward\" message. Mediators use their (implementation specific) knowledge to map from the to field to deliver the message to the physical endpoint of the next agent to process the message on its way to the Receiver.

    "},{"location":"aip2/0094-cross-domain-messaging/#required-the-receiver-app-agent-decryptsprocesses-the-agent-message","title":"Required: The Receiver App Agent Decrypts/Processes the Agent Message","text":"

    When the Receiver Agent receives the message, it determines it is the target of the forward message, decrypts the payload and processes the message.

    "},{"location":"aip2/0094-cross-domain-messaging/#exposed-data","title":"Exposed Data","text":"

    The following summarizes the information needed by the Sender's agents:

    The DIDDoc will have a public key entry for each additional Agent message Receiver and each mediator.

    In many cases, the entry for the endpoint agent should be a public DID, as it will likely be operated by an agency (for example, https://agents-r-us.com) rather than by the Receiver entity (for example, a person). By making that a public DID in that case, the agency can rotate its public key(s) for receiving messages in a single operation, rather than having to notify each identity owner and in turn having them update the public key in every pairwise DID that uses that endpoint.

    "},{"location":"aip2/0094-cross-domain-messaging/#data-not-exposed","title":"Data Not Exposed","text":"

    Given the sequence specified above, the following data is NOT exposed to the Sender's agents:

    "},{"location":"aip2/0094-cross-domain-messaging/#message-types","title":"Message Types","text":"

    The following Message Types are defined in this RFC.

    "},{"location":"aip2/0094-cross-domain-messaging/#corerouting10forward","title":"Core:Routing:1.0:Forward","text":"

The core message type \"forward\", version 1.0 of the \"routing\" family, is defined in this RFC. An example of the message is the following:

    {\n  \"@type\" : \"https://didcomm.org/routing/1.0/forward\",\n  \"@id\": \"54ad1a63-29bd-4a59-abed-1c5b1026e6fd\",\n  \"to\"   : \"did:sov:1234abcd#4\",\n  \"msg\"  : { json object from <pack(AgentMessage,valueOf(did:sov:1234abcd#4), privKey(A.did@A:B#1))> }\n}\n

    The to field is required and takes one of two forms:

    The first form is used when sending forward messages across one or more agents that do not need to know the details of a domain. The Receiver of the message is the designated Routing Agent in the Receiver Domain, as it controls the key used to decrypt messages sent to the domain, but not to a specific Agent.

    The second form is used when the precise key (and hence, the Agent controlling that key) is used to encrypt the Agent Message placed in the msg field.

The msg field contains the output of the Indy-SDK pack() function, which encrypts the Agent Message to be forwarded. The Sender calls pack() with the suitable arguments to AnonCrypt or AuthCrypt the message. The pack() and unpack() functions are described in more detail in the Encryption Envelope RFC.

    "},{"location":"aip2/0094-cross-domain-messaging/#reference","title":"Reference","text":"

    See the other RFCs referenced in this document:

    "},{"location":"aip2/0094-cross-domain-messaging/#drawbacks","title":"Drawbacks","text":"

    N/A

    "},{"location":"aip2/0094-cross-domain-messaging/#rationale-and-alternatives","title":"Rationale and alternatives","text":"

    A number of discussions were held about this RFC. In those discussions, the rationale for the RFC evolved into the text, and the alternatives were eliminated. See prior versions of the superseded HIPE (in status section, above) for details.

    A suggestion was made that the following optional parameters could be defined in the \"routing/1.0/forward\" message type:

    The optional parameters have been left off for now, but could be added in this RFC or to a later version of the message type.

    "},{"location":"aip2/0094-cross-domain-messaging/#prior-art","title":"Prior art","text":"

    N/A

    "},{"location":"aip2/0094-cross-domain-messaging/#unresolved-questions","title":"Unresolved questions","text":"

    N/A

    "},{"location":"aip2/0094-cross-domain-messaging/#implementations","title":"Implementations","text":"

    The following lists the implementations (if any) of this RFC. Please do a pull request to add your implementation. If the implementation is open source, include a link to the repo or to the implementation within the repo. Please be consistent in the \"Name\" field so that a mechanical processing of the RFCs can generate a list of all RFCs supported by an Aries implementation.

    Name / Link Implementation Notes"},{"location":"aip2/0095-basic-message/","title":"Aries RFC 0095: Basic Message Protocol 1.0","text":""},{"location":"aip2/0095-basic-message/#summary","title":"Summary","text":"

The BasicMessage protocol describes a stateless, easy-to-support user message protocol. It has a single message type used to communicate.

    "},{"location":"aip2/0095-basic-message/#motivation","title":"Motivation","text":"

It is a useful feature to be able to communicate human-written messages. BasicMessage is the most basic form of this written message communication, explicitly excluding advanced features to make implementation easier.

    "},{"location":"aip2/0095-basic-message/#tutorial","title":"Tutorial","text":""},{"location":"aip2/0095-basic-message/#roles","title":"Roles","text":"

There are two roles in this protocol: sender and receiver. It is anticipated that both roles are supported by agents that provide an interface for humans, but it is possible for an agent to act only as a sender (never processing received messages) or only as a receiver (never sending messages).

    "},{"location":"aip2/0095-basic-message/#states","title":"States","text":"

There are no real states in this protocol, as sending a message leaves both parties in the same state they were in before.

    "},{"location":"aip2/0095-basic-message/#out-of-scope","title":"Out of Scope","text":"

There are many useful features of user messaging systems that we will not be adding to this protocol. We anticipate the development of more advanced and full-featured message protocols to fill these needs. Features that are considered out of scope for this protocol include:

    "},{"location":"aip2/0095-basic-message/#reference","title":"Reference","text":"

    Protocol: https://didcomm.org/basicmessage/1.0/

    message

    Example:

    {\n    \"@id\": \"123456780\",\n    \"@type\": \"https://didcomm.org/basicmessage/1.0/message\",\n    \"~l10n\": { \"locale\": \"en\" },\n    \"sent_time\": \"2019-01-15 18:42:01Z\",\n    \"content\": \"Your hovercraft is full of eels.\"\n}\n
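A message like the example above can be assembled in a few lines. This is a hypothetical helper, not part of any Aries framework API; note the sent_time format, which matches the \"YYYY-MM-DD HH:MM:SSZ\" UTC style used in the example:

```python
import uuid
from datetime import datetime, timezone

def make_basic_message(content: str, locale: str = "en") -> dict:
    """Build a basicmessage 1.0 'message' as a plain dict."""
    return {
        "@id": str(uuid.uuid4()),
        "@type": "https://didcomm.org/basicmessage/1.0/message",
        "~l10n": {"locale": locale},
        # UTC timestamp in the same "YYYY-MM-DD HH:MM:SSZ" shape as the example
        "sent_time": datetime.now(timezone.utc).strftime("%Y-%m-%d %H:%M:%SZ"),
        "content": content,
    }
```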
    "},{"location":"aip2/0095-basic-message/#drawbacks","title":"Drawbacks","text":""},{"location":"aip2/0095-basic-message/#rationale-and-alternatives","title":"Rationale and alternatives","text":""},{"location":"aip2/0095-basic-message/#prior-art","title":"Prior art","text":"

    BasicMessage has parallels to SMS, which led to the later creation of MMS and even the still-under-development RCS.

    "},{"location":"aip2/0095-basic-message/#unresolved-questions","title":"Unresolved questions","text":""},{"location":"aip2/0095-basic-message/#implementations","title":"Implementations","text":"

    The following lists the implementations (if any) of this RFC. Please do a pull request to add your implementation. If the implementation is open source, include a link to the repo or to the implementation within the repo. Please be consistent in the \"Name\" field so that a mechanical processing of the RFCs can generate a list of all RFCs supported by an Aries implementation.

    Name / Link Implementation Notes Indy Cloud Agent - Python Reference agent implementation contributed by Sovrin Foundation and Community; MISSING test results Aries Framework - .NET .NET framework for building agents of all types; MISSING test results Streetcred.id Commercial mobile and web app built using Aries Framework - .NET; MISSING test results Aries Cloud Agent - Python Contributed by the government of British Columbia.; MISSING test results Aries Static Agent - Python Useful for cron jobs and other simple, automated use cases.; MISSING test results Aries Protocol Test Suite ; MISSING test results"},{"location":"aip2/0183-revocation-notification/","title":"Aries RFC 0183: Revocation Notification 1.0","text":""},{"location":"aip2/0183-revocation-notification/#summary","title":"Summary","text":"

    This RFC defines the message format which an issuer uses to notify a holder that a previously issued credential has been revoked.

    "},{"location":"aip2/0183-revocation-notification/#motivation","title":"Motivation","text":"

    We need a standard protocol for an issuer to notify a holder that a previously issued credential has been revoked.

    For example, suppose a passport agency revokes Alice's passport. The passport agency (an issuer) may want to notify Alice (a holder) that her passport has been revoked so that she knows that she will be unable to use her passport to travel.

    "},{"location":"aip2/0183-revocation-notification/#tutorial","title":"Tutorial","text":"

    The Revocation Notification protocol is a very simple protocol consisting of a single message:

    This simple protocol allows an issuer to choose to notify a holder that a previously issued credential has been revoked.

    It is the issuer's prerogative whether or not to notify the holder that a credential has been revoked. It is not a security risk if the issuer does not notify the holder that the credential has been revoked, nor if the message is lost. The holder will still be unable to use a revoked credential without this notification.

    "},{"location":"aip2/0183-revocation-notification/#roles","title":"Roles","text":"

    There are two parties involved in a Revocation Notification: issuer and holder. The issuer sends the revoke message to the holder.

    "},{"location":"aip2/0183-revocation-notification/#messages","title":"Messages","text":"

    The revoke message sent by the issuer to the holder is as follows:

    {\n  \"@type\": \"https://didcomm.org/revocation_notification/1.0/revoke\",\n  \"@id\": \"<uuid-revocation-notification>\",\n  \"~please_ack\": [\"RECEIPT\",\"OUTCOME\"],\n  \"thread_id\": \"<thread_id>\",\n  \"comment\": \"Some comment\"\n}\n

    Description of fields:

    "},{"location":"aip2/0183-revocation-notification/#reference","title":"Reference","text":""},{"location":"aip2/0183-revocation-notification/#drawbacks","title":"Drawbacks","text":"

    If we later added support for more general event subscription and notification message flows, this would be redundant.

    "},{"location":"aip2/0183-revocation-notification/#rationale-and-alternatives","title":"Rationale and alternatives","text":""},{"location":"aip2/0183-revocation-notification/#prior-art","title":"Prior art","text":""},{"location":"aip2/0183-revocation-notification/#unresolved-questions","title":"Unresolved questions","text":""},{"location":"aip2/0183-revocation-notification/#implementations","title":"Implementations","text":"

    The following lists the implementations (if any) of this RFC. Please do a pull request to add your implementation. If the implementation is open source, include a link to the repo or to the implementation within the repo. Please be consistent in the \"Name\" field so that a mechanical processing of the RFCs can generate a list of all RFCs supported by an Aries implementation.

    Name / Link Implementation Notes"},{"location":"aip2/0211-route-coordination/","title":"0211: Mediator Coordination Protocol","text":""},{"location":"aip2/0211-route-coordination/#summary","title":"Summary","text":"

    A protocol to coordinate mediation configuration between a mediating agent and the recipient.

    "},{"location":"aip2/0211-route-coordination/#application-scope","title":"Application Scope","text":"

    This protocol is needed when using an edge agent and a mediator agent from different vendors. Edge agents and mediator agents from the same vendor may use whatever protocol they wish without sacrificing interoperability.

    "},{"location":"aip2/0211-route-coordination/#motivation","title":"Motivation","text":"

    Use of the forward message in the Routing Protocol requires an exchange of information. The Recipient must know which endpoint and routing key(s) to share, and the Mediator needs to know which keys should be routed via this relationship.

    "},{"location":"aip2/0211-route-coordination/#protocol","title":"Protocol","text":"

    Name: coordinate-mediation

    Version: 1.0

    Base URI: https://didcomm.org/coordinate-mediation/1.0/

    "},{"location":"aip2/0211-route-coordination/#roles","title":"Roles","text":"

mediator - The agent that will be receiving forward messages on behalf of the recipient.

recipient - The agent for whom the forward message payload is intended.

    "},{"location":"aip2/0211-route-coordination/#flow","title":"Flow","text":"

A recipient may discover an agent capable of routing using the Feature Discovery Protocol. If the protocol is supported with the mediator role, a recipient may send a mediate-request to initiate a routing relationship.

    First, the recipient sends a mediate-request message to the mediator. If the mediator is willing to route messages, it will respond with a mediate-grant message. The recipient will share the routing information in the grant message with other contacts.

    When a new key is used by the recipient, it must be registered with the mediator to enable route identification. This is done with a keylist-update message.

    The keylist-update and keylist-query methods are used over time to identify and remove keys that are no longer in use by the recipient.
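The recipient side of the flow above can be sketched as a few message builders. These helper names are illustrative, not an Aries framework API; the message shapes follow the Reference section below:

```python
import uuid

BASE = "https://didcomm.org/coordinate-mediation/1.0"

def mediate_request() -> dict:
    """Ask a mediator to start a routing relationship."""
    return {"@id": str(uuid.uuid4()), "@type": f"{BASE}/mediate-request"}

def routing_info_from_grant(grant: dict) -> tuple:
    """Extract what the recipient shares with contacts after a mediate-grant."""
    return grant["endpoint"], grant["routing_keys"]

def keylist_update(recipient_key: str, action: str = "add") -> dict:
    """Register ('add') or retire ('remove') a recipient key with the mediator."""
    return {
        "@id": str(uuid.uuid4()),
        "@type": f"{BASE}/keylist-update",
        "updates": [{"recipient_key": recipient_key, "action": action}],
    }
```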

    "},{"location":"aip2/0211-route-coordination/#reference","title":"Reference","text":"

    Note on terms: Early versions of this protocol included the concept of terms for mediation. This concept has been removed from this version due to a need for further discussion on representing terms in DIDComm in general and lack of use of these terms in current implementations.

    "},{"location":"aip2/0211-route-coordination/#mediation-request","title":"Mediation Request","text":"

This message serves as a request from the recipient to the mediator, asking for permission (and the routing information) to publish the mediator's endpoint.

{\n    \"@id\": \"123456781\",\n    \"@type\": \"<baseuri>/mediate-request\"\n}\n
    "},{"location":"aip2/0211-route-coordination/#mediation-deny","title":"Mediation Deny","text":"

    This message serves as notification of the mediator denying the recipient's request for mediation.

{\n    \"@id\": \"123456781\",\n    \"@type\": \"<baseuri>/mediate-deny\"\n}\n
    "},{"location":"aip2/0211-route-coordination/#mediation-grant","title":"Mediation Grant","text":"

    A route grant message is a signal from the mediator to the recipient that permission is given to distribute the included information as an inbound route.

    {\n    \"@id\": \"123456781\",\n    \"@type\": \"<baseuri>/mediate-grant\",\n    \"endpoint\": \"http://mediators-r-us.com\",\n    \"routing_keys\": [\"did:key:z6Mkfriq1MqLBoPWecGoDLjguo1sB9brj6wT3qZ5BxkKpuP6\"]\n}\n

    endpoint: The endpoint reported to mediation client connections.

routing_keys: List of keys in intended routing order. These keys are used as the recipients of forward messages.

    "},{"location":"aip2/0211-route-coordination/#keylist-update","title":"Keylist Update","text":"

    Used to notify the mediator of keys in use by the recipient.

    {\n    \"@id\": \"123456781\",\n    \"@type\": \"<baseuri>/keylist-update\",\n    \"updates\":[\n        {\n            \"recipient_key\": \"did:key:z6MkpTHR8VNsBxYAAWHut2Geadd9jSwuBV8xRoAnwWsdvktH\",\n            \"action\": \"add\"\n        }\n    ]\n}\n

    recipient_key: Key subject of the update.

    action: One of add or remove.

    "},{"location":"aip2/0211-route-coordination/#keylist-update-response","title":"Keylist Update Response","text":"

    Confirmation of requested keylist updates.

{\n    \"@id\": \"123456781\",\n    \"@type\": \"<baseuri>/keylist-update-response\",\n    \"updated\": [\n        {\n            \"recipient_key\": \"did:key:z6MkpTHR8VNsBxYAAWHut2Geadd9jSwuBV8xRoAnwWsdvktH\",\n            \"action\": \"\", // \"add\" or \"remove\"\n            \"result\": \"\" // [client_error | server_error | no_change | success]\n        }\n    ]\n}\n

    recipient_key: Key subject of the update.

    action: One of add or remove.

    result: One of client_error, server_error, no_change, success; describes the resulting state of the keylist update.
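A mediator-side sketch of applying a keylist-update and producing the response with these result codes. The function name is hypothetical, and threading/correlation decorators are omitted:

```python
import uuid

BASE = "https://didcomm.org/coordinate-mediation/1.0"

def apply_keylist_update(keylist: set, update_msg: dict) -> dict:
    """Apply each add/remove to the mediator's keylist; report a result code."""
    updated = []
    for upd in update_msg.get("updates", []):
        key, action = upd["recipient_key"], upd["action"]
        if action == "add":
            result = "no_change" if key in keylist else "success"
            keylist.add(key)
        elif action == "remove":
            result = "success" if key in keylist else "no_change"
            keylist.discard(key)
        else:
            result = "client_error"  # unrecognized action
        updated.append({"recipient_key": key, "action": action, "result": result})
    # Threading decorators linking back to the request are omitted in this sketch.
    return {
        "@id": str(uuid.uuid4()),
        "@type": f"{BASE}/keylist-update-response",
        "updated": updated,
    }
```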

    "},{"location":"aip2/0211-route-coordination/#key-list-query","title":"Key List Query","text":"

    Query mediator for a list of keys registered for this connection.

    {\n    \"@id\": \"123456781\",\n    \"@type\": \"<baseuri>/keylist-query\",\n    \"paginate\": {\n        \"limit\": 30,\n        \"offset\": 0\n    }\n}\n

    paginate is optional.

    "},{"location":"aip2/0211-route-coordination/#key-list","title":"Key List","text":"

    Response to key list query, containing retrieved keys.

    {\n    \"@id\": \"123456781\",\n    \"@type\": \"<baseuri>/keylist\",\n    \"keys\": [\n        {\n            \"recipient_key\": \"did:key:z6MkpTHR8VNsBxYAAWHut2Geadd9jSwuBV8xRoAnwWsdvktH\"\n        }\n    ],\n    \"pagination\": {\n        \"count\": 30,\n        \"offset\": 30,\n        \"remaining\": 100\n    }\n}\n

    pagination is optional.

    "},{"location":"aip2/0211-route-coordination/#encoding-of-keys","title":"Encoding of keys","text":"

    All keys are encoded using the did:key method as per RFC0360.

    "},{"location":"aip2/0211-route-coordination/#prior-art","title":"Prior art","text":"

    There was an Indy HIPE that never made it past the PR process that described a similar approach. That HIPE led to a partial implementation of this inside the Aries Cloud Agent Python

    "},{"location":"aip2/0211-route-coordination/#future-considerations","title":"Future Considerations","text":""},{"location":"aip2/0211-route-coordination/#unresolved-questions","title":"Unresolved questions","text":""},{"location":"aip2/0211-route-coordination/#implementations","title":"Implementations","text":"

    The following lists the implementations (if any) of this RFC. Please do a pull request to add your implementation. If the implementation is open source, include a link to the repo or to the implementation within the repo. Please be consistent in the \"Name\" field so that a mechanical processing of the RFCs can generate a list of all RFCs supported by an Aries implementation.

Name / Link Implementation Notes Aries Cloud Agent - Python Added in ACA-Py 0.6.0 MISSING test results DIDComm mediator Open source cloud-based mediator."},{"location":"aip2/0360-use-did-key/","title":"Aries RFC 0360: did:key Usage","text":""},{"location":"aip2/0360-use-did-key/#summary","title":"Summary","text":"

A number of RFCs that have been defined reference what amounts to a \"naked\" public key, such that the sender relies on the receiver knowing what type the key is and how it can be used. The application of this RFC will result in the replacement of \"naked\" verkeys (public keys) in some DIDComm/Aries protocols with the did:key ledgerless DID method, a format that concisely conveys useful information about the use of the key, including the public key type. While did:key is less a DID method than a transformation from a public key and type to an opinionated DIDDoc, it provides a versioning mechanism for supporting new/different cryptographic formats and its use makes clear how a public key is intended to be used. The method also enables support for using standard DID resolution mechanisms that may simplify the use of the key. The use of a DID to represent a public key is seen as odd by some in the community. Should a representation be found that has better properties than a plain public key but is constrained to being \"just a key\", then we will consider changing from the did:key representation.

    To Do: Update link DID Key Method link (above) from Digital Bazaar to W3C repositories when they are created and populated.

    While it is well known in the Aries community that did:key is fundamentally different from the did:peer method that is the basis of Aries protocols, it must be re-emphasized here. This RFC does NOT imply any changes to the use of did:peer in Aries, nor does it change the content of a did:peer DIDDoc. This RFC only changes references to plain public keys in the JSON of some RFCs to use did:key in place of a plain text string.

    Should this RFC be ACCEPTED, a community coordinated update will be used to apply updates to the agent code bases and impacted RFCs.

    "},{"location":"aip2/0360-use-did-key/#motivation","title":"Motivation","text":"

    When one Aries agent inserts a public key into the JSON of an Aries message (for example, the ~service decorator), it assumes that the recipient agent will use the key in the intended way. At the time this RFC is being written, this is easy because only one key type is in use by all agents. However, in order to enable the use of different cryptography algorithms, the public key references must be extended to at least include the key type. The preferred and concise way to do that is the use of the multicodec mechanism, which provides a registry of encodings for known key types that are prefixed to the public key in a standard and concise way. did:key extends that mechanism by providing a templated way to transform the combination of public key and key type into a DID-standard DIDDoc.

    At the cost of adding/building a did:key resolver we get a DID standard way to access the key and key type, including specific information on how the key can be used. The resolver may be trivial or complex. In a trivial version, the key type is assumed, and the key can be easily extracted from the string. In a more complete implementation, the key type can be checked, and standard DID URL handling can be used to extract parts of the DIDDoc for specific purposes. For example, in the ed25519 did:key DIDDoc, the existence of the keyAgreement entry implies that the key can be used in a Diffie-Hellman exchange, without the developer guessing, or using the key incorrectly.

    Note that simply knowing the key type is not necessarily sufficient to be able to use the key. The cryptography supporting the processing data using the key must also be available in the agent. However, the multicodec and did:key capabilities will simplify adding support for new key types in the future.

    "},{"location":"aip2/0360-use-did-key/#tutorial","title":"Tutorial","text":"

    An example of the use of the replacement of a verkey with did:key can be found in the ~service decorator RFC. Notably in the example at the beginning of the tutorial section, the verkeys in the recipientKeys and routingKeys items would be changed from native keys to use did:key as follows:

    {\n    \"@type\": \"somemessagetype\",\n    \"~service\": {\n        \"recipientKeys\": [\"did:key:z6MkmjY8GnV5i9YTDtPETC2uUAW6ejw3nk5mXF5yci5ab7th\"],\n        \"routingKeys\": [\"did:key:z6MkmjY8GnV5i9YTDtPETC2uUAW6ejw3nk5mXF5yci5ab7th\"]\n        \"serviceEndpoint\": \"https://example.com/endpoint\"\n    }\n}\n

    Thus, 8HH5gYEeNc3z7PYXmd54d4x6qAfCNrqQqEB3nS7Zfu7K becomes did:key:z6MkmjY8GnV5i9YTDtPETC2uUAW6ejw3nk5mXF5yci5ab7th using the following transformations:

    The transformation above is only for illustration within this RFC. The did:key specification is the definitive source for the appropriate transformations.
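The ed25519 transformation can be sketched in Python. This is a minimal illustration only (the did:key specification remains the definitive source); the base58btc codec is hand-rolled here to keep the sketch self-contained:

```python
# Sketch of the ed25519 verkey -> did:key transformation (illustrative only;
# the did:key specification is definitive).
ALPHABET = "123456789ABCDEFGHJKLMNPQRSTUVWXYZabcdefghijkmnopqrstuvwxyz"

def b58_decode(s: str) -> bytes:
    n = 0
    for ch in s:
        n = n * 58 + ALPHABET.index(ch)
    raw = n.to_bytes((n.bit_length() + 7) // 8, "big")
    # Each leading '1' in base58btc encodes a leading zero byte.
    pad = len(s) - len(s.lstrip("1"))
    return b"\x00" * pad + raw

def b58_encode(data: bytes) -> str:
    n = int.from_bytes(data, "big")
    out = ""
    while n:
        n, r = divmod(n, 58)
        out = ALPHABET[r] + out
    pad = len(data) - len(data.lstrip(b"\x00"))
    return "1" * pad + out

def verkey_to_did_key(verkey: str) -> str:
    """Prefix the raw 32-byte ed25519 public key with the multicodec
    bytes 0xed 0x01, base58btc-encode, and add the 'z' multibase prefix."""
    raw = b58_decode(verkey)
    return "did:key:z" + b58_encode(b"\xed\x01" + raw)
```

Because the multicodec prefix is fixed, every ed25519 did:key produced this way begins with `did:key:z6Mk`.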

    The did:key method uses the strings that are the DID, public key and key type to construct (\"resolve\") a DIDDoc based on a template defined by the did:key specification. Further, the did:key resolver generates, in the case of an ed25519 public signing key, a key that can be used as part of a Diffie-Hellman exchange appropriate for encryption in the keyAgreement section of the DIDDoc. Presumably, as the did:key method supports other key types, similar DIDDoc templates will become part of the specification. Key types that don't support a signing/key exchange transformation would not have a keyAgreement entry in the resolved DIDDoc.

    The following currently implemented RFCs would be affected by acceptance of this RFC. In these RFCs, the JSON items that currently contain naked public keys (mostly the items recipientKeys and routingKeys) would be changed to use did:key references where applicable. Note that in these items public DIDs could also be used if applicable for a given use case.

    Service entries in did:peer DIDDocs (such as in RFCs 0094-cross-domain-messaging and 0067-didcomm-diddoc-conventions) should NOT use a did:key public key representation. Instead, service entries in the DIDDoc should reference keys defined internally in the DIDDoc where appropriate.

    To Do: Discuss the use of did:key (or not) in the context of encryption envelopes. This will be part of the ongoing discussion about JWEs and the upcoming discussions about JWMs\u2014a soon-to-be-proposed specification. That conversation will likely go on in the DIF DIDComm Working Group.

    "},{"location":"aip2/0360-use-did-key/#reference","title":"Reference","text":"

    See the did:key specification. Note that the specification is still evolving.

    "},{"location":"aip2/0360-use-did-key/#drawbacks","title":"Drawbacks","text":"

    The did:key standard is not finalized.

    The DIDDoc \"resolved\" from a did:key probably has more entries in it than are needed for DIDComm. That said, the entries in the DIDDoc make it clear to a developer how they can use the public key.

    "},{"location":"aip2/0360-use-did-key/#rationale-and-alternatives","title":"Rationale and alternatives","text":"

    We should not stick with the status quo and assume that all agents will always know the type of keys being used and how to use them.

    We should at minimum move to a scheme like multicodecs such that the key is self-documenting and supports the versioning of cryptographic algorithms. However, even if we do that, we still have to document for developers how they should (and should not) use the public key.

    Another logical alternative is to use a JWK. However, that representation only adds the type of the key (same as multicodecs) at the cost of being significantly more verbose.

    "},{"location":"aip2/0360-use-did-key/#prior-art","title":"Prior art","text":"

    To do - there are other instances of this pattern being used. Insert those here.

    "},{"location":"aip2/0360-use-did-key/#unresolved-questions","title":"Unresolved questions","text":""},{"location":"aip2/0360-use-did-key/#implementations","title":"Implementations","text":"

    The following lists the implementations (if any) of this RFC. Please do a pull request to add your implementation. If the implementation is open source, include a link to the repo or to the implementation within the repo. Please be consistent in the \"Name\" field so that a mechanical processing of the RFCs can generate a list of all RFCs supported by an Aries implementation.

    Implementation Notes

    Name / Link Implementation Notes"},{"location":"aip2/0434-outofband/","title":"Aries RFC 0434: Out-of-Band Protocol 1.1","text":""},{"location":"aip2/0434-outofband/#summary","title":"Summary","text":"

    The Out-of-band protocol is used when you wish to engage with another agent and you don't have a DIDComm connection to use for the interaction.

    "},{"location":"aip2/0434-outofband/#motivation","title":"Motivation","text":"

    The use of the invitation in the Connection and DID Exchange protocols has been relatively successful, but has some shortcomings, as follows.

    "},{"location":"aip2/0434-outofband/#connection-reuse","title":"Connection Reuse","text":"

    A common pattern we have seen in the early days of Aries agents is a user with a browser getting to a point where a connection is needed between the website's (enterprise) agent and the user's mobile agent. A QR invitation is displayed, scanned and a protocol is executed to establish a connection. Life is good!

    However, with the current invitation processes, when the same user returns to the same page, the same process is executed (QR code, scan, etc.) and a new connection is created between the two agents. There is no way for the user's agent to say \"Hey, I've already got a connection with you. Let's use that one!\"

    We need the ability to reuse a connection.

    "},{"location":"aip2/0434-outofband/#connection-establishment-versioning","title":"Connection Establishment Versioning","text":"

    In the existing Connections and DID Exchange invitation handling, the inviter dictates what connection establishment protocol all invitees will use. A more sustainable approach is for the inviter to offer the invitee a list of supported protocols and allow the invitee to use one that it supports.

    "},{"location":"aip2/0434-outofband/#handling-of-all-out-of-band-messages","title":"Handling of all Out-of-Band Messages","text":"

    We currently have two sets of out-of-band messages that cannot be delivered via DIDComm because there is no channel. We'd like to align those messages into a single \"out-of-band\" protocol so that their handling can be harmonized inside an agent, and a common QR code handling mechanism can be used.

    "},{"location":"aip2/0434-outofband/#urls-and-qr-code-handling","title":"URLs and QR Code Handling","text":"

    We'd like to have the specification of QR handling harmonized into a single RFC (this one).

    "},{"location":"aip2/0434-outofband/#tutorial","title":"Tutorial","text":""},{"location":"aip2/0434-outofband/#key-concepts","title":"Key Concepts","text":"

    The Out-of-band protocol is used when an agent doesn't know if it has a connection with another agent. This could be because you are trying to establish a new connection with that agent, you have connections but don't know who the other party is, or if you want to have a connection-less interaction. Since there is no DIDComm connection to use for the messages of this protocol, the messages are plaintext and sent out-of-band, such as via a QR code, in an email message or any other available channel. Since the delivery of out-of-band messages will often be via QR codes, this RFC also covers the use of QR codes.

    Two well known use cases for using an out-of-band protocol are:

    In both cases, there is only a single out-of-band protocol message sent. The message responding to the out-of-band message is a DIDComm message from an appropriate protocol.

    Note that the website-to-agent model is not the only such interaction enabled by the out-of-band protocol, and a QR code is not the only delivery mechanism for out-of-band messages. However, they are useful as examples of the purpose of the protocol.

    "},{"location":"aip2/0434-outofband/#roles","title":"Roles","text":"

    The out-of-band protocol has two roles: sender and receiver.

    "},{"location":"aip2/0434-outofband/#sender","title":"sender","text":"

    The agent that generates the out-of-band message and makes it available to the other party.

    "},{"location":"aip2/0434-outofband/#receiver","title":"receiver","text":"

    The agent that receives the out-of-band message and decides how to respond. There is no out-of-band protocol message with which the receiver will respond. Rather, if they respond, they will use a message from another protocol that the sender understands.

    "},{"location":"aip2/0434-outofband/#states","title":"States","text":"

    The state machines for the sender and receiver are a bit odd for the out-of-band protocol because it consists of a single message that kicks off a co-protocol and ends when evidence of the co-protocol's launch is received, in the form of some response. In the following state machine diagrams we generically describe the response message from the receiver as being a DIDComm message.

    The sender state machine is as follows:

    Note the \"optional\" reference under the second event in the await-response state. That is to indicate that an out-of-band message might be a single use message with a transition to done, or reusable message (received by many receivers) with a transition back to await-response.

    The receiver state machine is as follows:

    Worth noting is the first event of the done state, where the receiver may receive the message multiple times. This represents, for example, an agent returning to the same website and being greeted with instances of the same QR code each time.

    "},{"location":"aip2/0434-outofband/#messages","title":"Messages","text":"

    The out-of-band protocol consists of a single message that is sent by the sender.

    "},{"location":"aip2/0434-outofband/#invitation-httpsdidcommorgout-of-bandverinvitation","title":"Invitation: https://didcomm.org/out-of-band/%VER/invitation","text":"
    {\n  \"@type\": \"https://didcomm.org/out-of-band/%VER/invitation\",\n  \"@id\": \"<id used for context as pthid>\",\n  \"label\": \"Faber College\",\n  \"goal_code\": \"issue-vc\",\n  \"goal\": \"To issue a Faber College Graduate credential\",\n  \"accept\": [\n    \"didcomm/aip2;env=rfc587\",\n    \"didcomm/aip2;env=rfc19\"\n  ],\n  \"handshake_protocols\": [\n    \"https://didcomm.org/didexchange/1.0\",\n    \"https://didcomm.org/connections/1.0\"\n  ],\n  \"requests~attach\": [\n    {\n      \"@id\": \"request-0\",\n      \"mime-type\": \"application/json\",\n      \"data\": {\n        \"json\": \"<json of protocol message>\"\n      }\n    }\n  ],\n  \"services\": [\"did:sov:LjgpST2rjsoxYegQDRm7EL\"]\n}\n

    The items in the message are:

    If only the handshake_protocols item is included, the initial interaction will complete with the establishment (or reuse) of the connection. Either side may then use that connection for any purpose. A common use case (but not required) would be for the sender to initiate another protocol after the connection is established to accomplish some shared goal.

    If only the requests~attach item is included, no new connection is expected to be created, although one could be used if the receiver knows such a connection already exists. The receiver responds to one of the messages in the requests~attach array. The requests~attach item might include the first message of a protocol from the sender, or might be a please-play-the-role message requesting the receiver initiate a protocol. If the protocol requires a further response from the sender to the receiver, the receiver must include a ~service decorator for the sender to use in responding.

    If both the handshake_protocols and requests~attach items are included in the message, the receiver should first establish a connection and then respond (using that connection) to one of the messages in the requests~attach array. If a connection already exists between the parties, the receiver may respond immediately to the requests~attach message using the established connection.

    "},{"location":"aip2/0434-outofband/#reuse-messages","title":"Reuse Messages","text":"

    While the receiver is expected to respond with an initiating message from a handshake_protocols or requests~attach item using an offered service, the receiver may be able to respond by reusing an existing connection. Specifically, if a connection they have was created from an out-of-band invitation from the same services DID of a new invitation message, the connection MAY be reused. The receiver may choose to not reuse the existing connection for privacy purposes and repeat a handshake protocol to receive a redundant connection.

    If a message has a service block instead of a DID in the services list, you may enable reuse by encoding the key and endpoint of the service block in a Peer DID numalgo 2 and using that DID instead of a service block.

    If the receiver desires to reuse the existing connection and a requests~attach item is included in the message, the receiver SHOULD respond to one of the attached messages using the existing connection.

    If the receiver desires to reuse the existing connection and no requests~attach item is included in the message, the receiver SHOULD attempt to do so with the reuse and reuse-accepted messages. This will notify the inviter that the existing connection should be used, along with the context that can be used for follow-on interactions.

    While the invitation message is passed unencrypted and out-of-band, both the handshake-reuse and handshake-reuse-accepted messages MUST be encrypted and transmitted as normal DIDComm messages.

    "},{"location":"aip2/0434-outofband/#reuse-httpsdidcommorgout-of-bandverhandshake-reuse","title":"Reuse: https://didcomm.org/out-of-band/%VER/handshake-reuse","text":"
    {\n  \"@type\": \"https://didcomm.org/out-of-band/%VER/handshake-reuse\",\n  \"@id\": \"<id>\",\n  \"~thread\": {\n    \"thid\": \"<same as @id>\",\n    \"pthid\": \"<The @id of the Out-of-Band invitation>\"\n  }\n}\n

    The items in the message are:

    Sending or receiving this message does not change the state of the existing connection.

    When the inviter receives the handshake-reuse message, they MUST respond with a handshake-reuse-accepted message to notify the invitee that the request to reuse the existing connection is successful.

    "},{"location":"aip2/0434-outofband/#reuse-accepted-httpsdidcommorgout-of-bandverhandshake-reuse-accepted","title":"Reuse Accepted: https://didcomm.org/out-of-band/%VER/handshake-reuse-accepted","text":"
    {\n  \"@type\": \"https://didcomm.org/out-of-band/%VER/handshake-reuse-accepted\",\n  \"@id\": \"<id>\",\n  \"~thread\": {\n    \"thid\": \"<The Message @id of the reuse message>\",\n    \"pthid\": \"<The @id of the Out-of-Band invitation>\"\n  }\n}\n

    The items in the message are:

    If this message is not received by the invitee, they should use the regular process. This message is a mechanism by which the invitee can detect a situation where the inviter no longer has a record of the connection and is unable to decrypt and process the handshake-reuse message.

    After sending this message, the inviter may continue any desired protocol interactions based on the context matched by the pthid present in the handshake-reuse message.
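The threading relationships above (thid matching the reuse message's own @id, pthid naming the invitation) can be sketched as follows. The helper names and the 1.1 version string are illustrative, not defined by the RFC:

```python
import uuid

# Illustrative helpers showing the ~thread relationships between the
# invitation, handshake-reuse, and handshake-reuse-accepted messages.

def make_handshake_reuse(invitation_id: str) -> dict:
    msg_id = str(uuid.uuid4())
    return {
        "@type": "https://didcomm.org/out-of-band/1.1/handshake-reuse",
        "@id": msg_id,
        # thid matches this message's own @id; pthid points at the invitation.
        "~thread": {"thid": msg_id, "pthid": invitation_id},
    }

def make_reuse_accepted(reuse_msg: dict) -> dict:
    return {
        "@type": "https://didcomm.org/out-of-band/1.1/handshake-reuse-accepted",
        "@id": str(uuid.uuid4()),
        # thid is the @id of the reuse message; pthid still names the invitation,
        # giving the inviter the context for follow-on interactions.
        "~thread": {
            "thid": reuse_msg["@id"],
            "pthid": reuse_msg["~thread"]["pthid"],
        },
    }
```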

    "},{"location":"aip2/0434-outofband/#responses","title":"Responses","text":"

    The following table summarizes the different forms of the out-of-band invitation message depending on the presence (or not) of the handshake_protocols item, the requests~attach item and whether or not a connection between the agents already exists.

    handshake_protocols Present? | requests~attach Present? | Existing connection? | Receiver action(s)
    No  | No  | No  | Impossible
    Yes | No  | No  | Use the first supported protocol from handshake_protocols to make a new connection using the first supported services entry.
    No  | Yes | No  | Send a response to the first supported request message using the first supported services entry. Include a ~service decorator if the sender is expected to respond.
    No  | No  | Yes | Impossible
    Yes | Yes | No  | Use the first supported protocol from handshake_protocols to make a new connection using the first supported services entry, and then send a response message to the first supported attachment message using the new connection.
    Yes | No  | Yes | Send a handshake-reuse message.
    No  | Yes | Yes | Send a response message to the first supported request message using the existing connection.
    Yes | Yes | Yes | Send a response message to the first supported request message using the existing connection.
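The receiver's branching can be sketched as a small decision function. The return strings are illustrative labels for the actions, not protocol values:

```python
def receiver_action(has_handshake: bool, has_requests: bool,
                    has_connection: bool) -> str:
    """Sketch of the receiver decision table for an out-of-band invitation."""
    if not has_handshake and not has_requests:
        # An invitation must carry at least one of the two items.
        return "impossible"
    if has_connection:
        if has_requests:
            return "respond to first supported request over existing connection"
        return "send handshake-reuse"
    # No existing connection:
    if has_handshake and has_requests:
        return "connect via first supported handshake protocol, then respond to request"
    if has_handshake:
        return "connect via first supported handshake protocol"
    return "respond to first supported request via first supported services entry"
```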

    Both the goal_code and goal fields SHOULD be used with the localization service decorator. The two fields are to enable both human and machine handling of the out-of-band message. goal_code is to specify a generic, protocol level outcome for sending the out-of-band message (e.g. issue verifiable credential, request proof, etc.) that is suitable for machine handling and possibly human display, while goal provides context specific guidance, targeting mainly a person controlling the receiver's agent. The list of goal_code values is provided in the Message Catalog section of this RFC.

    "},{"location":"aip2/0434-outofband/#the-services-item","title":"The services Item","text":"

    As mentioned in the description above, the services item array is intended to be analogous to the service block of a DIDDoc. When not reusing an existing connection, the receiver scans the array and selects (according to the rules described below) a service entry to use for the response to the out-of-band message.

    There are two forms of entries in the services item array:

    The following is an example of a two entry array, one of each form:

    {\n  \"@type\": \"https://didcomm.org/out-of-band/%VER/invitation\",\n  \"@id\": \"<id used for context as pthid>\",\n  \"label\": \"Faber College\",\n  \"handshake_protocols\": [\"https://didcomm.org/didexchange/1.0\"],\n  \"services\": [\n    {\n      \"id\": \"#inline\",\n      \"type\": \"did-communication\",\n      \"recipientKeys\": [\"did:key:z6MkpTHR8VNsBxYAAWHut2Geadd9jSwuBV8xRoAnwWsdvktH\"],\n      \"routingKeys\": [],\n      \"serviceEndpoint\": \"https://example.com:5000\"\n    },\n    \"did:sov:LjgpST2rjsoxYegQDRm7EL\"\n  ]\n}\n

    The processing rules for the services block are:

    The attributes in the inline form parallel the attributes of a DID Document for increased meaning. The recipientKeys and routingKeys within the inline block decorator MUST be did:key references.

    As defined in the DIDComm Cross Domain Messaging RFC, if routingKeys is present and non-empty, additional forward-message wrapping is necessary in the response message.

    When considering routing and options for out-of-band messages, keep in mind that the more detail in the message, the longer the URL will be and (if used) the more dense (and harder to scan) the QR code will be.

    "},{"location":"aip2/0434-outofband/#service-endpoint","title":"Service Endpoint","text":"

    The service endpoint used to transmit the response is either present in the out-of-band message or available in the DID Document of a presented DID. If the endpoint is itself a DID, the serviceEndpoint in the DIDDoc of the resolved DID MUST be a URI, and the recipientKeys MUST contain a single key. That key is appended to the end of the list of routingKeys for processing. For more information about message forwarding and routing, see RFC 0094 Cross Domain Messaging.
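The endpoint rule above can be sketched as follows. `routing_info` and `resolve_did` are hypothetical names; `resolve_did` stands in for whatever DID resolution mechanism the agent uses and is assumed to return the DID's service block:

```python
def routing_info(service: dict, resolve_did) -> tuple:
    """Sketch: if serviceEndpoint is itself a DID, resolve it and append
    its single recipient key to the end of the routingKeys list.
    `resolve_did` is a caller-supplied resolver (hypothetical signature)."""
    endpoint = service["serviceEndpoint"]
    routing = list(service.get("routingKeys", []))
    if endpoint.startswith("did:"):
        mediator = resolve_did(endpoint)
        keys = mediator["recipientKeys"]
        # Per the rule above, the resolved DIDDoc MUST contain a single key.
        assert len(keys) == 1, "endpoint DID must expose a single recipient key"
        routing.append(keys[0])
        endpoint = mediator["serviceEndpoint"]  # must be a URI
    return endpoint, routing
```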

    "},{"location":"aip2/0434-outofband/#adoption-messages","title":"Adoption Messages","text":"

    The problem_report message MAY be adopted by the out-of-band protocol if the agent wants to respond with problem reports to invalid messages, such as attempting to reuse a single-use invitation.

    "},{"location":"aip2/0434-outofband/#constraints","title":"Constraints","text":"

    An existing connection can only be reused based on a DID in the services list in an out-of-band message.

    "},{"location":"aip2/0434-outofband/#reference","title":"Reference","text":""},{"location":"aip2/0434-outofband/#messages-reference","title":"Messages Reference","text":"

    The full description of the message in this protocol can be found in the Tutorial section of this RFC.

    "},{"location":"aip2/0434-outofband/#localization","title":"Localization","text":"

    The goal_code and goal fields SHOULD have localization applied. See the purpose of those fields in the message type definitions section and the message catalog section (immediately below).

    "},{"location":"aip2/0434-outofband/#message-catalog","title":"Message Catalog","text":""},{"location":"aip2/0434-outofband/#goal_code","title":"goal_code","text":"

    The following values are defined for the goal_code field:

    Code (cd) English (en) issue-vc To issue a credential request-proof To request a proof create-account To create an account with a service p2p-messaging To establish a peer-to-peer messaging relationship"},{"location":"aip2/0434-outofband/#goal","title":"goal","text":"

    The goal localization values are use case specific and localization is left to the agent implementor to enable using the techniques defined in the ~l10n RFC.

    "},{"location":"aip2/0434-outofband/#roles-reference","title":"Roles Reference","text":"

    The roles are defined in the Tutorial section of this RFC.

    "},{"location":"aip2/0434-outofband/#states-reference","title":"States Reference","text":""},{"location":"aip2/0434-outofband/#initial","title":"initial","text":"

    No out-of-band messages have been sent.

    "},{"location":"aip2/0434-outofband/#await-response","title":"await-response","text":"

    The sender has shared an out-of-band message with the intended receiver(s), and the sender has not yet received all of the responses. For a single-use out-of-band message, there will be only one response; for a multi-use out-of-band message, there is no defined limit on the number of responses.

    "},{"location":"aip2/0434-outofband/#prepare-response","title":"prepare-response","text":"

    The receiver has received the out-of-band message and is preparing a response. The response will not be an out-of-band protocol message, but a message from another protocol chosen based on the contents of the out-of-band message.

    "},{"location":"aip2/0434-outofband/#done","title":"done","text":"

    The out-of-band protocol has been completed. Note that if the out-of-band message was intended to be available to many receivers (a multiple use message), the sender returns to the await-response state rather than going to the done state.

    "},{"location":"aip2/0434-outofband/#errors","title":"Errors","text":"

    There is an optional courtesy error message stemming from an out-of-band message that the sender could provide if they have sufficient recipient information. If the out-of-band message is a single use message and the sender receives multiple responses and each receiver's response includes a way for the sender to respond with a DIDComm message, all but the first MAY be answered with a problem_report.

    "},{"location":"aip2/0434-outofband/#error-message-example","title":"Error Message Example","text":"
    {\n  \"@type\": \"https://didcomm.org/out-of-band/%VER/problem_report\",\n  \"@id\": \"5678876542345\",\n  \"~thread\": { \"pthid\": \"<@id of the OutofBand message>\" },\n  \"description\": {\n    \"en\": \"The invitation has expired.\",\n    \"code\": \"expired-invitation\"\n  },\n  \"impact\": \"thread\"\n}\n

    See the problem-report protocol for details on the items in the example.

    "},{"location":"aip2/0434-outofband/#flow-overview","title":"Flow Overview","text":"

    In an out-of-band message the sender gives information to the receiver about the kind of DIDComm protocol response messages it can handle and how to deliver the response. The receiver uses that information to determine what DIDComm protocol/message to use in responding to the sender, and (from the service item or an existing connection) how to deliver the response to the sender.

    The handling of the response is specified by the protocol used.

    To Do: Make sure that the following remains in the DID Exchange/Connections RFCs

    Any Published DID that expresses support for DIDComm by defining a service that follows the DIDComm conventions serves as an implicit invitation. If an invitee wishes to connect to any Published DID, they need not wait for an out-of-band invitation message. Rather, they can designate their own label and initiate the appropriate protocol (e.g. 0160-Connections or 0023-DID-Exchange) for establishing a connection.

    "},{"location":"aip2/0434-outofband/#standard-out-of-band-message-encoding","title":"Standard Out-of-Band Message Encoding","text":"

    Using a standard out-of-band message encoding allows for easier interoperability between multiple projects and software platforms. Using a URL for that standard encoding provides a built-in fallback flow for users who are unable to automatically process the message. Those new users will load the URL in a browser as a default behavior, and may be presented with instructions on how to install software capable of processing the message. Already-onboarded users will be able to process the message without loading it in a browser, via mobile app URL capture or via capability detection after it is loaded in a browser.

    The standard out-of-band message format is a URL with a Base64Url encoded json object as a query parameter.

    Please note the difference between Base64Url and Base64 encoding.

    The URL format is as follows, with some elements described below:

    https://<domain>/<path>?oob=<outofbandMessage>\n

    <domain> and <path> should be kept as short as possible, and the full URL SHOULD return human readable instructions when loaded in a browser. This is intended to aid new users. The oob query parameter is required and is reserved to contain the out-of-band message string. Additional path elements or query parameters are allowed, and can be leveraged to provide coupons or other promise of payment for new users.

    To do: We need to rationalize this https:// approach with the use of a special protocol (e.g. didcomm://) that will enable handling of the URL on mobile devices to automatically invoke an installed app on both Android and iOS. A user must be able to process the out-of-band message on the device of the agent (e.g. when the mobile device can't scan the QR code because it is displayed on a web page on that same device).

    The <outofbandMessage> is an agent plaintext message (not a DIDComm message) that has been Base64Url encoded such that the resulting string can be safely used in a URL.

    outofband_message = base64UrlEncode(<outofbandMessage>)\n

    During Base64Url encoding, whitespace from the JSON string SHOULD be eliminated to keep the resulting out-of-band message string as short as possible.
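The encoding steps can be sketched in Python using the standard library; the base URL and helper names are illustrative:

```python
import base64
import json

def encode_oob_url(invitation: dict, base: str = "https://example.com/ssi") -> str:
    """Serialize the invitation with whitespace removed, Base64Url-encode it,
    and append it as the required 'oob' query parameter."""
    compact = json.dumps(invitation, separators=(",", ":"))  # no whitespace
    encoded = base64.urlsafe_b64encode(compact.encode()).decode()
    return f"{base}?oob={encoded}"

def decode_oob_url(url: str) -> dict:
    """Reverse the encoding; restores '=' padding in case the producer
    stripped it."""
    encoded = url.split("oob=", 1)[1]
    encoded += "=" * (-len(encoded) % 4)
    return json.loads(base64.urlsafe_b64decode(encoded))
```

Note the use of `base64.urlsafe_b64encode` rather than plain Base64, matching the Base64Url requirement above.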

    "},{"location":"aip2/0434-outofband/#example-out-of-band-message-encoding","title":"Example Out-of-Band Message Encoding","text":"

    Invitation:

    {\n  \"@type\": \"https://didcomm.org/out-of-band/1.0/invitation\",\n  \"@id\": \"69212a3a-d068-4f9d-a2dd-4741bca89af3\",\n  \"label\": \"Faber College\",\n  \"goal_code\": \"issue-vc\",\n  \"goal\": \"To issue a Faber College Graduate credential\",\n  \"handshake_protocols\": [\"https://didcomm.org/didexchange/1.0\", \"https://didcomm.org/connections/1.0\"],\n  \"services\": [\"did:sov:LjgpST2rjsoxYegQDRm7EL\"]\n}\n

    Whitespace removed:

    {\"@type\":\"https://didcomm.org/out-of-band/1.0/invitation\",\"@id\":\"69212a3a-d068-4f9d-a2dd-4741bca89af3\",\"label\":\"Faber College\",\"goal_code\":\"issue-vc\",\"goal\":\"To issue a Faber College Graduate credential\",\"handshake_protocols\":[\"https://didcomm.org/didexchange/1.0\",\"https://didcomm.org/connections/1.0\"],\"services\":[\"did:sov:LjgpST2rjsoxYegQDRm7EL\"]}\n

    Base64Url encoded:

    eyJAdHlwZSI6Imh0dHBzOi8vZGlkY29tbS5vcmcvb3V0LW9mLWJhbmQvMS4wL2ludml0YXRpb24iLCJAaWQiOiI2OTIxMmEzYS1kMDY4LTRmOWQtYTJkZC00NzQxYmNhODlhZjMiLCJsYWJlbCI6IkZhYmVyIENvbGxlZ2UiLCJnb2FsX2NvZGUiOiJpc3N1ZS12YyIsImdvYWwiOiJUbyBpc3N1ZSBhIEZhYmVyIENvbGxlZ2UgR3JhZHVhdGUgY3JlZGVudGlhbCIsImhhbmRzaGFrZV9wcm90b2NvbHMiOlsiaHR0cHM6Ly9kaWRjb21tLm9yZy9kaWRleGNoYW5nZS8xLjAiLCJodHRwczovL2RpZGNvbW0ub3JnL2Nvbm5lY3Rpb25zLzEuMCJdLCJzZXJ2aWNlcyI6WyJkaWQ6c292OkxqZ3BTVDJyanNveFllZ1FEUm03RUwiXX0=\n

    Example URL with Base64Url encoded message:

    http://example.com/ssi?oob=eyJAdHlwZSI6Imh0dHBzOi8vZGlkY29tbS5vcmcvb3V0LW9mLWJhbmQvMS4wL2ludml0YXRpb24iLCJAaWQiOiI2OTIxMmEzYS1kMDY4LTRmOWQtYTJkZC00NzQxYmNhODlhZjMiLCJsYWJlbCI6IkZhYmVyIENvbGxlZ2UiLCJnb2FsX2NvZGUiOiJpc3N1ZS12YyIsImdvYWwiOiJUbyBpc3N1ZSBhIEZhYmVyIENvbGxlZ2UgR3JhZHVhdGUgY3JlZGVudGlhbCIsImhhbmRzaGFrZV9wcm90b2NvbHMiOlsiaHR0cHM6Ly9kaWRjb21tLm9yZy9kaWRleGNoYW5nZS8xLjAiLCJodHRwczovL2RpZGNvbW0ub3JnL2Nvbm5lY3Rpb25zLzEuMCJdLCJzZXJ2aWNlcyI6WyJkaWQ6c292OkxqZ3BTVDJyanNveFllZ1FEUm03RUwiXX0=\n

    Out-of-band message URLs can be transferred via any method that can send text, including an email, SMS, posting on a website, or QR Code.

    Example URL encoded as a QR Code:

    Example Email Message:

    To: alice@alum.faber.edu\nFrom: studentrecords@faber.edu\nSubject: Your request to connect and receive your graduate verifiable credential\n\nDear Alice,\n\nTo receive your Faber College graduation certificate, click here to [connect](http://example.com/ssi?oob=eyJAdHlwZSI6Imh0dHBzOi8vZGlkY29tbS5vcmcvb3V0LW9mLWJhbmQvMS4wL2ludml0YXRpb24iLCJAaWQiOiI2OTIxMmEzYS1kMDY4LTRmOWQtYTJkZC00NzQxYmNhODlhZjMiLCJsYWJlbCI6IkZhYmVyIENvbGxlZ2UiLCJnb2FsX2NvZGUiOiJpc3N1ZS12YyIsImdvYWwiOiJUbyBpc3N1ZSBhIEZhYmVyIENvbGxlZ2UgR3JhZHVhdGUgY3JlZGVudGlhbCIsImhhbmRzaGFrZV9wcm90b2NvbHMiOlsiaHR0cHM6Ly9kaWRjb21tLm9yZy9kaWRleGNoYW5nZS8xLjAiLCJodHRwczovL2RpZGNvbW0ub3JnL2Nvbm5lY3Rpb25zLzEuMCJdLCJzZXJ2aWNlcyI6WyJkaWQ6c292OkxqZ3BTVDJyanNveFllZ1FEUm03RUwiXX0=) with us, or paste the following into your browser:\n\nhttp://example.com/ssi?oob=eyJAdHlwZSI6Imh0dHBzOi8vZGlkY29tbS5vcmcvb3V0LW9mLWJhbmQvMS4wL2ludml0YXRpb24iLCJAaWQiOiI2OTIxMmEzYS1kMDY4LTRmOWQtYTJkZC00NzQxYmNhODlhZjMiLCJsYWJlbCI6IkZhYmVyIENvbGxlZ2UiLCJnb2FsX2NvZGUiOiJpc3N1ZS12YyIsImdvYWwiOiJUbyBpc3N1ZSBhIEZhYmVyIENvbGxlZ2UgR3JhZHVhdGUgY3JlZGVudGlhbCIsImhhbmRzaGFrZV9wcm90b2NvbHMiOlsiaHR0cHM6Ly9kaWRjb21tLm9yZy9kaWRleGNoYW5nZS8xLjAiLCJodHRwczovL2RpZGNvbW0ub3JnL2Nvbm5lY3Rpb25zLzEuMCJdLCJzZXJ2aWNlcyI6WyJkaWQ6c292OkxqZ3BTVDJyanNveFllZ1FEUm03RUwiXX0=\n\nIf you don't have an identity agent for holding credentials, you will be given instructions on how you can get one.\n\nThanks,\n\nFaber College\nKnowledge is Good\n
    "},{"location":"aip2/0434-outofband/#url-shortening","title":"URL Shortening","text":"

    It seems inevitable that some out-of-band messages will be too long to produce a usable QR code. Techniques to avoid unusable QR codes have been presented above, including using attachment links for requests, minimizing the routing of the response and eliminating unnecessary whitespace in the JSON. However, at some point a sender may need to generate a very long URL. In that case, a DIDComm-specific URL shortener redirection should be implemented by the sender as follows:

    A usable QR code will always be able to be generated from the shortened form of the URL.

    "},{"location":"aip2/0434-outofband/#url-shortening-caveats","title":"URL Shortening Caveats","text":"

    Some HTTP libraries don't support stopping redirects from occurring on reception of a 301 or 302. In that case, the redirect is followed automatically and results in a response that MAY have a status of 200 and MAY contain a URL that can be processed as a normal out-of-band message.

    If the agent performs an HTTP GET with an Accept header requesting the application/json MIME type, the response can either contain the message in JSON or result in a redirect. Processing of the response should determine which response type was received and handle the message accordingly.
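    The two caveats above can be handled with a small classifier over the shortener's response. This is an illustrative sketch, not part of the RFC; the function name and the tuple-based return shape are assumptions for the example.

    ```python
    import json

    def interpret_shortening_response(status: int, headers: dict, body: str):
        """Classify a URL-shortener response (hypothetical helper).

        A 301/302 carries the full out-of-band URL in the Location header;
        a 200 with application/json carries the invitation message directly;
        a plain 200 may occur when the HTTP stack followed the redirect itself,
        in which case the body holds a URL to process as a normal OOB URL.
        """
        content_type = headers.get("Content-Type", "")
        if status in (301, 302):
            return ("oob_url", headers["Location"])
        if status == 200 and content_type.startswith("application/json"):
            return ("oob_message", json.loads(body))
        if status == 200:
            return ("oob_url", body.strip())
        raise ValueError(f"unexpected shortener response: {status}")
    ```

    Either branch ultimately yields the same thing: an out-of-band message to process.
    
    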

    "},{"location":"aip2/0434-outofband/#out-of-band-message-publishing","title":"Out-of-Band Message Publishing","text":"

    The sender will publish or transmit the out-of-band message URL in a manner available to the intended receiver. After publishing, the sender is in the await-response state, while the receiver is in the prepare-response state.

    "},{"location":"aip2/0434-outofband/#out-of-band-message-processing","title":"Out-of-Band Message Processing","text":"

    If the receiver receives an out-of-band message in the form of a QR code, the receiver should attempt to decode the QR code to an out-of-band message URL for processing.

    When the receiver receives the out-of-band message URL, there are two possible user flows, depending on whether the individual has an Aries agent. If the individual is new to Aries, they will likely load the URL in a browser. The resulting page SHOULD contain instructions on how to get started by installing an Aries agent. That install flow will transfer the out-of-band message to the newly installed software.

    A user that already has those steps accomplished will have the URL received by software directly. That software will attempt to base64URL decode the string and can read the out-of-band message directly out of the oob query parameter, without loading the URL. If this process fails, the software should attempt the steps to process a shortened URL.

    NOTE: In receiving the out-of-band message, the base64url decode implementation used MUST correctly decode padded and unpadded base64URL encoded data.
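    As a sketch of the decoding requirement above, the following helper (hypothetical name) extracts the oob query parameter and tolerates both padded and unpadded base64URL data by re-adding padding before decoding:

    ```python
    import base64
    import json
    from urllib.parse import parse_qs, urlparse

    def decode_oob_param(url: str) -> dict:
        """Decode the out-of-band message from a URL's `oob` query parameter.

        Padding is restored before decoding because receivers MUST accept
        both padded and unpadded base64URL encoded data.
        """
        oob = parse_qs(urlparse(url).query)["oob"][0]
        padded = oob + "=" * (-len(oob) % 4)  # restore any stripped '=' padding
        return json.loads(base64.urlsafe_b64decode(padded))
    ```

    Error handling (malformed base64, non-JSON payloads) is omitted for brevity.
    
    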

    If the receiver wants to respond to the out-of-band message, they will use the information in the message to prepare the request, including:

    "},{"location":"aip2/0434-outofband/#correlating-responses-to-out-of-band-messages","title":"Correlating responses to Out-of-Band messages","text":"

    The response to an out-of-band message MUST set its ~thread.pthid equal to the @id property of the out-of-band message.

    Example referencing an explicit invitation:

    {\n  \"@id\": \"a46cdd0f-a2ca-4d12-afbf-2e78a6f1f3ef\",\n  \"@type\": \"https://didcomm.org/didexchange/1.0/request\",\n  \"~thread\": { \"pthid\": \"032fbd19-f6fd-48c5-9197-ba9a47040470\" },\n  \"label\": \"Bob\",\n  \"did\": \"B.did@B:A\",\n  \"did_doc~attach\": {\n    \"base64\": \"eyJ0eXAiOiJKV1Qi... (bytes omitted)\",\n    \"jws\": {\n      \"header\": {\n        \"kid\": \"did:key:z6MkmjY8GnV5i9YTDtPETC2uUAW6ejw3nk5mXF5yci5ab7th\"\n      },\n      \"protected\": \"eyJhbGciOiJFZERTQSIsImlhdCI6MTU4Mzg4... (bytes omitted)\",\n      \"signature\": \"3dZWsuru7QAVFUCtTd0s7uc1peYEijx4eyt5... (bytes omitted)\"\n    }\n  }\n}\n
    "},{"location":"aip2/0434-outofband/#response-transmission","title":"Response Transmission","text":"

    The response message from the receiver is encoded according to the standards of the DIDComm encryption envelope, using the service block present in (or resolved from) the out-of-band invitation.

    "},{"location":"aip2/0434-outofband/#reusing-connections","title":"Reusing Connections","text":"

    If an out-of-band invitation has a DID in the services block, and the receiver determines it has previously established a connection with that DID, the receiver MAY send its response on the established connection. See Reuse Messages for details.

    "},{"location":"aip2/0434-outofband/#receiver-error-handling","title":"Receiver Error Handling","text":"

    If the receiver is unable to process the out-of-band message, the receiver may respond with a Problem Report identifying the problem using a DIDComm message. As with any response, the pthid of the ~thread decorator MUST be the @id of the out-of-band message. The problem report MUST be in the protocol of an expected response. An example of an error that might come up is that the receiver is not able to handle any of the proposed protocols in the out-of-band message. The receiver MAY include in the problem report a ~service decorator that allows the sender to respond to the out-of-band message with a DIDComm message.

    "},{"location":"aip2/0434-outofband/#response-processing","title":"Response processing","text":"

    The sender MAY look up the corresponding out-of-band message identified in the response's ~thread.pthid to determine whether it should accept the response. Information about the related out-of-band message protocol may be required to provide the sender with context about processing the response and what to do after the protocol completes.

    "},{"location":"aip2/0434-outofband/#sender-error-handling","title":"Sender Error Handling","text":"

    If the sender receives a Problem Report message from the receiver, the sender has several options for responding. The sender will receive the message as part of an offered protocol in the out-of-band message.

    If the receiver did not include a ~service decorator in the response, the sender can only respond if it is still in session with the receiver. For example, if the sender is a website that displayed a QR code for the receiver to scan, the sender could create a new, presumably adjusted, out-of-band message, encode it and present it to the user in the same way as before.

    If the receiver included a ~service decorator in the response, the sender can provide a new message to the receiver, even a new version of the original out-of-band message, and send it to the receiver. The new message MUST include a ~thread decorator with the thid set to the @id from the problem report message.

    "},{"location":"aip2/0434-outofband/#drawbacks","title":"Drawbacks","text":""},{"location":"aip2/0434-outofband/#prior-art","title":"Prior art","text":""},{"location":"aip2/0434-outofband/#unresolved-questions","title":"Unresolved questions","text":""},{"location":"aip2/0434-outofband/#implementations","title":"Implementations","text":"

    The following lists the implementations (if any) of this RFC. Please do a pull request to add your implementation. If the implementation is open source, include a link to the repo or to the implementation within the repo. Please be consistent in the \"Name\" field so that a mechanical processing of the RFCs can generate a list of all RFCs supported by an Aries implementation.

    Name / Link Implementation Notes"},{"location":"aip2/0441-present-proof-best-practices/","title":"0441: Prover and Verifier Best Practices for Proof Presentation","text":""},{"location":"aip2/0441-present-proof-best-practices/#summary","title":"Summary","text":"

    This work prescribes best practices for provers in credential selection (toward proof presentation), for verifiers in proof acceptance, and for both regarding non-revocation interval semantics in fulfilment of the Present Proof protocol RFC0037. Of particular interest is behaviour with respect to presentation requests and presentations in their various non-revocation interval profiles.

    "},{"location":"aip2/0441-present-proof-best-practices/#motivation","title":"Motivation","text":"

    Agents should behave consistently in automatically selecting credentials and presenting proofs.

    "},{"location":"aip2/0441-present-proof-best-practices/#tutorial","title":"Tutorial","text":"

    The subsections below introduce constructs and outline best practices for provers and verifiers.

    "},{"location":"aip2/0441-present-proof-best-practices/#presentation-requests-and-non-revocation-intervals","title":"Presentation Requests and Non-Revocation Intervals","text":"

    This section prescribes norms and best practices in formulating and interpreting non-revocation intervals on proof requests.

    "},{"location":"aip2/0441-present-proof-best-practices/#semantics-of-non-revocation-interval-presence-and-absence","title":"Semantics of Non-Revocation Interval Presence and Absence","text":"

    The presence of a non-revocation interval applicable to a requested item (see below) in a presentation request signifies that the verifier requires proof of non-revocation status of the credential providing that item.

    The absence of any non-revocation interval applicable to a requested item signifies that the verifier has no interest in its credential's non-revocation status.

    A revocable or non-revocable credential may satisfy a presentation request with or without a non-revocation interval. The presence of a non-revocation interval conveys that if the prover presents a revocable credential, the presentation must include proof of non-revocation. Its presence does not convey any restriction on the revocability of the credential to present: in many cases the verifier cannot know whether a prover's credential is revocable or not.

    "},{"location":"aip2/0441-present-proof-best-practices/#non-revocation-interval-applicability-to-requested-items","title":"Non-Revocation Interval Applicability to Requested Items","text":"

    A requested item in a presentation request is an attribute or a predicate, proof of which the verifier requests presentation. A non-revocation interval within a presentation request is specifically applicable, generally applicable, or inapplicable to a requested item.

    Within a presentation request, a top-level non-revocation interval is generally applicable to all requested items. A non-revocation interval defined particularly for a requested item is specifically applicable to that requested attribute or predicate but inapplicable to all others.

    A non-revocation interval specifically applicable to a requested item overrides any generally applicable non-revocation interval: no requested item may have both.

    For example, in the following (indy) proof request

    {\n    \"name\": \"proof-request\",\n    \"version\": \"1.0\",\n    \"nonce\": \"1234567890\",\n    \"requested_attributes\": {\n        \"legalname\": {\n            \"name\": \"legalName\",\n            \"restrictions\": [\n                {\n                    \"issuer_did\": \"WgWxqztrNooG92RXvxSTWv\"\n                }\n            ]\n        },\n        \"regdate\": {\n            \"name\": \"regDate\",\n            \"restrictions\": [\n                {\n                    \"issuer_did\": \"WgWxqztrNooG92RXvxSTWv\"\n                }\n            ],\n            \"non_revoked\": {\n                \"from\": 1600001000,\n                \"to\": 1600001000\n            }\n        }\n    },\n    \"requested_predicates\": {\n    },\n    \"non_revoked\": {\n        \"from\": 1600000000,\n        \"to\": 1600000000\n    }\n}\n

    the non-revocation interval on 1600000000 is generally applicable to the referent \"legalname\" while the non-revocation interval on 1600001000 is specifically applicable to referent \"regdate\".
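    The override rule can be expressed as a small lookup, shown here as an illustrative sketch (the function name and dict shapes mirror the indy proof request above but are not defined by the RFC):

    ```python
    def applicable_interval(proof_request: dict, referent: str):
        """Return the non-revocation interval applicable to a requested item.

        A "non_revoked" interval on the item itself is specifically applicable
        and overrides the generally applicable top-level interval; if neither
        is present, None signals no interest in revocation status.
        """
        for section in ("requested_attributes", "requested_predicates"):
            item = proof_request.get(section, {}).get(referent)
            if item is not None:
                return item.get("non_revoked", proof_request.get("non_revoked"))
        raise KeyError(referent)
    ```

    Applied to the example request, "legalname" picks up the general interval while "regdate" uses its own.
    
    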

    "},{"location":"aip2/0441-present-proof-best-practices/#semantics-of-non-revocation-interval-endpoints","title":"Semantics of Non-Revocation Interval Endpoints","text":"

    A non-revocation interval contains \"from\" and \"to\" (integer) EPOCH times. For historical reasons, any timestamp within this interval is technically acceptable in a non-revocation subproof. However, these semantics allow for ambiguity in cases where revocation occurs within the interval, and in cases where the ledger supports reinstatement. These best practices require the \"from\" value, should the prover specify it, to equal the \"to\" value: this approach fosters deterministic outcomes.

    A missing \"from\" specification defaults to the same value as the interval's \"to\" value. In other words, the non-revocation intervals

    {\n    \"to\": 1234567890\n}\n

    and

    {\n    \"from\": 1234567890,\n    \"to\": 1234567890\n}\n

    are semantically equivalent.
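    The defaulting rule reduces to a one-line normalization; this hypothetical helper makes the equivalence explicit:

    ```python
    def normalize_interval(interval: dict) -> dict:
        """Apply the defaulting rule: a missing "from" takes the "to" value,
        so every non-revocation interval denotes a single instant."""
        return {"from": interval.get("from", interval["to"]), "to": interval["to"]}
    ```
    
    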

    "},{"location":"aip2/0441-present-proof-best-practices/#verifier-non-revocation-interval-formulation","title":"Verifier Non-Revocation Interval Formulation","text":"

    The verifier MUST specify, as current INDY-HIPE 11 notes, the same integer EPOCH time for both ends of the interval, or else omit the \"from\" key and value. In effect, where the presentation request specifies a non-revocation interval, the verifier MUST request a non-revocation instant.

    "},{"location":"aip2/0441-present-proof-best-practices/#prover-non-revocation-interval-processing","title":"Prover Non-Revocation Interval Processing","text":"

    In querying the nodes for revocation status, given a revocation interval on a single instant (i.e., on \"from\" and \"to\" the same, or \"from\" absent), the prover MUST query the ledger for all germane revocation updates from registry creation through that instant (i.e., from zero through \"to\" value): if the credential has been revoked prior to the instant, the revocation necessarily will appear in the aggregate delta.

    "},{"location":"aip2/0441-present-proof-best-practices/#provers-presentation-proposals-and-presentation-requests","title":"Provers, Presentation Proposals, and Presentation Requests","text":"

    In fulfilment of the RFC0037 Present Proof protocol, provers may initiate with a presentation proposal or verifiers may initiate with a presentation request. In the former case, the prover has both a presentation proposal and a presentation request; in the latter case, the prover has only a presentation request.

    "},{"location":"aip2/0441-present-proof-best-practices/#credential-selection-best-practices","title":"Credential Selection Best Practices","text":"

    This section specifies a prover's best practices in matching a credential to a requested item. The specification pertains to automated credential selection: obviously, a human user may select any credential in response to a presentation request; it is up to the verifier to verify the resulting presentation as satisfactory or not.

    Note that where a prover selects a revocable credential for inclusion in response to a requested item with a non-revocation interval in the presentation request, the prover MUST create a corresponding sub-proof of non-revocation at a timestamp within that non-revocation interval (insofar as possible; see below).

    "},{"location":"aip2/0441-present-proof-best-practices/#with-presentation-proposal","title":"With Presentation Proposal","text":"

    If prover initiated the protocol with a presentation proposal specifying a value (or predicate threshold) for an attribute, and the presentation request does not require a different value for it, then the prover MUST select a credential matching the presentation proposal, in addition to following the best practices below regarding the presentation request.

    "},{"location":"aip2/0441-present-proof-best-practices/#preference-for-irrevocable-credentials","title":"Preference for Irrevocable Credentials","text":"

    In keeping with the specification above, presentation of an irrevocable credential ipso facto constitutes proof of non-revocation. Provers MUST always prefer irrevocable credentials to revocable credentials, when the wallet has both satisfying a requested item, whether the requested item has an applicable non-revocation interval or not. Note that if a non-revocation interval is applicable to a credential's requested item in the presentation request, selecting an irrevocable credential for presentation may lead to a missing timestamp at the verifier (see below).

    If only revocable credentials are available to satisfy a requested item with no applicable non-revocation interval, the prover MUST present such for proof. As per above, the absence of a non-revocation interval signifies that the verifier has no interest in its revocation status.
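    The preference rules above can be sketched as a selection function for a single requested item. This is a simplified illustration (field names like "revocable" are assumptions, and real matching against restrictions is omitted):

    ```python
    def select_credential(candidates: list) -> dict:
        """Pick one credential satisfying a requested item.

        Irrevocable credentials are always preferred over revocable ones,
        whether or not a non-revocation interval applies; revocable
        credentials are presented only when nothing irrevocable matches.
        """
        irrevocable = [c for c in candidates if not c.get("revocable", False)]
        pool = irrevocable or candidates
        if not pool:
            raise LookupError("no credential satisfies the requested item")
        return pool[0]
    ```

    The tie-breaking choice among equally eligible credentials (here, first match) is left to the implementation.
    
    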

    "},{"location":"aip2/0441-present-proof-best-practices/#verifiers-presentations-and-timestamps","title":"Verifiers, Presentations, and Timestamps","text":"

    This section prescribes verifier best practices concerning a received presentation by its timestamps against the corresponding presentation request's non-revocation intervals.

    "},{"location":"aip2/0441-present-proof-best-practices/#timestamp-for-irrevocable-credential","title":"Timestamp for Irrevocable Credential","text":"

    A presentation's inclusion of a timestamp pertaining to an irrevocable credential evinces tampering: the verifier MUST reject such a presentation.

    "},{"location":"aip2/0441-present-proof-best-practices/#missing-timestamp","title":"Missing Timestamp","text":"

    A presentation with no timestamp for a revocable credential purporting to satisfy a requested item in the corresponding presentation request, where the requested item has an applicable non-revocation interval, evinces tampering: the verifier MUST reject such a presentation.

    It is licit for a presentation to have no timestamp for an irrevocable credential: the applicable non-revocation interval is superfluous in the presentation request.

    "},{"location":"aip2/0441-present-proof-best-practices/#timestamp-outside-non-revocation-interval","title":"Timestamp Outside Non-Revocation Interval","text":"

    A presentation may include a timestamp outside of the non-revocation interval applicable to the requested item that a presented credential purports to satisfy. If the latest timestamp from the ledger for a presented credential's revocation registry predates the non-revocation interval, but the timestamp is not in the future (relative to the instant of presentation proof, with a reasonable allowance for clock skew), the verifier MUST log and continue the proof verification process.

    Any timestamp in the future (relative to the instant of presentation proof, with a reasonable allowance for clock skew) evinces tampering: the verifier MUST reject a presentation with a future timestamp. Similarly, any timestamp predating the creation of its corresponding credential's revocation registry on the ledger evinces tampering: the verifier MUST reject a presentation with such a timestamp.
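    Taken together, the verifier rules in this section amount to a timestamp classifier. The sketch below is a simplification (it treats any pre-interval timestamp as the log-and-continue case without checking that it is the latest ledger timestamp, and the 300-second skew allowance is an assumed value):

    ```python
    def check_timestamp(timestamp, interval, revocable, now, registry_created, skew=300):
        """Classify a presentation timestamp: "reject", "log-and-continue", or "ok".

        - any timestamp on an irrevocable credential evinces tampering
        - a missing timestamp where an interval applies evinces tampering
        - future timestamps and timestamps predating registry creation are rejected
        - a timestamp before the interval is logged, and verification continues
        """
        if not revocable:
            return "reject" if timestamp is not None else "ok"
        if timestamp is None:
            return "reject" if interval else "ok"
        if timestamp > now + skew or timestamp < registry_created:
            return "reject"
        if interval and timestamp < interval["from"]:
            return "log-and-continue"
        return "ok"
    ```
    
    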

    "},{"location":"aip2/0441-present-proof-best-practices/#dates-and-predicates","title":"Dates and Predicates","text":"

    This section prescribes issuer and verifier best practices concerning representing dates for use in predicate proofs (eg proving Alice is over 21 without revealing her birth date).

    "},{"location":"aip2/0441-present-proof-best-practices/#dates-in-credentials","title":"Dates in Credentials","text":"

    In order for dates to be used in a predicate proof they MUST be expressed as an Int32. While unix timestamps could work for this, they have several drawbacks: they can't represent dates outside of the years 1901-2038, they aren't human readable, and they are overly precise, in that birth time down to the second is generally not needed for an age check. To address these issues, date attributes SHOULD be represented as integers in the form YYYYMMDD (eg 19991231). This addresses the issues with unix timestamps (or any seconds-since-epoch system) while still allowing date values to be compared with < > operators. Note that this system won't work for general date math (eg adding or subtracting days), but it will work for predicate proofs, which just require comparisons. In order to make it clear that this format is being used, the attribute name SHOULD have the suffix _dateint. Since most datetime libraries don't include this format, here are some examples of helper functions written in TypeScript.

    "},{"location":"aip2/0441-present-proof-best-practices/#dates-in-presentations","title":"Dates in Presentations","text":"

    When constructing a proof request, the verifier SHOULD express the minimum/maximum date as an integer in the form YYYYMMDD. For example, if today is Jan 1, 2021, then the verifier would request that birthdate_dateint is before or equal to Jan 1, 2000, so <= 20000101. The holder MUST construct a predicate proof with a YYYYMMDD-represented birth date less than or equal to that value to satisfy the proof request.
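    As an illustration of the YYYYMMDD scheme (sketched here in Python rather than the TypeScript helpers the RFC mentions; the Feb 29 adjustment is an assumed policy, not RFC text):

    ```python
    from datetime import date

    def to_dateint(d: date) -> int:
        """Encode a date as a YYYYMMDD integer, e.g. 1999-12-31 -> 19991231."""
        return d.year * 10000 + d.month * 100 + d.day

    def over_age_threshold(today: date, years: int) -> int:
        """Latest *_dateint birth date still satisfying an age check of `years`.

        Subtracting from the year field works because YYYYMMDD integers
        compare in calendar order; Feb 29 is nudged to Feb 28 when the
        target year is not a leap year.
        """
        try:
            cutoff = today.replace(year=today.year - years)
        except ValueError:  # Feb 29 in a non-leap target year
            cutoff = today.replace(year=today.year - years, day=28)
        return to_dateint(cutoff)
    ```

    With today as Jan 1, 2021 and a threshold of 21 years, this reproduces the <= 20000101 bound from the example above.
    
    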

    "},{"location":"aip2/0441-present-proof-best-practices/#reference","title":"Reference","text":""},{"location":"aip2/0441-present-proof-best-practices/#implementations","title":"Implementations","text":"

    The following lists the implementations (if any) of this RFC. Please do a pull request to add your implementation. If the implementation is open source, include a link to the repo or to the implementation within the repo. Please be consistent in the \"Name\" field so that a mechanical processing of the RFCs can generate a list of all RFCs supported by an Aries implementation.

    Implementation Notes may need to include a link to test results.

    Name / Link Implementation Notes"},{"location":"aip2/0453-issue-credential-v2/","title":"Aries RFC 0453: Issue Credential Protocol 2.0","text":""},{"location":"aip2/0453-issue-credential-v2/#version-change-log","title":"Version Change Log","text":"

    For a period of time, versions 2.1 and 2.2 were defined in this RFC. Those definitions were added prior to any implementations, and to date, there are no known implementations available or planned. An attempt at implementing version 2.1 was not merged into the main branch of Aries Cloud Agent Python, as it was deemed overly complicated and not worth the effort for what amounts to an edge case (issuing multiple credentials of the same type in a single protocol instance). Further, there is a version 3.0 of this protocol that has been specified and implemented that does not include these capabilities. Thus, a decision was made that versions 2.1 and 2.2 be removed as being not accepted by the community and overly complicated to both implement and migrate from. Those interested in seeing how those capabilities were specified can look at this protocol before they were removed.

    "},{"location":"aip2/0453-issue-credential-v2/#20propose-credential-and-identifiers","title":"2.0/propose-credential and identifiers","text":"

    Version 2.0 of the protocol is introduced because of a breaking change in the propose-credential message, replacing the (indy-specific) filtration criteria with a generalized filter attachment to align with the rest of the messages in the protocol. The previous version is 1.1/propose-credential. Version 2.0 also uses <angle brackets> explicitly to mark all values that may vary between instances, such as identifiers and comments.

    The \"formats\" field is added to all the messages to enable linking specific attachment IDs with the format (credential format and version) of the attachment.

    The details that are part of each message type about the different attachment formats serve as a registry of the known formats and versions.

    "},{"location":"aip2/0453-issue-credential-v2/#summary","title":"Summary","text":"

    Formalizes messages used to issue a credential--whether the credential is JWT-oriented, JSON-LD-oriented, or ZKP-oriented. The general flow is similar, and this protocol intends to handle all of them. If you are using a credential type that doesn't fit this protocol, please raise a GitHub issue.

    "},{"location":"aip2/0453-issue-credential-v2/#motivation","title":"Motivation","text":"

    We need a standard protocol for issuing credentials. This is the basis of interoperability between Issuers and Holders.

    "},{"location":"aip2/0453-issue-credential-v2/#tutorial","title":"Tutorial","text":""},{"location":"aip2/0453-issue-credential-v2/#name-and-version","title":"Name and Version","text":"

    issue-credential, version 2.0

    "},{"location":"aip2/0453-issue-credential-v2/#roles","title":"Roles","text":"

    There are two roles in this protocol: Issuer and Holder. Technically, the latter role is only potential until the protocol completes; that is, the second party becomes a Holder of a credential by completing the protocol. However, we will use the term Holder throughout, to keep things simple.

    Note: When a holder of credentials turns around and uses those credentials to prove something, they become a Prover. In the sister RFC to this one, 0454: Present Proof Protocol 2.0, the Holder is therefore renamed to Prover. Sometimes in casual conversation, the Holder role here might be called \"Prover\" as well, but more formally, \"Holder\" is the right term at this phase of the credential lifecycle.

    "},{"location":"aip2/0453-issue-credential-v2/#goals","title":"Goals","text":"

    When the goals of each role are not clear from context, goal codes may be specifically included in protocol messages. This is particularly helpful to differentiate between credentials passed between the same parties for several different reasons. An included goal code should be considered to apply to the entire thread and need not be repeated on each message. The goal code may be changed by including the new code in a message. All goal codes are optional, and there is no default.

    "},{"location":"aip2/0453-issue-credential-v2/#states","title":"States","text":"

    The choreography diagram below details how state evolves in this protocol, in a \"happy path.\" The states include

    "},{"location":"aip2/0453-issue-credential-v2/#issuer-states","title":"Issuer States","text":""},{"location":"aip2/0453-issue-credential-v2/#holder-states","title":"Holder States","text":"

    Errors might occur in various places. For example, an Issuer might offer a credential for a price that the Holder is unwilling to pay. All errors are modeled with a problem-report message. Easy-to-anticipate errors reset the flow as shown in the diagrams, and use the code issuance-abandoned; more exotic errors (e.g., server crashed at Issuer headquarters in the middle of a workflow) may have different codes but still cause the flow to be abandoned in the same way. That is, in this version of the protocol, all errors cause the state of both parties (the sender and the receiver of the problem-report) to revert to null (meaning it is no longer engaged in the protocol at all). Future versions of the protocol may allow more granular choices (e.g., requesting and receiving a (re-)send of the issue-credential message if the Holder times out while waiting in the request-sent state).

    The state table outlines the protocol states and transitions.

    "},{"location":"aip2/0453-issue-credential-v2/#messages","title":"Messages","text":"

    The Issue Credential protocol consists of these messages:

    In addition, the ack and problem-report messages are adopted into the protocol for confirmation and error handling.

    "},{"location":"aip2/0453-issue-credential-v2/#message-attachments","title":"Message Attachments","text":"

    This protocol is about the messages that must be exchanged to issue verifiable credentials, NOT about the specifics of particular verifiable credential schemes. DIDComm attachments are deliberately used in messages to isolate the protocol flow/semantics from the credential artifacts themselves as separate constructs. Attachments allow credential formats and this protocol to evolve through versioning milestones independently instead of in lockstep. Links are provided in the message descriptions below, to describe how the protocol adapts to specific verifiable credential implementations.

    The attachment items in the messages are arrays. The arrays are provided to support the issuing of different credential formats (e.g. ZKP, JSON-LD JWT, or other) containing the same data (claims). The arrays are not to be used for issuing credentials with different claims. The formats field of each message associates each attachment with the format (and version) of the attachment.

    A registry of attachment formats is provided in this RFC within the message type sections. A sub-section should be added for each attachment format type (and optionally, each version). Updates to the attachment type formats does NOT impact the versioning of the Issue Credential protocol. Formats are flexibly defined. For example, the first definitions are for hlindy/cred-abstract@v2.0 et al., assuming that all Hyperledger Indy implementations and ledgers will use a common format. However, if a specific instance of Indy uses a different format, another format value can be documented as a new registry entry.

    Any of the embedded inline attachment formats from the 0017-attachments RFC can be used. In the examples below, base64 is used in most cases, but implementations MUST expect any of the formats.

    "},{"location":"aip2/0453-issue-credential-v2/#choreography-diagram","title":"Choreography Diagram","text":"

    Note: This diagram was made in draw.io. To make changes:

    The protocol has 3 alternative beginnings:

    1. The Issuer can begin with an offer.
    2. The Holder can begin with a proposal.
    3. The Holder can begin with a request.

    The offer and proposal messages are part of an optional negotiation phase and may trigger back-and-forth counters. A request is not subject to negotiation; it can only be accepted or rejected.

    "},{"location":"aip2/0453-issue-credential-v2/#propose-credential","title":"Propose Credential","text":"

    An optional message sent by the potential Holder to the Issuer to initiate the protocol or in response to an offer-credential message when the Holder wants some adjustments made to the credential data offered by Issuer.

    Note: In Hyperledger Indy, where the `request-credential` message can **only** be sent in response to an `offer-credential` message, the `propose-credential` message is the only way for a potential Holder to initiate the workflow.

    Message format:

    {\n    \"@type\": \"https://didcomm.org/issue-credential/%VER/propose-credential\",\n    \"@id\": \"<uuid of propose-message>\",\n    \"goal_code\": \"<goal-code>\",\n    \"comment\": \"<some comment>\",\n    \"credential_preview\": <json-ld object>,\n    \"formats\" : [\n        {\n            \"attach_id\" : \"<attach@id value>\",\n            \"format\" : \"<format-and-version>\"\n        }\n    ],\n    \"filters~attach\": [\n        {\n            \"@id\": \"<attachment identifier>\",\n            \"mime-type\": \"application/json\",\n            \"data\": {\n                \"base64\": \"<bytes for base64>\"\n            }\n        }\n    ]\n}\n

    Description of attributes:

    "},{"location":"aip2/0453-issue-credential-v2/#propose-attachment-registry","title":"Propose Attachment Registry","text":"Credential Format Format Value Link to Attachment Format Comment DIF Credential Manifest dif/credential-manifest@v1.0 propose-credential attachment format Linked Data Proof VC Detail aries/ld-proof-vc-detail@v1.0 ld-proof-vc-detail attachment format Hyperledger Indy Credential Filter hlindy/cred-filter@v2.0 cred filter format Hyperledger AnonCreds Credential Filter anoncreds/credential-filter@v1.0 Credential Filter format"},{"location":"aip2/0453-issue-credential-v2/#offer-credential","title":"Offer Credential","text":"

A message sent by the Issuer to the potential Holder, describing the credential they intend to offer and possibly the price they expect to be paid. In Hyperledger Indy, this message is required, because it forces the Issuer to make a cryptographic commitment to the set of fields in the final credential and thus prevents Issuers from inserting spurious data. In credential implementations where this message is optional, an Issuer can use it to negotiate the issuance following receipt of a request-credential message.

    Message Format:

    {\n    \"@type\": \"https://didcomm.org/issue-credential/%VER/offer-credential\",\n    \"@id\": \"<uuid of offer message>\",\n    \"goal_code\": \"<goal-code>\",\n    \"replacement_id\": \"<issuer unique id>\",\n    \"comment\": \"<some comment>\",\n    \"credential_preview\": <json-ld object>,\n    \"formats\" : [\n        {\n            \"attach_id\" : \"<attach@id value>\",\n            \"format\" : \"<format-and-version>\",\n        }\n    ],\n    \"offers~attach\": [\n        {\n            \"@id\": \"<attach@id value>\",\n            \"mime-type\": \"application/json\",\n            \"data\": {\n                \"base64\": \"<bytes for base64>\"\n            }\n        }\n    ]\n}\n

    Description of fields:

    It is possible for an Issuer to add a ~timing.expires_time decorator to this message to convey the idea that the offer will expire at a particular point in the future. Such behavior is not a special part of this protocol, and support for it is not a requirement of conforming implementations; the ~timing decorator is simply a general possibility for any DIDComm message. We mention it here just to note that the protocol can be enriched in composable ways.
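As an illustrative sketch (the timestamp format and expiry-check helper below are assumptions, not part of this protocol), the decorator simply rides along at the top level of the offer message:

```python
from datetime import datetime, timezone

offer = {
    '@type': 'https://didcomm.org/issue-credential/2.0/offer-credential',
    '@id': 'a7e1d1e5-4d70-4b2c-8a3e-1f0c2d3e4f5a',
    # The ~timing decorator conveys that this offer lapses at the stated time.
    '~timing': {'expires_time': '2024-01-15T18:42:01Z'},
}

def offer_expired(msg, now):
    # A hypothetical receiver-side check; an absent decorator means
    # no expiry was conveyed.
    expires = msg.get('~timing', {}).get('expires_time')
    if expires is None:
        return False
    return now >= datetime.fromisoformat(expires.replace('Z', '+00:00'))
```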

    "},{"location":"aip2/0453-issue-credential-v2/#offer-attachment-registry","title":"Offer Attachment Registry","text":"Credential Format Format Value Link to Attachment Format Comment DIF Credential Manifest dif/credential-manifest@v1.0 offer-credential attachment format Hyperledger Indy Credential Abstract hlindy/cred-abstract@v2.0 cred abstract format Linked Data Proof VC Detail aries/ld-proof-vc-detail@v1.0 ld-proof-vc-detail attachment format Hyperledger AnonCreds Credential Offer anoncreds/credential-offer@v1.0 Credential Offer format W3C VC - Data Integrity Proof Credential Offer didcomm/w3c-di-vc-offer@v0.1 Credential Offer format"},{"location":"aip2/0453-issue-credential-v2/#request-credential","title":"Request Credential","text":"

    This is a message sent by the potential Holder to the Issuer, to request the issuance of a credential. Where circumstances do not require a preceding Offer Credential message (e.g., there is no cost to issuance that the Issuer needs to explain in advance, and there is no need for cryptographic negotiation), this message initiates the protocol. When using the Hyperledger Indy AnonCreds verifiable credential format, this message can only be sent in response to an offer-credential message.

    Message Format:

    {\n    \"@type\": \"https://didcomm.org/issue-credential/%VER/request-credential\",\n    \"@id\": \"<uuid of request message>\",\n    \"goal_code\": \"<goal-code>\",\n    \"comment\": \"<some comment>\",\n    \"formats\" : [\n        {\n            \"attach_id\" : \"<attach@id value>\",\n            \"format\" : \"<format-and-version>\",\n        }\n    ],\n    \"requests~attach\": [\n        {\n            \"@id\": \"<attachment identifier>\",\n            \"mime-type\": \"application/json\",\n            \"data\": {\n                \"base64\": \"<bytes for base64>\"\n            }\n        },\n    ]\n}\n

    Description of Fields:

    "},{"location":"aip2/0453-issue-credential-v2/#request-attachment-registry","title":"Request Attachment Registry","text":"Credential Format Format Value Link to Attachment Format Comment DIF Credential Manifest dif/credential-manifest@v1.0 request-credential attachment format Hyperledger Indy Credential Request hlindy/cred-req@v2.0 cred request format Linked Data Proof VC Detail aries/ld-proof-vc-detail@v1.0 ld-proof-vc-detail attachment format Hyperledger AnonCreds Credential Request anoncreds/credential-request@v1.0 Credential Request format W3C VC - Data Integrity Proof Credential Request didcomm/w3c-di-vc-request@v0.1 Credential Request format"},{"location":"aip2/0453-issue-credential-v2/#issue-credential","title":"Issue Credential","text":"

    This message contains a verifiable credential being issued as an attached payload. It is sent in response to a valid Request Credential message.

    Message Format:

    {\n    \"@type\": \"https://didcomm.org/issue-credential/%VER/issue-credential\",\n    \"@id\": \"<uuid of issue message>\",\n    \"goal_code\": \"<goal-code>\",\n    \"replacement_id\": \"<issuer unique id>\",\n    \"comment\": \"<some comment>\",\n    \"formats\" : [\n        {\n            \"attach_id\" : \"<attach@id value>\",\n            \"format\" : \"<format-and-version>\",\n        }\n    ],\n    \"credentials~attach\": [\n        {\n            \"@id\": \"<attachment-id>\",\n            \"mime-type\": \"application/json\",\n            \"data\": {\n                \"base64\": \"<bytes for base64>\"\n            }\n        }\n    ]\n}\n

    Description of fields:

If the issuer wants an acknowledgement that the issued credential was accepted, this message must be decorated with the ~please-ack decorator using the OUTCOME acknowledgement request. Outcome in the context of this protocol means the acceptance of the credential in whole, i.e. the credential is verified and its contents are acknowledged. Note that this is different from the default behavior as described in 0317: Please ACK Decorator. It is then best practice for the new Holder to respond with an explicit ack message as described in the please ack decorator RFC.
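A sketch of that request/response pair follows; the decorator field spelling and the ack message type here follow RFC 0317 and RFC 0015 respectively as assumptions for illustration, not normative text of this protocol:

```python
issue_msg = {
    '@type': 'https://didcomm.org/issue-credential/2.0/issue-credential',
    '@id': 'b55d5b69-1c0f-4f6e-9a2b-7c8d9e0f1a2b',
    # Request an ack on the OUTCOME: credential verified and accepted.
    '~please_ack': {'on': ['OUTCOME']},
}

def build_ack(received):
    # The Holder's explicit ack threads back to the message requesting it.
    return {
        '@type': 'https://didcomm.org/notification/1.0/ack',
        'status': 'OK',
        '~thread': {'thid': received['@id']},
    }

ack = build_ack(issue_msg)
```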

    "},{"location":"aip2/0453-issue-credential-v2/#credentials-attachment-registry","title":"Credentials Attachment Registry","text":"Credential Format Format Value Link to Attachment Format Comment Linked Data Proof VC aries/ld-proof-vc@v1.0 ld-proof-vc attachment format Hyperledger Indy Credential hlindy/cred@v2.0 credential format Hyperledger AnonCreds Credential anoncreds/credential@v1.0 Credential format W3C VC - Data Integrity Proof Credential didcomm/w3c-di-vc@v0.1 Credential format"},{"location":"aip2/0453-issue-credential-v2/#adopted-problem-report","title":"Adopted Problem Report","text":"

    The problem-report message is adopted by this protocol. problem-report messages can be used by either party to indicate an error in the protocol.

    "},{"location":"aip2/0453-issue-credential-v2/#preview-credential","title":"Preview Credential","text":"

This is not a message but an inner object for other messages in this protocol. It is used to construct a preview of the data for the credential that is to be issued. Its schema follows:

    {\n    \"@type\": \"https://didcomm.org/issue-credential/%VER/credential-preview\",\n    \"attributes\": [\n        {\n            \"name\": \"<attribute name>\",\n            \"mime-type\": \"<type>\",\n            \"value\": \"<value>\"\n        },\n        // more attributes\n    ]\n}\n

    The main element is attributes. It is an array of (object) attribute specifications; the subsections below outline their semantics.
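A small sketch of building the preview (helper name hypothetical), showing the mandatory name/value pair and the optional mime-type:

```python
def preview_attribute(name, value, mime_type=None):
    # name and value are mandatory; mime-type is optional and is simply
    # omitted when not supplied.
    attr = {'name': name, 'value': value}
    if mime_type is not None:
        attr['mime-type'] = mime_type
    return attr

preview = {
    '@type': 'https://didcomm.org/issue-credential/2.0/credential-preview',
    'attributes': [
        preview_attribute('favourite_drink', 'martini'),
        preview_attribute('photo', '<base64-encoded bytes>', 'image/png'),
    ],
}
```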

    "},{"location":"aip2/0453-issue-credential-v2/#attribute-name","title":"Attribute Name","text":"

    The mandatory \"name\" key maps to the attribute name as a string.

    "},{"location":"aip2/0453-issue-credential-v2/#mime-type-and-value","title":"MIME Type and Value","text":"

The optional mime-type advises the issuer how to render a binary attribute, so that its content can be judged for applicability before issuing a credential containing it. Its value parses case-insensitively in keeping with the MIME type semantics of RFC 2045. If mime-type is missing, its value is null.
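For instance, a consumer can normalize the field before comparison (a sketch; the helper name is hypothetical):

```python
def normalize_mime_type(attr):
    # MIME types compare case-insensitively (RFC 2045); a missing
    # mime-type is reported as None (null).
    mt = attr.get('mime-type')
    return mt.lower() if mt is not None else None
```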

    The mandatory value holds the attribute value:

    "},{"location":"aip2/0453-issue-credential-v2/#threading","title":"Threading","text":"

    Threading can be used to initiate a sub-protocol during an issue credential protocol instance. For example, during credential issuance, the Issuer may initiate a child message thread to execute the Present Proof sub-protocol to have the potential Holder (now acting as a Prover) prove attributes about themselves before issuing the credential. Depending on circumstances, this might be a best practice for preventing credential fraud at issuance time.

    If threading were added to all of the above messages, a ~thread decorator would be present, and later messages in the flow would reference the @id of earlier messages to stitch the flow into a single coherent sequence. Details about threading can be found in the 0008: Message ID and Threading RFC.
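A sketch of that stitching (helper name hypothetical): the first message's @id becomes the thread id carried by every later message in the flow:

```python
def reply_in_thread(parent, reply):
    # Per RFC 0008, later messages carry ~thread.thid equal to the @id
    # of the first message in the thread; replies to replies inherit it.
    thid = parent.get('~thread', {}).get('thid', parent['@id'])
    out = dict(reply)
    out['~thread'] = {'thid': thid}
    return out

offer = {'@type': 'https://didcomm.org/issue-credential/2.0/offer-credential',
         '@id': 'offer-1'}
request = reply_in_thread(offer, {
    '@type': 'https://didcomm.org/issue-credential/2.0/request-credential',
    '@id': 'request-1'})
issue = reply_in_thread(request, {
    '@type': 'https://didcomm.org/issue-credential/2.0/issue-credential',
    '@id': 'issue-1'})
```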

    "},{"location":"aip2/0453-issue-credential-v2/#limitations","title":"Limitations","text":"

Because the ecosystem may lack smart contracts, the operation \"issue credential after payment received\" is not atomic. A malicious Issuer could take payment first and then fail to issue the credential. However, such behavior is easily detected, and networks of this type should apply an appropriate penalty.

    "},{"location":"aip2/0453-issue-credential-v2/#negotiation-and-preview","title":"Negotiation and Preview","text":"

    Negotiation prior to issuing the credential can be done using the offer-credential and propose-credential messages. A common negotiation use case would be about the data to go into the credential. For that, the credential_preview element is used.

    "},{"location":"aip2/0453-issue-credential-v2/#drawbacks","title":"Drawbacks","text":"

    None documented

    "},{"location":"aip2/0453-issue-credential-v2/#rationale-and-alternatives","title":"Rationale and alternatives","text":""},{"location":"aip2/0453-issue-credential-v2/#prior-art","title":"Prior art","text":"

    See RFC 0036 Issue Credential, v1.x.

    "},{"location":"aip2/0453-issue-credential-v2/#unresolved-questions","title":"Unresolved questions","text":""},{"location":"aip2/0453-issue-credential-v2/#implementations","title":"Implementations","text":"

    The following lists the implementations (if any) of this RFC. Please do a pull request to add your implementation. If the implementation is open source, include a link to the repo or to the implementation within the repo. Please be consistent in the \"Name\" field so that a mechanical processing of the RFCs can generate a list of all RFCs supported by an Aries implementation.

    Name / Link Implementation Notes"},{"location":"aip2/0454-present-proof-v2/","title":"Aries RFC 0454: Present Proof Protocol 2.0","text":""},{"location":"aip2/0454-present-proof-v2/#version-change-log","title":"Version Change Log","text":"

For a period of time, versions 2.1 and 2.2 were defined in this RFC. Those definitions were added prior to any implementations, and to date, there are no known implementations available or planned. An attempt at implementing version 2.1 of the associated \"issue multiple credentials\" capability was not merged into the main branch of Aries Cloud Agent Python; it was deemed overly complicated and not worth the effort for what amounts to an edge case (presenting multiple presentations of the same type in a single protocol instance). Further, there is a version 3.0 of this protocol that has been specified and implemented that does not include these capabilities. Thus, a decision was made to remove versions 2.1 and 2.2 as not accepted by the community and overly complicated to both implement and migrate from. Those interested in seeing how those capabilities were specified can look at this protocol before they were removed.

    "},{"location":"aip2/0454-present-proof-v2/#20-alignment-with-rfc-0453-issue-credential","title":"2.0 - Alignment with RFC 0453 Issue Credential","text":""},{"location":"aip2/0454-present-proof-v2/#summary","title":"Summary","text":"

    A protocol supporting a general purpose verifiable presentation exchange regardless of the specifics of the underlying verifiable presentation request and verifiable presentation format.

    "},{"location":"aip2/0454-present-proof-v2/#motivation","title":"Motivation","text":"

    We need a standard protocol for a verifier to request a presentation from a prover, and for the prover to respond by presenting a proof to the verifier. When doing that exchange, we want to provide a mechanism for the participants to negotiate the underlying format and content of the proof.

    "},{"location":"aip2/0454-present-proof-v2/#tutorial","title":"Tutorial","text":""},{"location":"aip2/0454-present-proof-v2/#name-and-version","title":"Name and Version","text":"

    present-proof, version 2.0

    "},{"location":"aip2/0454-present-proof-v2/#key-concepts","title":"Key Concepts","text":"

    This protocol is about the messages to support the presentation of verifiable claims, not about the specifics of particular verifiable presentation formats. DIDComm attachments are deliberately used in messages to make the protocol agnostic to specific verifiable presentation format payloads. Links are provided in the message data element descriptions to details of specific verifiable presentation implementation data structures.

    Diagrams in this protocol were made in draw.io. To make changes:

    "},{"location":"aip2/0454-present-proof-v2/#roles","title":"Roles","text":"

    The roles are verifier and prover. The verifier requests the presentation of a proof and verifies the presentation, while the prover prepares the proof and presents it to the verifier. Optionally, although unlikely from a business sense, the prover may initiate an instance of the protocol using the propose-presentation message.

    "},{"location":"aip2/0454-present-proof-v2/#goals","title":"Goals","text":"

When the goals of each role are not clear from context, goal codes may be explicitly included in protocol messages. This is particularly helpful to differentiate between credentials passed between the same parties for several different reasons. A goal code, once included, should be considered to apply to the entire thread and need not be repeated on each message. The goal code may be changed by including a new code in a later message. All goal codes are optional, and there is no default.

    "},{"location":"aip2/0454-present-proof-v2/#states","title":"States","text":"

    The following states are defined and included in the state transition table below.

    "},{"location":"aip2/0454-present-proof-v2/#states-for-verifier","title":"States for Verifier","text":""},{"location":"aip2/0454-present-proof-v2/#states-for-prover","title":"States for Prover","text":"

    For the most part, these states map onto the transitions shown in both the state transition table above, and in the choreography diagram (below) in obvious ways. However, a few subtleties are worth highlighting:

    "},{"location":"aip2/0454-present-proof-v2/#choreography-diagram","title":"Choreography Diagram","text":""},{"location":"aip2/0454-present-proof-v2/#messages","title":"Messages","text":"

    The present proof protocol consists of these messages:

    In addition, the ack and problem-report messages are adopted into the protocol for confirmation and error handling.

    The messages that include ~attach attachments may use any form of the embedded attachment. In the examples below, the forms of the attachment are arbitrary.

    The ~attach array is to be used to enable a single presentation to be requested/delivered in different verifiable presentation formats. The ability to have multiple attachments must not be used to request/deliver multiple different presentations in a single instance of the protocol.

    "},{"location":"aip2/0454-present-proof-v2/#propose-presentation","title":"Propose Presentation","text":"

    An optional message sent by the prover to the verifier to initiate a proof presentation process, or in response to a request-presentation message when the prover wants to propose using a different presentation format or request. Schema:

    {\n    \"@type\": \"https://didcomm.org/present-proof/%VER/propose-presentation\",\n    \"@id\": \"<uuid-propose-presentation>\",\n    \"goal_code\": \"<goal-code>\",\n    \"comment\": \"some comment\",\n    \"formats\" : [\n        {\n            \"attach_id\" : \"<attach@id value>\",\n            \"format\" : \"<format-and-version>\",\n        }\n    ],\n    \"proposals~attach\": [\n        {\n            \"@id\": \"<attachment identifier>\",\n            \"mime-type\": \"application/json\",\n            \"data\": {\n                \"json\": \"<json>\"\n            }\n        }\n    ]\n}\n

    Description of fields:

If proposals~attach is not provided, the attach_id items in the formats array should also be omitted. That form of the propose-presentation message indicates the presentation formats supported by the prover, independent of the verifiable presentation request content.
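That rule can be checked mechanically; a sketch with a hypothetical validator:

```python
def formats_consistent(msg, attach_key='proposals~attach'):
    # With no attachments, formats entries must not carry attach_id;
    # otherwise every attach_id must resolve to an attachment's @id.
    attachment_ids = {a['@id'] for a in msg.get(attach_key, [])}
    for entry in msg.get('formats', []):
        attach_id = entry.get('attach_id')
        if not attachment_ids:
            if attach_id is not None:
                return False
        elif attach_id not in attachment_ids:
            return False
    return True
```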

    "},{"location":"aip2/0454-present-proof-v2/#negotiation-and-preview","title":"Negotiation and Preview","text":"

    Negotiation prior to the delivery of the presentation can be done using the propose-presentation and request-presentation messages. The common negotiation use cases would be about the claims to go into the presentation and the format of the verifiable presentation.

    "},{"location":"aip2/0454-present-proof-v2/#propose-attachment-registry","title":"Propose Attachment Registry","text":"Presentation Format Format Value Link to Attachment Format Comment Hyperledger Indy Proof Req hlindy/proof-req@v2.0 proof request format Used to propose as well as request proofs. DIF Presentation Exchange dif/presentation-exchange/definitions@v1.0 propose-presentation attachment format Hyperledger AnonCreds Proof Request anoncreds/proof-request@v1.0 Proof Request format Used to propose as well as request proofs."},{"location":"aip2/0454-present-proof-v2/#request-presentation","title":"Request Presentation","text":"

    From a verifier to a prover, the request-presentation message describes values that need to be revealed and predicates that need to be fulfilled. Schema:

    {\n    \"@type\": \"https://didcomm.org/present-proof/%VER/request-presentation\",\n    \"@id\": \"<uuid-request>\",\n    \"goal_code\": \"<goal-code>\",\n    \"comment\": \"some comment\",\n    \"will_confirm\": true,\n    \"formats\" : [\n        {\n            \"attach_id\" : \"<attach@id value>\",\n            \"format\" : \"<format-and-version>\",\n        }\n    ],\n    \"request_presentations~attach\": [\n        {\n            \"@id\": \"<attachment identifier>\",\n            \"mime-type\": \"application/json\",\n            \"data\":  {\n                \"base64\": \"<base64 data>\"\n            }\n        }\n    ]\n}\n

    Description of fields:

    "},{"location":"aip2/0454-present-proof-v2/#presentation-request-attachment-registry","title":"Presentation Request Attachment Registry","text":"Presentation Format Format Value Link to Attachment Format Comment Hyperledger Indy Proof Req hlindy/proof-req@v2.0 proof request format Used to propose as well as request proofs. DIF Presentation Exchange dif/presentation-exchange/definitions@v1.0 propose-presentation attachment format Hyperledger AnonCreds Proof Request anoncreds/proof-request@v1.0 Proof Request format Used to propose as well as request proofs."},{"location":"aip2/0454-present-proof-v2/#presentation","title":"Presentation","text":"

    This message is a response to a Presentation Request message and contains signed presentations. Schema:

    {\n    \"@type\": \"https://didcomm.org/present-proof/%VER/presentation\",\n    \"@id\": \"<uuid-presentation>\",\n    \"goal_code\": \"<goal-code>\",\n    \"comment\": \"some comment\",\n    \"formats\" : [\n        {\n            \"attach_id\" : \"<attach@id value>\",\n            \"format\" : \"<format-and-version>\",\n        }\n    ],\n    \"presentations~attach\": [\n        {\n            \"@id\": \"<attachment identifier>\",\n            \"mime-type\": \"application/json\",\n            \"data\": {\n                \"sha256\": \"f8dca1d901d18c802e6a8ce1956d4b0d17f03d9dc5e4e1f618b6a022153ef373\",\n                \"links\": [\"https://ibb.co/TtgKkZY\"]\n            }\n        }\n    ]\n}\n

    Description of fields:

    If the prover wants an acknowledgement that the presentation was accepted, this message may be decorated with the ~please-ack decorator using the OUTCOME acknowledgement request. This is not necessary if the verifier has indicated it will send an ack-presentation using the will_confirm property. Outcome in the context of this protocol is the definition of \"successful\" as described in Ack Presentation. Note that this is different from the default behavior as described in 0317: Please ACK Decorator. It is then best practice for the new Verifier to respond with an explicit ack message as described in the please ack decorator RFC.

    "},{"location":"aip2/0454-present-proof-v2/#presentations-attachment-registry","title":"Presentations Attachment Registry","text":"Presentation Format Format Value Link to Attachment Format Comment Hyperledger Indy Proof hlindy/proof@v2.0 proof format DIF Presentation Exchange dif/presentation-exchange/submission@v1.0 propose-presentation attachment format Hyperledger AnonCreds Proof anoncreds/proof@v1.0 Proof format"},{"location":"aip2/0454-present-proof-v2/#ack-presentation","title":"Ack Presentation","text":"

    A message from the verifier to the prover that the Present Proof protocol was completed successfully and is now in the done state. The message is an adopted ack from the RFC 0015 acks protocol. The definition of \"successful\" in this protocol means the acceptance of the presentation in whole, i.e. the proof is verified and the contents of the proof are acknowledged.

    "},{"location":"aip2/0454-present-proof-v2/#problem-report","title":"Problem Report","text":"

    A message from the verifier to the prover that follows the presentation message to indicate that the Present Proof protocol was completed unsuccessfully and is now in the abandoned state. The message is an adopted problem-report from the RFC 0015 report-problem protocol. The definition of \"unsuccessful\" from a business sense is up to the verifier. The elements of the problem-report message can provide information to the prover about why the protocol instance was unsuccessful.

    Either party may send a problem-report message earlier in the flow to terminate the protocol before its normal conclusion.

    "},{"location":"aip2/0454-present-proof-v2/#reference","title":"Reference","text":"

    Details are covered in the Tutorial section.

    "},{"location":"aip2/0454-present-proof-v2/#drawbacks","title":"Drawbacks","text":"

    The Indy format of the proposal attachment as proposed above does not allow nesting of logic along the lines of \"A and either B or C if D, otherwise A and B\", nor cross-credential options such as proposing a legal name issued by either (for example) a specific financial institution or government entity.

    The verifiable presentation standardization work being conducted in parallel to this in DIF and the W3C Credentials Community Group (CCG) should be included in at least the Registry tables of this document, and ideally used to eliminate the need for presentation format-specific options.

    "},{"location":"aip2/0454-present-proof-v2/#rationale-and-alternatives","title":"Rationale and alternatives","text":""},{"location":"aip2/0454-present-proof-v2/#prior-art","title":"Prior art","text":"

    The previous major version of this protocol is RFC 0037 Present Proof protocol and implementations.

    "},{"location":"aip2/0454-present-proof-v2/#unresolved-questions","title":"Unresolved questions","text":""},{"location":"aip2/0454-present-proof-v2/#implementations","title":"Implementations","text":"

    The following lists the implementations (if any) of this RFC. Please do a pull request to add your implementation. If the implementation is open source, include a link to the repo or to the implementation within the repo. Please be consistent in the \"Name\" field so that a mechanical processing of the RFCs can generate a list of all RFCs supported by an Aries implementation.

    Name / Link Implementation Notes"},{"location":"aip2/0510-dif-pres-exch-attach/","title":"Aries RFC 0510: Presentation-Exchange Attachment format for requesting and presenting proofs","text":""},{"location":"aip2/0510-dif-pres-exch-attach/#summary","title":"Summary","text":"

    This RFC registers three attachment formats for use in the present-proof V2 protocol based on the Decentralized Identity Foundation's (DIF) Presentation Exchange specification (P-E). Two of these formats define containers for a presentation-exchange request object and another options object carrying additional parameters, while the third format is just a vessel for the final presentation_submission verifiable presentation transferred from the Prover to the Verifier.

    Presentation Exchange defines a data format capable of articulating a rich set of proof requirements from Verifiers, and also provides a means of describing the formats in which Provers must submit those proofs.

A Verifier defines their requirements in a presentation_definition containing input_descriptors that describe the credential(s) the proof(s) must be derived from, as well as a rich set of operators that place constraints on those proofs (e.g. \"must be issued from issuer X\" or \"age over X\", etc.).

    The Verifiable Presentation format of Presentation Submissions is used as opposed to OIDC tokens or CHAPI objects. For an alternative on how to tunnel OIDC messages over DIDComm, see HTTP-Over-DIDComm. CHAPI is an alternative transport to DIDComm.

    "},{"location":"aip2/0510-dif-pres-exch-attach/#motivation","title":"Motivation","text":"

    The Presentation Exchange specification (P-E) possesses a rich language for expressing a Verifier's criterion.

    P-E lends itself well to several transport mediums due to its limited scope as a data format, and is easily transported over DIDComm.

    It is furthermore desirable to make use of specifications developed in an open standards body.

    "},{"location":"aip2/0510-dif-pres-exch-attach/#tutorial","title":"Tutorial","text":"

    Complete examples of messages are provided in the reference section.

    The Verifier sends a request-presentation to the Prover containing a presentation_definition, along with a domain and challenge the Prover must sign over in the proof.

    The Prover can optionally respond to the Verifier's request-presentation with a propose-presentation message containing \"Input Descriptors\" that describe the proofs they can provide. The contents of the attachment is just the input_descriptors attribute of the presentation_definition object.

    The Prover responds with a presentation message containing a presentation_submission.

    "},{"location":"aip2/0510-dif-pres-exch-attach/#reference","title":"Reference","text":""},{"location":"aip2/0510-dif-pres-exch-attach/#propose-presentation-attachment-format","title":"propose-presentation attachment format","text":"

    Format identifier: dif/presentation-exchange/definitions@v1.0

    "},{"location":"aip2/0510-dif-pres-exch-attach/#examples-propose-presentation","title":"Examples: propose-presentation","text":"Complete message example
    {\n    \"@type\": \"https://didcomm.org/present-proof/%VER/propose-presentation\",\n    \"@id\": \"fce30ed1-96f8-44c9-95cf-b274288009dc\",\n    \"comment\": \"some comment\",\n    \"formats\" : [{\n        \"attach_id\" : \"143c458d-1b1c-40c7-ab85-4d16808ddf0a\",\n        \"format\" : \"dif/presentation-exchange/definitions@v1.0\"\n    }],\n    \"proposal~attach\": [{\n        \"@id\": \"143c458d-1b1c-40c7-ab85-4d16808ddf0a\",\n        \"mime-type\": \"application/json\",\n        \"data\": {\n            \"json\": {\n                \"input_descriptors\": [{\n                    \"id\": \"citizenship_input\",\n                    \"name\": \"US Passport\",\n                    \"group\": [\"A\"],\n                    \"schema\": [{\n                        \"uri\": \"hub://did:foo:123/Collections/schema.us.gov/passport.json\"\n                    }],\n                    \"constraints\": {\n                        \"fields\": [{\n                            \"path\": [\"$.credentialSubject.birth_date\", \"$.vc.credentialSubject.birth_date\", \"$.birth_date\"],\n                            \"filter\": {\n                                \"type\": \"date\",\n                                \"minimum\": \"1999-5-16\"\n                            }\n                        }]\n                    }\n                }]\n            }\n        }\n    }]\n}\n
    "},{"location":"aip2/0510-dif-pres-exch-attach/#request-presentation-attachment-format","title":"request-presentation attachment format","text":"

    Format identifier: dif/presentation-exchange/definitions@v1.0

Since the format identifier defined above is the same as the one used in the propose-presentation message, it's recommended to consider both the message @type and the format to accurately understand the contents of the attachment.

    The contents of the attachment is a JSON object containing the Verifier's presentation definition and an options object with proof options:

    {\n    \"options\": {\n        \"challenge\": \"...\",\n        \"domain\": \"...\",\n    },\n    \"presentation_definition\": {\n        // presentation definition object\n    }\n}\n
    "},{"location":"aip2/0510-dif-pres-exch-attach/#the-options-object","title":"The options object","text":"

    options is a container of additional parameters required for the Prover to fulfill the Verifier's request.

    Available options are:

    Name Status Description challenge RECOMMENDED (for LD proofs) Random seed provided by the Verifier for LD Proofs. domain RECOMMENDED (for LD proofs) The operational domain of the requested LD proof."},{"location":"aip2/0510-dif-pres-exch-attach/#examples-request-presentation","title":"Examples: request-presentation","text":"Complete message example requesting a verifiable presentation with proof type Ed25519Signature2018
    {\n    \"@type\": \"https://didcomm.org/present-proof/%VER/request-presentation\",\n    \"@id\": \"0ac534c8-98ed-4fe3-8a41-3600775e1e92\",\n    \"comment\": \"some comment\",\n    \"formats\" : [{\n        \"attach_id\" : \"ed7d9b1f-9eed-4bde-b81c-3aa7485cf947\",\n        \"format\" : \"dif/presentation-exchange/definitions@v1.0\"\n    }],\n    \"request_presentations~attach\": [{\n        \"@id\": \"ed7d9b1f-9eed-4bde-b81c-3aa7485cf947\",\n        \"mime-type\": \"application/json\",\n        \"data\":  {\n            \"json\": {\n                \"options\": {\n                    \"challenge\": \"23516943-1d79-4ebd-8981-623f036365ef\",\n                    \"domain\": \"us.gov/DriversLicense\"\n                },\n                \"presentation_definition\": {\n                    \"input_descriptors\": [{\n                        \"id\": \"citizenship_input\",\n                        \"name\": \"US Passport\",\n                        \"group\": [\"A\"],\n                        \"schema\": [{\n                            \"uri\": \"hub://did:foo:123/Collections/schema.us.gov/passport.json\"\n                        }],\n                        \"constraints\": {\n                            \"fields\": [{\n                                \"path\": [\"$.credentialSubject.birth_date\", \"$.birth_date\"],\n                                \"filter\": {\n                                    \"type\": \"date\",\n                                    \"minimum\": \"1999-5-16\"\n                                }\n                            }]\n                        }\n                    }],\n                    \"format\": {\n                        \"ldp_vp\": {\n                            \"proof_type\": [\"Ed25519Signature2018\"]\n                        }\n                    }\n                }\n            }\n        }\n    }]\n}\n
    The same example but requesting the verifiable presentation with proof type BbsBlsSignatureProof2020 instead
    {\n    \"@type\": \"https://didcomm.org/present-proof/%VER/request-presentation\",\n    \"@id\": \"0ac534c8-98ed-4fe3-8a41-3600775e1e92\",\n    \"comment\": \"some comment\",\n    \"formats\" : [{\n        \"attach_id\" : \"ed7d9b1f-9eed-4bde-b81c-3aa7485cf947\",\n        \"format\" : \"dif/presentation-exchange/definitions@v1.0\"\n    }],\n    \"request_presentations~attach\": [{\n        \"@id\": \"ed7d9b1f-9eed-4bde-b81c-3aa7485cf947\",\n        \"mime-type\": \"application/json\",\n        \"data\":  {\n            \"json\": {\n                \"options\": {\n                    \"challenge\": \"23516943-1d79-4ebd-8981-623f036365ef\",\n                    \"domain\": \"us.gov/DriversLicense\"\n                },\n                \"presentation_definition\": {\n                    \"input_descriptors\": [{\n                        \"id\": \"citizenship_input\",\n                        \"name\": \"US Passport\",\n                        \"group\": [\"A\"],\n                        \"schema\": [{\n                            \"uri\": \"hub://did:foo:123/Collections/schema.us.gov/passport.json\"\n                        }],\n                        \"constraints\": {\n                            \"fields\": [{\n                                \"path\": [\"$.credentialSubject.birth_date\", \"$.vc.credentialSubject.birth_date\", \"$.birth_date\"],\n                                \"filter\": {\n                                    \"type\": \"date\",\n                                    \"minimum\": \"1999-5-16\"\n                                }\n                            }],\n                            \"limit_disclosure\": \"required\"\n                        }\n                    }],\n                    \"format\": {\n                        \"ldp_vc\": {\n                            \"proof_type\": [\"BbsBlsSignatureProof2020\"]\n                        }\n                    }\n                }\n            }\n        }\n    }]\n}\n
    "},{"location":"aip2/0510-dif-pres-exch-attach/#presentation-attachment-format","title":"presentation attachment format","text":"

    Format identifier: dif/presentation-exchange/submission@v1.0

    The content of the attachment is a Presentation Submission in a standard Verifiable Presentation format containing the requested proofs.

    "},{"location":"aip2/0510-dif-pres-exch-attach/#examples-presentation","title":"Examples: presentation","text":"Complete message example
    {\n    \"@type\": \"https://didcomm.org/present-proof/%VER/presentation\",\n    \"@id\": \"f1ca8245-ab2d-4d9c-8d7d-94bf310314ef\",\n    \"comment\": \"some comment\",\n    \"formats\" : [{\n        \"attach_id\" : \"2a3f1c4c-623c-44e6-b159-179048c51260\",\n        \"format\" : \"dif/presentation-exchange/submission@v1.0\"\n    }],\n    \"presentations~attach\": [{\n        \"@id\": \"2a3f1c4c-623c-44e6-b159-179048c51260\",\n        \"mime-type\": \"application/ld+json\",\n        \"data\": {\n            \"json\": {\n                \"@context\": [\n                    \"https://www.w3.org/2018/credentials/v1\",\n                    \"https://identity.foundation/presentation-exchange/submission/v1\"\n                ],\n                \"type\": [\n                    \"VerifiablePresentation\",\n                    \"PresentationSubmission\"\n                ],\n                \"presentation_submission\": {\n                    \"descriptor_map\": [{\n                        \"id\": \"citizenship_input\",\n                        \"path\": \"$.verifiableCredential.[0]\"\n                    }]\n                },\n                \"verifiableCredential\": [{\n                    \"@context\": \"https://www.w3.org/2018/credentials/v1\",\n                    \"id\": \"https://eu.com/claims/DriversLicense\",\n                    \"type\": [\"EUDriversLicense\"],\n                    \"issuer\": \"did:foo:123\",\n                    \"issuanceDate\": \"2010-01-01T19:73:24Z\",\n                    \"credentialSubject\": {\n                        \"id\": \"did:example:ebfeb1f712ebc6f1c276e12ec21\",\n                        \"license\": {\n                            \"number\": \"34DGE352\",\n                            \"dob\": \"07/13/80\"\n                        }\n                    },\n                    \"proof\": {\n                        \"type\": \"RsaSignature2018\",\n                        \"created\": \"2017-06-18T21:19:10Z\",\n                    
    \"proofPurpose\": \"assertionMethod\",\n                        \"verificationMethod\": \"https://example.edu/issuers/keys/1\",\n                        \"jws\": \"...\"\n                    }\n                }],\n                \"proof\": {\n                    \"type\": \"RsaSignature2018\",\n                    \"created\": \"2018-09-14T21:19:10Z\",\n                    \"proofPurpose\": \"authentication\",\n                    \"verificationMethod\": \"did:example:ebfeb1f712ebc6f1c276e12ec21#keys-1\",\n                    \"challenge\": \"1f44d55f-f161-4938-a659-f8026467f126\",\n                    \"domain\": \"4jt78h47fh47\",\n                    \"jws\": \"...\"\n                }\n            }\n        }\n    }]\n}\n
    "},{"location":"aip2/0510-dif-pres-exch-attach/#supported-features-of-presentation-exchange","title":"Supported Features of Presentation-Exchange","text":"

    Level of support for Presentation-Exchange features:

    Feature Notes presentation_definition.input_descriptors.id presentation_definition.input_descriptors.name presentation_definition.input_descriptors.purpose presentation_definition.input_descriptors.schema.uri URI for the credential's schema. presentation_definition.input_descriptors.constraints.fields.path Array of JSONPath string expressions as defined in section 8. REQUIRED as per the spec. presentation_definition.input_descriptors.constraints.fields.filter JSONSchema descriptor. presentation_definition.input_descriptors.constraints.limit_disclosure preferred or required as defined in the spec and as supported by the Holder and Verifier proof mechanisms. Note that the Holder MUST have credentials with cryptographic proof suites that are capable of selective disclosure in order to respond to a request with limit_disclosure: \"required\". See RFC0593 for appropriate crypto suites. presentation_definition.input_descriptors.constraints.is_holder preferred or required as defined in the spec. Note that this feature allows the Holder to present credentials with a different subject identifier than the DID used to establish the DIDComm connection with the Verifier. presentation_definition.format For JSONLD-based credentials: ldp_vc and ldp_vp. presentation_definition.format.proof_type For JSONLD-based credentials: Ed25519Signature2018, BbsBlsSignature2020, and JsonWebSignature2020. When specifying ldp_vc, BbsBlsSignatureProof2020 may also be used."},{"location":"aip2/0510-dif-pres-exch-attach/#proof-formats","title":"Proof Formats","text":""},{"location":"aip2/0510-dif-pres-exch-attach/#constraints","title":"Constraints","text":"

    Verifiable Presentations MUST be produced and consumed using the JSON-LD syntax.

    The proof types defined below MUST be registered in the Linked Data Cryptographic Suite Registry.

    The value of any credentialSubject.id in a credential MUST be a Decentralized Identifier (DID) conforming to the DID Syntax if present. This allows the Holder to authenticate as the credential's subject if required by the Verifier (see the is_holder property above). The Holder authenticates as the credential's subject by attaching an LD Proof on the enclosing Verifiable Presentation.

    "},{"location":"aip2/0510-dif-pres-exch-attach/#proof-formats-on-credentials","title":"Proof Formats on Credentials","text":"

    Aries agents implementing this RFC MUST support the formats outlined in RFC0593 for proofs on Verifiable Credentials.

    "},{"location":"aip2/0510-dif-pres-exch-attach/#proof-formats-on-presentations","title":"Proof Formats on Presentations","text":"

    Aries agents implementing this RFC MUST support the formats outlined below for proofs on Verifiable Presentations.

    "},{"location":"aip2/0510-dif-pres-exch-attach/#ed25519signature2018","title":"Ed25519Signature2018","text":"

    Specification.

    Request Parameters:

    Result:

    A Verifiable Presentation of type Presentation Submission containing the credentials requested under the verifiableCredential property and a proof property of type Ed25519Signature2018.

    Example
    {\n    \"@context\": [\n        \"https://www.w3.org/2018/credentials/v1\",\n        \"https://identity.foundation/presentation-exchange/submission/v1\"\n    ],\n    \"type\": [\n        \"VerifiablePresentation\",\n        \"PresentationSubmission\"\n    ],\n    \"presentation_submission\": {\n        \"descriptor_map\": [{\n           \"id\": \"citizenship_input\",\n           \"path\": \"$.verifiableCredential.[0]\"\n        }]\n    },\n    \"verifiableCredential\": [{\n        \"@context\": \"https://www.w3.org/2018/credentials/v1\",\n        \"id\": \"https://eu.com/claims/DriversLicense\",\n        \"type\": [\n            \"EUDriversLicense\"\n        ],\n        \"issuer\": \"did:foo:123\",\n        \"issuanceDate\": \"2010-01-01T19:73:24Z\",\n        \"credentialSubject\": {\n            \"id\": \"did:example:ebfeb1f712ebc6f1c276e12ec21\",\n            \"license\": {\n            \"number\": \"34DGE352\",\n            \"dob\": \"07/13/80\"\n          }\n        },\n        \"proof\": {\n            \"type\": \"RsaSignature2018\",\n            \"created\": \"2017-06-18T21:19:10Z\",\n            \"proofPurpose\": \"assertionMethod\",\n            \"verificationMethod\": \"https://example.edu/issuers/keys/1\",\n            \"jws\": \"...\"\n        }\n    }],\n    \"proof\": {\n      \"type\": \"Ed25519Signature2018\",\n      \"proofPurpose\": \"authentication\",\n      \"created\": \"2017-09-23T20:21:34Z\",\n      \"verificationMethod\": \"did:example:123456#key1\",\n      \"challenge\": \"2bbgh3dgjg2302d-d2b3gi423d42\",\n      \"domain\": \"example.org\",\n      \"jws\": \"eyJ0eXAiOiJK...gFWFOEjXk\"\n  }\n}\n
    "},{"location":"aip2/0510-dif-pres-exch-attach/#bbsblssignature2020","title":"BbsBlsSignature2020","text":"

    Specification.

    Associated RFC: RFC0646.

    Request Parameters: * presentation_definition.format: ldp_vp * presentation_definition.format.proof_type: BbsBlsSignature2020 * options.challenge: (Optional) a random string value generated by the Verifier * options.domain: (Optional) a string value specified by the Verifier

    Result:

    A Verifiable Presentation of type Presentation Submission containing the credentials requested under the verifiableCredential property and a proof property of type BbsBlsSignature2020.

    Example
    {\n    \"@context\": [\n        \"https://www.w3.org/2018/credentials/v1\",\n        \"https://w3id.org/security/v2\",\n        \"https://w3id.org/security/bbs/v1\",\n        \"https://identity.foundation/presentation-exchange/submission/v1\"\n    ],\n    \"type\": [\n        \"VerifiablePresentation\",\n        \"PresentationSubmission\"\n    ],\n    \"presentation_submission\": {\n        \"descriptor_map\": [{\n            \"id\": \"citizenship_input\",\n            \"path\": \"$.verifiableCredential.[0]\"\n        }]\n    },\n    \"verifiableCredential\": [{\n        \"@context\": \"https://www.w3.org/2018/credentials/v1\",\n        \"id\": \"https://eu.com/claims/DriversLicense\",\n        \"type\": [\"EUDriversLicense\"],\n        \"issuer\": \"did:foo:123\",\n        \"issuanceDate\": \"2010-01-01T19:73:24Z\",\n        \"credentialSubject\": {\n            \"id\": \"did:example:ebfeb1f712ebc6f1c276e12ec21\",\n            \"license\": {\n                \"number\": \"34DGE352\",\n                \"dob\": \"07/13/80\"\n            }\n       },\n       \"proof\": {\n           \"type\": \"BbsBlsSignatureProof2020\",\n           \"created\": \"2020-04-25\",\n           \"verificationMethod\": \"did:example:489398593#test\",\n           \"proofPurpose\": \"assertionMethod\",\n           \"signature\": \"F9uMuJzNBqj4j+HPTvWjUN/MNoe6KRH0818WkvDn2Sf7kg1P17YpNyzSB+CH57AWDFunU13tL8oTBDpBhODckelTxHIaEfG0rNmqmjK6DOs0/ObksTZh7W3OTbqfD2h4C/wqqMQHSWdXXnojwyFDEg==\"\n       }\n    }],\n    \"proof\": {\n        \"type\": \"BbsBlsSignature2020\",\n        \"created\": \"2020-04-25\",\n        \"verificationMethod\": \"did:example:489398593#test\",\n        \"proofPurpose\": \"authentication\",\n        \"proofValue\": \"F9uMuJzNBqj4j+HPTvWjUN/MNoe6KRH0818WkvDn2Sf7kg1P17YpNyzSB+CH57AWDFunU13tL8oTBDpBhODckelTxHIaEfG0rNmqmjK6DOs0/ObksTZh7W3OTbqfD2h4C/wqqMQHSWdXXnojwyFDEg==\",\n        \"requiredRevealStatements\": [ 4, 5 ]\n    }\n}\n

    Note: The above example is for illustrative purposes. In particular, note that whether a Verifier requests a proof_type of BbsBlsSignature2020 has no bearing on whether the Holder is required to present credentials with proofs of type BbsBlsSignatureProof2020. The choice of proof types on the credentials is constrained by a) the available types registered in RFC0593 and b) additional constraints placed on them due to other aspects of the proof requested by the Verifier, such as requiring limited disclosure with the limit_disclosure property. In such a case, a proof type of Ed25519Signature2018 in the credentials is not appropriate whereas BbsBlsSignatureProof2020 is capable of selective disclosure.

    "},{"location":"aip2/0510-dif-pres-exch-attach/#jsonwebsignature2020","title":"JsonWebSignature2020","text":"

    Specification.

    Request Parameters:

    Result:

    A Verifiable Presentation of type Presentation Submission containing the credentials requested under the verifiableCredential property and a proof property of type JsonWebSignature2020.

    Example
    {\n    \"@context\": [\n        \"https://www.w3.org/2018/credentials/v1\",\n        \"https://identity.foundation/presentation-exchange/submission/v1\"\n    ],\n    \"type\": [\n        \"VerifiablePresentation\",\n        \"PresentationSubmission\"\n    ],\n    \"presentation_submission\": {\n        \"descriptor_map\": [{\n           \"id\": \"citizenship_input\",\n           \"path\": \"$.verifiableCredential.[0]\"\n        }]\n    },\n    \"verifiableCredential\": [{\n        \"@context\": \"https://www.w3.org/2018/credentials/v1\",\n        \"id\": \"https://eu.com/claims/DriversLicense\",\n        \"type\": [\n            \"EUDriversLicense\"\n        ],\n        \"issuer\": \"did:foo:123\",\n        \"issuanceDate\": \"2010-01-01T19:73:24Z\",\n        \"credentialSubject\": {\n            \"id\": \"did:example:ebfeb1f712ebc6f1c276e12ec21\",\n            \"license\": {\n            \"number\": \"34DGE352\",\n            \"dob\": \"07/13/80\"\n          }\n        },\n        \"proof\": {\n            \"type\": \"RsaSignature2018\",\n            \"created\": \"2017-06-18T21:19:10Z\",\n            \"proofPurpose\": \"assertionMethod\",\n            \"verificationMethod\": \"https://example.edu/issuers/keys/1\",\n            \"jws\": \"...\"\n        }\n    }],\n    \"proof\": {\n      \"type\": \"JsonWebSignature2020\",\n      \"proofPurpose\": \"authentication\",\n      \"created\": \"2017-09-23T20:21:34Z\",\n      \"verificationMethod\": \"did:example:123456#key1\",\n      \"challenge\": \"2bbgh3dgjg2302d-d2b3gi423d42\",\n      \"domain\": \"example.org\",\n      \"jws\": \"eyJ0eXAiOiJK...gFWFOEjXk\"\n  }\n}\n

    Available JOSE key types are:

    kty crv signature EC P-256 ES256 EC P-384 ES384"},{"location":"aip2/0510-dif-pres-exch-attach/#drawbacks","title":"Drawbacks","text":"

    N/A

    "},{"location":"aip2/0510-dif-pres-exch-attach/#rationale-and-alternatives","title":"Rationale and alternatives","text":""},{"location":"aip2/0510-dif-pres-exch-attach/#prior-art","title":"Prior art","text":""},{"location":"aip2/0510-dif-pres-exch-attach/#unresolved-questions","title":"Unresolved questions","text":"

    TODO: it is assumed the Verifier will initiate the protocol with a request-presentation message if they can transmit their presentation definition via an out-of-band channel (e.g., it is published on their website), possibly delivered via an Out-of-Band invitation (see RFC0434). For now, the Prover sends propose-presentation as a response to request-presentation.

    "},{"location":"aip2/0510-dif-pres-exch-attach/#implementations","title":"Implementations","text":"

    The following lists the implementations (if any) of this RFC. Please do a pull request to add your implementation. If the implementation is open source, include a link to the repo or to the implementation within the repo. Please be consistent in the \"Name\" field so that a mechanical processing of the RFCs can generate a list of all RFCs supported by an Aries implementation.

    Name / Link Implementation Notes"},{"location":"aip2/0519-goal-codes/","title":"0519: Goal Codes","text":""},{"location":"aip2/0519-goal-codes/#summary","title":"Summary","text":"

    Explain how different parties in an SSI ecosystem can communicate about their intentions in a way that is understandable by humans and by automated software.

    "},{"location":"aip2/0519-goal-codes/#motivation","title":"Motivation","text":"

    Agents exist to achieve the intents of their owners. Those intents largely unfold through protocols. Sometimes intelligent action in these protocols depends on a party declaring their intent. We need a standard way to do that.

    "},{"location":"aip2/0519-goal-codes/#tutorial","title":"Tutorial","text":"

    Our early learnings in SSI focused on VC-based proving with a very loose, casual approach to context. We did demos where Alice connects with a potential employer, Acme Corp -- and we assumed that each of the interacting parties had a shared understanding of one another's needs and purposes.

    But in a mature SSI ecosystem, where unknown agents can contact one another for arbitrary reasons, this context is not always easy to deduce. Acme Corp's agent may support many different protocols, and Alice may interact with Acme in the capacity of customer or potential employee or vendor. Although we have feature discovery to learn what's possible, and we have machine-readable governance frameworks to tell us what rules might apply in a given context, we haven't had a way to establish the context in the first place. When Alice contacts Acme, a context is needed before a governance framework is selectable, and before we know which features are desirable.

    The key ingredient in context is intent. If Alice says to Acme, \"I'd like to connect,\" Acme wants to be able to trigger different behavior depending on whether Alice's intent is to be a customer, apply for a job, or audit Acme's taxes. This is the purpose of a goal code.

    "},{"location":"aip2/0519-goal-codes/#the-goal-code-datatype","title":"The goal code datatype","text":"

    To express intent, this RFC formally introduces the goal code datatype. When a field in a DIDComm message contains a goal code, its semantics and format match the description given here. (Goal codes are often declared via the ~thread decorator, but may also appear in ordinary message fields. See the Scope section below. Convention is to name this field \"goal_code\" where possible; however, this is only a convention, and individual protocols may adapt to it however they wish.)

    TODO: should we make a decorator out of this, so protocols don't have to declare it, and so any message can have a goal code? Or should we just let protocols declare a field in whatever message makes sense?

    Protocols use fields of this type as a way to express the intent of the message sender, thus coloring the larger context. In a sense, goal codes are to DIDComm what the subject: field is to email -- except that goal codes have formalized meanings to make them recognizable to automation.

    Goal codes use a standard format. They are lower-cased, kebab-punctuated strings. ASCII and English are recommended, as they are intended to be read by the software developer community, not by human beings; however, full UTF-8 is allowed. They support hierarchical dotted notation, where more general categories are to the left of a dot, and more specific categories are to the right. Some example goal codes might be:

    Goals are inherently self-attested. Thus, goal codes don't represent objective fact that a recipient can rely upon in a strong sense; subsequent interactions can always yield surprises. Even so, goal codes let agents triage interactions and find misalignments early; there's no point in engaging if their goals are incompatible. This has significant benefits for spam prevention, among other things.

    "},{"location":"aip2/0519-goal-codes/#verbs","title":"Verbs","text":"

    Notice the verbs in the examples: sell, date, hire, and arrange. Goals typically involve action; a complete goal code should have one or more verbs in it somewhere. Turning verbs into nouns (e.g., employment.references instead of employment.check-references) is considered bad form. (Some namespaces may put the verbs at the end; some may put them in the middle. That's a purely stylistic choice.)

    "},{"location":"aip2/0519-goal-codes/#directionality","title":"Directionality","text":"

    Notice, too, that the verbs may imply directionality. A goal with the sell verb implies that the person announcing the goal is a would-be seller, not a buyer. We could imagine a more general verb like engage-in-commerce that would allow either behavior. However, that would often be a mistake. The value of goal codes is that they let agents align around intent; announcing that you want to engage in general commerce without clarifying whether you intend to sell or buy may be too vague to help the other party make decisions.

    It is conceivable that this would lead to parallel branches of a goal ontology that differ only in the direction of their verb. Thus, we could imagine sell.A and sell.B being shadowed by buy.A and buy.B. This might be necessary if a family of protocols allows either party to initiate an interaction and declare the goal, and if both parties view the goals as perfect mirror images. However, practical considerations may make this kind of parallelism unlikely. A random party contacting an individual to sell something may need to be quite clear about the type of selling they intend, to make it past a spam filter. In contrast, a random individual arriving at the digital storefront of a mega retailer may be quite vague about the type of buying they intend. Thus, the buy.* side of the namespace may need much less detail than the sell.* side.

    "},{"location":"aip2/0519-goal-codes/#goals-for-others","title":"Goals for others","text":"

    Related to directionality, it may occasionally be desirable to propose goals to others, rather than advocating your own: \"Let <parties = us = Alice, Bob, and Carol> <goal = hold an auction> -- I nominate Carol to be the <role = auctioneer> and get us started.\" The difference between a normal message and an unusual one like this is not visible in the goal code; it should be exposed in additional fields that associate the goal with a particular identifier+role pair. Essentially, you are proposing a goal to another party, and these extra fields clarify who should receive the proposal, and what role/perspective they might take with respect to the goal.

    Making proposals like this may be a feature in some protocols. Where it is, the protocols determine the message field names for the goal code, the role, and the DID associated with the role and goal.

    "},{"location":"aip2/0519-goal-codes/#matching","title":"Matching","text":"

    The goal code cci.healthcare is considered a more general form of the code cci.healthcare.procedure, which is more general than cci.healthcare.procedure.schedule. Because these codes are hierarchical, wildcards and fuzzy matching are possible for either a sender or a recipient of a message. Filename-style globbing semantics are used.

    A sender agent can specify that their owner's goal is just meetupcorp.personal without clarifying more; this is like specifying that a file is located under a folder named \"meetupcorp/personal\" without specifying where; any file \"under\" that folder -- or the folder itself -- would match the pattern. A recipient agent can have a policy that says, \"Reject any attempts to connect if the goal code of the other party is aries.sell.*.\" Notice how this differs from aries.sell*; the first looks for things \"inside\" aries.sell; the latter looks for things \"inside\" aries that have names beginning with sell.

    "},{"location":"aip2/0519-goal-codes/#scope","title":"Scope","text":"

    When is a declared goal known to color interactions, and when is it undefined?

    We previously noted that goal codes are a bit like the subject: header on an email; they contextualize everything that follows in that thread. We don't generally want to declare a goal outside of a thread context, because that would prevent an agent from engaging in two goals at the same time.

    Given these two observations, we can say that a goal applies as soon as it is declared, and it continues to apply to all messages in the same thread. It is also inherited by implication through a thread's pthid field; that is, a parent thread's goal colors the child thread unless/until overridden.

    "},{"location":"aip2/0519-goal-codes/#namespacing","title":"Namespacing","text":"

    To avoid collision and ambiguity in code values, we need to support namespacing in our goal codes. Since goals are only a coarse-grained alignment mechanism, however, we don't need perfect decentralized precision. Confusion isn't much more than an annoyance; the worst that could happen is that two agents discover one or two steps into a protocol that they're not as aligned as they supposed. They need to be prepared to tolerate that outcome in any case.

    Thus, we follow the same general approach that's used in Java's packaging system, where organizations and communities use a self-declared prefix for their ecosystem as the leftmost segment or segments of a family of identifiers (goal codes) they manage. Unlike Java, though, these need not be tied to DNS in any way. We recommend a single-segment namespace that is a unique string, and that is an alias for a URI identifying the origin ecosystem. (In other words, you don't need to start with \"com.yourcorp.yourproduct\" -- \"yourcorp\" is probably fine.)

    The aries namespace alias is reserved for goal codes defined in Aries RFCs. The URI aliased by this name is TBD. See the Reference section for more details.

    "},{"location":"aip2/0519-goal-codes/#versioning","title":"Versioning","text":"

    Semver-style semantics don't map to goals in a simple way; it is not obvious what constitutes a \"major\" versus a \"minor\" difference in a goal, or a difference that's not worth tracking at all. The content of a goal \u2014 the only thing that might vary across versions \u2014 is simply its free-form description, and that varies according to human judgment. Many different versions of a protocol are likely to share the goal to make a payment or to introduce two strangers. A goal is likely to be far more stable than the details of how it is accomplished.

    Because of these considerations, goal codes do not impose an explicit versioning mechanism. However, one is reserved for use, in the unusual cases where it may be helpful. It is to append -v plus a numeric suffix: my-goal-code-v1, my-goal-code-v2, etc. Goal codes that vary only by this suffix should be understood as ordered-by-numeric-suffix evolutions of one another, and goal codes that do not intend to express versioning should not use this convention for something else. A variant of the goal code without any version suffix is equivalent to a variant with the -v1 suffix. This allows human intuition about the relatedness of different codes, and it allows useful wildcard matching across versions. It also treats all version-like changes to a goal as breaking (semver \"major\") changes, which is probably a safe default.

    Families of goal codes are free to use this convention if they need it, or to invent a non-conflicting one of their own. However, we repeat our observation that versioning in goal codes is often inappropriate and unnecessary.

    "},{"location":"aip2/0519-goal-codes/#declaring-goal-codes","title":"Declaring goal codes","text":""},{"location":"aip2/0519-goal-codes/#standalone-rfcs-or-similar-sources","title":"Standalone RFCs or Similar Sources","text":"

    Any URI-referencable document can declare families or ontologies of goal codes. In the context of Aries, we encourage standalone RFCs for this purpose if the goals seem likely to be relevant in many contexts. Other communities may of course document goal codes in their own specs -- either dedicated to goal codes, or as part of larger topics. The following block is a sample of how we recommend that such goal codes be declared. Note that each code is individually hyperlink-able, and each is associated with a brief human-friendly description in one or more languages. This description may be used in menuing mechanisms such as the one described in Action Menu Protocol.

    "},{"location":"aip2/0519-goal-codes/#goal-codes","title":"goal codes","text":""},{"location":"aip2/0519-goal-codes/#ariessell","title":"aries.sell","text":"

    en: Sell something. Assumes two parties (buyer/seller). es: Vender algo. Asume que dos partes participan (comprador/vendedor).

    "},{"location":"aip2/0519-goal-codes/#ariessellgoodsconsumer","title":"aries.sell.goods.consumer","text":"

    en: Sell tangible goods of interest to general consumers.

    "},{"location":"aip2/0519-goal-codes/#ariessellservicesconsumer","title":"aries.sell.services.consumer","text":"

    en: Sell services of interest to general consumers.

    "},{"location":"aip2/0519-goal-codes/#ariessellservicesenterprise","title":"aries.sell.services.enterprise","text":"

    en: Sell services of interest to enterprises.

    "},{"location":"aip2/0519-goal-codes/#in-didcomm-based-protocol-specs","title":"In DIDComm-based Protocol Specs","text":"

    Occasionally, goal codes may have meaning only within the context of a specific protocol. In such cases, it may be appropriate to declare the goal codes directly in a protocol spec. This can be done using a section of the RFC as described above.

    More commonly, however, a protocol will accomplish one or more goals (e.g., when the protocol is fulfilling a co-protocol interface), or will require a participant to identify a goal at one or more points in a protocol flow. In such cases, the goal codes are probably declared external to the protocol. If they can be enumerated, they should still be referenced (hyperlinked to their respective definitions) in the protocol RFC.

    "},{"location":"aip2/0519-goal-codes/#in-governance-frameworks","title":"In Governance Frameworks","text":"

    Goal codes can also be (re-)declared in a machine-readable governance framework.

    "},{"location":"aip2/0519-goal-codes/#reference","title":"Reference","text":""},{"location":"aip2/0519-goal-codes/#known-namespace-aliases","title":"Known Namespace Aliases","text":"

    No central registry of namespace aliases is maintained; you need not register with an authority to create a new one. Just pick an alias with good enough uniqueness, and socialize it within your community. For convenience of collision avoidance, however, we maintain a table of aliases that are typically used in global contexts, and welcome PRs from anyone who wants to update it.

    alias used by URI aries Hyperledger Aries Community TBD"},{"location":"aip2/0519-goal-codes/#well-known-goal-codes","title":"Well-known goal codes","text":"

    The following goal codes are defined here because they already have demonstrated utility, based on early SSI work in Aries and elsewhere.

    "},{"location":"aip2/0519-goal-codes/#ariesvc","title":"aries.vc","text":"

    Participate in some form of VC-based interaction.

    "},{"location":"aip2/0519-goal-codes/#ariesvcissue","title":"aries.vc.issue","text":"

    Issue a verifiable credential.

    "},{"location":"aip2/0519-goal-codes/#ariesvcverify","title":"aries.vc.verify","text":"

    Verify or validate VC-based assertions.

    "},{"location":"aip2/0519-goal-codes/#ariesvcrevoke","title":"aries.vc.revoke","text":"

    Revoke a VC.

    "},{"location":"aip2/0519-goal-codes/#ariesrel","title":"aries.rel","text":"

    Create, maintain, or end something that humans would consider a relationship. This may be accomplished by establishing, updating or deleting a DIDComm messaging connection that provides a secure communication channel for the relationship. The DIDComm connection itself is not the relationship, but would be used to carry out interactions between the parties to facilitate the relationship.

    "},{"location":"aip2/0519-goal-codes/#ariesrelbuild","title":"aries.rel.build","text":"

    Create a relationship. Carries the meaning implied today by a LinkedIn invitation to connect or a Facebook \"Friend\" request. Could be as limited as creating a DIDComm Connection.

    "},{"location":"aip2/0519-goal-codes/#implementations","title":"Implementations","text":"

    The following lists the implementations (if any) of this RFC. Please do a pull request to add your implementation. If the implementation is open source, include a link to the repo or to the implementation within the repo. Please be consistent in the \"Name\" field so that a mechanical processing of the RFCs can generate a list of all RFCs supported by an Aries implementation.

    Implementation Notes may need to include a link to test results.

    Name / Link Implementation Notes"},{"location":"aip2/0557-discover-features-v2/","title":"Aries RFC 0557: Discover Features Protocol v2.x","text":""},{"location":"aip2/0557-discover-features-v2/#summary","title":"Summary","text":"

Describes how one agent can query another to discover which features it supports, and to what extent.

    "},{"location":"aip2/0557-discover-features-v2/#motivation","title":"Motivation","text":"

Though some agents will support just one feature and will be statically configured to interact with just one other party, many exciting uses of agents are more dynamic and unpredictable. When Alice and Bob meet, they won't know in advance which features are supported by one another's agents. They need a way to find out.

    "},{"location":"aip2/0557-discover-features-v2/#tutorial","title":"Tutorial","text":"

This is version 2.0 of the Discover Features protocol; its fully qualified PIURI is:

    https://didcomm.org/discover-features/2.0\n

    This version is conceptually similar to version 1.0 of this protocol. It differs in its ability to ask about multiple feature types, and to ask multiple questions and receive multiple answers in a single round trip.

    "},{"location":"aip2/0557-discover-features-v2/#roles","title":"Roles","text":"

There are two roles in the discover-features protocol: requester and responder. Normally, the requester asks the responder about the features it supports, and the responder answers. Each role uses a single message type.

It is also possible to proactively disclose features; in this case a requester receives a response without asking for it. This may eliminate some chattiness in certain use cases (e.g., where two-way connectivity is limited).

    "},{"location":"aip2/0557-discover-features-v2/#states","title":"States","text":"

    The state progression is very simple. In the normal case, it is simple request-response; in a proactive disclosure, it's a simple one-way notification.

    "},{"location":"aip2/0557-discover-features-v2/#requester","title":"Requester","text":""},{"location":"aip2/0557-discover-features-v2/#responder","title":"Responder","text":""},{"location":"aip2/0557-discover-features-v2/#messages","title":"Messages","text":""},{"location":"aip2/0557-discover-features-v2/#queries-message-type","title":"queries Message Type","text":"

    A discover-features/queries message looks like this:

    {\n  \"@type\": \"https://didcomm.org/discover-features/2.0/queries\",\n  \"@id\": \"yWd8wfYzhmuXX3hmLNaV5bVbAjbWaU\",\n  \"queries\": [\n    { \"feature-type\": \"protocol\", \"match\": \"https://didcomm.org/tictactoe/1.*\" },\n    { \"feature-type\": \"goal-code\", \"match\": \"aries.*\" }\n  ]\n}\n

Queries messages contain one or more query objects in the queries array. Each query essentially says, \"Please tell me what features of type X you support, where the feature identifiers match this (potentially wildcarded) string.\" This particular example asks an agent if it supports any 1.x versions of the tictactoe protocol, and if it supports any goal codes that begin with \"aries.\".

    Implementations of this protocol must recognize the following values for feature-type: protocol, goal-code, gov-fw, didcomm-version, and decorator/header. (The concept known as decorator in DIDComm v1 approximately maps to the concept known as header in DIDComm v2. The two values should be considered synonyms and must both be recognized.) Additional values of feature-type may be standardized by raising a PR against this RFC that defines the new type and increments the minor protocol version number; non-standardized values are also valid, but there is no guarantee that their semantics will be recognized.

Identifiers for feature types vary. For protocols, identifiers are PIURIs. For goal codes, identifiers are goal code values. For governance frameworks, identifiers are URIs where the framework is published (typically the data_uri field, if machine-readable). For DIDComm versions, identifiers are the URIs where DIDComm versions are developed (https://github.com/hyperledger/aries-rfcs for V1 and https://github.com/decentralized-identity/didcomm-messaging for V2; see \"Detecting DIDComm Versions\" in RFC 0044 for more details).

The match field of a query descriptor may use the * wildcard. By itself, a match with just the wildcard says, \"I'm interested in anything you want to share with me.\" But usually, this wildcard will be used to match a prefix that's a little more specific, as in the example that matches any 1.x version.
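The wildcard semantics above can be implemented by translating a match value into an anchored regular expression. The following is a minimal sketch, not part of any Aries library; the helper names are hypothetical:

```python
import re

def wildcard_to_regex(match: str) -> "re.Pattern[str]":
    # Escape regex metacharacters, then turn the escaped '*' back into '.*'.
    return re.compile(re.escape(match).replace(r"\*", ".*"))

def feature_matches(feature_id: str, match: str) -> bool:
    # The whole identifier must match, not just a prefix of it.
    return wildcard_to_regex(match).fullmatch(feature_id) is not None
```

Under this sketch, the match value https://didcomm.org/tictactoe/1.* accepts any 1.x tictactoe PIURI, and a bare * accepts any identifier at all.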

Any agent may send another agent this message type at any time. Implementers of agents that intend to support dynamic relationships and rich features are strongly encouraged to implement support for this message, as it is likely to be among the first messages exchanged with a stranger.

    "},{"location":"aip2/0557-discover-features-v2/#disclosures-message-type","title":"disclosures Message Type","text":"

    A discover-features/disclosures message looks like this:

    {\n  \"@type\": \"https://didcomm.org/discover-features/2.0/disclosures\",\n  \"~thread\": { \"thid\": \"yWd8wfYzhmuXX3hmLNaV5bVbAjbWaU\" },\n  \"disclosures\": [\n    {\n      \"feature-type\": \"protocol\",\n      \"id\": \"https://didcomm.org/tictactoe/1.0\",\n      \"roles\": [\"player\"]\n    },\n    {\n      \"feature-type\": \"goal-code\",\n      \"id\": \"aries.sell.goods.consumer\"\n    }\n  ]\n}\n

    The disclosures field is a JSON array of zero or more disclosure objects that describe a feature. Each descriptor has a feature-type field that contains data corresponding to feature-type in a query object, and an id field that unambiguously identifies a single item of that feature type. When the item is a protocol, the disclosure object may also contain a roles array that enumerates the roles the responding agent can play in the associated protocol. Future feature types may add additional optional fields, though no other fields are being standardized with this version of the RFC.

Disclosures messages say, \"Here are some features I support (that matched your queries).\"
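A responder might assemble such a message by filtering a local feature registry against the incoming queries. This sketch uses Python's fnmatchcase for the * wildcard (note that fnmatch also treats ? and [] specially, so a production matcher should handle only *); the registry contents and function name are illustrative assumptions:

```python
from fnmatch import fnmatchcase

# Hypothetical local registry of supported features.
SUPPORTED = [
    {"feature-type": "protocol", "id": "https://didcomm.org/tictactoe/1.0", "roles": ["player"]},
    {"feature-type": "goal-code", "id": "aries.sell.goods.consumer"},
]

def build_disclosures(queries: list, thid: str) -> dict:
    # Keep every registered feature that satisfies at least one query.
    matched = [
        feature
        for feature in SUPPORTED
        for query in queries
        if feature["feature-type"] == query["feature-type"]
        and fnmatchcase(feature["id"], query["match"])
    ]
    return {
        "@type": "https://didcomm.org/discover-features/2.0/disclosures",
        "~thread": {"thid": thid},
        "disclosures": matched,
    }

msg = build_disclosures(
    [{"feature-type": "protocol", "match": "https://didcomm.org/tictactoe/1.*"}],
    thid="yWd8wfYzhmuXX3hmLNaV5bVbAjbWaU",
)
```

A real responder would also apply the selective-disclosure policies discussed later before returning matches.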

    "},{"location":"aip2/0557-discover-features-v2/#sparse-disclosures","title":"Sparse Disclosures","text":"

    Disclosures do not have to contain exhaustive detail. For example, the following response omits the optional roles field but may be just as useful as one that includes it:

    {\n  \"@type\": \"https://didcomm.org/discover-features/2.0/disclosures\",\n  \"~thread\": { \"thid\": \"yWd8wfYzhmuXX3hmLNaV5bVbAjbWaU\" },\n  \"disclosures\": [\n    {\"feature-type\": \"protocol\", \"id\": \"https://didcomm.org/tictactoe/1.0\"}\n  ]\n}\n

    Less detail probably suffices because agents do not need to know everything about one another's implementations in order to start an interaction--usually the flow will organically reveal what's needed. For example, the outcome message in the tictactoe protocol isn't needed until the end, and is optional anyway. Alice can start a tictactoe game with Bob and will eventually see whether he has the right idea about outcome messages.

The missing roles field in this disclosure does not say, \"I support no roles in this protocol.\" It says, \"I support the protocol but I'm providing no detail about specific roles.\" Similar logic applies to any other omitted fields.

An empty disclosures array does not say, \"I support no features that match your query.\" It says, \"I'm not disclosing to you that I support any features (that match your query).\" An agent might not tell another that it supports a feature for various reasons, including: the trust that it imputes to the other party based on cumulative interactions so far, whether it's in the middle of upgrading a plugin, whether it's currently under high load, and so forth. And responses to a discover-features query are not guaranteed to be true forever; agents can be upgraded or downgraded, although they probably won't churn in their feature profiles from moment to moment.

    "},{"location":"aip2/0557-discover-features-v2/#privacy-considerations","title":"Privacy Considerations","text":"

    Because the wildcards in a queries message can be very inclusive, the discover-features protocol could be used to mine information suitable for agent fingerprinting, in much the same way that browser fingerprinting works. This is antithetical to the ethos of our ecosystem, and represents bad behavior. Agents should use discover-features to answer legitimate questions, and not to build detailed profiles of one another. However, fingerprinting may be attempted anyway.

    For agents that want to maintain privacy, several best practices are recommended:

    "},{"location":"aip2/0557-discover-features-v2/#follow-selective-disclosure","title":"Follow selective disclosure.","text":"

Only reveal supported features based on trust in the relationship. Even if you support a protocol, you may not wish to use it in every relationship. Don't tell others about features you do not plan to use with them.

    Patterns are easier to see in larger data samples. However, a pattern of ultra-minimal data is also a problem, so use good judgment about how forthcoming to be.

    "},{"location":"aip2/0557-discover-features-v2/#vary-the-format-of-responses","title":"Vary the format of responses.","text":"

    Sometimes, you might prettify your agent plaintext message one way, sometimes another.

    "},{"location":"aip2/0557-discover-features-v2/#vary-the-order-of-items-in-the-disclosures-array","title":"Vary the order of items in the disclosures array.","text":"

    If more than one key matches a query, do not always return them in alphabetical order or version order. If you do return them in order, do not always return them in ascending order.

    "},{"location":"aip2/0557-discover-features-v2/#consider-adding-some-spurious-details","title":"Consider adding some spurious details.","text":"

If a query could match multiple features, then occasionally you might add some made-up features as matches. If a wildcard allows multiple versions of a protocol, then sometimes you might use some made-up versions. And sometimes not. (Doing this too aggressively might reveal your agent implementation, so use sparingly.)

    "},{"location":"aip2/0557-discover-features-v2/#vary-how-you-query-too","title":"Vary how you query, too.","text":"

    How you ask questions may also be fingerprintable.

    "},{"location":"aip2/0557-discover-features-v2/#implementations","title":"Implementations","text":"

    The following lists the implementations (if any) of this RFC. Please do a pull request to add your implementation. If the implementation is open source, include a link to the repo or to the implementation within the repo. Please be consistent in the \"Name\" field so that a mechanical processing of the RFCs can generate a list of all RFCs supported by an Aries implementation.

    Name / Link Implementation Notes"},{"location":"aip2/0592-indy-attachments/","title":"Aries RFC 0592: Indy Attachment Formats for Requesting and Presenting Credentials","text":""},{"location":"aip2/0592-indy-attachments/#summary","title":"Summary","text":"

    This RFC registers attachment formats used with Hyperledger Indy-style ZKP-oriented credentials in Issue Credential Protocol 2.0 and Present Proof Protocol 2.0. These formats are generally considered v2 formats, as they align with the \"anoncreds2\" work in Hyperledger Ursa and are a second generation implementation. They began to be used in production in 2018 and are in active deployment in 2021.

    "},{"location":"aip2/0592-indy-attachments/#motivation","title":"Motivation","text":"

    Allows Indy-style credentials to be used with credential-related protocols that take pluggable formats as payloads.

    "},{"location":"aip2/0592-indy-attachments/#reference","title":"Reference","text":""},{"location":"aip2/0592-indy-attachments/#cred-filter-format","title":"cred filter format","text":"

    The potential holder uses this format to propose criteria for a potential credential for the issuer to offer.

    The identifier for this format is hlindy/cred-filter@v2.0. It is a base64-encoded version of the data structure specifying zero or more criteria from the following (non-base64-encoded) structure:

    {\n    \"schema_issuer_did\": \"<schema_issuer_did>\",\n    \"schema_name\": \"<schema_name>\",\n    \"schema_version\": \"<schema_version>\",\n    \"schema_id\": \"<schema_identifier>\",\n    \"issuer_did\": \"<issuer_did>\",\n    \"cred_def_id\": \"<credential_definition_identifier>\"\n}\n

    The potential holder may not know, and need not specify, all of these criteria. For example, the holder might only know the schema name and the (credential) issuer DID. Recall that the potential holder may specify target attribute values and MIME types in the credential preview.

    For example, the JSON (non-base64-encoded) structure might look like this:

    {\n    \"schema_issuer_did\": \"did:sov:4RW6QK2HZhHxa2tg7t1jqt\",\n    \"schema_name\": \"bcgov-mines-act-permit.bcgov-mines-permitting\",\n    \"issuer_did\": \"did:sov:4RW6QK2HZhHxa2tg7t1jqt\"\n}\n

    A complete propose-credential message from the Issue Credential protocol 2.0 embeds this format at /filters~attach/data/base64:

    {\n    \"@id\": \"<uuid of propose message>\",\n    \"@type\": \"https://didcomm.org/issue-credential/%VER/propose-credential\",\n    \"comment\": \"<some comment>\",\n    \"formats\" : [{\n        \"attach_id\": \"<attach@id value>\",\n        \"format\": \"hlindy/cred-filter@v2.0\"\n    }],\n    \"filters~attach\": [{\n        \"@id\": \"<attach@id value>\",\n        \"mime-type\": \"application/json\",\n        \"data\": {\n            \"base64\": \"ewogICAgInNjaGVtYV9pc3N1ZXJfZGlkIjogImRpZDpzb3Y... (clipped)... LMkhaaEh4YTJ0Zzd0MWpxdCIKfQ==\"\n        }\n    }]\n}\n
    "},{"location":"aip2/0592-indy-attachments/#cred-abstract-format","title":"cred abstract format","text":"

    This format is used to clarify the structure and semantics (but not the concrete data values) of a potential credential, in offers sent from issuer to potential holder.

    The identifier for this format is hlindy/cred-abstract@v2.0. It is a base64-encoded version of the data returned from indy_issuer_create_credential_offer().

    The JSON (non-base64-encoded) structure might look like this:

    {\n    \"schema_id\": \"4RW6QK2HZhHxa2tg7t1jqt:2:bcgov-mines-act-permit.bcgov-mines-permitting:0.2.0\",\n    \"cred_def_id\": \"4RW6QK2HZhHxa2tg7t1jqt:3:CL:58160:default\",\n    \"nonce\": \"57a62300-fbe2-4f08-ace0-6c329c5210e1\",\n    \"key_correctness_proof\" : <key_correctness_proof>\n}\n

    A complete offer-credential message from the Issue Credential protocol 2.0 embeds this format at /offers~attach/data/base64:

    {\n    \"@type\": \"https://didcomm.org/issue-credential/%VER/offer-credential\",\n    \"@id\": \"<uuid of offer message>\",\n    \"replacement_id\": \"<issuer unique id>\",\n    \"comment\": \"<some comment>\",\n    \"credential_preview\": <json-ld object>,\n    \"formats\" : [\n        {\n            \"attach_id\" : \"<attach@id value>\",\n            \"format\": \"hlindy/cred-abstract@v2.0\"\n        }\n    ],\n    \"offers~attach\": [\n        {\n            \"@id\": \"<attach@id value>\",\n            \"mime-type\": \"application/json\",\n            \"data\": {\n                \"base64\": \"ewogICAgInNjaGVtYV9pZCI6ICI0Ulc2UUsySFpoS... (clipped)... jb3JyZWN0bmVzc19wcm9vZj4KfQ==\"\n            }\n        }\n    ]\n}\n

    The same structure can be embedded at /offers~attach/data/base64 in an offer-credential message.

    "},{"location":"aip2/0592-indy-attachments/#cred-request-format","title":"cred request format","text":"

    This format is used to formally request a credential. It differs from the credential abstract above in that it contains a cryptographic commitment to a link secret; an issuer can therefore use it to bind a concrete instance of an issued credential to the appropriate holder. (In contrast, the credential abstract describes the schema and cred def, but not enough information to actually issue to a specific holder.)

    The identifier for this format is hlindy/cred-req@v2.0. It is a base64-encoded version of the data returned from indy_prover_create_credential_req().

    The JSON (non-base64-encoded) structure might look like this:

    {\n    \"prover_did\" : \"did:sov:abcxyz123\",\n    \"cred_def_id\" : \"4RW6QK2HZhHxa2tg7t1jqt:3:CL:58160:default\",\n    // Fields below can depend on Cred Def type\n    \"blinded_ms\" : <blinded_master_secret>,\n    \"blinded_ms_correctness_proof\" : <blinded_ms_correctness_proof>,\n    \"nonce\": \"fbe22300-57a6-4f08-ace0-9c5210e16c32\"\n}\n

    A complete request-credential message from the Issue Credential protocol 2.0 embeds this format at /requests~attach/data/base64:

    {\n    \"@id\": \"cf3a9301-6d4a-430f-ae02-b4a79ddc9706\",\n    \"@type\": \"https://didcomm.org/issue-credential/%VER/request-credential\",\n    \"comment\": \"<some comment>\",\n    \"formats\": [{\n        \"attach_id\": \"7cd11894-838a-45c0-a9ec-13e2d9d125a1\",\n        \"format\": \"hlindy/cred-req@v2.0\"\n    }],\n    \"requests~attach\": [{\n        \"@id\": \"7cd11894-838a-45c0-a9ec-13e2d9d125a1\",\n        \"mime-type\": \"application/json\",\n        \"data\": {\n            \"base64\": \"ewogICAgInByb3Zlcl9kaWQiIDogImRpZDpzb3Y6YWJjeHl.. (clipped)... DAtNTdhNi00ZjA4LWFjZTAtOWM1MjEwZTE2YzMyIgp9\"\n        }\n    }]\n}\n
    "},{"location":"aip2/0592-indy-attachments/#credential-format","title":"credential format","text":"

A concrete, issued Indy credential may be transmitted over many protocols, but is specifically expected as the final message in Issue Credential Protocol 2.0. The identifier for its format is hlindy/cred@v2.0.

    This is a credential that's designed to be held but not shared directly. It is stored in the holder's wallet and used to derive a novel ZKP or W3C-compatible verifiable presentation just in time for each sharing of credential material.

    The encoded values of the credential MUST follow the encoding algorithm as described in Encoding Claims.

    This is the format emitted by libindy's indy_issuer_create_credential() function. It is JSON-based and might look like this:

    {\n    \"schema_id\": \"4RW6QK2HZhHxa2tg7t1jqt:2:bcgov-mines-act-permit.bcgov-mines-permitting:0.2.0\",\n    \"cred_def_id\": \"4RW6QK2HZhHxa2tg7t1jqt:3:CL:58160:default\",\n    \"rev_reg_id\", \"EyN78DDGHyok8qw6W96UBY:4:EyN78DDGHyok8qw6W96UBY:3:CL:56389:CardossierOrgPerson:CL_ACCUM:1-1000\",\n    \"values\": {\n        \"attr1\" : {\"raw\": \"value1\", \"encoded\": \"value1_as_int\" },\n        \"attr2\" : {\"raw\": \"value2\", \"encoded\": \"value2_as_int\" }\n    },\n    // Fields below can depend on Cred Def type\n    \"signature\": <signature>,\n    \"signature_correctness_proof\": <signature_correctness_proof>\n    \"rev_reg\": <revocation registry state>\n    \"witness\": <witness>\n}\n

    An exhaustive description of the format is out of scope here; it is more completely documented in white papers, source code, and other Indy materials.

    "},{"location":"aip2/0592-indy-attachments/#proof-request-format","title":"proof request format","text":"

This format is used to formally request a verifiable presentation (proof) derived from an Indy-style ZKP-oriented credential. It can also be used by a holder to propose a presentation.

    The identifier for this format is hlindy/proof-req@v2.0. It is a base64-encoded version of the data returned from indy_prover_search_credentials_for_proof_req().

    Here is a sample proof request that embodies the following: \"Using a government-issued ID, disclose the credential holder\u2019s name and height, hide the credential holder\u2019s sex, get them to self-attest their phone number, and prove that their age is at least 18\":

    {\n    \"nonce\": \u201c2934823091873049823740198370q23984710239847\u201d, \n    \"name\":\"proof_req_1\",\n    \"version\":\"0.1\",\n    \"requested_attributes\":{\n        \"attr1_referent\": {\"name\":\"sex\"},\n        \"attr2_referent\": {\"name\":\"phone\"},\n        \"attr3_referent\": {\"names\": [\"name\", \"height\"], \"restrictions\": <restrictions specifying government-issued ID>}\n    },\n    \"requested_predicates\":{\n        \"predicate1_referent\":{\"name\":\"age\",\"p_type\":\">=\",\"p_value\":18}\n    }\n}\n
    "},{"location":"aip2/0592-indy-attachments/#proof-format","title":"proof format","text":"

    This is the format of an Indy-style ZKP. It plays the same role as a W3C-style verifiable presentation (VP) and can be mapped to one.

    The raw values encoded in the presentation SHOULD be verified against the encoded values using the encoding algorithm as described below in Encoding Claims.

The identifier for this format is hlindy/proof@v2.0. It is a version of the (JSON-based) data emitted by libindy's indy_prover_create_proof() function. A proof that responds to the previous proof request sample looks like this:

    {\n  \"proof\":{\n    \"proofs\":[\n      {\n        \"primary_proof\":{\n          \"eq_proof\":{\n            \"revealed_attrs\":{\n              \"height\":\"175\",\n              \"name\":\"1139481716457488690172217916278103335\"\n            },\n            \"a_prime\":\"5817705...096889\",\n            \"e\":\"1270938...756380\",\n            \"v\":\"1138...39984052\",\n            \"m\":{\n              \"master_secret\":\"375275...0939395\",\n              \"sex\":\"3511483...897083518\",\n              \"age\":\"13430...63372249\"\n            },\n            \"m2\":\"1444497...2278453\"\n          },\n          \"ge_proofs\":[\n            {\n              \"u\":{\n                \"1\":\"152500...3999140\",\n                \"2\":\"147748...2005753\",\n                \"0\":\"8806...77968\",\n                \"3\":\"10403...8538260\"\n              },\n              \"r\":{\n                \"2\":\"15706...781609\",\n                \"3\":\"343...4378642\",\n                \"0\":\"59003...702140\",\n                \"DELTA\":\"9607...28201020\",\n                \"1\":\"180097...96766\"\n              },\n              \"mj\":\"134300...249\",\n              \"alpha\":\"827896...52261\",\n              \"t\":{\n                \"2\":\"7132...47794\",\n                \"3\":\"38051...27372\",\n                \"DELTA\":\"68025...508719\",\n                \"1\":\"32924...41082\",\n                \"0\":\"74906...07857\"\n              },\n              \"predicate\":{\n                \"attr_name\":\"age\",\n                \"p_type\":\"GE\",\n                \"value\":18\n              }\n            }\n          ]\n        },\n        \"non_revoc_proof\":null\n      }\n    ],\n    \"aggregated_proof\":{\n      \"c_hash\":\"108743...92564\",\n      \"c_list\":[ 6 arrays of 257 numbers between 0 and 255]\n    }\n  },\n  \"requested_proof\":{\n    \"revealed_attrs\":{\n      \"attr1_referent\":{\n        \"sub_proof_index\":0,\n        
\"raw\":\"Alex\",\n        \"encoded\":\"1139481716457488690172217916278103335\"\n      }\n    },\n    \"revealed_attr_groups\":{\n      \"attr4_referent\":{\n        \"sub_proof_index\":0,\n        \"values\":{\n          \"name\":{\n            \"raw\":\"Alex\",\n            \"encoded\":\"1139481716457488690172217916278103335\"\n          },\n          \"height\":{\n            \"raw\":\"175\",\n            \"encoded\":\"175\"\n          }\n        }\n      }\n    },\n    \"self_attested_attrs\":{\n      \"attr3_referent\":\"8-800-300\"\n    },\n    \"unrevealed_attrs\":{\n      \"attr2_referent\":{\n        \"sub_proof_index\":0\n      }\n    },\n    \"predicates\":{\n      \"predicate1_referent\":{\n        \"sub_proof_index\":0\n      }\n    }\n  },\n  \"identifiers\":[\n    {\n      \"schema_id\":\"NcYxiDXkpYi6ov5FcYDi1e:2:gvt:1.0\",\n      \"cred_def_id\":\"NcYxi...cYDi1e:2:gvt:1.0:TAG_1\",\n      \"rev_reg_id\":null,\n      \"timestamp\":null\n    }\n  ]\n}\n
    "},{"location":"aip2/0592-indy-attachments/#unrevealed-attributes","title":"Unrevealed Attributes","text":"

    AnonCreds supports a holder responding to a proof request with some of the requested claims included in an unrevealed_attrs array, as seen in the example above, with attr2_referent. Assuming the rest of the proof is valid, AnonCreds will indicate that a proof with unrevealed attributes has been successfully verified. It is the responsibility of the verifier to determine if the purpose of the verification has been met if some of the attributes are not revealed.
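Since the verifier bears this responsibility, it needs an explicit policy check over the requested_proof structure. The following is a simplified sketch (it ignores revealed_attr_groups and predicate referents); the function names and the policy itself are illustrative assumptions:

```python
def unrevealed_referents(requested_proof: dict) -> set:
    """Return the attribute referents the holder chose not to reveal."""
    return set(requested_proof.get("unrevealed_attrs", {}))

def meets_policy(requested_proof: dict, required: set) -> bool:
    """Check that every referent in `required` was revealed directly or
    self-attested. Simplification: revealed_attr_groups are not inspected."""
    revealed = set(requested_proof.get("revealed_attrs", {})) | set(
        requested_proof.get("self_attested_attrs", {})
    )
    return required <= revealed

# Shape mirrors the requested_proof portion of the example above (abbreviated).
requested_proof = {
    "revealed_attrs": {"attr1_referent": {"sub_proof_index": 0, "raw": "Alex"}},
    "self_attested_attrs": {"attr3_referent": "8-800-300"},
    "unrevealed_attrs": {"attr2_referent": {"sub_proof_index": 0}},
}
```

A verifier requiring attr2_referent would reject this presentation even though the cryptographic proof itself verifies.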

    There are at least a few valid use cases for this approach:

    "},{"location":"aip2/0592-indy-attachments/#encoding-claims","title":"Encoding Claims","text":"

    Claims in AnonCreds-based verifiable credentials are put into the credential in two forms, raw and encoded. raw is the actual data value, and encoded is the (possibly derived) integer value that is used in presentations. At this time, AnonCreds does not take an opinion on the method used for encoding the raw value.

    AnonCreds issuers and verifiers must agree on the encoding method so that the verifier can check that the raw value returned in a presentation corresponds to the proven encoded value. The following is the encoding algorithm that MUST be used by Issuers when creating credentials and SHOULD be verified by Verifiers receiving presentations:

    An example implementation in Python can be found here.

    A gist of test value pairs can be found here.
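As a rough sketch of the widely used convention (values that stringify to a 32-bit integer pass through unchanged; everything else becomes the big-endian integer of the SHA-256 digest of its UTF-8 string), an encoder might look like the following. This is an assumption-laden illustration, not the normative algorithm; consult the linked implementation and test vectors:

```python
import hashlib

I32_BOUND = 2 ** 31  # signed 32-bit integer range: [-2^31, 2^31)

def encode_claim(raw) -> str:
    """Encode a raw claim value into the integer-string form used in
    presentations (sketch of the commonly used convention)."""
    # Values that stringify to a 32-bit integer pass through as that integer.
    try:
        as_int = int(str(raw))
        if -I32_BOUND <= as_int < I32_BOUND:
            return str(as_int)
    except (ValueError, TypeError):
        pass
    # Everything else: SHA-256 of the UTF-8 string, as a big-endian integer.
    digest = hashlib.sha256(str(raw).encode("utf-8")).digest()
    return str(int.from_bytes(digest, "big"))
```

Note that the encoding is deterministic, so a verifier can recompute it from the raw value and compare against the encoded value in the presentation.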

    "},{"location":"aip2/0592-indy-attachments/#notes-on-encoding-claims","title":"Notes on Encoding Claims","text":""},{"location":"aip2/0592-indy-attachments/#implementations","title":"Implementations","text":"

    The following lists the implementations (if any) of this RFC. Please do a pull request to add your implementation. If the implementation is open source, include a link to the repo or to the implementation within the repo. Please be consistent in the \"Name\" field so that a mechanical processing of the RFCs can generate a list of all RFCs supported by an Aries implementation.

    Name / Link Implementation Notes"},{"location":"aip2/0593-json-ld-cred-attach/","title":"Aries RFC 0593: JSON-LD Credential Attachment format for requesting and issuing credentials","text":""},{"location":"aip2/0593-json-ld-cred-attach/#summary","title":"Summary","text":"

    This RFC registers an attachment format for use in the issue-credential V2 protocol based on JSON-LD credentials with Linked Data Proofs from the VC Data Model.

It defines a minimal set of parameters needed to create a common understanding of the verifiable credential to issue. It is based on version 1.0 of the Verifiable Credentials Data Model, which has been a W3C recommendation since 19 November 2019.

    "},{"location":"aip2/0593-json-ld-cred-attach/#motivation","title":"Motivation","text":"

The Issue Credential protocol needs an attachment format to be able to exchange JSON-LD credentials with Linked Data Proofs. It is desirable to make use of specifications developed in an open standards body, such as the Credential Manifest, for which the attachment format is described in RFC 0511: Credential-Manifest Attachment format. However, the Credential Manifest is not yet finished, and therefore there is a need to bridge the gap between standards.

    "},{"location":"aip2/0593-json-ld-cred-attach/#tutorial","title":"Tutorial","text":"

    Complete examples of messages are provided in the reference section.

    "},{"location":"aip2/0593-json-ld-cred-attach/#reference","title":"Reference","text":""},{"location":"aip2/0593-json-ld-cred-attach/#ld-proof-vc-detail-attachment-format","title":"ld-proof-vc-detail attachment format","text":"

    Format identifier: aries/ld-proof-vc-detail@v1.0

    This format is used to formally propose, offer, or request a credential. The credential property should contain the credential as it is going to be issued, without the proof and credentialStatus properties. Options for these properties are specified in the options object.

    The JSON structure might look like this:

    {\n  \"credential\": {\n    \"@context\": [\n      \"https://www.w3.org/2018/credentials/v1\",\n      \"https://www.w3.org/2018/credentials/examples/v1\"\n    ],\n    \"id\": \"urn:uuid:3978344f-8596-4c3a-a978-8fcaba3903c5\",\n    \"type\": [\"VerifiableCredential\", \"UniversityDegreeCredential\"],\n    \"issuer\": \"did:key:z6MkodKV3mnjQQMB9jhMZtKD9Sm75ajiYq51JDLuRSPZTXrr\",\n    \"issuanceDate\": \"2020-01-01T19:23:24Z\",\n    \"expirationDate\": \"2021-01-01T19:23:24Z\",\n    \"credentialSubject\": {\n      \"id\": \"did:key:z6MkpTHR8VNsBxYAAWHut2Geadd9jSwuBV8xRoAnwWsdvktH\",\n      \"degree\": {\n        \"type\": \"BachelorDegree\",\n        \"name\": \"Bachelor of Science and Arts\"\n      }\n    }\n  },\n  \"options\": {\n    \"proofPurpose\": \"assertionMethod\",\n    \"created\": \"2020-04-02T18:48:36Z\",\n    \"domain\": \"example.com\",\n    \"challenge\": \"9450a9c1-4db5-4ab9-bc0c-b7a9b2edac38\",\n    \"credentialStatus\": {\n      \"type\": \"CredentialStatusList2017\"\n    },\n    \"proofType\": \"Ed25519Signature2018\"\n  }\n}\n

A complete request-credential message from the Issue Credential protocol 2.0 might look like this:

    {\n  \"@id\": \"7293daf0-ed47-4295-8cc4-5beb513e500f\",\n  \"@type\": \"https://didcomm.org/issue-credential/%VER/request-credential\",\n  \"comment\": \"<some comment>\",\n  \"formats\": [\n    {\n      \"attach_id\": \"13a3f100-38ce-4e96-96b4-ea8f30250df9\",\n      \"format\": \"aries/ld-proof-vc-detail@v1.0\"\n    }\n  ],\n  \"requests~attach\": [\n    {\n      \"@id\": \"13a3f100-38ce-4e96-96b4-ea8f30250df9\",\n      \"mime-type\": \"application/json\",\n      \"data\": {\n        \"base64\": \"ewogICJjcmVkZW50aWFsIjogewogICAgIkBjb250...(clipped)...IkVkMjU1MTlTaWduYXR1cmUyMDE4IgogIH0KfQ==\"\n      }\n    }\n  ]\n}\n

    The format is closely related to the Verifiable Credentials HTTP API, but diverges in some places. The main differences are:

    "},{"location":"aip2/0593-json-ld-cred-attach/#ld-proof-vc-attachment-format","title":"ld-proof-vc attachment format","text":"

    Format identifier: aries/ld-proof-vc@v1.0

    This format is used to transmit a verifiable credential with linked data proof. The content of the attachment is a standard JSON-LD Verifiable Credential object with a linked data proof, as defined by the Verifiable Credentials Data Model and the Linked Data Proofs specification.

    The JSON structure might look like this:

    {\n  \"@context\": [\n    \"https://www.w3.org/2018/credentials/v1\",\n    \"https://www.w3.org/2018/credentials/examples/v1\"\n  ],\n  \"id\": \"http://example.gov/credentials/3732\",\n  \"type\": [\"VerifiableCredential\", \"UniversityDegreeCredential\"],\n  \"issuer\": {\n    \"id\": \"did:web:vc.transmute.world\"\n  },\n  \"issuanceDate\": \"2020-03-10T04:24:12.164Z\",\n  \"credentialSubject\": {\n    \"id\": \"did:example:ebfeb1f712ebc6f1c276e12ec21\",\n    \"degree\": {\n      \"type\": \"BachelorDegree\",\n      \"name\": \"Bachelor of Science and Arts\"\n    }\n  },\n  \"proof\": {\n    \"type\": \"JsonWebSignature2020\",\n    \"created\": \"2020-03-21T17:51:48Z\",\n    \"verificationMethod\": \"did:web:vc.transmute.world#_Qq0UL2Fq651Q0Fjd6TvnYE-faHiOpRlPVQcY_-tA4A\",\n    \"proofPurpose\": \"assertionMethod\",\n    \"jws\": \"eyJiNjQiOmZhbHNlLCJjcml0IjpbImI2NCJdLCJhbGciOiJFZERTQSJ9..OPxskX37SK0FhmYygDk-S4csY_gNhCUgSOAaXFXDTZx86CmI5nU9xkqtLWg-f4cqkigKDdMVdtIqWAvaYx2JBA\"\n  }\n}\n

    A complete issue-credential message from the Issue Credential protocol 2.0 might look like this:

    {\n  \"@id\": \"284d3996-ba85-45d9-964b-9fd5805517b6\",\n  \"@type\": \"https://didcomm.org/issue-credential/%VER/issue-credential\",\n  \"comment\": \"<some comment>\",\n  \"formats\": [\n    {\n      \"attach_id\": \"5b38af88-d36f-4f77-bb7a-2f04ab806eb8\",\n      \"format\": \"aries/ld-proof-vc@v1.0\"\n    }\n  ],\n  \"credentials~attach\": [\n    {\n      \"@id\": \"5b38af88-d36f-4f77-bb7a-2f04ab806eb8\",\n      \"mime-type\": \"application/ld+json\",\n      \"data\": {\n        \"base64\": \"ewogICAgICAgICAgIkBjb250ZXogWwogICAgICAg...(clipped)...RNVmR0SXFXZhWXgySkJBIgAgfQogICAgICAgIH0=\"\n      }\n    }\n  ]\n}\n
    "},{"location":"aip2/0593-json-ld-cred-attach/#supported-proof-types","title":"Supported Proof Types","text":"

    Following are the Linked Data proof types on Verifiable Credentials that MUST be supported for compliance with this RFC. All suites listed in the following table MUST be registered in the Linked Data Cryptographic Suite Registry:

    | Suite | Spec | Enables Selective disclosure? | Enables Zero-knowledge proofs? | Optional |
    |---|---|---|---|---|
    | Ed25519Signature2018 | Link | No | No | No |
    | BbsBlsSignature2020** | Link | Yes | No | No |
    | JsonWebSignature2020*** | Link | No | No | Yes |

    ** Note: see RFC0646 for details on how BBS+ signatures are to be produced and consumed by Aries agents.

    *** Note: P-256 and P-384 curves are supported.

    "},{"location":"aip2/0593-json-ld-cred-attach/#drawbacks","title":"Drawbacks","text":"

    N/A

    "},{"location":"aip2/0593-json-ld-cred-attach/#rationale-and-alternatives","title":"Rationale and alternatives","text":""},{"location":"aip2/0593-json-ld-cred-attach/#prior-art","title":"Prior art","text":"

    N/A

    "},{"location":"aip2/0593-json-ld-cred-attach/#unresolved-questions","title":"Unresolved questions","text":"

    N/A

    "},{"location":"aip2/0593-json-ld-cred-attach/#implementations","title":"Implementations","text":"

    The following lists the implementations (if any) of this RFC. Please do a pull request to add your implementation. If the implementation is open source, include a link to the repo or to the implementation within the repo. Please be consistent in the \"Name\" field so that a mechanical processing of the RFCs can generate a list of all RFCs supported by an Aries implementation.

    Name / Link Implementation Notes"},{"location":"concepts/0003-protocols/","title":"Aries RFC 0003: Protocols","text":""},{"location":"concepts/0003-protocols/#summary","title":"Summary","text":"

    Defines peer-to-peer application-level protocols in the context of interactions among agent-like things, and shows how they should be designed and documented.

    "},{"location":"concepts/0003-protocols/#table-of-contents","title":"Table of Contents","text":""},{"location":"concepts/0003-protocols/#motivation","title":"Motivation","text":"

    APIs in the style of Swagger are familiar to nearly all developers, and it's a common assumption that we should use them to solve the problems at hand in the decentralized identity space. However, to truly decentralize, we must think about interactions at a higher level of generalization. Protocols can model all APIs, but not the other way around. This matters. We need to explain why.

    We also need to show how a protocol is defined, so the analog to defining a Swagger API is demystified.

    "},{"location":"concepts/0003-protocols/#tutorial","title":"Tutorial","text":""},{"location":"concepts/0003-protocols/#what-is-a-protocol","title":"What is a Protocol?","text":"

    A protocol is a recipe for a stateful interaction. Protocols are all around us, and are so ordinary that we take them for granted. Each of the following interactions is stateful, and has conventions that constitute a sort of \"recipe\":

    In the context of decentralized identity, protocols manifest at many different levels of the stack: at the lowest levels of networking, in cryptographic algorithms like Diffie Hellman, in the management of DIDs, in the conventions of DIDComm, and in higher-level interactions that solve problems for people with only minimal interest in the technology they're using. However, this RFC focuses on the last of these layers, where use cases and personas are transformed into features with obvious social value like:

    When \"protocol\" is used in an Aries context without any qualifying adjective, it is referencing a recipe for a high-level interaction like these. Lower-level protocols are usually described more specifically and possibly with other verbiage: \"cryptographic algorithms\", \"DID management procedures\", \"DIDComm conventions\", \"transports\", and so forth. This helps us focus \"protocol\" on the place where application developers that consume Aries do most of the work that creates value.

    "},{"location":"concepts/0003-protocols/#relationship-to-apis","title":"Relationship to APIs","text":"

    The familiar world of web APIs is a world of protocols, but it comes with constraints antithetical to decentralized identity:

    Protocols impose none of these constraints. Web APIs can easily be modeled as protocols where the transport is HTTP and the payload is a message, and the Aries community actively does this. We are not opposed to APIs. We just want to describe and standardize the higher-level abstraction so we don't have a web solution and a Bluetooth solution that have diverged for no good reason.

    "},{"location":"concepts/0003-protocols/#decentralized","title":"Decentralized","text":"

    As used in the agent/DIDComm world, protocols are decentralized. This means there is not an overseer for the protocol, guaranteeing information flow, enforcing behaviors, and ensuring a coherent view. It is a subtle but important divergence from API-centric approaches, where a server holds state against which all other parties (clients) operate. Instead, all parties are peers, and they interact by mutual consent and with a (hopefully) shared understanding of the rules and goals. Protocols are like a dance\u2014not one that's choreographed or directed, but one where the parties make dynamic decisions and react to them.

    "},{"location":"concepts/0003-protocols/#types-of-protocols","title":"Types of Protocols","text":"

    The simplest protocol style is notification. This style involves two parties, but it is one-way: the notifier emits a message, and the protocol ends when the notified party receives it. The basic message protocol uses this style.

    Slightly more complex is the request-response protocol style. This style involves two parties, with the requester making the first move and the responder completing the interaction. The Discover Features Protocol uses this style. Note that with protocols as Aries models them (and unlike an HTTP request), the request-response messages are asynchronous.

    However, more complex protocols exist. The Introduce Protocol involves three parties, not two. The issue credential protocol includes up to six message types (including ack and problem_report), two of which (proposal and offer) can be used to interactively negotiate details of the elements of the subsequent messages in the protocol.

    See this subsection for definitions of the terms \"role\", \"participant\", and \"party\".

    "},{"location":"concepts/0003-protocols/#agent-design","title":"Agent Design","text":"

    Protocols are the key unit of interoperable extensibility in agents and agent-like things. To add a new interoperable feature to an agent, give it the ability to handle a new protocol.

    When agents receive messages, they map the messages to a protocol handler and possibly to an interaction state that was previously persisted. This is the analog to routes, route handlers, and sessions in web APIs, and could actually be implemented as such if the transport for the protocol is HTTP. The protocol handler is code that knows the rules of a particular protocol; the interaction state tracks progress through an interaction. For more information, see the agents explainer\u2014RFC 0004 and the DIDComm explainer\u2014RFC 0005.
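    The mapping from received message to protocol handler described above can be sketched as a small dispatch table. This is an illustrative Python sketch, not a real Aries framework API; the registry, decorator, and handler names are hypothetical.

    ```python
# Registry mapping message type URIs (MTURIs) to handler functions.
HANDLERS = {}

def handler(mturi: str):
    """Decorator that registers a function as the handler for one MTURI."""
    def register(fn):
        HANDLERS[mturi] = fn
        return fn
    return register

@handler("https://didcomm.org/basicmessage/1.0/message")
def handle_basic_message(msg: dict) -> str:
    return f"received: {msg['content']}"

def dispatch(msg: dict):
    """Look up the handler for the message's @type and invoke it."""
    fn = HANDLERS.get(msg["@type"])
    if fn is None:
        raise KeyError(f"unsupported message type: {msg['@type']}")
    return fn(msg)

print(dispatch({
    "@type": "https://didcomm.org/basicmessage/1.0/message",
    "content": "hi",
}))
# → received: hi
```

    A real agent would also consult persisted interaction state (the "session" analog) before invoking the handler; this sketch shows only the routing step.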

    "},{"location":"concepts/0003-protocols/#composable","title":"Composable","text":"

    Protocols are composable--meaning that you can build complex ones from simple ones. The protocol for asking someone to repeat their last sentence can be part of the protocol for ordering food at a restaurant. It's common to ask a potential driver's license holder to prove their street address before issuing the license. In protocol terms, this is nicely modeled as the present proof being invoked in the middle of an issue credential protocol.

    When we run one protocol inside another, we call the inner protocol a subprotocol, and the outer protocol a superprotocol. A given protocol may be a subprotocol in some contexts, and a standalone protocol in others. In some contexts, a protocol may be a subprotocol from one perspective, and a superprotocol from another (as when protocols are nested at least 3 deep).

    Commonly, protocols wait for subprotocols to complete, and then they continue. A good example of this is mentioned above\u2014starting an issue credential flow, but requiring the potential issuer and/or the potential holder to prove something to one another before completing the process.

    In other cases, a protocol B is not "contained" inside protocol A. Rather, A triggers B, then continues in parallel, without waiting for B to complete. This coprotocol relationship is analogous to the relationship between coroutines in computer science. In the Introduce Protocol, the final step is to begin a connection protocol between the two introducees--but the introduction coprotocol completes when the connect coprotocol starts, not when it completes.

    "},{"location":"concepts/0003-protocols/#message-types","title":"Message Types","text":"

    A protocol includes a number of message types that enable the execution of an instance of a protocol. Collectively, the message types of a protocol become the skeleton of its interface. Most of the message types are defined with the protocol, but several key message types, notably acks and problem reports, are defined in separate RFCs and adopted into a protocol. This ensures that the structure of such messages is standardized, but used in the context of the protocol adopting the message types.

    "},{"location":"concepts/0003-protocols/#handling-unrecognized-items-in-messages","title":"Handling Unrecognized Items in Messages","text":"

    In the semver section of this document there is discussion of the handling of mismatches in minor versions supported and received. Notably, a recipient that supports a given minor version of a protocol less than that of a received protocol message should ignore any unrecognized fields in the message. Such handling of unrecognized data items applies more generally than just minor version mismatches. A recipient of a message from a supported major version of a protocol should ignore any unrecognized items in a received message, even if the supported and minor versions are the same. When items from the message are ignored, the recipient may want to send a warning problem-report message with code fields-ignored.
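    The tolerant handling described above can be sketched as follows. This is an illustrative Python sketch under stated assumptions: the set of known fields and the shape of the warning object are hypothetical, though the `fields-ignored` code comes from the text above.

    ```python
# Fields this (hypothetical) protocol version recognizes.
KNOWN_FIELDS = {"@id", "@type", "comment"}

def parse_message(msg: dict):
    """Keep recognized fields, ignore the rest, and prepare an optional
    warning problem-report payload listing what was ignored."""
    recognized = {k: v for k, v in msg.items() if k in KNOWN_FIELDS}
    ignored = sorted(set(msg) - KNOWN_FIELDS)
    warning = {"code": "fields-ignored", "ignored": ignored} if ignored else None
    return recognized, warning

parsed, warning = parse_message({"@id": "1", "@type": "x", "new_field": True})
assert parsed == {"@id": "1", "@type": "x"}
assert warning == {"code": "fields-ignored", "ignored": ["new_field"]}
```

    The key point is that unrecognized items cause a warning at most, never a hard failure, so newer minor versions of a protocol remain usable by older recipients.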

    "},{"location":"concepts/0003-protocols/#ingredients","title":"Ingredients","text":"

    A protocol has the following ingredients:

    "},{"location":"concepts/0003-protocols/#how-to-define-a-protocol","title":"How to Define a Protocol","text":"

    To define a protocol, write an RFC. Specific instructions for protocol RFCs, and a discussion about the theory behind detailed protocol concepts, are given in the instructions for protocol RFCs and in the protocol RFC template.

    The tictactoe protocol is attached to this RFC as an example.

    "},{"location":"concepts/0003-protocols/#security-considerations","title":"Security Considerations","text":""},{"location":"concepts/0003-protocols/#replay-attacks","title":"Replay Attacks","text":"

    It should be noted that when defining a protocol that has domain specific requirements around preventing replay attacks, an @id property SHOULD be required. Given an @id field is most commonly set to be a UUID, it should provide randomness comparable to that of a nonce in preventing replay attacks. However, this means that care will be needed in processing of the @id field to make sure its value has not been used before. In some cases, nonces require being unpredictable as well. In this case, greater review should be taken as to how the @id field should be used in the domain specific protocol. In the event where the @id field is not adequate for preventing replay attacks, it's recommended that an additional nonce field be required by the domain specific protocol specification.
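    Using the `@id` field for replay prevention, as described above, amounts to remembering which values have already been processed. A minimal in-memory sketch follows; a production agent would persist and eventually expire these identifiers, and the function name here is hypothetical.

    ```python
# Previously processed @id values. In-memory for illustration only;
# a real agent would persist this set and expire old entries.
seen_ids = set()

def accept_once(msg: dict) -> bool:
    """Return True the first time a given @id is seen, False on replay."""
    msg_id = msg["@id"]
    if msg_id in seen_ids:
        return False  # replay: this @id was already processed
    seen_ids.add(msg_id)
    return True

msg = {"@id": "7293daf0-ed47-4295-8cc4-5beb513e500f"}
assert accept_once(msg) is True
assert accept_once(msg) is False  # second delivery is rejected
```

    As the text notes, this only works when `@id` values are unpredictable (e.g., UUIDs); when unpredictability matters for the threat model, a dedicated nonce field is the safer choice.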

    "},{"location":"concepts/0003-protocols/#reference","title":"Reference","text":""},{"location":"concepts/0003-protocols/#message-type-and-protocol-identifier-uris","title":"Message Type and Protocol Identifier URIs","text":"

    Message types and protocols are identified with URIs that match certain conventions.

    "},{"location":"concepts/0003-protocols/#mturi","title":"MTURI","text":"

    A message type URI (MTURI) identifies message types unambiguously. Standardizing its format is important because it is parsed by agents that will map messages to handlers--basically, code will look at this string and say, \"Do I have something that can handle this message type inside protocol X version Y?\"

    When this analysis happens, strings should be compared for byte-wise equality in all segments except version. This means that case, unicode normalization, and punctuation differences all matter. It is thus best practice to avoid protocol and message names that differ only in subtle, easy-to-mistake ways.

    Comparison of the version segment of an MTURI or PIURI should follow semver rules and is discussed in the semver section of this document.

    The URI MUST be composed as follows:

    message-type-uri  = doc-uri delim protocol-name\n    \"/\" protocol-version \"/\" message-type-name\ndelim             = \"?\" / \"/\" / \"&\" / \":\" / \";\" / \"=\"\nprotocol-name     = identifier\nprotocol-version  = semver\nmessage-type-name = identifier\nidentifier        = alpha *(*(alphanum / \"_\" / \"-\" / \".\") alphanum)\n

    It can be loosely matched and parsed with the following regex:

        (.*?)([a-z0-9._-]+)/(\\d[^/]*)/([a-z0-9._-]+)$\n

    A match will have captures groups of (1) = doc-uri, (2) = protocol-name, (3) = protocol-version, and (4) = message-type-name.
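    The loose matcher above can be applied directly with Python's `re` module. The sample MTURI below is an assumed example in the style of the Issue Credential protocol; the regex is the one given in the text.

    ```python
import re

# The loose MTURI matcher from the text above.
MTURI = re.compile(r"(.*?)([a-z0-9._-]+)/(\d[^/]*)/([a-z0-9._-]+)$")

m = MTURI.match("https://didcomm.org/issue-credential/2.0/request-credential")
doc_uri, protocol, version, msg_type = m.groups()
assert (doc_uri, protocol, version, msg_type) == (
    "https://didcomm.org/", "issue-credential", "2.0", "request-credential")
```

    The lazy `(.*?)` lets the doc-uri absorb everything before the final three segments, so the protocol name, version, and message type are always taken from the end of the URI.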

    The goals of this URI are, in descending priority:

    The doc-uri portion is any URI that exposes documentation about protocols. A developer should be able to browse to that URI and use human intelligence to look up the named and versioned protocol. Optionally and preferably, the full URI may produce a page of documentation about the specific message type, with no human mediation involved.

    "},{"location":"concepts/0003-protocols/#piuri","title":"PIURI","text":"

    A shorter URI that follows the same conventions but lacks the message-type-name portion is called a protocol identifier URI (PIURI).

    protocol-identifier-uri  = doc-uri delim protocol-name\n    \"/\" semver\n

    Its loose matcher regex is:

        (.*?)([a-z0-9._-]+)/(\\d[^/]*)/?$\n

    The following are examples of valid MTURIs and PIURIs:

    "},{"location":"concepts/0003-protocols/#semver-rules-for-protocols","title":"Semver Rules for Protocols","text":"

    Semver rules apply to protocols, with the version of a protocol expressed in the semver portion of its identifying URI. The "ingredients" of a protocol combine to form a public API in the semver sense. Core Aries protocols specify only major and minor elements in a version; the patch component is not used. Non-core protocols may choose to use the patch element.

    The major and minor versions of protocols match semver semantics:

    Within a given major version of a protocol, an agent should:

    This leads to the following received message handling rules:

    Note: The deprecation of the \"warning\" problem-reports in cases of minor version mismatches is because the recipient of the response can detect the mismatch by looking at the PIURI, making the \"warning\" unnecessary, and because the problem-report message may be received after (and definitely at a different time than) the response message, and so the warning is of very little value to the recipient. Recipients should still be aware that minor version mismatch warning problem-report messages may be received and handle them appropriately, likely by quietly ignoring them.

    As documented in the semver documentation, these requirements are not applied when major version 0 is used. In that case, minor version increments are considered breaking.

    Agents may support multiple major versions and select which major version to use when initiating an instance of the protocol.

    An agent should reject messages from unsupported protocols or from unsupported major versions of supported protocols with a problem-report message with code version-not-supported. Agents that receive such a problem-report message may use the Discover Features protocol to resolve the mismatch.

    "},{"location":"concepts/0003-protocols/#semver-examples","title":"Semver Examples","text":""},{"location":"concepts/0003-protocols/#initiator","title":"Initiator","text":"

    Unless Alice's agent (the initiator of a protocol) knows from prior history that it should do something different, it should begin a protocol using the highest version number that it supports. For example, if A.1 supports versions 2.0 through 2.2 of protocol X, it should use 2.2 as the version in the message type of its first message.

    "},{"location":"concepts/0003-protocols/#recipient-rules","title":"Recipient Rules","text":"

    Agents for Bob (the recipient) should reject messages from protocols with major versions different from those they support. For major version 0, they should also reject protocols with minor versions they don't support, since semver stipulates that features are not stable before 1.0. For example, if B.1 supports only versions 2.0 and 2.1 of protocol X, it should reject any messages from version 3, 1, or 0. In most cases, rejecting a message means sending a problem-report that the message is unsupported. The code field in such messages should be version-not-supported. Agents that receive such a problem-report can then use the Discover Features Protocol to resolve version problems.

    Recipient agents should accept messages that differ from their own supported version of a protocol only in the patch, prerelease, and/or build fields, whether these differences make the message earlier or later than the version the recipient prefers. These messages will be robustly compatible.

    For major version >= 1, recipients should also accept messages that differ only in that the message's minor version is earlier than their own preference. In such a case, the recipient should degrade gracefully to use the earlier version of the protocol. If the earlier version lacks important features, the recipient may optionally choose to send, in addition to a response, a problem-report with code version-with-degraded-features.

    If a recipient supports protocol X version 1.0, it should tentatively accept messages with later minor versions (e.g., 1.2). Message types that differ only in minor version are guaranteed to be compatible for the feature set of the earlier version. That is, a 1.0-capable agent can support 1.0 features using a 1.2 message, though of course it will lose any features that 1.2 added. Thus, accepting such a message could have two possible outcomes:

    1. The message at version 1.2 might look and behave exactly like it did at version 1.0, in which case the message will process without any trouble.

    2. The message might contain some fields that are unrecognized and need to be ignored.

    In case 2, it is best practice for the recipient to send a problem-report that is a warning, not an error, announcing that some fields could not be processed (code = fields-ignored-due-to-version-mismatch). Such a message is in addition to any response that the protocol demands of the recipient.

    If the recipient of a protocol's initial message generates a response, the response should use the latest major.minor protocol version that both parties support and know about. Generally, all messages after the first use only the major.minor version.
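    The recipient rules above can be sketched as a single decision function. This is an illustrative Python sketch; the function name and the returned outcome strings are hypothetical, but the decisions follow the rules stated in this section (reject unsupported majors, reject unsupported minors for major 0, degrade for earlier minors, tolerate later minors).

    ```python
def evaluate_version(received: str, supported: list[str]) -> str:
    """Decide how a recipient should treat a received protocol version,
    given the major.minor versions it supports."""
    r_major, r_minor = (int(x) for x in received.split(".")[:2])
    majors = {int(v.split(".")[0]) for v in supported}
    # Unsupported major version, or (for major 0) unsupported minor version.
    if r_major not in majors or (r_major == 0 and received not in supported):
        return "reject: version-not-supported"
    best_minor = max(int(v.split(".")[1]) for v in supported
                     if int(v.split(".")[0]) == r_major)
    if r_minor < best_minor:
        return "accept: degrade gracefully"
    if r_minor > best_minor:
        return "accept: ignore unrecognized fields"
    return "accept"

assert evaluate_version("3.0", ["2.0", "2.1"]) == "reject: version-not-supported"
assert evaluate_version("2.0", ["2.0", "2.1"]) == "accept: degrade gracefully"
assert evaluate_version("2.2", ["2.0", "2.1"]) == "accept: ignore unrecognized fields"
```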

    "},{"location":"concepts/0003-protocols/#state-details-and-state-machines","title":"State Details and State Machines","text":"

    While some protocols have only one sequence of states to manage, in most protocols the different roles perceive the interaction differently. The sequence of states for each role needs to be described with care in the RFC.

    "},{"location":"concepts/0003-protocols/#state-machines","title":"State Machines","text":"

    By convention, protocol state and sequence rules are described using the concept of state machines, and we encourage developers who implement protocols to build them that way.

    Among other benefits, this helps with error handling: when one agent sends a problem-report message to another, the message can make it crystal clear which state it has fallen back to as a result of the error.

    Many developers will have encountered a formal definition of state machines as they wrote parsers or worked on other highly demanding tasks, and may worry that state machines are heavy and intimidating. But as they are used in Aries protocols, state machines are straightforward and elegant. They cleanly encapsulate logic that would otherwise be a bunch of conditionals scattered throughout agent code. The tictactoe protocol example includes a complete state machine in less than 50 lines of Python code, with tests.

    For an extended discussion of how state machines can be used, including in nested protocols, and with hooks that let custom processing happen at each point in a flow, see https://github.com/dhh1128/distributed-state-machine.
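    A toy state machine in the spirit described above might look as follows. This is an illustrative Python sketch, not the tictactoe example itself; the protocol, state, and event names are hypothetical. The point is that valid transitions live in one table rather than in conditionals scattered through agent code.

    ```python
# Allowed transitions: (current state, event) -> next state.
TRANSITIONS = {
    ("start", "send_request"): "awaiting_response",
    ("awaiting_response", "receive_response"): "done",
    ("awaiting_response", "receive_problem_report"): "start",  # fall back on error
}

class StateMachine:
    def __init__(self):
        self.state = "start"

    def handle(self, event: str):
        key = (self.state, event)
        if key not in TRANSITIONS:
            raise ValueError(f"event {event!r} not allowed in state {self.state!r}")
        self.state = TRANSITIONS[key]

sm = StateMachine()
sm.handle("send_request")
sm.handle("receive_response")
assert sm.state == "done"
```

    Note how the problem-report transition makes the error-handling benefit concrete: the table states exactly which state the machine falls back to, which is the information a problem-report message can carry to the other party.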

    "},{"location":"concepts/0003-protocols/#processing-points","title":"Processing Points","text":"

    A protocol definition describes key points in the flow where business logic can attach. Some of these processing points are obvious, because the protocol makes calls for decisions to be made. Others are implicit. Some examples include:

    "},{"location":"concepts/0003-protocols/#roles-participants-parties-and-controllers","title":"Roles, Participants, Parties, and Controllers","text":""},{"location":"concepts/0003-protocols/#roles","title":"Roles","text":"

    The roles in a protocol are the perspectives (responsibilities, privileges) that parties take in an interaction.

    This perspective is manifested in three general ways:

    Like parties, roles are normally known at the start of the protocol, but this is not a requirement.

    In an auction protocol, there are only two roles\u2014auctioneer and bidder\u2014even though there may be many parties involved.

    "},{"location":"concepts/0003-protocols/#participants","title":"Participants","text":"

    The participants in a protocol are the agents that send and/or receive plaintext application-level messages that embody the protocol's interaction. Alice, Bob, and Carol may each have a cloud agent, a laptop, and a phone; if they engage in an introduction protocol using phones, then the agents on their phones are the participants. If the phones talk directly over Bluetooth, this is particularly clear--but even if the phones leverage push notifications and HTTP such that cloud agents help with routing, only the phone agents are participants, because only they maintain state for the interaction underway. (The cloud agents would be facilitators, and the laptops would be bystanders). When a protocol is complete, the participant agents know about the outcome; they may need to synchronize or replicate their state before other agents of the parties are aware.

    "},{"location":"concepts/0003-protocols/#parties","title":"Parties","text":"

    The parties to a protocol are the entities directly responsible for achieving the protocol's goals. When a protocol is high-level, parties are typically people or organizations; as protocols become lower-level, parties may be specific agents tasked with detail work through delegation.

    Imagine a situation where Alice wants a vacation. She engages with a travel agent named Bob. Together, they begin an \"arrange a vacation\" protocol. Alice is responsible for expressing her parameters and proving her willingness to pay; Bob is responsible for running a bunch of subprotocols to work out the details. Alice and Bob--not software agents they use--are parties to this high-level protocol, since they share responsibility for its goals.

    As soon as Alice has provided enough direction and hangs up the phone, Bob begins a sub-protocol with a hotel to book a room for Alice. This sub-protocol has related but different goals--it is about booking a particular hotel room, not about the vacation as a whole. We can see the difference when we consider that Bob could abandon the booking and choose a different hotel entirely, without affecting the overarching \"arrange a vacation\" protocol.

    With the change in goal, the parties have now changed, too. Bob and a hotel concierge are the ones responsible for making the \"book a hotel room\" protocol progress. Alice is an approver and indirect stakeholder, but she is not doing the work. (In RACI terms, Alice is an \"accountable\" or \"approving\" entity, but only Bob and the concierge are \"responsible\" parties.)

    Now, as part of the hotel reservation, Bob tells the concierge that the guest would like access to a waverunner to play in the ocean on day 2. The concierge engages in a sub-sub-protocol to reserve the waverunner. The goal of this sub-sub-protocol is to reserve the equipment, not to book a hotel or arrange a vacation. The parties to this sub-sub-protocol are the concierge and the person or automated system that manages waverunners.

    Often, parties are known at the start of a protocol; however, that is not a requirement. Some protocols might commence with some parties not yet known or assigned.

    For many protocols, there are only two parties, and they are in a pairwise relationship. Other protocols are more complex. Introductions involves three; an auction may involve many.

    Normally, the parties that are involved in a protocol also participate in the interaction, but this is not always the case. Consider a gossip protocol: two parties may be talking about a third party. In this case, the third party would not even know that the protocol was happening and would definitely not participate.

    "},{"location":"concepts/0003-protocols/#controllers","title":"Controllers","text":"

    The controllers in a protocol are entities that make decisions. They may or may not be direct parties.

    Imagine a remote chess game between Bob and Carol, conducted with software agents. The chess protocol isn't technically about how to select a wise chess move; it's about communicating the moves so parties achieve the shared goal of running a game to completion. Yet choices about moves are clearly made as the protocol unfolds. These choices are made by controllers--Bob and Carol--while the agents responsible for the work of moving the game forward wait with the protocol suspended.

    In this case, Bob and Carol could be analyzed as parties to the protocol, as well as controllers. But in other cases, the concepts are distinct. For example, in a protocol to issue credentials, the issuing institution might use an AI and/or business automation as a controller.

    "},{"location":"concepts/0003-protocols/#instructions-for-protocol-rfcs","title":"Instructions for Protocol RFCs","text":"

    A protocol RFC conforms to general RFC patterns, but includes some specific substructure.

    Please see the special protocol RFC template for details.

    "},{"location":"concepts/0003-protocols/#drawbacks","title":"Drawbacks","text":"

    This RFC creates some formalism around defining protocols. It doesn't go nearly as far as SOAP or CORBA/COM did, but it is slightly more demanding of a protocol author than the familiar world of RESTful Swagger/OpenAPI.

    The extra complexity is justified by the greater demands that agent-to-agent communications place on the protocol definition. See notes in Prior Art section for details.

    "},{"location":"concepts/0003-protocols/#rationale-and-alternatives","title":"Rationale and alternatives","text":"

    Some of the simplest DIDComm protocols could be specified in a Swagger/OpenAPI style. This would give some nice tooling. However, not all fit into that mold. It may be desirable to create conversion tools that allow Swagger interop.

    "},{"location":"concepts/0003-protocols/#prior-art","title":"Prior art","text":""},{"location":"concepts/0003-protocols/#bpmn","title":"BPMN","text":"

    BPMN (Business Process Model and Notation) is a graphical language for modeling flows of all types (as well as things less like our protocols). BPMN is a mature standard sponsored by OMG (Object Management Group). It has a nice tool ecosystem (such as this). It also has an XML file format, so the visual diagrams have a two-way transformation to and from formal written language. And it has a code generation mode, where BPMN can be used to drive executable behavior if diagrams are sufficiently detailed and sufficiently standard. (Since BPMN supports various extensions and is often used at various levels of formality, execution is not its most common application.)

    BPMN began with a focus on centralized processes (those driven by a business entity), with diagrams organized around the goal of the point-of-view entity and what they experience in the interaction. This is somewhat different from a DIDComm protocol where any given entity may experience the goal and the scope of interaction differently; the state machine for a home inspector in the \"buy a home\" protocol is quite different, and somewhat separable, from the state machine of the buyer, and that of the title insurance company.

    BPMN 2.0 introduced the notion of a choreography, which is much closer to the concept of an A2A protocol, and which has quite an elegant and intuitive visual representation. However, even a BPMN choreography doesn't have a way to discuss interactions with decorators, adoption of generic messages, and other A2A-specific concerns. Thus, we may lean on BPMN for some diagramming tasks, but it is not a substitute for the RFC definition procedure described here.

    "},{"location":"concepts/0003-protocols/#wsdl","title":"WSDL","text":"

    WSDL (Web Services Description Language) is a web-centric evolution of earlier, RPC-style interface definition languages like IDL in all its varieties and CORBA. These technologies describe a called interface, but they don't describe the caller, and they lack a formalism for capturing state changes, especially by the caller. They are also out of favor in the programmer community at present, as being too heavy, too fragile, or poorly supported by current tools.

    "},{"location":"concepts/0003-protocols/#swagger-openapi","title":"Swagger / OpenAPI","text":"

    Swagger / OpenAPI overlaps with some of the concerns of protocol definition in agent-to-agent interactions. We like the tools and the convenience of the paradigm offered by OpenAPI, but where these two do not overlap, we have impedance.

    Agent-to-agent protocols must support more than 2 roles, or two roles that are peers, whereas RESTful web services assume just client and server--and only the server has a documented API.

    Agent-to-agent protocols are fundamentally asynchronous, whereas RESTful web services mostly assume synchronous request-response.

    Agent-to-agent protocols have complex considerations for diffuse trust, whereas RESTful web services centralize trust in the web server.

    Agent-to-agent protocols need to support transports beyond HTTP, whereas RESTful web services do not.

    Agent-to-agent protocols are nestable, while RESTful web services don't provide any special support for that construct.

    "},{"location":"concepts/0003-protocols/#other","title":"Other","text":""},{"location":"concepts/0003-protocols/#unresolved-questions","title":"Unresolved questions","text":""},{"location":"concepts/0003-protocols/#implementations","title":"Implementations","text":"

    The following lists the implementations (if any) of this RFC. Please do a pull request to add your implementation. If the implementation is open source, include a link to the repo or to the implementation within the repo. Please be consistent in the \"Name\" field so that a mechanical processing of the RFCs can generate a list of all RFCs supported by an Aries implementation.

    Name / Link Implementation Notes Indy Cloud Agent - Python several protocols, circa Feb 2019 Aries Framework - .NET several protocols, circa Feb 2019 Streetcred.id several protocols, circa Feb 2019 Aries Cloud Agent - Python numerous protocols plus extension mechanism for pluggable protocols Aries Static Agent - Python 2 or 3 protocols Aries Framework - Go DID Exchange Connect.Me mature but proprietary protocols; community protocols in process Verity mature but proprietary protocols; community protocols in process Aries Protocol Test Suite 2 or 3 core protocols; active work to implement all that are ACCEPTED, since this tests conformance of other agents Pico Labs implemented protocols: connections, trust_ping, basicmessage, routing"},{"location":"concepts/0003-protocols/roles-participants-etc/","title":"Roles participants etc","text":""},{"location":"concepts/0003-protocols/roles-participants-etc/#roles-participants-parties-and-controllers","title":"Roles, Participants, Parties, and Controllers","text":""},{"location":"concepts/0003-protocols/roles-participants-etc/#roles","title":"Roles","text":"

    The roles in a protocol are the perspectives (responsibilities, privileges) that parties take in an interaction.

    This perspective is manifested in three general ways:

    Like parties, roles are normally known at the start of the protocol but this is not a requirement.

    In an auction protocol, there are only two roles\u2014auctioneer and bidder\u2014even though there may be many parties involved.

    "},{"location":"concepts/0003-protocols/roles-participants-etc/#participants","title":"Participants","text":"

    The participants in a protocol are the agents that send and/or receive plaintext application-level messages that embody the protocol's interaction. Alice, Bob, and Carol may each have a cloud agent, a laptop, and a phone; if they engage in an introduction protocol using phones, then the agents on their phones are the participants. If the phones talk directly over Bluetooth, this is particularly clear--but even if the phones leverage push notifications and HTTP such that cloud agents help with routing, only the phone agents are participants, because only they maintain state for the interaction underway. (The cloud agents would be facilitators, and the laptops would be bystanders). When a protocol is complete, the participant agents know about the outcome; they may need to synchronize or replicate their state before other agents of the parties are aware.

    "},{"location":"concepts/0003-protocols/roles-participants-etc/#parties","title":"Parties","text":"

    The parties to a protocol are the entities directly responsible for achieving the protocol's goals. When a protocol is high-level, parties are typically people or organizations; as protocols become lower-level, parties may be specific agents tasked with detail work through delegation.

    Imagine a situation where Alice wants a vacation. She engages with a travel agent named Bob. Together, they begin an \"arrange a vacation\" protocol. Alice is responsible for expressing her parameters and proving her willingness to pay; Bob is responsible for running a bunch of subprotocols to work out the details. Alice and Bob--not software agents they use--are parties to this high-level protocol, since they share responsibility for its goals.

    As soon as Alice has provided enough direction and hangs up the phone, Bob begins a sub-protocol with a hotel to book a room for Alice. This sub-protocol has related but different goals--it is about booking a particular hotel room, not about the vacation as a whole. We can see the difference when we consider that Bob could abandon the booking and choose a different hotel entirely, without affecting the overarching \"arrange a vacation\" protocol.

    With the change in goal, the parties have now changed, too. Bob and a hotel concierge are the ones responsible for making the \"book a hotel room\" protocol progress. Alice is an approver and indirect stakeholder, but she is not doing the work. (In RACI terms, Alice is an \"accountable\" or \"approving\" entity, but only Bob and the concierge are \"responsible\" parties.)

    Now, as part of the hotel reservation, Bob tells the concierge that the guest would like access to a waverunner to play in the ocean on day 2. The concierge engages in a sub-sub-protocol to reserve the waverunner. The goal of this sub-sub-protocol is to reserve the equipment, not to book a hotel or arrange a vacation. The parties to this sub-sub-protocol are the concierge and the person or automated system that manages waverunners.

    Often, parties are known at the start of a protocol; however, that is not a requirement. Some protocols might commence with some parties not yet known or assigned.

    For many protocols, there are only two parties, and they are in a pairwise relationship. Other protocols are more complex. Introductions involves three; an auction may involve many.

    Normally, the parties that are involved in a protocol also participate in the interaction, but this is not always the case. Consider a gossip protocol: two parties may be talking about a third party. In this case, the third party would not even know that the protocol was happening and would definitely not participate.

    "},{"location":"concepts/0003-protocols/roles-participants-etc/#controllers","title":"Controllers","text":"

    The controllers in a protocol are entities that make decisions. They may or may not be direct parties.

    Imagine a remote chess game between Bob and Carol, conducted with software agents. The chess protocol isn't technically about how to select a wise chess move; it's about communicating the moves so parties achieve the shared goal of running a game to completion. Yet choices about moves are clearly made as the protocol unfolds. These choices are made by controllers--Bob and Carol--while the agents responsible for the work of moving the game forward wait with the protocol suspended.

    In this case, Bob and Carol could be analyzed as parties to the protocol, as well as controllers. But in other cases, the concepts are distinct. For example, in a protocol to issue credentials, the issuing institution might use an AI and/or business automation as a controller.

    "},{"location":"concepts/0003-protocols/tictactoe/","title":"Tic Tac Toe Protocol 1.0","text":""},{"location":"concepts/0003-protocols/tictactoe/#summary","title":"Summary","text":"

    Describes a simple protocol, already familiar to most developers, as a way to demonstrate how all protocols should be documented.

    "},{"location":"concepts/0003-protocols/tictactoe/#motivation","title":"Motivation","text":"

    Playing tic-tac-toe is a good way to test whether agents are working properly, since it requires two parties to take turns and to communicate reliably about state. However, it is also pretty simple, and it has a low bar for trust (it's not dangerous to play tic-tac-toe with a malicious stranger). Thus, we expect agent tic-tac-toe to be a good way to test basic plumbing and to identify functional gaps. The game also provides a way of testing interactions with the human owners of agents, or of hooking up an agent AI.

    "},{"location":"concepts/0003-protocols/tictactoe/#tutorial","title":"Tutorial","text":"

    Tic-tac-toe is a simple game where players take turns placing Xs and Os in a 3x3 grid, attempting to capture 3 cells of the grid in a straight line.

    "},{"location":"concepts/0003-protocols/tictactoe/#name-and-version","title":"Name and Version","text":"

    This defines the tictactoe protocol, version 1.x, as identified by the following PIURI:

    did:sov:SLfEi9esrjzybysFxQZbfq;spec/tictactoe/1.0\n
    "},{"location":"concepts/0003-protocols/tictactoe/#key-concepts","title":"Key Concepts","text":"

    A tic-tac-toe game is an interaction where 2 parties take turns to make up to 9 moves. It starts when either party proposes the game, and ends when one of the parties wins, or when all cells in the grid are occupied but nobody has won (a draw).

    Note: Optionally, a Tic-Tac-Toe game can be preceded by a Coin Flip Protocol to decide who goes first. This is not a high-value enhancement, but we add it for illustration purposes. If used, the choice-id field in the initial propose message of the Coin Flip should have the value did:sov:SLfEi9esrjzybysFxQZbfq;spec/tictactoe/1.0/who-goes-first, and the caller-wins and flipper-wins fields should contain the DIDs of the two players.

    Illegal moves and moving out of turn are errors that trigger a complaint from the other player. However, they do not scuttle the interaction. A game can also be abandoned in an unfinished state by either player, for any reason. Games can last any amount of time.

    About the Key Concepts section: Here we describe the flow at a very\nhigh level. We identify preconditions, ways the protocol can start\nand end, and what can go wrong. We also talk about timing\nconstraints and other assumptions.\n
    "},{"location":"concepts/0003-protocols/tictactoe/#roles","title":"Roles","text":"

    There are two parties in a tic-tac-toe game, but only one role, player. One player places 'X' for the duration of a game; the other places 'O'. There are no special requirements about who can be a player. The parties do not need to be trusted or even known to one another, either at the outset or as the game proceeds. No prior setup is required, other than an ability to communicate.

    About the Roles section: Here we name the roles in the protocol,\nsay who and how many can play each role, and describe constraints.\nWe also explore qualifications for roles.\n
    "},{"location":"concepts/0003-protocols/tictactoe/#states","title":"States","text":"

    The states of each player in the protocol evolve according to the following state machine:

    When a player is in the my-move state, possible valid events include send move (the normal case), send outcome (if the player decides to abandon the game), and receive outcome (if the other player decides to abandon). A receive move event could conceivably occur, too-- but it would be an error on the part of the other player, and would trigger a problem-report message as described above, leaving the state unchanged.

    In the their-move state, send move is an impossible event for a properly behaving player. All 3 of the other events could occur, causing a state transition.

    In the wrap-up state, the game is over, but communication with the outcome message has not yet occurred. The logical flow is send outcome, whereupon the player transitions to the done state.
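    The state transitions described above can be sketched as a small transition table. This is an illustrative sketch, not code from the RFC's state_machine.py; the transition targets for abandonment, and the simplification that a received move always returns play to my-move (a winning or drawing move would actually lead to wrap-up), are assumptions.

```python
# Hypothetical sketch of the single player state machine described above.
# State and event names follow the narrative; abandonment targets are assumed.
TRANSITIONS = {
    ("my-move", "send move"): "their-move",
    ("my-move", "send outcome"): "done",        # abandoning the game
    ("my-move", "receive outcome"): "done",     # other player abandons
    ("their-move", "receive move"): "my-move",  # simplification: a winning move would lead to wrap-up
    ("their-move", "send outcome"): "done",
    ("their-move", "receive outcome"): "done",
    ("wrap-up", "send outcome"): "done",
}

def next_state(state: str, event: str) -> str:
    """Return the new state for an event, or the unchanged state for an
    out-of-turn 'receive move' (which triggers a problem-report only)."""
    if (state, event) == ("my-move", "receive move"):
        return state  # other player's error; complain, state unchanged
    try:
        return TRANSITIONS[(state, event)]
    except KeyError:
        raise ValueError(f"event '{event}' is impossible in state '{state}'")
```

For example, `next_state("their-move", "send move")` raises, matching the narrative's claim that sending a move out of turn is impossible for a properly behaving player.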

    About the States section: Here we explain which states exist for each\nrole. We also enumerate the events that can occur, including messages,\nerrors, or events triggered by surrounding context, and what should\nhappen to state as a result. In this protocol, we only have one role,\nand thus only one state machine matrix. But in many protocols, each\nrole may have a different state machine.\n
    "},{"location":"concepts/0003-protocols/tictactoe/#messages","title":"Messages","text":"

    All messages in this protocol are part of the \"tictactoe 1.0\" message family uniquely identified by this DID reference: did:sov:SLfEi9esrjzybysFxQZbfq;spec/tictactoe/1.0

    NOTE 1: All the messages defined in a protocol should follow DIDComm best practices as far as how they name fields and define their data types and semantics. NOTE 2 about the \"DID Reference\" URI that appears here: DIDs can be resolved to a DID doc that contains an endpoint, to which everything after a semicolon can be appended. Thus, if this DID is publicly registered and its DID doc gives an endpoint of http://example.com, this URI would mean that anyone can find a formal definition of the protocol at http://example.com/spec/tictactoe/1.0. It is also possible to use a traditional URI here, such as http://example.com/spec/tictactoe/1.0. If that sort of URI is used, it is best practice for it to reference immutable content, as with a link to specific commit on github: https://github.com/hyperledger/aries-rfcs/blob/ab7a04f/concepts/0003-protocols/tictactoe/README.md#messages"},{"location":"concepts/0003-protocols/tictactoe/#move-message","title":"move message","text":"

    The protocol begins when one party sends a move message to the other. It looks like this:

    @id is required here, as it establishes a message thread that will govern the rest of the game.

    me tells which mark (X or O) the sender is placing. It is required.

    moves is optional in the first message of the interaction. If missing or empty, the sender of the first message is inviting the recipient to make the first move. If it contains a move, the sender is moving first.

    Moves are strings like \"X:B2\" that match the regular expression (?i)[XO]:[A-C][1-3]. They identify a mark to be placed (\"X\" or \"O\") and a position in the 3x3 grid. The grid's columns and rows are numbered like familiar spreadsheets, with columns A, B, and C, and rows 1, 2, and 3.
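    A move string can be checked mechanically against the RFC's pattern. A minimal sketch (the validator function and its name are illustrative, not part of the RFC):

```python
import re

# The RFC's move pattern; (?i) makes both the mark and the column case-insensitive.
MOVE_RE = re.compile(r"(?i)[XO]:[A-C][1-3]")

def is_valid_move(move: str) -> bool:
    """True if the whole string names a mark and a cell in the 3x3 grid."""
    return MOVE_RE.fullmatch(move) is not None
```

So `"X:B2"` and `"o:c3"` are valid, while `"X:D2"` (no column D) is not.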

    comment is optional and probably not used much, but could be a way for players to razz one another or chat as they play. It follows the conventions of localized messages.

    Other decorators could be placed on tic-tac-toe messages, such as those to enable message timing to force players to make a move within a certain period of time.

    "},{"location":"concepts/0003-protocols/tictactoe/#subsequent-moves","title":"Subsequent Moves","text":"

    Once the initial move message has been sent, game play continues by each player taking turns sending responses, which are also move messages. With each new message the moves array inside the message grows by one, ensuring that the players agree on the current accumulated state of the game. The me field is still required and must accurately reflect the role of the message sender; it thus alternates values between X and O.

    Subsequent messages in the game use the message threading mechanism where the @id of the first move becomes the ~thread.thid for the duration of the game.

    An evolving sequence of move messages might thus look like this, suppressing all fields except what's required:

    "},{"location":"concepts/0003-protocols/tictactoe/#messagemove-2","title":"Message/Move 2","text":"

    This is the first message in the thread that's sent by the player placing \"O\"; hence it has myindex = 0.

    "},{"location":"concepts/0003-protocols/tictactoe/#messagemove-3","title":"Message/Move 3","text":"

    This is the second message in the thread by the player placing \"X\"; hence it has myindex = 1.

    "},{"location":"concepts/0003-protocols/tictactoe/#messagemove-4","title":"Message/Move 4","text":"

    ...and so forth.

    Note that the order of the items in the moves array is NOT significant. The state of the game at any given point of time is fully captured by the moves, regardless of the order in which they were made.
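    One way to see why ordering doesn't matter: the accumulated moves map onto the grid as a set of cell assignments. A sketch, assuming the move string format defined above (the function name is illustrative):

```python
def board_from_moves(moves):
    """Rebuild the 3x3 grid from a moves array; ordering is not significant."""
    board = {}
    for move in moves:
        mark, cell = move.upper().split(":")
        board[cell] = mark  # a legal game assigns each cell at most once
    return board
```

For instance, `board_from_moves(["X:B2", "O:A1"])` equals `board_from_moves(["O:A1", "X:B2"])`: both yield the same board state.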

    If a player makes an illegal move or another error occurs, the other player can complain using a problem-report message, with explain.@l10n.code set to one of the values defined in the Message Catalog section (see below).

    "},{"location":"concepts/0003-protocols/tictactoe/#outcome-message","title":"outcome message","text":"

    Game play ends when one player sends a move message that manages to mark 3 cells in a row. Thereupon, it is best practice, but not strictly required, for the other player to send an acknowledgement in the form of an outcome message.

    The moves and me fields from a move message can also, optionally, be included to further document state. The winner field is required. Its value may be \"X\", \"O\", or--in the case of a draw--\"none\".

    This outcome message can also be used to document an abandoned game, in which case winner is null, and comment can be used to explain why (e.g., timeout, loss of interest).

    About the Messages section: Here we explain the message types, but\nalso which roles send which messages, what sequencing rules apply,\nand how errors may occur during the flow. The message begins with\nan announcement of the identifier and version of the message\nfamily, and also enumerates error codes to be used with problem\nreports. This protocol is simple enough that we document the\ndatatypes and validation rules for fields inline in the narrative;\nin more complex protocols, we'd move that text into the Reference\n> Messages section instead.\n
    "},{"location":"concepts/0003-protocols/tictactoe/#constraints","title":"Constraints","text":"

    Players do not have to trust one another. Messages do not have to be authcrypted, although anoncrypted messages still have to have a path back to the sender to be useful.

    About the Constraints section: Many protocols have rules\nor mechanisms that help parties build trust. For example, in buying\na house, the protocol includes such things as commission paid to\nrealtors to guarantee their incentives, title insurance, earnest\nmoney, and a phase of the process where a home inspection takes\nplace. If you are documenting a protocol that has attributes like\nthese, explain them here.\n
    "},{"location":"concepts/0003-protocols/tictactoe/#reference","title":"Reference","text":"
    About the Reference section: If the Tutorial > Messages section\nsuppresses details, we would add a Messages section here to\nexhaustively describe each field. We could also include an\nExamples section to show variations on the main flow.\n
    "},{"location":"concepts/0003-protocols/tictactoe/#collateral","title":"Collateral","text":"

    A reference implementation of the logic of a game is provided with this RFC as python 3.x code. See game.py. There is also a simple hand-coded AI that can play the game when plugged into an agent (see ai.py), and a set of unit tests that prove correctness (see test_tictactoe.py).

    A full implementation of the state machine is provided as well; see state_machine.py and test_state_machine.py.

    The game can be played interactively by running python game.py.

    "},{"location":"concepts/0003-protocols/tictactoe/#localization","title":"Localization","text":"

    The only localizable field in this message family is comment on both move and outcome messages. It contains ad hoc text supplied by the sender, instead of a value selected from an enumeration and identified by code for use with message catalogs. This means the only approach to localize move or outcome messages is to submit comment fields to an automated translation service. Because the locale of tictactoe messages is not predefined, each message must be decorated with ~l10n.locale to make automated translation possible.

    There is one other way that localization is relevant to this protocol: in error messages. Errors are communicated through the general problem-report message type rather than through a special message type that's part of the tictactoe family. However, we define a catalog of tictactoe-specific error codes below to make this protocol's specific error strings localizable.

    Thus, all instances of this message family carry localization metadata in the form of an implicit ~l10n decorator that looks like this:

    This JSON fragment is checked in next to the narrative content of this RFC as ~l10n.json, for easy machine parsing.

    Individual messages can use the ~l10n decorator to supplement or override these settings.

    For more information about localization concepts, see the RFC about localized messages.

    "},{"location":"concepts/0003-protocols/tictactoe/#message-catalog","title":"Message Catalog","text":"

    To facilitate localization of error messages, all instances of this message family assume the following catalog in their ~l10n data:

    When referencing this catalog, please be sure you have the correct version. The official, immutable URL to this version of the catalog file is:

    https://github.com/hyperledger/indy-hipe/blob/fc7a6028/text/tictactoe-protocol/catalog.json\n

    This JSON fragment is checked in next to the narrative content of this RFC as catalog.json, for easy machine parsing. The catalog currently contains localized alternatives only for English. Other language contributions would be welcome.

    For more information, see the Message Catalog section of the localization HIPE.

    "},{"location":"concepts/0003-protocols/tictactoe/#implementations","title":"Implementations","text":"

    The following lists the implementations (if any) of this RFC. Please do a pull request to add your implementation. If the implementation is open source, include a link to the repo or to the implementation within the repo. Please be consistent in the \"Name\" field so that a mechanical processing of the RFCs can generate a list of all RFCs supported by an Aries implementation.

    Name / Link Implementation Notes Verity Commercially licensed enterprise agent, SaaS or on-prem. Pico Labs Open source TicTacToe for Pico Agents"},{"location":"concepts/0004-agents/","title":"Aries RFC 0004: Agents","text":""},{"location":"concepts/0004-agents/#summary","title":"Summary","text":"

    Provide a high-level introduction to the concepts of agents in the self-sovereign identity ecosystem.

    "},{"location":"concepts/0004-agents/#tutorial","title":"Tutorial","text":"

    Managing an identity is complex. We need tools to help us.

    In the physical world, we often delegate complexity to trusted proxies that can help. We hire an accountant to do our taxes, a real estate agent to help us buy a house, and a talent agent to help us pitch an album to a recording studio.

    On the digital landscape, humans and organizations (and sometimes, things) cannot directly consume and emit bytes, store and manage data, or perform the crypto that self-sovereign identity demands. They need delegates--agents--to help. Agents are a vital dimension across which we exercise sovereignty over identity.

    "},{"location":"concepts/0004-agents/#essential-characteristics","title":"Essential Characteristics","text":"

    When we use the term \"agent\" in the SSI community, we more properly mean \"an agent of self-sovereign identity.\" This means something more specific than just a \"user agent\" or a \"software agent.\" Such an agent has three defining characteristics:

    1. It acts as a fiduciary on behalf of a single identity owner (or, for agents of things like IoT devices, pets, and similar things, a single controller).
    2. It holds cryptographic keys that uniquely embody its delegated authorization.
    3. It interacts using interoperable DIDComm protocols.

    These characteristics don't tie an agent to any particular blockchain. It is possible to implement agents without any use of blockchain at all (e.g., with peer DIDs), and some efforts to do so are quite active.

    "},{"location":"concepts/0004-agents/#canonical-examples","title":"Canonical Examples","text":"

    Three types of agents are especially common:

    1. A mobile app that Alice uses to manage credentials and to connect to others is an agent for Alice.
    2. A cloud-based service that Alice uses to expose a stable endpoint where other agents can talk to her is an agent for Alice.
    3. A server run by Faber College, allowing it to issue credentials to its students, is an agent for Faber.

    Depending on your perspective, you might describe these agents in various ways. #1 can correctly be called a \"mobile\" or \"edge\" or \"rich\" agent. #2 can be called a \"cloud\" or \"routing\" agent. #3 can be called an \"on-prem\" or \"edge\" or \"advanced\" agent. See Categorizing Agents for a discussion about why multiple labels are correct.

    Agents can be other things as well. They can be big or small, complex or simple. They can interact and be packaged in various ways. They can be written in a host of programming languages. Some are more canonical than others. But all the ones we intend to interact with in the self-sovereign identity problem domain share the three essential characteristics described above.

    "},{"location":"concepts/0004-agents/#how-agents-talk","title":"How Agents Talk","text":"

    DID communication (DIDComm) and the protocols built atop it are each rich subjects unto themselves. Here, we will stay very high-level.

    Agents can use many different communication transports: HTTP(S) 1.x and 2.0, WebSockets, IRC, Bluetooth, AMQP, NFC, Signal, email, push notifications to mobile devices, ZMQ, and more. However, all A2A communication is message-based, and is secured by modern, best-practice public key cryptography. How messages flow over a transport may vary--but their security and privacy toolset, their links to the DIDs and DID Docs of identity owners, and the ways their messages are packaged and handled are standard.

    Agents connect to one another through a standard connection protocol, discover one another's endpoints and keys through standard DID Docs, discover one another's features in a standard way, and maintain relationships in a standard way. All of these points of standardization are what makes them interoperable.

    Because agents speak so many different ways, and because many of them won't have a permanent, accessible point of presence on the network, they can't all be thought of as web servers with a Swagger-compatible API for request-response. The analog to an API construct in agent-land is protocols. These are patterns for stateful interactions. They specify things like, \"If you want to negotiate a sale with an agent, send it a message of type X. It will respond with a message of type Y or type Z, or with an error message of type W. Repeat until the negotiation finishes.\" Some interesting A2A protocols include the one where two parties connect to one another to build a relationship, the one where agents discover which protocols they each support, the one where credentials are issued, and the one where proof is requested and sent. Hundreds of other protocols are being defined.

    "},{"location":"concepts/0004-agents/#how-to-get-an-agent","title":"How to Get an Agent","text":"

    As the ecosystem for self-sovereign identity matures, the average person or organization will get an agent by downloading it from the app store, installing it with their OS package manager, or subscribing to it as a service. However, the availability of quality pre-packaged agents is still limited today.

    Agent providers are emerging in the marketplace, though. Some are governments, NGOs, or educational institutions that offer agents for free; others are for-profit ventures. If you'd like suggestions about ready-to-use agent offerings, please describe your use case in #aries on chat.hyperledger.org.

    There is also intense activity in the SSI community around building custom agents and the tools and processes that enable them. A significant amount of early work occurred in the Indy Agent Community with some of those efforts materializing in the indy-agent repo on github.com and other code bases. The indy-agent repo is now deprecated but is still valuable in demonstrating the basics of agents. With the introduction of Hyperledger Aries, agent efforts are migrating from the Indy Agent community.

    Hyperledger Aries provides a number of code bases ranging from agent frameworks to tools to aid in development to ready-to-use agents.

    "},{"location":"concepts/0004-agents/#how-to-write-an-agent","title":"How to Write an Agent","text":"

    This is one of the most common questions that Aries newcomers ask. It's a challenging one to answer, because it's so open-ended. It's sort of like someone asking, \"Can you give me a recipe for dinner?\" The obvious follow-up question would be, \"What type of dinner did you have in mind?\"

    Here are some thought questions to clarify intent:

    "},{"location":"concepts/0004-agents/#general-patterns","title":"General Patterns","text":"

    We said it's hard to provide a recipe for an agent without specifics. However, the majority of agents do have two things in common: they listen to and process A2A messages, and they use a wallet to manage keys, credentials, and other sensitive material. Unless you have use cases that involve IoT, cron jobs, or web hooks, your agent is likely to fit this mold.

    The heart of such an agent is probably a message-handling loop, with pluggable protocols to give it new capabilities, and pluggable transports to let it talk in different ways. The pseudocode for its main function might look like this:

    "},{"location":"concepts/0004-agents/#pseudocode-for-main","title":"Pseudocode for main()","text":"
    1  While not done:\n2      Get next message.\n3      Verify it (decrypt, identify sender, check signature...).\n4      Look at the type of the plaintext message.\n5      Find a plugged in protocol handler that matches that type.\n6      Give plaintext message and security metadata to handler.\n

    Line 2 can be done via standard HTTP dispatch, or by checking an email inbox, or in many other ways. Line 3 can be quite sophisticated--the sender will not be Alice, but rather one of the agents that she has authorized. Verification may involve consulting cached information and/or a blockchain where a DID and DID Doc are stored, among other things.
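    The main loop above might be sketched in Python as follows. All names here (get_message, unpack, handlers) are illustrative stand-ins, not a real Aries API; a real unpack step would do the decryption and signature checks described for line 3.

```python
def main_loop(get_message, unpack, handlers):
    """Dispatch inbound messages to pluggable protocol handlers until the
    transport yields no more messages (a stand-in for 'while not done')."""
    while (raw := get_message()) is not None:
        msg, metadata = unpack(raw)           # decrypt, identify sender, verify
        handler = handlers.get(msg["@type"])  # match handler by message type
        if handler is None:
            continue  # a real agent would send a problem-report instead
        handler(msg, metadata)                # plaintext + security metadata
```

The `handlers` dict is what makes protocols pluggable: registering a new message type adds a capability without touching the loop.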

    The pseudocode for each protocol handler it loads might look like:

    "},{"location":"concepts/0004-agents/#pseudocode-for-protocol-handler","title":"Pseudocode for protocol handler","text":"
    1  Check authorization against metadata. Reject if needed.\n2  Read message header. Is it part of an ongoing interaction?\n3  If yes, load persisted state.\n4  Process the message and update interaction state.\n5  If a response is appropriate:\n6      Prepare response content.\n7      Ask my outbound comm module to package and send it.\n

    Line 4 is the workhorse. For example, if the interaction is about issuing credentials and this agent is doing the issuance, this would be where it looks up the material for the credential in internal databases, formats it appropriately, and records the fact that the credential has now been built. Line 6 might be where that credential is attached to an outgoing message for transmission to the recipient.
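A handler following that shape might look like this in Python. The authorization check, state store, and outbound callback are illustrative stand-ins, not part of any Aries API; the `~thread` decorator is used here only to show how an ongoing interaction might be keyed.

```python
def handle(plaintext, metadata, authorized, state_store, outbound):
    """Illustrative protocol handler: authorize, load state, process, respond."""
    if not authorized(metadata):                              # check authorization
        return "rejected"
    thread_id = plaintext.get("~thread", {}).get("thid")      # part of an ongoing interaction?
    state = state_store.get(thread_id, {}) if thread_id else {}   # load persisted state
    state["last_seen"] = plaintext["@type"]                   # process and update state
    if thread_id:
        state_store[thread_id] = state
    response = {"@type": "ack", "~thread": {"thid": thread_id}}   # prepare response content
    outbound(response)                                        # ask outbound module to send it
    return "handled"
```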

    The pseudocode for the outbound communication module might be:

    "},{"location":"concepts/0004-agents/#pseudocode-for-outbound","title":"Pseudocode for outbound","text":"
    1  Iterate through all pluggable transports to find best one to use\n     with the intended recipient.\n2  Figure out how to route the message over the selected transport.\n3  Serialize the message content and encrypt it appropriately.\n4  Send the message.\n

    Line 2 can be complex. It involves looking up one or more endpoints in the DID Doc of the recipient, and finding an intersection between the transports they use and the transports the sender can speak. Line 3 requires the keys of the sender, which would normally be held in a wallet.
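The transport-selection step can be sketched as a simple intersection search. The DID Doc shape below is a simplified illustration; real DID resolution and routing are considerably more involved.

```python
def pick_endpoint(recipient_did_doc, supported_transports):
    """Return the first recipient endpoint whose transport the sender can speak."""
    for service in recipient_did_doc.get("service", []):
        endpoint = service["serviceEndpoint"]
        scheme = endpoint.split(":", 1)[0]       # e.g. "https", "ws", "smtp"
        if scheme in supported_transports:
            return endpoint
    return None                                  # no transport in common
```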

    If you are building this sort of code using Aries technology, you will certainly want to use the Aries Agent SDK. This gives you a ready-made, highly secure wallet that can be adapted to many requirements. It also provides easy functions to serialize and encrypt. Many of the operations you need to do are demonstrated in the SDK's /doc/how-tos folder, or in its Getting Started Guide.

    "},{"location":"concepts/0004-agents/#how-to-learn-more","title":"How to Learn More","text":""},{"location":"concepts/0004-agents/#reference","title":"Reference","text":""},{"location":"concepts/0004-agents/#categorizing-agents","title":"Categorizing Agents","text":"

    Agents can be categorized in various ways, and these categories lead to terms you're likely to encounter in RFCs and other documentation. Understanding the categories will help the definitions make sense.

    "},{"location":"concepts/0004-agents/#by-trust","title":"By Trust","text":"

    A trustable agent runs in an environment that's under the direct control of its owner; the owner can trust it without incurring much risk. A semi-trustable agent runs in an environment where others besides the owner may have access, so giving it crucial secrets is less advisable. (An untrustable delegate should never be an agent, by definition, so we don't use that term.)

    Note that these distinctions highlight what is advisable, not how much trust the owner actually extends.

    "},{"location":"concepts/0004-agents/#by-location","title":"By Location","text":"

    Two related but deprecated terms are edge agent and cloud agent. You will probably hear these terms in the community or read them in docs. The problem with them is that they suggest location, but were formally defined to imply levels of trust. When they were chosen, location and levels of trust were seen as going together--you trust your edge more, and your cloud less. We've since realized that a trustable agent could exist in the cloud, if it is directly controlled by the owner, and a semi-trustable agent could be on-prem, if the owner's control is indirect. Thus we are trying to correct usage and make \"edge\" and \"cloud\" about location instead.

    "},{"location":"concepts/0004-agents/#by-platform","title":"By Platform","text":""},{"location":"concepts/0004-agents/#by-complexity","title":"By Complexity","text":"

    We can arrange agents on a continuum, from simple to complex. The simplest agents are static--they are preconfigured for a single relationship. Thin agents are somewhat fancier. Thick agents are fancier still, and rich agents exhibit the most sophistication and flexibility:

    A nice visualization of several dimensions of agent category has been built by Michael Herman:

    "},{"location":"concepts/0004-agents/#the-agent-ness-continuum","title":"The Agent-ness Continuum","text":"

    The tutorial above gives three essential characteristics of agents, and lists some canonical examples. This may make it feel like agent-ness is pretty binary. However, we've learned that reality is more fuzzy.

    Having a tight definition of an agent may not matter in all cases. However, it is important when we are trying to understand interoperability goals. We want agents to be able to interact with one another. Does that mean they must interact with every piece of software that is even marginally agent-like? Probably not.

    Some attributes that are not technically necessary in agents include:

    Agents that lack these characteristics can still be fully interoperable.

    Some interesting examples of less prototypical agents or agent-like things include:

    "},{"location":"concepts/0004-agents/#dif-hubs","title":"DIF Hubs","text":"

    A DIF Identity Hub is a construct that resembles agents in some ways, but that focuses on the data-sharing aspects of identity. Currently DIF Hubs do not use the protocols known to the Aries community, and vice versa. However, there are efforts to bridge that gap.

    "},{"location":"concepts/0004-agents/#identity-wallets","title":"Identity Wallets","text":"

    \"Identity wallet\" is a term that's carefully defined in our ecosystem, and in strict, technical usage it maps to a concept much closer to \"database\" than \"agent\". This is because it is an inert storage container, not an active interacter. However, in casual usage, it may mean the software that uses a wallet to do identity work--in which case it is definitely an agent.

    "},{"location":"concepts/0004-agents/#crypto-wallets","title":"Crypto Wallets","text":"

    Cryptocurrency wallets are quite agent-like in that they hold keys and represent a user. However, they diverge from the agent definition in that they talk proprietary protocols to blockchains, rather than A2A to other agents.

    "},{"location":"concepts/0004-agents/#uport","title":"uPort","text":"

    The uPort app is an edge agent. Here, too, there are efforts to bridge a protocol gap.

    "},{"location":"concepts/0004-agents/#learning-machine","title":"Learning Machine","text":"

    The credential issuance technology offered by Learning Machine, and the app used to share those credentials, are agents of institutions and individuals, respectively. Again, there is a protocol gap to bridge.

    "},{"location":"concepts/0004-agents/#cron-jobs","title":"Cron Jobs","text":"

    A cron job that runs once a night at Faber, scanning a database and revoking credentials that have changed status during the day, is an agent for Faber. This is true even though it doesn't listen for incoming messages (it only talks the revocation protocol to the ledger). In order to talk that protocol, it must hold keys delegated by Faber, and it is surely Faber's fiduciary.

    "},{"location":"concepts/0004-agents/#operating-systems","title":"Operating Systems","text":"

    The operating system on a laptop could be described as agent-like, in that it works for a single owner and may have a keystore. However, it doesn't talk A2A to other agents--at least not yet. (OSes that service multiple users fit the definition less.)

    "},{"location":"concepts/0004-agents/#devices","title":"Devices","text":"

    A device can be thought of as an agent (e.g., Alice's phone as an edge agent). However, strictly speaking, one device might run multiple agents, so this is only casually correct.

    "},{"location":"concepts/0004-agents/#sovrin-mainnet","title":"Sovrin MainNet","text":"

    The Sovrin MainNet can be thought of as an agent for the Sovrin community (but NOT the Sovrin Foundation, which codifies the rules but leaves operation of the network to its stewards). Certainly, the blockchain holds keys, uses A2A protocols, and acts in a fiduciary capacity toward the community to further its interests. The only challenge with this perspective is that the Sovrin community has a very fuzzy identity.

    "},{"location":"concepts/0004-agents/#validators","title":"Validators","text":"

    Validator nodes on a particular blockchain are agents of the stewards that operate them.

    "},{"location":"concepts/0004-agents/#digital-assistants","title":"Digital Assistants","text":"

    Digital assistants like Alexa and Google Home are somewhat agent-like. However, the Alexa in the home of the Jones family is probably not an agent for either the Jones family or Amazon. It accepts delegated work from anybody who talks to it (instead of a single controlling identity), and all current implementations are totally antithetical to the ethos of privacy and security required by self-sovereign identity. Although it interfaces with Amazon to download data and features, it isn't Amazon's fiduciary, either. It doesn't hold keys that allow it to represent its owner. The protocols it uses are not interactions with other agents, but with non-agent entities. Perhaps agents and digital assistants will converge in the future.

    "},{"location":"concepts/0004-agents/#doorbell","title":"Doorbell","text":"

    A doorbell that emits a simple signal each time it is pressed is not an agent. It doesn't represent a fiduciary or hold keys. (However, a fancy IoT doorbell that reports to Alice's mobile agent using an A2A protocol would be an agent.)

    "},{"location":"concepts/0004-agents/#microservices","title":"Microservices","text":"

    A microservice run by AcmeCorp to integrate with its vendors is not an agent for Acme's vendors. Depending on whether it holds keys and uses A2A protocols, it may or may not be an agent for Acme.

    "},{"location":"concepts/0004-agents/#human-delegates","title":"Human Delegates","text":"

    A human delegate who proves empowerment through keys might be thought of as an agent.

    "},{"location":"concepts/0004-agents/#paper","title":"Paper","text":"

    The keys for an agent can be stored on paper. This storage basically constitutes a wallet. It isn't an agent. However, it can be thought of as playing the role of an agent in some cases when designing backup and recovery solutions.

    "},{"location":"concepts/0004-agents/#prior-art","title":"Prior art","text":""},{"location":"concepts/0004-agents/#implementations","title":"Implementations","text":"

    The following lists the implementations (if any) of this RFC. Please do a pull request to add your implementation. If the implementation is open source, include a link to the repo or to the implementation within the repo. Please be consistent in the \"Name\" field so that a mechanical processing of the RFCs can generate a list of all RFCs supported by an Aries implementation.

    Name / Link Implementation Notes Indy Cloud Agent - Python Reference agent implementation contributed by Sovrin Foundation and Community Aries Framework - .NET .NET framework for building agents of all types Streetcred.id Commercial mobile and web app built using Aries Framework for .NET Aries Cloud Agent - Python Contributed by the government of British Columbia. Aries Static Agent - Python Useful for cron jobs and other simple, automated use cases. Aries Framework - Go For building agents, hubs and other DIDComm features in GoLang. Connect.Me Free mobile app from Evernym. Installed via app store on iOS and Android. Verity Commercially licensed enterprise agent, SaaS or on-prem. Aries Protocol Test Suite Pico Labs Pico Agents protocols: connections, trust_ping, basicmessage, routing Rust Agent Rust implementation of a framework for building agents of all types"},{"location":"concepts/0005-didcomm/","title":"Aries RFC 0005: DID Communication","text":""},{"location":"concepts/0005-didcomm/#summary","title":"Summary","text":"

    Explain the basics of DID communication (DIDComm) at a high level, and link to other RFCs to promote deeper exploration.

    NOTE: The version of DIDComm collectively defined in Aries RFCs is known by the label \"DIDComm V1.\" A newer version of DIDComm (\"DIDComm V2\") is now being incubated at DIF. Many concepts are the same between the two versions, but there are some differences in the details. For information about detecting V1 versus V2, see Detecting DIDComm Versions.

    "},{"location":"concepts/0005-didcomm/#motivation","title":"Motivation","text":"

    DID communication between agents and agent-like things is a rich subject with a lot of tribal knowledge. Newcomers to the decentralized identity ecosystem tend to bring mental models that are subtly divergent from its paradigm. When they encounter dissonance, DIDComm becomes mysterious. We need a standard high-level reference.

    "},{"location":"concepts/0005-didcomm/#tutorial","title":"Tutorial","text":"

    This discussion assumes that you have a reasonable grasp on topics like self-sovereign identity, DIDs and DID docs, and agents. If you find yourself lost, please review that material for background and starting assumptions.

    Agent-like things have to interact with one another to get work done. How they talk in general is DIDComm, the subject of this RFC. The specific interactions enabled by DIDComm--connecting and maintaining relationships, issuing credentials, providing proof, etc.--are called protocols; they are described elsewhere.

    "},{"location":"concepts/0005-didcomm/#rough-overview","title":"Rough Overview","text":"

    A typical DIDComm interaction works like this:

    Imagine Alice wants to negotiate with Bob to sell something online, and that DIDComm, not direct human communication, is involved. This means Alice's agent and Bob's agent are going to exchange a series of messages. Alice may just press a button and be unaware of details, but underneath, her agent begins by preparing a plaintext JSON message about the proposed sale. (The particulars are irrelevant here, but would be described in the spec for a \"sell something\" protocol.) It then looks up Bob's DID Doc to access two key pieces of information: * An endpoint (web, email, etc) where messages can be delivered to Bob. * The public key that Bob's agent is using in the Alice:Bob relationship. Now Alice's agent uses Bob's public key to encrypt the plaintext so that only Bob's agent can read it, adding authentication with its own private key. The agent arranges delivery to Bob. This \"arranging\" can involve various hops and intermediaries. It can be complex. Bob's agent eventually receives and decrypts the message, authenticating its origin as Alice using her public key. It prepares its response and routes it back using a reciprocal process (plaintext -> lookup endpoint and public key for Alice -> encrypt with authentication -> arrange delivery).
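That flow can be sketched end-to-end in Python. The `encrypt`/`decrypt` helpers below are trivial stand-ins so the example stays dependency-free (real DIDComm uses authenticated public-key encryption), and the DID Doc and envelope shapes are simplified illustrations, not a normative format.

```python
import json

def encrypt(plaintext, recipient_key, sender_key):
    # Stand-in for "authcrypt": tag the payload with both keys so the
    # recipient can check origin. NOT real cryptography.
    return json.dumps({"to": recipient_key, "from": sender_key, "body": plaintext})

def decrypt(envelope, my_key, known_senders):
    wrapped = json.loads(envelope)
    assert wrapped["to"] == my_key            # only the recipient can "read" it
    assert wrapped["from"] in known_senders   # authenticate the sender's origin
    return wrapped["body"]

def send(sender, recipient_did_doc, message, deliver):
    endpoint = recipient_did_doc["serviceEndpoint"]   # look up an endpoint
    recipient_key = recipient_did_doc["verkey"]       # look up the public key
    envelope = encrypt(message, recipient_key, sender["verkey"])
    deliver(endpoint, envelope)                       # arrange delivery (may be multi-hop)
```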

    That's it.

    Well, mostly. The description is pretty good, if you squint, but it does not fit all DIDComm interactions:

    Before we provide more details, let's explore what drives the design of DIDComm.

    "},{"location":"concepts/0005-didcomm/#goals-and-ramifications","title":"Goals and Ramifications","text":"

    The DIDComm design attempts to be:

    1. Secure
    2. Private
    3. Interoperable
    4. Transport-agnostic
    5. Extensible

    As a list of buzz words, this may elicit nods rather than surprise. However, several items have deep ramifications.

    Taken together, Secure and Private require that the protocol be decentralized and maximally opaque to the surveillance economy.

    Interoperable means that DIDComm should work across programming languages, blockchains, vendors, OS/platforms, networks, legal jurisdictions, geos, cryptographies, and hardware--as well as across time. That's quite a list. It means that DIDComm intends something more than just compatibility within Aries; it aims to be a future-proof lingua franca of all self-sovereign interactions.

    Transport-agnostic means that it should be possible to use DIDComm over HTTP(S) 1.x and 2.0, WebSockets, IRC, Bluetooth, AMQP, NFC, Signal, email, push notifications to mobile devices, Ham radio, multicast, snail mail, carrier pigeon, and more.

    All software design involves tradeoffs. These goals, prioritized as shown, lead down an interesting path.

    "},{"location":"concepts/0005-didcomm/#message-based-asynchronous-and-simplex","title":"Message-Based, Asynchronous, and Simplex","text":"

    The dominant paradigm in mobile and web development today is duplex request-response. You call an API with certain inputs, and you get back a response with certain outputs over the same channel, shortly thereafter. This is the world of OpenAPI (Swagger), and it has many virtues.

    Unfortunately, many agents are not good analogs to web servers. They may be mobile devices that turn off at unpredictable intervals and that lack a stable connection to the network. They may need to work peer-to-peer, when the internet is not available. They may need to interact in time frames of hours or days, not with 30-second timeouts. They may not listen over the same channel that they use to talk.

    Because of this, the fundamental paradigm for DIDComm is message-based, asynchronous, and simplex. Agent X sends a message over channel A. Sometime later, it may receive a response from Agent Y over channel B. This is much closer to an email paradigm than a web paradigm.

    On top of this foundation, it is possible to build elegant, synchronous request-response interactions. All of us have interacted with a friend who's emailing or texting us in near-realtime. However, interoperability begins with a least-common-denominator assumption that's simpler.

    "},{"location":"concepts/0005-didcomm/#message-level-security-reciprocal-authentication","title":"Message-Level Security, Reciprocal Authentication","text":"

    The security and privacy goals, and the asynchronous+simplex design decision, break familiar web assumptions in another way. Servers are commonly run by institutions, and we authenticate them with certificates. People and things are usually authenticated to servers by some sort of login process quite different from certificates, and this authentication is cached in a session object that expires. Furthermore, web security is provided at the transport level (TLS); it is not an independent attribute of the messages themselves.

    In a partially disconnected world where a comm channel is not assumed to support duplex request-response, and where the security can't be ignored as a transport problem, traditional TLS, login, and expiring sessions are impractical. Furthermore, centralized servers and certificate authorities perpetuate a power and UX imbalance between servers and clients that doesn't fit with the peer-oriented DIDComm.

    DIDComm uses public key cryptography, not certificates from some parties and passwords from others. Its security guarantees are independent of the transport over which it flows. It is sessionless (though sessions can easily be built atop it). When authentication is required, all parties do it the same way.

    "},{"location":"concepts/0005-didcomm/#reference","title":"Reference","text":"

    The following RFCs provide additional information: * 0021: DIDComm Message Anatomy * 0020: Message Types * 0011: Decorators * 0008: Message ID and Threading * 0019: Encryption Envelope * 0025: Agent Transports

    "},{"location":"concepts/0005-didcomm/#implementations","title":"Implementations","text":"

    The following lists the implementations (if any) of this RFC. Please do a pull request to add your implementation. If the implementation is open source, include a link to the repo or to the implementation within the repo. Please be consistent in the \"Name\" field so that a mechanical processing of the RFCs can generate a list of all RFCs supported by an Aries implementation.

    Name / Link Implementation Notes Indy Cloud Agent - Python Reference agent implementation contributed by Sovrin Foundation and Community Aries Framework - .NET .NET framework for building agents of all types Streetcred.id Commercial mobile and web app built using Aries Framework - .NET Aries Cloud Agent - Python Contributed by the government of British Columbia. Aries Static Agent - Python Useful for cron jobs and other simple, automated use cases. Aries Framework - Go For building agents, hubs and other DIDComm features in GoLang. Connect.Me Free mobile app from Evernym. Installed via app store on iOS and Android. Verity Commercially licensed enterprise agent, SaaS or on-prem. Aries Protocol Test Suite Pico Labs Pico Agents protocols: connections, trust_ping, basicmessage, routing"},{"location":"concepts/0006-ssi-notation/","title":"Aries RFC 0006: SSI Notation","text":""},{"location":"concepts/0006-ssi-notation/#summary","title":"Summary","text":"

    This RFC describes a simple, standard notation for various concepts related to decentralized and self-sovereign identity (SSI).

    The notation could be used in design docs, other RFCs, source code comments, chat channels, scripts, debug logs, and miscellaneous technical materials throughout the Aries ecosystem. We hope it is also used in the larger SSI community.

    This RFC is complementary to official terms like the ones curated in the TOIP Concepts and Terminology Working group, the Sovrin Glossary, and so forth.

    "},{"location":"concepts/0006-ssi-notation/#motivation","title":"Motivation","text":"

    All technical materials in our ecosystem hinge on fundamental concepts of self-sovereign identity such as controllers, keys, DIDs, and agents. We need a standard, documented notation to refer to such things, so we can use it consistently, and so we can link to the notation's spec for definitive usage.

    "},{"location":"concepts/0006-ssi-notation/#tutorial","title":"Tutorial","text":"

    The following explanation is meant to be read sequentially and should provide a friendly overview for most who encounter the RFC. See the Reference section for quick lookup.

    "},{"location":"concepts/0006-ssi-notation/#requirements","title":"Requirements","text":"

    This notation aims to be:

    The final requirement deserves special comment. Cryptologists are a major stakeholder in SSI theory. They already have many notational conventions, some more standardized than others. Generally, their notation derives from advanced math and uses specialized symbols and fonts. These experts also tend to intersect strongly with academic circles, where LaTeX and similar rendering technologies are common.

    Despite the intersection between SSI, cryptology, and academia, SSI has to serve a broader audience. Its practitioners are not just mathematicians; they may include support and IT staff, lawyers specializing in intellectual property, business people, auditors, regulators, and individuals sometimes called \"end users.\" In particular, SSI ecosystems are built and maintained by coders. Coders regularly write docs in markdown and html. They interact with one another on chat. They write emails and source code where comments might need to be embedded. They create UML diagrams. They type in shells. They paste code into slide decks and word processors. All of these behaviors militate against a notation that requires complex markup.

    Instead, we want something simple, clean, and universally supported. Hence the 7-bit ASCII requirement. A future version of this RFC, or an addendum to it, might explain how to map this 7-bit ASCII notation to various schemes that use mathematical symbols and are familiar to experts from other fields.

    "},{"location":"concepts/0006-ssi-notation/#solution","title":"Solution","text":""},{"location":"concepts/0006-ssi-notation/#controllers-and-subjects","title":"Controllers and Subjects","text":"

    An identified thing (the referent of an identifier) is called an identity subject. Identity subjects can include:

    The latter category may also act as an identity controller -- something that projects its intent with respect to identity onto the digital landscape.

    When an identity controller controls its own identity, we say that it has self sovereignty -- and we call it a self. (The term identity owner was originally used for an identity controller managing itself, but this hid some of the nuance and introduced legal concepts of ownership that are problematic, so we'll avoid it here.)

    In our notation, selves (or identity controllers) are denoted with a single upper-case ASCII alpha, often corresponding to a first initial of their human-friendly name. For example, Alice might be represented as A. By preference, the first half of the alphabet is used (because \"x\", \"y\", and \"z\" tend to have other ad-hoc meanings). When reading aloud, the spoken form of a symbol like this is the name of the letter. The relevant ABNF fragment is:

    ```ABNF ucase-alpha = %x41-5A ; A-Z lcase-alpha = %x61-7A ; a-z digit = %x30-39 ; 0-9

    self = ucase-alpha ```

    Identity subjects that are not self-controlled are referenced in our notation using a single lower-case ASCII alpha. For example, a movie might be m. For clarity in scenarios where multiple subjects are referenced, it is best to choose letters that differ in something other than case.

      controlled = lcase-alpha\n\n  subject = self / controlled\n

    The set of devices, keys, endpoints, data, and other resources controlled by or for a given subject is called the subject's identity domain (or just domain for short). When the controller is a self, the domain is self-sovereign; otherwise, the domain is controlled. Either way, the domain of an identity subject is like its private universe, so the name or symbol of a subject is often used to denote its domain as well; context eliminates ambiguity. You will see examples of this below.

    "},{"location":"concepts/0006-ssi-notation/#association","title":"Association","text":"

    Elements associated with a domain are named in a way that makes their association clear, using a name@context pattern familiar from email addresses: 1@A (\u201cone at A\u201d) is agent 1 in A\u2019s sovereign domain. (Note how we use an erstwhile identity owner symbol, A, to reference a domain here, but there is no ambiguity.) This fully qualified form of a subject reference is useful for clarification but is often not necessary.

    In addition to domains, this same associating notation may be used where a relationship is the context, because sometimes the association is to the relationship rather than to a participant. See the DID example in the next section.

    "},{"location":"concepts/0006-ssi-notation/#agents","title":"Agents","text":"

    Agents are not subjects. They neither control nor own a domain; rather, they live and act within it. They take instructions from the domain's controller. Agents (and hubs, and other things like them) are the first example of elements associated with an identity subject. Despite this, agent-ish things are the primary focus of interactions within SSI ecosystems.

    Additionally, agents are distinct from devices, even though we often (and inaccurately) use the terms interchangeably. We may say things like \"Alice's iPhone sends a message\" when we more precisely mean \"the agent on Alice's iPhone sends a message.\" In reality, there may be zero, one, or more than one agent running on a particular device.

    Agents are numbered and are represented by up to three digits and then with an association. In most discussions, one digit is plenty, but three digits are allowed so agents can be conveniently grouped by prefix (e.g., all edge agents in Alice's domain might begin with 1, and all cloud agents might begin with 2).

    agent = 1*3digit \"@\" subject\n
    "},{"location":"concepts/0006-ssi-notation/#devices","title":"Devices","text":"

    Devices are another element inside a subject's domain. They are represented with two or more lower-case ASCII alphanumerics or underscore characters, where the first char cannot be a digit. They end with an association: bobs_car@B, drone4@F, alices_iphone9@A.

    name-start-char = lcase-alpha / \"_\"            ; a-z or underscore\nname-other-char = digit / lcase-alpha / \"_\"    ; 0-9 or a-z or underscore\ndevice = name-start-char 1*name-other-char \"@\" subject\n
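The agent and device fragments above translate directly into regular expressions. The sketch below is illustrative only; the RFC's ABNF is the normative definition.

```python
import re

# agent = 1*3digit "@" subject          (subject = single upper- or lower-case alpha)
AGENT = re.compile(r"^\d{1,3}@[A-Za-z]$")

# device = name-start-char 1*name-other-char "@" subject
# (first char a-z or underscore, then at least one of 0-9, a-z, or underscore)
DEVICE = re.compile(r"^[a-z_][0-9a-z_]+@[A-Za-z]$")
```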
    "},{"location":"concepts/0006-ssi-notation/#cross-domain-relationships","title":"Cross-Domain Relationships","text":""},{"location":"concepts/0006-ssi-notation/#short-form-more-common","title":"Short Form (more common)","text":"

    Alice\u2019s pairwise relationship with Bob is represented with colon notation: A:B. This is read aloud as \u201cA to B\u201d (preferred because it\u2019s short; alternatives such as \u201cthe A B relationship\u201d or \u201cA colon B\u201d or \u201cA with respect to B\u201d are also valid). When written in the other order, it represents the same relationship as seen from Bob\u2019s point of view. Note that passive subjects may also participate in relationships: A:bobs_car. (Contrast Intra-Domain Relationships below.)

    N-wise relationships (e.g., doctor, hospital, patient) are written with the perspective-governing subject's identifier, a single colon, then by all other identifiers for members of the relationship, in alphabetical order, separated by +: A:B+C, B:A+C. This is read aloud as in \"A to B plus C.\"

    next-subject = \"+\" subject\nshort-relationship = subject \":\" subject *next-subject\n
    "},{"location":"concepts/0006-ssi-notation/#long-form","title":"Long Form","text":"

    Short form is convenient and brief, but it is inconsistent because each party to the relationship describes it differently. Sometimes this may be undesirable, so a long and consistent form is also supported. The long form of both pairwise and N-way relationships lists all participants to the right of the colon, in alphabetical order. Thus the long forms of the Alice to Bob relationship might be A:A+B (for Alice's view of this relationship) and B:A+B (for Bob's view). For a doctor, hospital, patient relationship, we might have D:D+H+P, H:D+H+P, and P:D+H+P. Note how the enumeration of parties to the right of the colon is consistent.

    Long form and short form are allowed to vary freely; any tool that parses this notation should treat them as synonyms and stylistic choices only.

    The ABNF for long form is identical to short form, except that we are guaranteed that after the colon, we will see at least two parties and one + character:

    long-relationship = subject \":\" subject 1*next-subject\n
    "},{"location":"concepts/0006-ssi-notation/#generalized-relationships","title":"Generalized Relationships","text":""},{"location":"concepts/0006-ssi-notation/#contexts","title":"Contexts","text":"

    Some models for SSI emphasize the concept of personas or contexts. These are essentially \"masks\" that an identity controller enables, exposing a limited subset of the subject's identity to an audience that shares that context. For example, Alice might assume one persona in her employment relationships, another for government interactions, another for friends, and another when she's a customer.

    Contexts or personas can be modeled as a relationship with a generalized audience: A:Work, A:Friends.

    general-audience = ucase-alpha 1*name-other-char\ngeneral-relationship = subject \":\" general-audience\nrelationship = short-relationship / long-relationship / general-relationship\n
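The relationship grammar can likewise be checked with simple regular expressions, matching the ABNF fragments above. Illustrative only; note that the short-form pattern also accepts long form, since long form is just short form with at least one `+` participant.

```python
import re

SUBJECT = r"[A-Za-z]"  # self (upper-case) or controlled (lower-case) single alpha

# short-relationship = subject ":" subject *("+" subject)
SHORT_REL = re.compile(rf"^{SUBJECT}:{SUBJECT}(\+{SUBJECT})*$")

# general-relationship = subject ":" general-audience
# (general-audience = one upper-case alpha, then digits, lower-case, or underscores)
GENERAL_REL = re.compile(rf"^{SUBJECT}:[A-Z][0-9a-z_]+$")
```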
    "},{"location":"concepts/0006-ssi-notation/#any","title":"Any","text":"

    The concept of public DIDs suggests that someone may think about a relationship as unbounded, or as not varying no matter who the other subject is. For example, a company may create a public DID and advertise it to the world, intending for this connection point to begin relationships with customers, partners, and vendors alike. While best practice suggests that such relationships be used with care, and that they primarily serve to bootstrap pairwise relationships, the notation still needs to represent the possibility.

    The token Any is reserved for these semantics. If Acme Corp is represented as A, then Acme's public persona could be denoted with A:Any. When Any is used, it is never the subject whose perspective is captured; it is always a faceless \"other\". This means that Any appears only on the right side of a colon in a relationship, and it probably doesn't make sense to combine it with other participants since it would subsume them all.

    "},{"location":"concepts/0006-ssi-notation/#self","title":"Self","text":"

    It is sometimes useful to model a relationship with oneself. This is done with the reserved token Self.

    "},{"location":"concepts/0006-ssi-notation/#intra-domain-relationships","title":"Intra-Domain Relationships","text":"

    Within a domain, relationships among agents or devices are sometimes interesting. Such relationships use the ~ (tilde) character. Thus, the intra-domain relationship between Alice's agent 1 and agent 2 is written 1~2 and read as \"one tilde two\".

    "},{"location":"concepts/0006-ssi-notation/#constituents","title":"Constituents","text":"

    Items that belong to a domain rather than having independent identity of their own (for example, data, money, keys) use dot notation for containment or ownership: A.ls (A\u2019s link secret), A.policy, etc.

    Names for constituents use the same rules as names for agents and devices.

    Alice\u2019s DID for her relationship with Bob is an inert constituent datum, but it is properly associated with the relationship rather than just Alice. It is thus represented with A.did@A:B. (The token did is reserved for DIDs). This is read as \u201cA\u2019s DID at A to B\u201d. Bob\u2019s complementary DID would be B.did@B:A.

    inert = name-start-char 1*name-other-char\nnested = \".\" inert\nowned-inert = subject 1*nested\n\nassociated-to = identity-owner / relationship\nassociated = subject 0*nested \"@\" associated-to\n

    If A has a cloud agent 2, then the public key (verification key or verkey) and private, secret key (signing key or sigkey) used by 2 in A:B would be: 2.pk@A:B and 2.sk@A:B. This is read as \u201c2 dot P K at A to B\u201d and \u201c2 dot S K at A to B\u201d. Here, 2 is known to belong to A because it takes A\u2019s perspective on A:B--it would be equivalent but unnecessary to write A.2.pk@A:B.

    "},{"location":"concepts/0006-ssi-notation/#did-docs-and-did-references","title":"DID Docs and DID References","text":"

    The mention of keys belonging to agents naturally raises the question of DID docs and the things they contain. How do they relate to our notation?

    DIDs are designed to be URIs, and items that carry an id property within a DID Doc can be referenced with standard URI fragment notation. This allows someone, for example, to refer to the first public key used by one of the agents owned by Alice with a notation like: did:sov:VUrvFeWW2cPv9hkNZ2ms2a;#key1.

    This notation is important and useful, but it is somewhat orthogonal to the concerns of this RFC. In the context of SSI notation, we are not DID-centric; we are subject-centric, and subjects are identified by a single capital alpha instead of by their DID. This helps with brevity. It lets us ignore the specific DID value and instead focus on the higher level semantics; compare:

    {A.did@A:B}/B --> B

    ...to:

    did:sov:PXqKt8sVsDu9T7BpeNqBfe sends its DID for did:sov:6tb15mkMRagD7YA3SBZg3p to did:sov:6tb15mkMRagD7YA3SBZg3p, using the agent possessing did:sov:PXqKt8sVsDu9T7BpeNqBfe;#key1 to encrypt with the corresponding signing key.

    We expect DID reference notation (the verbose text above) to be relevant for concrete communication between computers, and SSI notation (the terse equivalent shown first) to be more convenient for symbolic, higher level discussions between human beings. Occasionally, we may get very specific and map SSI notation into DID notation (e.g., A.1.vk = did:sov:PXqKt8sVsDu9T7BpeNqBfe;#key1).

    "},{"location":"concepts/0006-ssi-notation/#counting-and-iteration","title":"Counting and Iteration","text":"

    Sometimes, a concept or value evolves over time. For example, a given discussion might need to describe a DID Doc or an endpoint or a key across multiple state changes. In mathematical notation, this would typically be modeled with subscripts. In our notation, we use square brackets, and we number beginning from zero. A.pk[0]@A:B would be the first pubkey used by A in the A:B relationship; A.pk[1]@A:B would be the second pubkey, and so on. Likewise, a sequence of messages could be represented with msg[0], msg[1], and msg[2].

    "},{"location":"concepts/0006-ssi-notation/#messages","title":"Messages","text":"

    Messages are represented as quoted string literals, or with the reserved token msg, or with kebab-case names that explain their semantics, as in cred-offer:

    string-literal = %x22 c-literal %x22\nkebab-char = lcase-alpha / digit\nkebab-suffix = \"-\" 1*hint-char\nkebab-msg = 1*kebab-char *kebab-suffix\nmessage = \"msg\" / string-literal / kebab-msg\n
    "},{"location":"concepts/0006-ssi-notation/#payments","title":"Payments","text":"

    Economic activity is part of rich SSI ecosystems, and requires notation. A payment address is denoted with the pay reserved token; A.pay[4] would be A's fifth payment address. The public key and secret key for a payment address use the ppk and psk reserved tokens, respectively. Thus, one way to reference the payment keys for that payment address would be A.pay[4].ppk and A.pay[4].psk. (Keys are normally held by agents, not by people--and every agent has its own keys. Thus, another notation for the public key pertaining to this address might be A.1.pay[4].ppk. This is an area of clumsiness that needs further study.)

    "},{"location":"concepts/0006-ssi-notation/#encryption","title":"Encryption","text":"

    Encryption deserves special consideration in the SSI world. It often figures prominently in discussions about security and privacy, and our notation needs to be able to represent it carefully.

    The following crypto operations are recognized by the notation, without making a strong claim about how the operations are implemented. (For example, inline Diffie-Hellman and an ephemeral symmetric key might be used for the *_crypt algorithms. What is interesting to the notation isn't the low-level details, but the general semantics achieved.)

    The notation for these crypto primitives uses curly braces around the message, with suffixes to clarify semantics. Generally, it identifies a recipient as an identity owner or thing, without clarifying the key that's used--the pairwise key for their DID is assumed.

    asymmetric   = \"/\"                                   ; suffix\nsymmetric    = \"*\"                                   ; suffix\nsign         = \"#\"                                   ; suffix\nmultiplex    = \"%\"                                   ; suffix\nverify       = \"?\"                                   ; suffix\n\nanon-crypt   = \"{\" message \"}\" asymmetric subject          ; e.g., {\"hi\"}/B\n\n                ; sender is first subject in relationship, receiver is second\nauth-crypt   = \"{\" message \"}\" asymmetric short-relationship ; e.g., {\"hi\"}/A:B\n\nsym-crypt    = \"{\" message \"}\" symmetric subject           ; e.g., {\"hi\"}*B\n\nverify       = \"{\" message \"}\" verify subject              ; e.g., {\"hi\"}?B\n

    The relative order of suffixes reflects whether encryption or signing takes place first: {\"hello\"}*B# says that symmetric encryption happens first, and then a signature is computed over the ciphertext; {\"hello\"#}*B says that plaintext is signed, and then both the plaintext and the signature are encrypted. (The {\"hello\"}#*B variant is nonsensical because it splits the encryption notation in half).

    All suffixes can be further decorated with a parenthesized algorithm name, if precision is required: {\"hello\"}*(aes256)B or {\"hello\"}/(rsa1024)A:B or {\"hello\"#(ecdsa)}/B.

    With signing, usually the signer and sender are assumed to be identical, and the notation omits any clarification about the signer. However, this can be added after # to be explicit. Thus, {msg#B}/C would be a message with plaintext signed by B, anon-encrypted for C. Similarly, {msg#(ring-rabin)BGJM}/A:C would be a message with plaintext signed according to a Rabin ring signature algorithm, by B, G, J, and M, and then auth-encrypted by A for C.

    Signature verification is notated with the corresponding message and the entities that perform the action. {msg#A}?B would be a message with plaintext signed by A and verified by B. {msg#(threshold-sig)ABC}?DE would be a plaintext message signed according to a threshold signature algorithm by A, B, and C, and then verified by D and E.

    Multiplexed asymmetric encryption is noted above, but has not yet been described. This is a technique whereby a message body is encrypted with an ephemeral symmetric key, and then the ephemeral key is encrypted asymmetrically for multiple potential recipients (each of which has a unique but tiny payload [the key] to decrypt, which in turn unlocks the main payload). The notation for this looks like {msg}%BCDE for multiplexed anon_crypt (sender is anonymous), and like {msg}%A:BCDE for multiplexed auth_crypt (sender is authenticated by their private key).
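    The multiplex pattern described above can be sketched as follows. This is a toy illustration of the envelope structure only--NOT real cryptography--and all function names are assumptions for the example; the key-wrapping step is simulated with an XOR keystream derived from a hash.

```python
# Toy sketch of multiplexed encryption, e.g. {msg}%BCDE: the body is
# encrypted once with an ephemeral symmetric key, and that key is then
# "wrapped" once per recipient, giving each a tiny payload to decrypt.
# XOR against a hash-derived keystream stands in for real ciphers here.
import hashlib
import secrets

def _xor(data, key):
    # Derive a repeating keystream from the key; XOR is its own inverse.
    stream = hashlib.sha256(key).digest()
    return bytes(b ^ stream[i % len(stream)] for i, b in enumerate(data))

def multiplex_encrypt(msg, recipient_keys):
    ephemeral = secrets.token_bytes(32)   # fresh symmetric key per message
    return {
        "ciphertext": _xor(msg, ephemeral),
        "recipients": {name: _xor(ephemeral, k)   # wrapped key per recipient
                       for name, k in recipient_keys.items()},
    }

def multiplex_decrypt(envelope, name, key):
    # Unwrap this recipient's copy of the ephemeral key, then open the body.
    ephemeral = _xor(envelope["recipients"][name], key)
    return _xor(envelope["ciphertext"], ephemeral)

keys = {"B": b"bob-key", "C": b"carol-key"}
env = multiplex_encrypt(b"hi", keys)
assert multiplex_decrypt(env, "C", keys["C"]) == b"hi"
```

    The point of the sketch is the shape of the envelope: one ciphertext, plus one small wrapped-key entry per recipient, matching the {msg}%BCDE notation.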

    "},{"location":"concepts/0006-ssi-notation/#other-punctuation","title":"Other punctuation","text":"

    Message sending is represented with arrows: -> is most common, though <- is also reasonable in some cases. Message content and notes about sending can be embedded in the hyphens of the sending arrow, as in this example, where the notation says an unknown party uses http to transmit \"hello\", anon-encrypted for Alice:

    <unknown> -- http: {\"hello\"}/A --> 1

    Parentheses have traditional meaning (casual usage in written language, plus grouping and precedence).

    Angle braces < and > are for placeholders; any reasonable explanatory text may appear inside the angle braces, so to represent Alice's relationship with a not-yet-known subject, the notation might show something like A:<TBD>.

    "},{"location":"concepts/0006-ssi-notation/#reference","title":"Reference","text":""},{"location":"concepts/0006-ssi-notation/#examples","title":"Examples","text":""},{"location":"concepts/0006-ssi-notation/#reserved-tokens","title":"Reserved Tokens","text":""},{"location":"concepts/0006-ssi-notation/#abnf","title":"ABNF","text":"
    ucase-alpha    = %x41-5A                        ; A-Z\nlcase-alpha    = %x61-7A                        ; a-z\ndigit          = %x30-39                        ; 0-9\nname-start-char = lcase-alpha / \"_\"             ; a-z or underscore\nname-other-char = digit / lcase-alpha / \"_\"     ; 0-9 or a-z or underscore\n\nidentity-owner = ucase-alpha\nthing = lcase-alpha\nsubject = identity-owner / thing\n\nagent = 1*3digit \"@\" subject\ndevice = name-start-char 1*name-other-char \"@\" subject\n\nnext-subject = \"+\" subject\nshort-relationship = subject \":\" subject *next-subject\nlong-relationship = subject \":\" subject 1*next-subject\ngeneral-audience = ucase-alpha 1*name-other-char\ngeneral-relationship = subject \":\" general-audience\nrelationship = short-relationship / long-relationship / general-relationship\n\ninert = name-start-char 1*name-other-char\nnested = \".\" inert\nowned-inert = subject 1*nested\n\nassociated-to = identity-owner / relationship\nassociated = subject 0*nested \"@\" associated-to\n\nstring-literal = %x22 c-literal %x22\nkebab-char = lcase-alpha / digit\nkebab-suffix = \"-\" 1*hint-char\nkebab-msg = 1*kebab-char *kebab-suffix\nmessage = \"msg\" / string-literal / kebab-msg\n\nasymmetric   = \"/\"                                   ; suffix\nsymmetric    = \"*\"                                   ; suffix\nsign         = \"#\"                                   ; suffix\nmultiplex    = \"%\"                                   ; suffix\n\nanon-crypt   = \"{\" message \"}\" asymmetric subject          ; e.g., {\"hi\"}/B\n\n                ; sender is first subject in relationship, receiver is second\nauth-crypt   = \"{\" message \"}\" asymmetric short-relationship ; e.g., {\"hi\"}/A:B\n\nsym-crypt    = \"{\" message \"}\" symmetric subject           ; e.g., {\"hi\"}*B\n
    "},{"location":"concepts/0006-ssi-notation/#drawbacks","title":"Drawbacks","text":""},{"location":"concepts/0006-ssi-notation/#rationale-and-alternatives","title":"Rationale and alternatives","text":""},{"location":"concepts/0006-ssi-notation/#prior-art","title":"Prior art","text":"

    Also, experiments with superscripts and subscripts in this format led to semantic dead ends or undesirable nesting when patterns were applied consistently. For example, one thought had us representing Alice's verkey, signing key, and DID for her Bob relationship with ABVK, ABSK, and ABDID. This was fine until we asked how to represent the verkey for Alice's agent in the Alice to Bob relationship; is that ABDIDVK? And what about Alice's link secret, which isn't relationship-specific? And how would we handle N-way relationships?

    "},{"location":"concepts/0006-ssi-notation/#unresolved-questions","title":"Unresolved questions","text":""},{"location":"concepts/0006-ssi-notation/#implementations","title":"Implementations","text":"

    The following lists the implementations (if any) of this RFC. Please do a pull request to add your implementation. If the implementation is open source, include a link to the repo or to the implementation within the repo. Please be consistent in the \"Name\" field so that a mechanical processing of the RFCs can generate a list of all RFCs supported by an Aries implementation.

    Name / Link Implementation Notes Peer DID Method Spec uses notation in diagrams"},{"location":"concepts/0008-message-id-and-threading/","title":"Aries RFC 0008: Message ID and Threading","text":""},{"location":"concepts/0008-message-id-and-threading/#summary","title":"Summary","text":"

    Definition of the message @id field and the ~thread decorator.

    "},{"location":"concepts/0008-message-id-and-threading/#motivation","title":"Motivation","text":"

    Referring to messages is useful in many interactions. A standard method of adding a message ID promotes good patterns in message families. When multiple messages are coordinated in a message flow, the threading pattern helps avoid having to re-roll the same spec for each message family that needs it.

    "},{"location":"concepts/0008-message-id-and-threading/#tutorial","title":"Tutorial","text":""},{"location":"concepts/0008-message-id-and-threading/#message-ids","title":"Message IDs","text":"

    Message IDs are specified with the @id attribute, which comes from JSON-LD. The sender of the message is responsible for creating the message ID, and any message can be identified by the combination of the sender and the message ID. Message IDs should be considered to be opaque identifiers by any recipients.

    "},{"location":"concepts/0008-message-id-and-threading/#message-id-requirements","title":"Message ID Requirements","text":""},{"location":"concepts/0008-message-id-and-threading/#example","title":"Example","text":"
    {\n    \"@type\": \"did:example:12345...;spec/example_family/1.0/example_type\",\n    \"@id\": \"98fd8d72-80f6-4419-abc2-c65ea39d0f38\",\n    \"example_attribute\": \"stuff\"\n}\n

    The following was pulled from this document written by Daniel Hardman and stored in the Sovrin Foundation's protocol repository.

    "},{"location":"concepts/0008-message-id-and-threading/#threaded-messages","title":"Threaded Messages","text":"

    Message threading will be implemented as a decorator to messages, for example:

    {\n    \"@type\": \"did:example:12345...;spec/example_family/1.0/example_type\",\n    \"@id\": \"98fd8d72-80f6-4419-abc2-c65ea39d0f38\",\n    \"~thread\": {\n        \"thid\": \"98fd8d72-80f6-4419-abc2-c65ea39d0f38\",\n        \"pthid\": \"1e513ad4-48c9-444e-9e7e-5b8b45c5e325\",\n        \"sender_order\": 3,\n        \"received_orders\": {\"did:sov:abcxyz\":1},\n        \"goal_code\": \"aries.vc.issue\"\n    },\n    \"example_attribute\": \"example_value\"\n}\n

    The ~thread decorator is generally required on any type of response, since this is what connects it with the original request.

    While not recommended, the initial message of a new protocol instance MAY have an empty ({}) ~thread item. Aries agents receiving a message with an empty ~thread item MUST gracefully handle such a message.

    "},{"location":"concepts/0008-message-id-and-threading/#thread-object","title":"Thread object","text":"

    A thread object has the following fields discussed below:

    "},{"location":"concepts/0008-message-id-and-threading/#thread-id-thid","title":"Thread ID (thid)","text":"

    Because multiple interactions can happen simultaneously, it's important to differentiate between them. This is done with a Thread ID or thid.

    If the Thread object is defined and a thid is given, the Thread ID is the value given there. But if the Thread object is not defined in a message, the Thread ID is implicitly defined as the Message ID (@id) of the given message and that message is the first message of a new thread.
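    The defaulting rule above can be sketched in a couple of lines. This is an illustrative helper, not an Aries API; the function name is an assumption.

```python
# Sketch of the thid rule: if a message carries no "~thread" object (or no
# "thid" inside one), the thread ID defaults to the message's own @id, and
# that message implicitly starts a new thread.
def thread_id(msg):
    return msg.get("~thread", {}).get("thid", msg["@id"])

first = {"@id": "msg-001", "@type": "example/1.0/offer"}
reply = {"@id": "msg-002", "~thread": {"thid": "msg-001"}}
assert thread_id(first) == "msg-001"   # implicitly starts a thread
assert thread_id(reply) == "msg-001"   # joins the existing thread
```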

    "},{"location":"concepts/0008-message-id-and-threading/#sender-order-sender_order","title":"Sender Order (sender_order)","text":"

    It is desirable to know how messages within a thread should be ordered. However, it is very difficult to know with confidence the absolute ordering of events scattered across a distributed system. Alice and Bob may each send a message before receiving the other's response, but be unsure whether their message was composed before the other's. Timestamping cannot resolve the impasse. Therefore, there is no unified absolute ordering of all messages within a thread--but there is an ordering of all messages emitted by each participant.

    In a given thread, the first message from each party has a sender_order value of 0, the second message sent from each party has a sender_order value of 1, and so forth. Note that both Alice and Bob use 0 and 1, without regard to whether the other party may be known to have used them. This gives a strong ordering with respect to each party's messages, and it means that any message can be uniquely identified in an interaction by its thid, the sender DID and/or key, and the sender_order.
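    The per-party numbering above can be sketched with a small counter. This is illustrative only; the class name and method are assumptions, not part of the RFC.

```python
# Sketch of sender_order assignment: each participant numbers its own
# messages in a thread from 0, independent of the other party's counter.
from collections import defaultdict

class ThreadCounter:
    def __init__(self):
        self._next = defaultdict(int)   # (thid, sender) -> next sender_order

    def next_order(self, thid, sender):
        n = self._next[(thid, sender)]
        self._next[(thid, sender)] += 1
        return n

c = ThreadCounter()
assert c.next_order("t1", "alice") == 0
assert c.next_order("t1", "bob") == 0     # Bob also starts at 0
assert c.next_order("t1", "alice") == 1   # Alice's second message
```

    Combined with the thid and the sender's DID or key, each sender_order value uniquely identifies a message in the interaction.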

    "},{"location":"concepts/0008-message-id-and-threading/#received-orders-received_orders","title":"Received Orders (received_orders)","text":"

    In an interaction, it may be useful for the recipient of a message to know if their last message was received. A received_orders value addresses this need, and could be included as a best practice to help detect missing messages.

    In the example above, if Alice is the sender, and Bob is identified by did:sov:abcxyz, then Alice is saying, \"Here's my message with index 3 (sender_order=3), and I'm sending it in response to your message 1 (received_orders: {<bob's DID>: 1}).\" Apparently Alice has been more chatty than Bob in this exchange.

    The received_orders field is plural to acknowledge the possibility of multiple parties. In pairwise interactions, this may seem odd. However, n-wise interactions are possible (e.g., in a doctor ~ hospital ~ patient n-wise relationship). Even in pairwise, multiple agents on either side may introduce other actors. This may happen even if an interaction is designed to be 2-party (e.g., an intermediate party emits an error unexpectedly).

    In an interaction with more parties, the received_orders object has a key/value pair for each actor/sender_order, where actor is a DID or a key for an agent:

    \"received_orders\": {\"did:sov:abcxyz\":1, \"did:sov:defghi\":14}\n

    Here, the received_orders fragment makes a claim about the last sender_order that the sender observed from did:sov:abcxyz and did:sov:defghi. The sender of this fragment is presumably some other DID, implying that 3 parties are participating. Any parties unnamed in received_orders have an undefined value for received_orders. This is NOT the same as saying that they have made no observable contribution to the thread. To make that claim, use the special value -1, as in:

    \"received_orders\": {\"did:sov:abcxyz\":1, \"did:sov:defghi\":14, \"did:sov:jklmno\":-1}\n
    "},{"location":"concepts/0008-message-id-and-threading/#example_1","title":"Example","text":"

    As an example, Alice is an issuer and she offers a credential to Bob.

    "},{"location":"concepts/0008-message-id-and-threading/#nested-interactions-parent-thread-id-or-pthid","title":"Nested interactions (Parent Thread ID or pthid)","text":"

    Sometimes there are interactions that need to occur with the same party, while an existing interaction is in-flight.

    When an interaction is nested within another, the initiator of a new interaction can include a Parent Thread ID (pthid). This signals to the other party that this is a thread that is branching off of an existing interaction.

    "},{"location":"concepts/0008-message-id-and-threading/#nested-example","title":"Nested Example","text":"

    As before, Alice is an issuer and she offers a credential to Bob. This time, she wants a bit more information before she is comfortable providing a credential.

    All of the steps are the same, except the two bolded steps that are part of a nested interaction.

    "},{"location":"concepts/0008-message-id-and-threading/#implicit-threads","title":"Implicit Threads","text":"

    Threads reference a Message ID as the origin of the thread. This allows any message to be the start of a thread, even if not originally intended. Any message without an explicit ~thread attribute can be considered to have the following ~thread attribute implicitly present.

    \"~thread\": {\n    \"thid\": <same as @id of the outer message>,\n    \"sender_order\": 0\n}\n
    "},{"location":"concepts/0008-message-id-and-threading/#implicit-replies","title":"Implicit Replies","text":"

    A message that contains a ~thread block with a thid different from the outer message @id, but no sender_order, is considered an implicit reply. Implicit replies have a sender_order of 0 and a received_orders of {other:0}. Implicit replies should only be used when a further message thread is not anticipated. When further messages in the thread are expected, a full regular ~thread block should be used.

    Example Message with an Implicit Reply:

    {\n    \"@id\": \"<@id of outer message>\",\n    \"~thread\": {\n        \"thid\": \"<different than @id of outer message>\"\n    }\n}\n
    Effective Message with defaults in place:
    {\n    \"@id\": \"<@id of outer message>\",\n    \"~thread\": {\n        \"thid\": \"<different than @id of outer message>\"\n        \"sender_order\": 0,\n        \"received_orders\": { \"DID of sender\":0 }\n    }\n}\n

    "},{"location":"concepts/0008-message-id-and-threading/#reference","title":"Reference","text":""},{"location":"concepts/0008-message-id-and-threading/#drawbacks","title":"Drawbacks","text":"

    Why should we not do this?

    "},{"location":"concepts/0008-message-id-and-threading/#rationale-and-alternatives","title":"Rationale and alternatives","text":""},{"location":"concepts/0008-message-id-and-threading/#prior-art","title":"Prior art","text":"

    If you're aware of relevant prior-art, please add it here.

    "},{"location":"concepts/0008-message-id-and-threading/#unresolved-questions","title":"Unresolved questions","text":""},{"location":"concepts/0008-message-id-and-threading/#implementations","title":"Implementations","text":"

    The following lists the implementations (if any) of this RFC. Please do a pull request to add your implementation. If the implementation is open source, include a link to the repo or to the implementation within the repo. Please be consistent in the \"Name\" field so that a mechanical processing of the RFCs can generate a list of all RFCs supported by an Aries implementation.

    Name / Link Implementation Notes Aries Cloud Agent - Python Contributed by the government of British Columbia. Aries Static Agent - Python Useful for cron jobs and other simple, automated use cases. Connect.Me Free mobile app from Evernym. Installed via app store on iOS and Android. Verity Commercially licensed enterprise agent, SaaS or on-prem. Aries Protocol Test Suite"},{"location":"concepts/0011-decorators/","title":"Aries RFC 0011: Decorators","text":""},{"location":"concepts/0011-decorators/#summary","title":"Summary","text":"

    Explain how decorators work in DID communication.

    "},{"location":"concepts/0011-decorators/#motivation","title":"Motivation","text":"

    Certain semantic patterns manifest over and over again in communication. For example, all communication needs the pattern of testing the type of message received. The pattern of identifying a message and referencing it later is likely to be useful in a high percentage of all protocols that are ever written. A pattern that associates messages with debugging/tracing/timing metadata is equally relevant. And so forth.

    We need a way to convey metadata that embodies these patterns, without complicating schemas, bloating core definitions, managing complicated inheritance hierarchies, or confusing one another. It needs to be elegant, powerful, and adaptable.

    "},{"location":"concepts/0011-decorators/#tutorial","title":"Tutorial","text":"

    A decorator is an optional chunk of JSON that conveys metadata. Decorators are not declared in a core schema but rather supplementary to it. Decorators add semantic content broadly relevant to messaging in general, and not so much tied to the problem domain of a specific type of interaction.

    You can think of decorators as a sort of mixin for agent-to-agent messaging. This is not a perfect analogy, but it is a good one. Decorators in DIDComm also have some overlap (but not a direct congruence) with annotations in Java, attributes in C#, and both decorators and annotations in python.

    "},{"location":"concepts/0011-decorators/#simple-example","title":"Simple Example","text":"

    Imagine we are designing a protocol and associated messages to arrange meetings between two people. We might come up with a meeting_proposal message that looks like this:

    {\n  \"@id\": \"e2987006-a18a-4544-9596-5ad0d9390c8b\",\n  \"@type\": \"did:sov:8700e296a1458aad0d93;spec/meetings/1.0/proposal\",\n  \"proposed_time\": \"2019-12-23 17:00\",\n  \"proposed_place\": \"at the cathedral, Barf\u00fcsserplatz, Basel\",\n  \"comment\": \"Let's walk through the Christmas market.\"\n}\n

    Now we tackle the meeting_proposal_response messages. Maybe we start with something exceedingly simple, like:

    {\n  \"@id\": \"d9390ce2-8ba1-4544-9596-9870065ad08a\",\n  \"@type\": \"did:sov:8700e296a1458aad0d93;spec/meetings/1.0/response\",\n  \"agree\": true,\n  \"comment\": \"See you there!\"\n}\n

    But we quickly realize that the asynchronous nature of messaging will expose a gap in our message design: if Alice receives two meeting proposals from Bob at the same time, there is nothing to bind a response back to the specific proposal it addresses.

    We could extend the schema of our response so it contains a thread that references the @id of the original proposal. This would work. However, it does not satisfy the DRY principle of software design, because when we tackle the protocol for negotiating a purchase between buyer and seller next week, we will need the same solution all over again. The result would be a proliferation of schemas that all address the same basic need for associating request and response. Worse, they might do it in different ways, cluttering the mental model for everyone and making the underlying patterns less obvious.

    What we want instead is a way to inject into any message the idea of a thread, such that we can easily associate responses with requests, errors with the messages that triggered them, and child interactions that branch off of the main one. This is the subject of the message threading RFC, and the solution is the ~thread decorator, which can be added to any response:

    {\n  \"@id\": \"d9390ce2-8ba1-4544-9596-9870065ad08a\",\n  \"@type\": \"did:sov:8700e296a1458aad0d93;spec/meetings/1.0/response\",\n  \"~thread\": {\"thid\": \"e2987006-a18a-4544-9596-5ad0d9390c8b\"},\n  \"agree\": true,\n  \"comment\": \"See you there!\"\n}\n
    This chunk of JSON is defined independent of any particular message schema, but is understood to be available in all DIDComm schemas.

    "},{"location":"concepts/0011-decorators/#basic-conventions","title":"Basic Conventions","text":"

    Decorators are defined in RFCs that document a general pattern such as message threading RFC or message localization. The documentation for a decorator explains its semantics and offers examples.

    Decorators are recognized by name. The name must begin with the ~ character (which is reserved in DIDComm messages for decorator use), and be a short, single-line string suitable for use as a JSON attribute name.

    Decorators may be simple key:value pairs \"~foo\": \"bar\". Or they may associate a key with a more complex structure:

    \"~thread\": {\n  \"thid\": \"e2987006-a18a-4544-9596-5ad0d9390c8b\",\n  \"pthid\": \"0c8be298-45a1-48a4-5996-d0d95a397006\",\n  \"sender_order\": 0\n}\n

    Decorators should be thought of as supplementary to the problem-domain-specific fields of a message, in that they describe general communication issues relevant to a broad array of message types. Entities that handle messages should treat all unrecognized fields as valid but meaningless, and decorators are no exception. Thus, software that doesn't recognize a decorator should ignore it.

    However, this does not mean that decorators are necessarily optional. Some messages may intend something tied so tightly to a decorator's semantics that the decorator effectively becomes required. An example of this is the relationship between a general error reporting mechanism and the ~thread decorator: it's not very helpful to report errors without the context that a thread provides.

    Because decorators are general by design and intent, we don't expect namespacing to be a major concern. The community agrees on decorators that everybody will recognize, and they acquire global scope upon acceptance. Their globalness is part of their utility. Effectively, decorator names are like reserved words in a shared public language of messages.

    Namespacing is also supported, as we may discover legitimate uses. When namespaces are desired, dotted name notation is used, as in ~mynamespace.mydecoratorname. We may elaborate this topic more in the future.

    Decorators are orthogonal to JSON-LD constructs in DIDComm messages.

    "},{"location":"concepts/0011-decorators/#versioning","title":"Versioning","text":"

    We hope that community-defined decorators are very stable. However, new fields (a non-breaking change) might need to be added to complex decorators; occasionally, more significant changes might be necessary as well. Therefore, decorators do support semver-style versioning, but in a form that allows details to be ignored unless or until they become important. The rules are:

    1. As with all other aspects of DIDComm messages, unrecognized fields in decorators must be ignored.
    2. Version information can be appended to the name of a decorator, as in ~mydecorator/1. Only a major version (never minor or patch) is used, since:
      • Minor version variations should not break decorator handling code.
      • The dot character . is reserved for namespacing within field names.
      • The extra complexity is not worth the small amount of value it might add.
    3. A decorator without a version is considered to be synonymous with version 1.0, and the version-less form is preferred. This allows version numbers to be added only in the uncommon cases where they are necessary.
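    The versioning rules above can be sketched as a small parser. This is an illustrative helper (the `parse_decorator` name is an assumption, not anything defined by this RFC): it splits a decorator field name into its base name and major version, defaulting to version 1 when no version suffix is present.

```python
import re

def parse_decorator(name):
    """Split a decorator name like '~mydecorator/1' or
    '~mynamespace.mydecoratorname' into (base_name, major_version).
    A version-less decorator is synonymous with version 1, per rule 3.
    Hypothetical helper for illustration only."""
    m = re.fullmatch(r"~([A-Za-z0-9_.]+?)(?:/(\d+))?", name)
    if m is None:
        raise ValueError(f"not a decorator name: {name}")
    base, version = m.group(1), m.group(2)
    return base, int(version) if version else 1
```

    Note that only the major version is parsed; per rule 2, minor and patch versions never appear in decorator names.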
    "},{"location":"concepts/0011-decorators/#decorator-scope","title":"Decorator Scope","text":"

    A decorator may be understood to decorate (add semantics) at several different scopes. The discussion thus far has focused on message decorators, and this is by far the most important scope to understand. But there are more possibilities.

    Suppose we wanted to decorate an individual field. This can be done with a field decorator, which is a sibling field to the field it decorates. The name of the decorated field is combined with a decorator suffix, as follows:

    {\n  \"note\": \"Let's have a picnic.\",\n  \"note~l10n\": { ... }\n}\n
    In this example, taken from the localization pattern, note~l10n decorates note.
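    The field-decorator convention can be recognized mechanically: any key containing `~` (but not starting with it, which would make it a message decorator) decorates the field named before the `~`. A minimal sketch, assuming the hypothetical helper name `field_decorators`:

```python
def field_decorators(message):
    """Group field decorators under the fields they decorate, e.g.
    {'note': ..., 'note~l10n': {...}} -> {'note': {'l10n': {...}}}.
    Keys starting with '~' are message-level decorators and are skipped.
    Illustrative sketch, not a normative algorithm."""
    result = {}
    for key, value in message.items():
        if "~" in key and not key.startswith("~"):
            field, decorator = key.split("~", 1)
            result.setdefault(field, {})[decorator] = value
    return result
```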

    Besides a single message or a single field, consider the following scopes as decorator targets:

    "},{"location":"concepts/0011-decorators/#reference","title":"Reference","text":"

    This section of this RFC will be kept up-to-date with a list of globally accepted decorators, and links to the RFCs that define them.

    "},{"location":"concepts/0011-decorators/#drawbacks","title":"Drawbacks","text":"

    By having fields that are meaningful yet not declared in core schemas, we run the risk that parsing and validation routines will fail to enforce details that are significant but invisible. We also accept the possibility that interop may look good on paper, but fail due to different understandings of important metadata.

    We believe this risk will take care of itself, for the most part, as real-life usage accumulates and decorators become a familiar and central part of the thinking for developers who work with agent-to-agent communication.

    "},{"location":"concepts/0011-decorators/#rationale-and-alternatives","title":"Rationale and alternatives","text":"

    There is ongoing work in the #indy-semantics channel on Rocket.Chat to explore the concept of overlays. These are layers of additional meaning that accumulate above a schema base. Decorators as described here are quite similar in intent. There are some subtle differences, though. The most interesting is that decorators as described here may be applied to things that are not schema-like (e.g., to a message family as a whole, or to a connection, not just to an individual message).

    We may be able to resolve these two worldviews, such that decorators are viewed as overlays and inherit some overlay goodness as a result. However, it is unlikely that decorators will change significantly in form or substance as a result. We thus believe the current mental model is already RFC-worthy, and represents a reasonable foundation for immediate use.

    "},{"location":"concepts/0011-decorators/#prior-art","title":"Prior art","text":"

    See references to similar features in programming languages like Java, C#, and Python, mentioned above.

    See also this series of blog posts about semantic gaps and the need to manage intent in a declarative style: [ Lacunas Everywhere, Bridging the Lacuna Humana, Introducing Marks, Mountains, Molehills, and Markedness ]

    "},{"location":"concepts/0011-decorators/#unresolved-questions","title":"Unresolved questions","text":""},{"location":"concepts/0011-decorators/#implementations","title":"Implementations","text":"

    The following lists the implementations (if any) of this RFC. Please do a pull request to add your implementation. If the implementation is open source, include a link to the repo or to the implementation within the repo. Please be consistent in the \"Name\" field so that a mechanical processing of the RFCs can generate a list of all RFCs supported by an Aries implementation.

    Name / Link Implementation Notes Aries RFCs: RFC 0008, RFC 0017, RFC 0015, RFC 0023, RFC 0043, RFC 0056, RFC 0075 many implemented RFCs depend on decorators... Indy Cloud Agent - Python message threading Aries Framework - .NET message threading Streetcred.id message threading Aries Cloud Agent - Python message threading, attachments Aries Static Agent - Python message threading Aries Framework - Go message threading Connect.Me message threading Verity message threading Aries Protocol Test Suite message threading"},{"location":"concepts/0013-overlays/","title":"Aries RFC 0013: Overlays","text":""},{"location":"concepts/0017-attachments/","title":"Aries RFC 0017: Attachments","text":""},{"location":"concepts/0017-attachments/#summary","title":"Summary","text":"

    Explains the three canonical ways to attach data to an agent message.

    "},{"location":"concepts/0017-attachments/#motivation","title":"Motivation","text":"

    DIDComm messages use a structured format with a defined schema and a small inventory of scalar data types (string, number, date, etc). However, it will be quite common for messages to supplement formalized exchange with arbitrary data--images, documents, or types of media not yet invented.

    We need a way to \"attach\" such content to DIDComm messages. This method must be flexible, powerful, and usable without requiring new schema updates for every dynamic variation.

    "},{"location":"concepts/0017-attachments/#tutorial","title":"Tutorial","text":""},{"location":"concepts/0017-attachments/#messages-versus-data","title":"Messages versus Data","text":"

    Before explaining how to associate data with a message, it is worth pondering exactly how these two categories of information differ. It is common for newcomers to DIDComm to argue that messages are just data, and vice versa. After all, any data can be transmitted over DIDComm; doesn't that turn it into a message? And any message can be saved; doesn't that make it data?

    While it is true that messages and data are highly related, some semantic differences matter:

    Some examples:

    The line between these two concepts may not be perfectly crisp in all cases, and that is okay. It is clear enough, most of the time, to provide context for the central question of this RFC, which is:

    How do we send data along with messages?

    "},{"location":"concepts/0017-attachments/#3-ways","title":"3 Ways","text":"

    Data can be \"attached\" to DIDComm messages in 3 ways:

    1. Inlining
    2. Embedding
    3. Appending
    "},{"location":"concepts/0017-attachments/#inlining","title":"Inlining","text":"

    In inlining, data is directly assigned as the value paired with a JSON key in a DIDComm message. For example, a message about arranging a rendezvous may inline data about a location:

    This inlined data is in Google Maps pinning format. It has a meaning at rest, outside the message that conveys it, and the versioning of its structure may evolve independently of the versioning of the rendezvous protocol.

    Only JSON data can be inlined, since any other data format would break JSON format rules.

    "},{"location":"concepts/0017-attachments/#embedding","title":"Embedding","text":"

    In embedding, a JSON data structure called an attachment descriptor is assigned as the value paired with a JSON key in a DIDComm message. (Or, an array of attachment descriptors could be assigned.) By convention, the key name for such attachment fields ends with ~attach, making it a field-level decorator that can share common handling logic in agent code. The attachment descriptor structure describes the MIME type and other properties of the data, in much the same way that MIME headers and body describe and contain an attachment in an email message. Given an imaginary protocol that photographers could use to share their favorite photo with friends, the embedded data might manifest like this:

    Embedding is a less direct mechanism than inlining, because the data is no longer readable by a human inspecting the message; it is base64url-encoded instead. A benefit of this approach is that the data can be any MIME type instead of just JSON, and that the data comes with useful metadata that can facilitate saving it as a separate file.
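    Building such an embedded attachment can be sketched in a few lines. The descriptor fields (`@id`, `mime-type`, `filename`, `data.base64`) follow the examples in this RFC, but the helper name and its exact shape are assumptions; consult the Attachment Descriptor structure in the Reference section for the normative schema.

```python
import base64
import uuid

def make_attachment(content: bytes, mime_type: str, filename=None):
    """Build an attachment descriptor with base64url-encoded data.
    Padding is stripped, as writers SHOULD omit it per JOSE conventions.
    Illustrative sketch of the descriptor shape shown in this RFC."""
    descriptor = {
        "@id": str(uuid.uuid4()),
        "mime-type": mime_type,
        "data": {
            "base64": base64.urlsafe_b64encode(content).decode("ascii").rstrip("="),
        },
    }
    if filename:
        descriptor["filename"] = filename
    return descriptor
```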

    "},{"location":"concepts/0017-attachments/#appending","title":"Appending","text":"

    Appending is accomplished using the ~attach decorator, which can be added to any message to include arbitrary data. The decorator is an array of attachment descriptor structures (the same structure used for embedding). For example, a message that conveys evidence found at a crime scene might include the following decorator:

    "},{"location":"concepts/0017-attachments/#choosing-the-right-approach","title":"Choosing the right approach","text":"

    These methods for attaching sit along a continuum that is somewhat like the continuum between strong, statically typed languages versus dynamic, duck-typed languages in programming. The more strongly typed the attachments are, the more strongly bound the attachments are to the protocol that conveys them. Each choice has advantages and disadvantages.

    Inlined data is strongly typed; the schema for its associated message must specify the name of the data field, plus what type of data it contains. Its format is always some kind of JSON--often JSON-LD with a @type and/or @context field to provide greater clarity and some independence of versioning. Simple and small data is the best fit for inlining. As mentioned earlier, the Connection Protocol inlines a DID Doc in its connection_request and connection_response messages.

    Embedded data is still associated with a known field in the message schema, but it can have a broader set of possible formats. A credential exchange protocol might embed a credential in the final message that does credential issuance.

    Appended attachments are the most flexible but also the hardest to run through semantically sophisticated processing. They do not require any specific declaration in the schema of a message, although they can be referenced in fields defined by the schema via their nickname (see below). A protocol that needs to pass an arbitrary collection of artifacts without strong knowledge of their semantics might find this helpful, as in the example mentioned above, where scheduling a venue causes various human-usable payloads to be delivered.

    "},{"location":"concepts/0017-attachments/#ids-for-attachments","title":"IDs for attachments","text":"

    The @id field within an attachment descriptor is used to refer unambiguously to an appended (or less ideally, embedded) attachment, and works like an HTML anchor. It is resolved relative to the root @id of the message and only has to be unique within a message. For example, imagine a fictional message type that's used to apply for an art scholarship, that requires photos of art demonstrating techniques A, B, and C. We could have 3 different attachment descriptors--but what if the same work of art demonstrates both technique A and technique B? We don't want to attach the same photo twice...

    What we can do is stipulate that the datatype of A_pic, B_pic, and C_pic is an attachment reference, and that the references will point to appended attachments. A fragment of the result might look like this:

    Another example of nickname use appeared in the first example of appended attachments above, where the notes field referred to the @ids of the various attachments.

    This indirection offers several benefits:

    We could use this same technique with embedded attachments (that is, assign a nickname to an embedded attachment, and refer to that nickname in another field where attached data could be embedded), but this is not considered best practice. The reason is that it requires a field in the schema to have two possible data types--one a string that's a nickname reference, and one an attachment descriptor. Generally, we like fields to have a single datatype in a schema.
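    Resolving such a nickname reference amounts to scanning the appended ~attach array for a matching @id. A minimal sketch (the `resolve_attachment` name is hypothetical; a real agent would also handle embedded attachments and missing references per its own policy):

```python
def resolve_attachment(message, ref):
    """Resolve an attachment-reference nickname (an @id) against the
    message's appended ~attach array. @ids only need to be unique
    within the message, so a linear scan suffices."""
    for descriptor in message.get("~attach", []):
        if descriptor.get("@id") == ref:
            return descriptor
    raise KeyError(f"no attachment with @id {ref!r}")
```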

    "},{"location":"concepts/0017-attachments/#content-formats","title":"Content Formats","text":"

    There are multiple ways to include content in an attachment. Only one method should be used per attachment.

    "},{"location":"concepts/0017-attachments/#base64url","title":"base64url","text":"

    This content encoding is the obvious choice for any content other than JSON. You can embed content of any type using this method. Examples are plentiful throughout the document. Note that this encoding is always base64url encoding, not plain base64, and that padding is not required. Code that reads this encoding SHOULD tolerate the presence or absence of padding and base64 versus base64url encodings equally well, but code that writes this encoding SHOULD omit the padding to guarantee alignment with encoding rules in the JOSE (JW*) family of specs.
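    The tolerant-reader rule above can be implemented by normalizing to the url-safe alphabet and restoring any stripped padding before decoding. A minimal sketch (the function name is an assumption):

```python
import base64

def b64url_decode_tolerant(s: str) -> bytes:
    """Decode attachment data that may use base64 or base64url,
    with or without padding, as readers SHOULD tolerate."""
    s = s.replace("+", "-").replace("/", "_")  # normalize to url-safe alphabet
    s += "=" * (-len(s) % 4)                   # restore any stripped padding
    return base64.urlsafe_b64decode(s)
```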

    "},{"location":"concepts/0017-attachments/#json","title":"json","text":"

    If you are embedding an attachment that is JSON, you can embed it directly in JSON format to make access easier, by replacing data.base64 with data.json, where the value assigned to data.json is the attached content:

    This is an overly trivial example of GeoJSON, but hopefully it illustrates the technique. In cases where there is no mime type to declare, it may be helpful to use JSON-LD's @type construct to clarify the specific flavor of JSON in the embedded attachment.

    "},{"location":"concepts/0017-attachments/#links","title":"links","text":"

    All examples discussed so far include an attachment by value--that is, the attachment's bytes are directly inlined in the message in some way. This is a useful mode of data delivery, but it is not the only mode.

    Another way that attachment data can be incorporated is by reference. For example, you can link to the content on a web server by replacing data.base64 or data.json with data.links in an attachment descriptor:

    When you provide such a link, you are creating a logical association between the message and an attachment that can be fetched separately. This makes it possible to send brief descriptors of attachments and to make the downloading of the heavy content optional (or parallelizable) for the recipient.

    The links field is plural (an array) to allow multiple locations to be offered for the same content. This allows an agent to fetch attachments using whichever mechanism(s) are best suited to its individual needs and capabilities.

    "},{"location":"concepts/0017-attachments/#supported-uri-types","title":"Supported URI Types","text":"

    The set of supported URI types in an attachment link is limited to:

    Additional URI types may be added via updates to this RFC.

    If an attachment link with an unsupported URI is received, the agent SHOULD respond with a Problem Report indicating the problem.

    An ecosystem (coordinating set of agents working in a specific business area) may agree to support other URI types within that ecosystem. As such, implementing a mechanism to easily add support for other attachment link URI types might be useful, but is not required.
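    The easily-extensible mechanism suggested above can be as simple as a scheme allowlist checked before fetching. The allowlist below is a placeholder, not the RFC's normative set of supported URI types; substitute the schemes this RFC (or your ecosystem) actually lists.

```python
# Placeholder allowlist -- replace with the URI types this RFC actually supports.
SUPPORTED_URI_SCHEMES = {"https"}

def check_link(uri: str) -> bool:
    """Return True if the link's scheme is supported. For unsupported
    schemes, a receiving agent SHOULD send a Problem Report instead
    of fetching. Illustrative sketch only."""
    scheme = uri.split(":", 1)[0].lower()
    return scheme in SUPPORTED_URI_SCHEMES
```

    An ecosystem that agrees on additional URI types would simply add its schemes to the set.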

    "},{"location":"concepts/0017-attachments/#signing-attachments","title":"Signing Attachments","text":"

    In some cases it may be desirable to sign an attachment in addition to or instead of signing the message as a whole. Consider a home-buying protocol; the home inspection needs to be signed even when it is removed from a messaging flow. Attachments may also be signed by a party separate from the sender of the message, or using a different signing key when the sender is performing key rotation.

    Embedded and appended attachments support signatures by the addition of a data.jws field containing a signature in JWS (RFC 7515) format with Detached Content. The payload of the JWS is the raw bytes of the attachment, appropriately base64url-encoded per JWS rules. If these raw bytes are incorporated by value in the DIDComm message, they are already base64url-encoded in data.base64 and are thus directly substitutable for the suppressed data.jws.payload field; if they are externally referenced, then the bytes must be fetched via the URI in data.links and base64url-encoded before the JWS can be fully reconstituted. Signatures over inlined JSON attachments are not currently defined as this depends upon a canonical serialization for the data.

    Sample JWS-signed attachment:

    {\n  \"@type\": \"https://didcomm.org/xhomebuy/1.0/home_insp\",\n  \"inspection_date\": \"2020-03-25\",\n  \"inspection_address\": \"123 Villa de Las Fuentes, Toledo, Spain\",\n  \"comment\": \"Here's that report you asked for.\",\n  \"report~attach\": {\n    \"mime-type\": \"application/pdf\",\n    \"filename\": \"Garcia-inspection-March-25.pdf\",\n    \"data\": {\n      \"base64\": \"eyJ0eXAiOiJKV1QiLA0KICJhbGciOiJIUzI1NiJ... (bytes omitted to shorten)\",\n      \"jws\": {\n        // payload: ...,  <-- omitted: refer to base64 content when validating\n        \"header\": {\n          \"kid\": \"did:key:z6MkmjY8GnV5i9YTDtPETC2uUAW6ejw3nk5mXF5yci5ab7th\"\n        },\n        \"protected\": \"eyJhbGciOiJFZERTQSIsImlhdCI6MTU4Mzg4... (bytes omitted)\",\n        \"signature\": \"3dZWsuru7QAVFUCtTd0s7uc1peYEijx4eyt5... (bytes omitted)\"\n      }\n    }\n  }\n}\n

    Here, the JWS structure inlines a public key value in did:key format within the unprotected header's kid field. It may also use a DID URL to reference a key within a resolvable DIDDoc. Supported DID URLs should specify a timestamp and/or version for the containing document.

    The JWS protected header consists of at least the following parameter indicating an Edwards curve digital signature:

    {\n  \"alg\": \"EdDSA\"\n}\n

    Additional protected and unprotected header parameters may be included in the JWS and must be ignored by implementations if not specifically supported. Any registered header parameters defined by the JWS RFC must be used according to the specification if present.

    Multiple signatures may be included using the JWS General Serialization syntax. When a single signature is present, the Flattened Serialization syntax should be preferred. Because each JWS contains an unprotected header with the signing key information, the JWS Compact Serialization cannot be supported.
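    Before verification, the detached payload must be restored from data.base64, as described above. A hypothetical helper sketching that step (no actual signature verification is performed here; the reconstituted structure would be handed to a JOSE library):

```python
import copy

def reconstitute_jws(attachment):
    """Rebuild a flattened JWS from an attachment descriptor whose
    detached payload lives in data.base64. Padding is stripped so the
    payload aligns with JWS base64url rules. Sketch only: externally
    referenced data (data.links) would first need to be fetched and
    base64url-encoded."""
    data = attachment["data"]
    jws = copy.deepcopy(data["jws"])           # leave the original intact
    jws["payload"] = data["base64"].rstrip("=")
    return jws
```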

    "},{"location":"concepts/0017-attachments/#size-considerations","title":"Size Considerations","text":"

    DIDComm messages should be small, as a general rule. Just as it's a bad idea to send email messages with multi-GB attachments, it would be bad to send DIDComm messages with huge amounts of data inside them. Remember, a message is about advancing a protocol; usually that can be done without gigabytes or even megabytes of JSON fields. Remember as well that DIDComm messages may be sent over channels having size constraints tied to the transport--an HTTP POST or Bluetooth or NFC or AMQP payload of more than a few MB may be problematic.

    Size pressures in messaging are likely to come from attached data. A good rule of thumb might be to not make DIDComm messages bigger than email or MMS messages--whenever more data needs to be attached, use the inclusion-by-reference technique to allow the data to be fetched separately.

    "},{"location":"concepts/0017-attachments/#security-implications","title":"Security Implications","text":"

    Attachments are a notorious vector for malware and mischief with email. For this reason, agents that support attachments MUST perform input validation on attachments, and MUST NOT invoke risky actions on attachments until such validation has been performed. The status of input validation with respect to attachment data MUST be reflected in the Message Trust Context associated with the data's message.

    "},{"location":"concepts/0017-attachments/#privacy-implications","title":"Privacy Implications","text":"

    When attachments are inlined, they enjoy the same security and transmission guarantees as all agent communication. However, given the right context, a large inlined attachment may be recognizable by its size, even if it is carefully encrypted.

    If attachment content is fetched from an external source, then new complications arise. The security guarantees may change. Data streamed from a CDN may be observable in flight. URIs may be correlating. Content may not be immutable or tamper-resistant.

    However, these issues are not necessarily a problem. If a DIDComm message wants to attach a 4 GB ISO file of a linux distribution, it may be perfectly fine to do so in the clear. Downloading it is unlikely to introduce strong correlation, encryption is unnecessary, and the torrent itself prevents malicious modification.

    Code that handles attachments will need to use wise policy to decide whether attachments are presented in a form that meets its needs.

    "},{"location":"concepts/0017-attachments/#reference","title":"Reference","text":""},{"location":"concepts/0017-attachments/#attachment-descriptor-structure","title":"Attachment Descriptor structure","text":""},{"location":"concepts/0017-attachments/#drawbacks","title":"Drawbacks","text":"

    By providing 3 different choices, we impose additional complexity on agents that will receive messages. They have to handle attachments in 3 different modes.

    "},{"location":"concepts/0017-attachments/#rationale-and-alternatives","title":"Rationale and alternatives","text":"

    Originally, we only proposed the most flexible method of attaching--appending. However, feedback from the community suggested that stronger binding to schema was desirable. Inlining was independently invented, and is suggested by JSON-LD anyway. Embedding without appending eliminates some valuable features such as unnamed and undeclared ad-hoc attachments. So we ended up wanting to support all 3 modes.

    "},{"location":"concepts/0017-attachments/#prior-art","title":"Prior art","text":"

    Multipart MIME (see RFCs 822, 1341, and 2045) defines a mechanism somewhat like this. Since we are using JSON instead of email messages as the core model, we can't use these mechanisms directly. However, they are an inspiration for what we are showing here.

    "},{"location":"concepts/0017-attachments/#unresolved-questions","title":"Unresolved questions","text":""},{"location":"concepts/0017-attachments/#implementations","title":"Implementations","text":"

    The following lists the implementations (if any) of this RFC. Please do a pull request to add your implementation. If the implementation is open source, include a link to the repo or to the implementation within the repo. Please be consistent in the \"Name\" field so that a mechanical processing of the RFCs can generate a list of all RFCs supported by an Aries implementation.

    Name / Link Implementation Notes Aries Cloud Agent - Python in credential exchange Streetcred.id Commercial mobile and web app built using Aries Framework - .NET"},{"location":"concepts/0020-message-types/","title":"Aries RFC 0020: Message Types","text":""},{"location":"concepts/0020-message-types/#summary","title":"Summary","text":"

    Define structure of message type strings used in agent to agent communication, describe their resolution to documentation URIs, and offer guidelines for protocol specifications.

    "},{"location":"concepts/0020-message-types/#motivation","title":"Motivation","text":"

    A clear convention to follow for agent developers is necessary for interoperability and continued progress as a community.

    "},{"location":"concepts/0020-message-types/#tutorial","title":"Tutorial","text":"

    A \"Message Type\" is a required attribute of all communications sent between parties. The message type instructs the receiving agent how to interpret the content and what content to expect as part of a given message.

    Types are specified within a message using the @type attribute:

    {\n    \"@type\": \"<message type string>\",\n    // other attributes\n}\n

    Message types are URIs that may resolve to developer documentation for the message type, as described in Protocol URIs. We recommend that message type URIs be HTTP URLs.
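    A message type URI of this shape ends in protocol-name/version/type-name path segments, so it can be split from the right. A sketch (the helper name and return shape are assumptions; the normative grammar is defined in the Protocol URIs material referenced above):

```python
def parse_message_type(uri: str):
    """Split a message type URI into (doc_uri, protocol, version, name).
    Works for HTTP-style types like
    'https://didcomm.org/pizzaplace/1.0/pizzaorder'.
    Illustrative only."""
    doc_uri, protocol, version, name = uri.rsplit("/", 3)
    return doc_uri, protocol, version, name
```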

    "},{"location":"concepts/0020-message-types/#aries-core-message-namespace","title":"Aries Core Message Namespace","text":"

    https://didcomm.org/ is used to namespace protocols defined by the community as \"core protocols\" or protocols that agents should minimally support.

    The didcomm.org DNS entry is currently controlled by the Decentralized Identity Foundation (DIF) based on their role in standardizing the DIDComm Messaging specification.

    "},{"location":"concepts/0020-message-types/#protocols","title":"Protocols","text":"

    Protocols provide a logical grouping for message types. These protocols, along with each type belonging to that protocol, are to be defined in future RFCs or through means appropriate to subprojects.

    "},{"location":"concepts/0020-message-types/#protocol-versioning","title":"Protocol Versioning","text":"

    Version numbering should essentially follow Semantic Versioning 2.0.0, excluding patch version number. To summarize, a change in the major protocol version number indicates a breaking change while the minor protocol version number indicates non-breaking additions.
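    Under these rules, an agent can handle a message whenever the major versions match, since minor-version differences are non-breaking additions. A minimal sketch (function name assumed; real agents may additionally negotiate minor-version features):

```python
def compatible(supported: str, received: str) -> bool:
    """Semver-style protocol compatibility: a shared major version
    means only non-breaking differences, so the message can be handled."""
    return supported.split(".")[0] == received.split(".")[0]
```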

    "},{"location":"concepts/0020-message-types/#message-type-design-guidelines","title":"Message Type Design Guidelines","text":"

    These guidelines are guidelines on purpose. There will be situations where a good design will have to choose between conflicting points, or ignore all of them. The goal should always be clear and good design.

    "},{"location":"concepts/0020-message-types/#respect-reserved-attribute-names","title":"Respect Reserved Attribute Names","text":"

    Reserved attributes are prefixed with an @ sign, such as @type. Don't use this prefix for an attribute, even if use of that specific attribute is undefined.

    "},{"location":"concepts/0020-message-types/#avoid-ambiguous-attribute-names","title":"Avoid ambiguous attribute names","text":"

    Data, id, and package are often terrible names. Adjust the name to enhance meaning. For example, use message_id instead of id.

    "},{"location":"concepts/0020-message-types/#avoid-names-with-special-characters","title":"Avoid names with special characters","text":"

    Technically, attribute names can be any valid json key (except prefixed with @, as mentioned above). Practically, you should avoid using special characters, including those that need to be escaped. Underscores and dashes [_,-] are totally acceptable, but you should avoid quotation marks, punctuation, and other symbols.
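    A conservative reading of this guideline can be captured as a simple check. This is a heuristic sketch, not a normative rule; it also rejects the reserved @ prefix discussed earlier.

```python
import re

# Letters, digits, underscores, and dashes only -- nothing that needs
# escaping in JSON, and no reserved '@' prefix.
SAFE_ATTR = re.compile(r"[A-Za-z0-9_-]+")

def is_safe_attribute_name(name: str) -> bool:
    """Heuristic check that an attribute name follows this guideline."""
    return bool(SAFE_ATTR.fullmatch(name))
```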

    "},{"location":"concepts/0020-message-types/#use-attributes-consistently-within-a-protocol","title":"Use attributes consistently within a protocol","text":"

    Be consistent with attribute names between the different types within a protocol. Only use the same attribute name for the same data. If the attribute values are similar, but not exactly the same, adjust the name to indicate the difference.

    "},{"location":"concepts/0020-message-types/#nest-attributes-only-when-useful","title":"Nest Attributes only when useful","text":"

    Attributes do not need to be nested under a top level attribute, but can be to organize related attributes. Nesting all message attributes under one top level attribute is usually not a good idea.

    "},{"location":"concepts/0020-message-types/#design-examples","title":"Design Examples","text":""},{"location":"concepts/0020-message-types/#example-1","title":"Example 1","text":"
    {\n    \"@type\": \"did:example:00000;spec/pizzaplace/1.0/pizzaorder\",\n    \"content\": {\n        \"id\": 15,\n        \"name\": \"combo\",\n        \"prepaid?\": true,\n        \"ingredients\": [\"pepperoni\", \"bell peppers\", \"anchovies\"]\n    }\n}\n

    Issues illustrated: ambiguous names, unnecessary nesting, symbols in names.

    "},{"location":"concepts/0020-message-types/#example-1-fixed","title":"Example 1 Fixed","text":"
    {\n    \"@type\": \"did:example:00000;spec/pizzaplace/1.0/pizzaorder\",\n    \"table_id\": 15,\n    \"pizza_name\": \"combo\",\n    \"prepaid\": true,\n    \"ingredients\": [\"pepperoni\", \"bell peppers\", \"anchovies\"]\n}\n
    "},{"location":"concepts/0020-message-types/#reference","title":"Reference","text":""},{"location":"concepts/0020-message-types/#implementations","title":"Implementations","text":"

    The following lists the implementations (if any) of this RFC. Please do a pull request to add your implementation. If the implementation is open source, include a link to the repo or to the implementation within the repo. Please be consistent in the \"Name\" field so that a mechanical processing of the RFCs can generate a list of all RFCs supported by an Aries implementation.

    Name / Link Implementation Notes Indy Cloud Agent - Python Reference agent implementation contributed by Sovrin Foundation and Community Aries Framework - .NET .NET framework for building agents of all types Streetcred.id Commercial mobile and web app built using Aries Framework - .NET Aries Cloud Agent - Python Contributed by the government of British Columbia. Aries Static Agent - Python Useful for cron jobs and other simple, automated use cases. Aries Framework - Go For building agents, hubs and other DIDComm features in GoLang. Connect.Me Free mobile app from Evernym. Installed via app store on iOS and Android. Verity Commercially licensed enterprise agent, SaaS or on-prem."},{"location":"concepts/0021-didcomm-message-anatomy/","title":"Aries RFC 0021: DIDComm Message Anatomy","text":""},{"location":"concepts/0021-didcomm-message-anatomy/#summary","title":"Summary","text":"

    Explain the basics of DID communication messages at a high level, and link to other RFCs to promote deeper exploration.

    "},{"location":"concepts/0021-didcomm-message-anatomy/#motivation","title":"Motivation","text":"

    Promote a deeper understanding of the DIDComm message anatomy through an overarching view of the two distinct levels of messages in a single place.

    "},{"location":"concepts/0021-didcomm-message-anatomy/#tutorial","title":"Tutorial","text":"

    DIDComm messages are comprised of the following two main layers, which are not dissimilar to how postal messages occur in the real world.

    "},{"location":"concepts/0021-didcomm-message-anatomy/#envelope-level","title":"Envelope Level","text":"

    As the name suggests, the envelope level borrows from the analogy of how physical messages are handled in the postal system; this message format level acts as the digital envelope for DIDComm messages.

    There are two main variations of the envelope level format which are defined to cater for the different audiences and use cases DIDComm messages serve.

    "},{"location":"concepts/0021-didcomm-message-anatomy/#1-encrypted","title":"1. Encrypted","text":"

    This format is for when the audience of the message is a DID or DIDs known to the sender; in this case the message can be prepared and encrypted with the key information present in the audience's DID Docs.

    Within this encrypted format, there are multiple sub-formats which give rise to different properties.

    1. Anonymous Encrypted format This format is used when a message is encrypted to a recipient in an anonymous fashion; it does not include any sender information.
    2. Authenticated Encrypted format This format is used when a message is encrypted to a recipient and sender information is included through the use of authenticated encryption. With this format, only the true recipient(s) can both decrypt the message and authenticate that its content is truly from the sender.
    3. Signed Encrypted format This format is used when a message is encrypted to the recipient and sender information is included along with a non-repudiable signature. In this case the recipient(s) is still the only party that can decrypt the message. However, because the underlying message includes a non-repudiable signature, authentication of the decrypted message content can be done by any party who knows the sender.
    "},{"location":"concepts/0021-didcomm-message-anatomy/#2-signed-unencrypted","title":"2. Signed Unencrypted","text":"

    This format is for when the audience of the message is unknown (for example some form of public challenge). This format is signed, so that when a member of the audience receives the message they can authenticate the message with its non-repudiable signature.
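The format choices above amount to a simple decision: which envelope format fits a given audience and set of trust requirements. The enum and chooser below are an illustrative sketch of that decision, not a normative API:

```python
from enum import Enum, auto

class EnvelopeFormat(Enum):
    """Envelope-level formats described above (names are illustrative)."""
    ANONYMOUS_ENCRYPTED = auto()      # encrypted, no sender information
    AUTHENTICATED_ENCRYPTED = auto()  # sender authenticated to recipient(s) only
    SIGNED_ENCRYPTED = auto()         # encrypted plus non-repudiable signature
    SIGNED_UNENCRYPTED = auto()       # public audience, signed plaintext

def choose_format(recipient_known: bool,
                  reveal_sender: bool,
                  non_repudiable: bool) -> EnvelopeFormat:
    # Map the properties discussed above onto a format choice.
    if not recipient_known:
        return EnvelopeFormat.SIGNED_UNENCRYPTED
    if non_repudiable:
        return EnvelopeFormat.SIGNED_ENCRYPTED
    if reveal_sender:
        return EnvelopeFormat.AUTHENTICATED_ENCRYPTED
    return EnvelopeFormat.ANONYMOUS_ENCRYPTED
```

Note that non-repudiability implies sender disclosure, so it takes precedence over the authenticated/anonymous distinction in this sketch.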

    "},{"location":"concepts/0021-didcomm-message-anatomy/#serialization-format","title":"Serialization Format","text":"

    All of the envelope level formats are achieved through JOSE based structures. The encrypted formats use a JWE structure, whereas the signed unencrypted format uses a JWS structure.

    Details on the encrypted forms are found here

    Details on the signed unencrypted format are TBC

    "},{"location":"concepts/0021-didcomm-message-anatomy/#content-level","title":"Content Level","text":"

    This level, to continue the postal metaphor, is the content inside the envelope: it contains the message itself.

    At this level, several conventions are defined around how messages are structured, which facilitate message identification and processing.

    The most important concepts to introduce about these conventions are the following.

    "},{"location":"concepts/0021-didcomm-message-anatomy/#message-type","title":"Message Type","text":"

    Every message contains a message type which allows the context of the message to be established and the content to be processed; see here for more information. It is also important to note that in DIDComm, the message type does not just identify the message; it also identifies the associated protocol. These protocols are essentially a group of related messages that are together required to achieve some form of multi-step flow; see here for more information.
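As a sketch of how a single type identifier can name both the message and its protocol, the following splits a type URI of the assumed form `<doc-uri>/<protocol-name>/<version>/<type-name>`; the didcomm.org URI in the example is illustrative:

```python
from typing import NamedTuple

class MessageType(NamedTuple):
    doc_uri: str   # where the protocol is documented
    protocol: str  # protocol name (identifies the multi-step flow)
    version: str   # protocol version
    name: str      # the individual message type within the protocol

def parse_type(type_uri: str) -> MessageType:
    """Split a type URI into its four assumed components, right to left."""
    doc_uri, protocol, version, name = type_uri.rsplit("/", 3)
    return MessageType(doc_uri, protocol, version, name)
```

A dispatcher can route on the full URI while grouping state machines by the `protocol`/`version` pair.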

    "},{"location":"concepts/0021-didcomm-message-anatomy/#message-id","title":"Message Id","text":"

    Every message contains a message id which is uniquely generated by the sender; this allows unique identification of the message. See here for more information.

    "},{"location":"concepts/0021-didcomm-message-anatomy/#decorators","title":"Decorators","text":"

    DIDComm messages at a content level allow for the support of re-usable conventions that are present across multiple messages in order to handle the same functionality in a consistent manner.

    A relevant analogy for decorators is that they are like HTTP headers in an HTTP request. The same HTTP header is often reused as a convention across multiple requests to achieve cross-cutting functionality.

    See here for more details.

    "},{"location":"concepts/0021-didcomm-message-anatomy/#serialization-format_1","title":"Serialization Format","text":"

    At present, all content-level messages are represented as JSON. Furthermore, these messages are JSON-LD sympathetic; however, they do not have full and direct support for JSON-LD.

    "},{"location":"concepts/0021-didcomm-message-anatomy/#reference","title":"Reference","text":"

    All references are defined inline where required.

    "},{"location":"concepts/0021-didcomm-message-anatomy/#drawbacks","title":"Drawbacks","text":"

    Why should we not do this?

    "},{"location":"concepts/0021-didcomm-message-anatomy/#rationale-and-alternatives","title":"Rationale and alternatives","text":""},{"location":"concepts/0021-didcomm-message-anatomy/#prior-art","title":"Prior art","text":""},{"location":"concepts/0021-didcomm-message-anatomy/#unresolved-questions","title":"Unresolved questions","text":""},{"location":"concepts/0021-didcomm-message-anatomy/#implementations","title":"Implementations","text":"

    The following lists the implementations (if any) of this RFC. Please do a pull request to add your implementation. If the implementation is open source, include a link to the repo or to the implementation within the repo. Please be consistent in the \"Name\" field so that a mechanical processing of the RFCs can generate a list of all RFCs supported by an Aries implementation.

    Name / Link Implementation Notes"},{"location":"concepts/0029-message-trust-contexts/","title":"Aries RFC 0029: Message Trust Contexts","text":""},{"location":"concepts/0029-message-trust-contexts/#summary","title":"Summary","text":"

    Introduces the concept of Message Trust Contexts and describes how they are populated and used.

    "},{"location":"concepts/0029-message-trust-contexts/#motivation","title":"Motivation","text":"

    An important aim of DID Communication is to let parties achieve high trust. Such trust is vital in cases where money changes hands and identity is at stake. However, sometimes lower trust is fine; playing tic-tac-toe ought to be safe through agents, even with a malicious stranger.

    We may intuitively understand the differences in these situations, but intuition isn't the best guide when designing a secure ecosystem. Trust is a complex, multidimensional phenomenon. We need a formal way to analyze it, and to test its suitability in particular circumstances.

    "},{"location":"concepts/0029-message-trust-contexts/#tutorial","title":"Tutorial","text":"

    When Alice sends a message to Bob, how much should Bob trust it?

    This is not a binary question, with possible answers of \"completely\" or \"not at all\". Rather, it is a nuanced question that should consider many factors. Some clarifying questions might include:

    "},{"location":"concepts/0029-message-trust-contexts/#message-trust-contexts","title":"Message Trust Contexts","text":"

    The DID Communication ecosystem formalizes the idea of a Message Trust Context (MTC) to expose such questions, make their answers explicit, and encourage thoughtful choices based on the answers.

    An MTC is an object that holds trust context for a message. This context follows a message throughout its processing journey inside the agent that receives it, and it should be analyzed and updated for decision-making purposes throughout.

    Protocols should be designed with standard MTCs in mind. Thus, it is desirable that all implementations share common names for certain concepts, so we can discuss them conveniently in design docs, error messages, logs, and so forth. The standard dimensions of trust tracked in an MTC break down into two groups:

    "},{"location":"concepts/0029-message-trust-contexts/#crypto-related","title":"Crypto-related","text":""},{"location":"concepts/0029-message-trust-contexts/#input-validations","title":"Input validations","text":"

    In code, these types of trust are written using whatever naming convention matches the implementer's programming language, so authenticated_origin and authenticatedOrigin are synonyms of each other and of Authenticated Origin.

    "},{"location":"concepts/0029-message-trust-contexts/#notation","title":"Notation","text":"

    In protocol designs, the requirements of a message trust context should be declared when message types are defined. For example, the credential_offer message in the credential_issuance protocol should not be accepted unless it has Integrity and Authenticated Origin in its MTC (because otherwise a MITM could interfere). The definition of the message type should say this. Its RFC does this by notating:

    mtc: +integrity +authenticated_origin\n

    When a loan is digitally signed, we probably need:

    mtc: +integrity +authenticated_origin +nonrepudiation\n

    The labels for these trust types are long, but they can be shortened if they remain unambiguous. Notice, too, that all of the official MTC fields have unique initial letters. We can therefore abbreviate unambiguously:

    mtc: +i +a +n\n

    Any type of trust that does not appear in MTC notation is assumed to be undefined (meaning no claim is made about it either way, perhaps because it hasn't been evaluated or because it doesn't matter). However, sometimes we need to make a lack of trust explicit. We might claim in a protocol definition that a particular type of trust is definitely not required. Or we might want to show that we evaluated a particular trust at runtime, and had a negative outcome. In such cases, we can do this:

    mtc: +i +a -n\n

    Here, we are explicitly denying that nonrepudiation is part of the trust context.

    For further terseness in our notation, spaces can be omitted:

    mtc: +i+a-n\n

    Finally, an mtc that makes no explicit positive or negative claims (undefined) is written as:

    mtc: ?\n

    This MTC notation is a supplement to SSI Notation and should be treated as equally normative. Such notation might be useful in log files and error messages, among other things. See Using a Message Trust Context at Runtime below.
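A small parser for this notation might look like the following; the label list is an illustrative subset drawn from the labels mentioned in this RFC, not the full official set:

```python
import re

# Illustrative subset of standard MTC labels (all have unique initial letters)
LABELS = ["authenticated_origin", "confidentiality", "deserialize_ok",
          "integrity", "key_ok", "nonrepudiation", "size_ok"]

def parse_mtc(notation: str) -> dict:
    """Parse notation like 'mtc: +i +a -n' into {label: True/False}.

    Absent labels stay absent (undefined); 'mtc: ?' yields no claims at all.
    Abbreviations resolve by unambiguous prefix match, as described above.
    """
    body = notation.split(":", 1)[1].strip()
    claims = {}
    if body == "?":
        return claims
    for sign, token in re.findall(r"([+-])([a-z_]+)", body):
        matches = [label for label in LABELS if label.startswith(token)]
        if len(matches) != 1:
            raise ValueError(f"ambiguous or unknown label: {token}")
        claims[matches[0]] = (sign == "+")
    return claims
```

Because spaces are optional in the terse form, the regex simply scans for sign/token pairs, so `+i +a -n` and `+i+a-n` parse identically.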

    "},{"location":"concepts/0029-message-trust-contexts/#custom-trust","title":"Custom Trust","text":"

    Specific agents may make trust distinctions that are helpful in their own problem domains. For example, some agents may evaluate trust on the physical location or IP address of a sender, or on the time of day that a message arrives. Others may use DIDComm for internal processes that have unique trust requirements over and above those that matter in interoperable scenarios, such as whether a message emanates from a machine running endpoint compliance software, or whether it has passed through intrusion detection or data loss prevention filters.

    Agent implementations are encouraged to add their own trust dimensions to their own implementations of a Message Trust Context, as long as they do not redefine the standard labels. In cases where custom trust types introduce ambiguity with trust labels, MTC notation requires enough letters to disambiguate labels. So if a complex custom MTC has fields named intrusion_detect_ok, ipaddr_ok (which both start like the standard integrity), and endpoint_compliance (which has no ambiguity with a standard token) it might be notated as:

    mtc: +c+a+inte+intr+ip-n-p-e\n

    Here, inte matches the standard label integrity, whereas intr and ip are known to be custom because they don't match a standard label; e is custom but only a single letter because it is unambiguous.

    "},{"location":"concepts/0029-message-trust-contexts/#populating-a-message-trust-context-at-runtime","title":"Populating a Message Trust Context at Runtime","text":"

    A Message Trust Context comes into being when a message arrives on the wire at the receiving agent and begins its processing flow.

    The first step may be an input validation to confirm that the message doesn't exceed a max size. If the check passes, the empty MTC is updated with +s.

    Another early step is decryption. This should allow population of the confidentiality and authenticated_origin dimensions, at least.

    Subsequent layers of code that do additional analysis should update the MTC as appropriate. For example, if a signature is not analyzed and validated until after the decryption step, the signature's presence or absence should cause nonrepudiation and maybe integrity to be updated. Similarly, once the plaintext of a message is known to be valid enough to deserialize into an object, the MTC acquires +deserialize_ok. Later, when the fields of the message's native object representation have been analyzed to make sure they conform to a particular structure, it should be updated again with +key_ok. And so forth.
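The population flow above can be sketched with a minimal MTC object; this is a toy stand-in for the attached mtc.py reference implementation, with illustrative label names:

```python
class MessageTrustContext:
    """Minimal sketch of an MTC: a tri-state map of trust labels.

    Each label is affirmed (True), denied (False), or undefined (absent).
    """
    def __init__(self):
        self._claims = {}  # label -> bool; a missing key means undefined

    def affirm(self, label: str):
        self._claims[label] = True

    def deny(self, label: str):
        self._claims[label] = False

    def get(self, label: str):
        return self._claims.get(label)  # None means undefined

# Population mirrors the processing steps described above:
mtc = MessageTrustContext()
mtc.affirm("size_ok")               # +s: the max-size check passed
mtc.affirm("confidentiality")       # set during decryption
mtc.affirm("authenticated_origin")  # authcrypt verified the sender
mtc.deny("nonrepudiation")          # -n: no signature was present
```

Later processing layers keep calling `affirm`/`deny` as they learn more, while policy code queries `get` before acting.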

    "},{"location":"concepts/0029-message-trust-contexts/#using-a-message-trust-context-at-runtime","title":"Using a Message Trust Context at Runtime","text":"

    As message processing happens, the MTC isn't just updated. It should constantly be queried, and decisions should be made on the basis of what the MTC says. These decisions can vary according to the preferences of agent developers and the policies of agent owners. Some agents may choose not to accept any messages that are -a, for example, while others may be content to talk with anonymous senders. The recommendations of protocol designers should never be ignored, however; it is probably wrong to accept a -n message that signs a loan, even if agent policy is lax about other things. Formally declared MTCs in a protocol design may be linked to security proofs...

    Part of the intention with the terse MTC notation is that conversations about agent trust should be easy and interoperable. When agents send one another problem-report messages, they can turn MTCs into human-friendly text, but also use this notation: \"Unable to accept a payment from message that lacks Integrity guarantees (-i).\" This notation can help diagnose trust problems in logs. It may also be helpful with message tracing, feature discovery, and agent testing.
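Turning a trust context back into terse notation (for a problem-report or a log line, say) might be sketched as follows; abbreviating each label by its first initial is an assumption that only holds while labels remain unambiguous:

```python
def to_notation(claims: dict) -> str:
    """Render {label: bool} claims as terse MTC notation (illustrative).

    Sorted for deterministic output; an empty claims dict is undefined.
    """
    if not claims:
        return "mtc: ?"
    parts = [("+" if value else "-") + label[0]
             for label, value in sorted(claims.items())]
    return "mtc: " + "".join(parts)
```

A problem-report handler could embed the result directly in its human-friendly text, e.g. "Unable to accept a payment from message that lacks Integrity guarantees (-i)."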

    "},{"location":"concepts/0029-message-trust-contexts/#attachments","title":"Attachments","text":"

    MTCs apply to the entirety of the associated message's attributes. However, embedded and appended message attachments present the unique situation of nested content with the potential for a trust context that differs from the parent message.

    The attachment descriptor, used for both embedded and appended attachments, shares the same MTC as the parent message. Unpacked attachment data have their own Trust Contexts populated as appropriate depending on how the data was retrieved, whether the attachment is signed, whether an integrity checksum was provided and verified, etc.

    Attachments delivered by the parent message, i.e. as base64url-encoded data, inherit relevant trust contexts from the parent, such as confidentiality and authenticated_origin, when the message was delivered as an authenticated encrypted message.

    Attachments retrieved from a remote resource populate their trust context as relevant to the retrieval mechanism.
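The inheritance rules above might be sketched as follows; the policy of exactly which claims carry over to an embedded attachment is an illustrative assumption:

```python
def attachment_mtc(parent_claims: dict, embedded: bool) -> dict:
    """Derive an attachment's starting trust context from its parent.

    Embedded (base64url) data inherits envelope-derived claims such as
    confidentiality and authenticated_origin; remotely retrieved data
    starts empty and is populated per the retrieval mechanism.
    """
    if embedded:
        inheritable = ("confidentiality", "authenticated_origin")
        return {k: v for k, v in parent_claims.items() if k in inheritable}
    return {}
```

Either way, signature checks or integrity checksums on the attachment data itself would add further claims after this starting point.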

    "},{"location":"concepts/0029-message-trust-contexts/#reference","title":"Reference","text":"

    A complete reference implementation of MTCs in python is attached to this RFC (see mtc.py). It could easily be extended with custom trust dimensions, and it would be simple to port to other programming languages. Note that the implementation includes unit tests written in pytest style, and has only been tested on python 3.x.

    "},{"location":"concepts/0029-message-trust-contexts/#implementations","title":"Implementations","text":"

    The following lists the implementations (if any) of this RFC. Please do a pull request to add your implementation. If the implementation is open source, include a link to the repo or to the implementation within the repo. Please be consistent in the \"Name\" field so that a mechanical processing of the RFCs can generate a list of all RFCs supported by an Aries implementation.

    Name / Link Implementation Notes MTC reference impl Reference impl in python, checked in with RFC. Includes unit tests. Aries Protocol Test Suite Aries Static Agent - Python Largely inspired by reference implementation; MTC populated and made available to handlers."},{"location":"concepts/0046-mediators-and-relays/","title":"Aries RFC 0046: Mediators and Relays","text":""},{"location":"concepts/0046-mediators-and-relays/#summary","title":"Summary","text":"

    The mental model for agent-to-agent (A2A) messaging includes two important communication primitives that have a meaning unique to our ecosystem: mediator and relay.

    A mediator is a participant in agent-to-agent message delivery that must be modeled by the sender. It has its own keys and will deliver messages only after decrypting an outer envelope to reveal a forward request. Many types of mediators may exist, but two important ones should be widely understood, as they commonly manifest in DID Docs:

    1. A service that hosts many cloud agents at a single endpoint to provide herd privacy (an \"agency\") is a mediator.
    2. A cloud-based agent that routes between/among the edges of a sovereign domain is a mediator.

    A relay is an entity that passes along agent-to-agent messages, but that can be ignored when the sender considers encryption choices. It does not decrypt anything. Relays can be used to change the transport for a message (e.g., accept an HTTP POST, then turn around and emit an email; accept a Bluetooth transmission, then turn around and emit something in a message queue). Mix networks like TOR are an important type of relay.

    Read on to explore how agent-to-agent communication can model complex topologies and flows using these two primitives.

    "},{"location":"concepts/0046-mediators-and-relays/#motivation","title":"Motivation","text":"

    When we describe agent-to-agent communication, it is convenient to think of an interaction only in terms of Alice and Bob and their agents. We say things like: \"Alice's agent sends a message to Bob's agent\" -- or perhaps \"Alice's edge agent sends a message to Bob's cloud agent, which forwards it to Bob's edge agent\".

    Such statements adopt a useful level of abstraction--one that's highly recommended for most discussions. However, they make a number of simplifications. By modeling the roles of mediators and relays in routing, we can support routes that use multiple transports, routes that are not fully known (or knowable) to the sender, routes that pass through mix networks, and other advanced and powerful concepts.

    "},{"location":"concepts/0046-mediators-and-relays/#tutorial","title":"Tutorial","text":""},{"location":"concepts/0046-mediators-and-relays/#key-concepts","title":"Key Concepts","text":"

    Let's define mediators and relays by exploring how they manifest in a series of communication scenarios between Alice and Bob.

    "},{"location":"concepts/0046-mediators-and-relays/#scenario-1-base","title":"Scenario 1 (base)","text":"

    Alice and Bob are both employees of a large corporation. They work in the same office, but have never met. The office has a rule that all messages between employees must be encrypted. They use paper messages and physical delivery as the transport. Alice writes a note, encrypts it so only Bob can read it, puts it in an envelope addressed to Bob, and drops the envelope on a desk that she has been told belongs to Bob. This desk is in fact Bob's, and he later picks up the message, decrypts it, and reads it.

    In this scenario, there is no mediator, and no relay.

    "},{"location":"concepts/0046-mediators-and-relays/#scenario-2-a-gatekeeper","title":"Scenario 2: a gatekeeper","text":"

    Imagine that Bob hires an executive assistant, Carl, to filter his mail. Bob won't open any mail unless Carl looks at it and decides that it's worthy of Bob's attention.

    Alice has to change her behavior. She continues to package a message for Bob, but now she must account for Carl as well. She takes the envelope for Bob, and places it inside a new envelope addressed to Carl. Inside the outer envelope, and next to the envelope destined for Bob, Alice writes Carl an encrypted note: \"This inner envelope is for Bob. Please forward.\"

    Here, Carl is acting as a mediator. He is mostly just passing messages along. But because he is processing a message himself, and because Carl is interposed between Alice and Bob, he affects the behavior of the sender. He is a known entity in the route.

    "},{"location":"concepts/0046-mediators-and-relays/#scenario-3-transparent-indirection","title":"Scenario 3: transparent indirection","text":"

    All is the same as the base scenario (Carl has been fired), except that Bob is working from home when Alice's message lands on his desk. Bob has previously arranged with his friend Darla, who lives near him, to pick up any mail that's on his desk and drop it off at his house at the end of the work day. Darla sees Alice's note and takes it home to Bob.

    In this scenario, Darla is acting as a relay. Note that Bob arranges for Darla to do this without notifying Alice, and that Alice does not need to adjust her behavior in any way for the relay to work.

    "},{"location":"concepts/0046-mediators-and-relays/#scenario-4-more-indirection","title":"Scenario 4: more indirection","text":"

    Like scenario 3, Darla brings Bob his mail at home. However, Bob isn't at home when his mail arrives. He's had to rush out on an errand, but he's left instructions with his son, Emil, to open any work mail, take a photo of the letter, and text him the photo. Emil intends to do this, but the camera on his phone misfires, so he convinces his sister, Francis, to take the picture on her phone and email it to him. Then he texts the photo to Bob, as arranged.

    Here, Emil and Francis are also acting as relays. Note that nobody knows about the full route. Alice thinks she's delivering directly to Bob. So does Darla. Bob knows about Darla and Emil, but not about Francis.

    Note, too, how the transport is changing from physical mail to email to text.

    To the party immediately upstream (closer to the sender), a relay is indistinguishable from the next party downstream (closer to the recipient). A party anywhere in the chain can insert one or more relays upstream from themselves, as long as those relays are not upstream of another named party (sender or mediator).

    "},{"location":"concepts/0046-mediators-and-relays/#more-scenarios","title":"More Scenarios","text":"

    Mediators and relays can be combined in any order and any amount in variations on our fictional scenario. Bob could employ Carl as a mediator, and Carl could work from home and arrange delivery via George, then have his daughter Hannah run messages back to Bob's desk at work. Carl could hire his own mediator. Darla could arrange for Ivan to substitute for her when she goes on vacation. And so forth.

    "},{"location":"concepts/0046-mediators-and-relays/#more-traditional-usage","title":"More Traditional Usage","text":"

    The scenarios used above are somewhat artificial. Our most familiar agent-to-agent scenarios involve edge agents running on mobile devices and accessible through Bluetooth or push notification, and cloud agents that use electronic protocols as their transport. Let's see how relays and mediators apply there.

    "},{"location":"concepts/0046-mediators-and-relays/#scenario-5-traditional-base","title":"Scenario 5 (traditional base)","text":"

    Alice's cloud agent wants to talk to Bob's cloud agent. Bob's cloud agent is listening at http://bob.com/agent. Alice encrypts a message for Bob and posts it to that URL.

    In this scenario, we are using a direct transport with neither a mediator nor a relay.

    If you are familiar with common routing patterns and you are steeped in HTTP, you are likely objecting at this point, pointing out ways that this description diverges from best practice, including what's prescribed in other RFCs. You may be eager to explain why this is a privacy problem, for example.

    You are not wrong, exactly. But please suspend those concerns and hang with me. This is about what's theoretically possible in the mental model. Besides, I would note that virtually the same diagram could be used for a Bluetooth agent conversation:

    "},{"location":"concepts/0046-mediators-and-relays/#scenario-6-herd-hosting","title":"Scenario 6: herd hosting","text":"

    Let's tweak Scenario 5 slightly by saying that Bob's agent is one of thousands that are hosted at the same URL. Maybe the URL is now http://agents-r-us.com/inbox. Now if Alice wants to talk to Bob's cloud agent, she has to cope with a mediator. She wraps the encrypted message for Bob's cloud agent inside a forward message that's addressed to and encrypted for the agent of agents-r-us that functions as a gatekeeper.

    This scenario is one that highlights an external mediator--so-called because the mediator lives outside the sovereign domain of the final recipient.

    "},{"location":"concepts/0046-mediators-and-relays/#scenario-7-intra-domain-dispatch","title":"Scenario 7: intra-domain dispatch","text":"

    Now let's subtract agents-r-us. We're back to Bob's cloud agent listening directly at http://bob.com/agent. However, let's say that Alice has a different goal--now she wants to talk to the edge agent running on Bob's mobile device. This agent doesn't have a permanent IP address, so Bob uses his own cloud agent as a mediator. He tells Alice that his mobile device agent can only be reached via his cloud agent.

    Once again, this causes Alice to modify her behavior. Again, she wraps her encrypted message. The inner message is enclosed in an outer envelope, and the outer envelope is passed to the mediator.

    This scenario highlights an internal mediator. Internal and external mediators introduce similar features and similar constraints; the relevant difference is that internal mediators live within the sovereign domain of the recipient, and may thus be worthy of greater trust.

    "},{"location":"concepts/0046-mediators-and-relays/#scenario-8-double-mediation","title":"Scenario 8: double mediation","text":"

    Now let's combine. Bob's cloud agent is hosted at agents-r-us, AND Alice wants to reach Bob's mobile:

    This is a common pattern with HTTP-based cloud agents plus mobile edge agents, which is the most common deployment pattern we expect for many users of self-sovereign identity. Note that the properties of the agency and the routing agent are not particularly special--they are just an external and an internal mediator, respectively.

    "},{"location":"concepts/0046-mediators-and-relays/#related-concepts","title":"Related Concepts","text":""},{"location":"concepts/0046-mediators-and-relays/#routes-are-one-way-not-duplex","title":"Routes are One-Way (not duplex)","text":"

    In all of this discussion, note that we are analyzing only a flow from Alice to Bob. How Bob gets a message back to Alice is a completely separate question. Just because Carl, Darla, Emil, Francis, and Agents-R-Us may be involved in how messages flow from Alice to Bob does not mean they are involved in the flow in the opposite direction.

    Note how this breaks the simple assumptions of pure request-response technologies like HTTP, which assume the channel in (request) is also the channel out (response). Duplex request-response can be modeled with A2A, but doing so requires support that may not always be available, plus cooperative behavior governed by the ~thread decorator.

    "},{"location":"concepts/0046-mediators-and-relays/#conventions-on-direction","title":"Conventions on Direction","text":"

    For any given one-way route, the direction of flow is always from sender to receiver. We could use many different metaphors to talk about the \"closer to sender\" and \"closer to receiver\" directions -- upstream and downstream, left and right, before and after, in and out. We've chosen to standardize on two:

    "},{"location":"concepts/0046-mediators-and-relays/#implementations","title":"Implementations","text":"

    The following lists the implementations (if any) of this RFC. Please do a pull request to add your implementation. If the implementation is open source, include a link to the repo or to the implementation within the repo. Please be consistent in the \"Name\" field so that a mechanical processing of the RFCs can generate a list of all RFCs supported by an Aries implementation.

    Name / Link Implementation Notes Connect.Me Free mobile app from Evernym. Installed via app store on iOS and Android. Verity Commercially licensed enterprise agent, SaaS or on-prem. DIDComm mediator Open source cloud-based mediator with Firebase support."},{"location":"concepts/0047-json-ld-compatibility/","title":"Aries RFC 0047: JSON-LD Compatibility","text":""},{"location":"concepts/0047-json-ld-compatibility/#summary","title":"Summary","text":"

    Explains the goals of DID Communication with respect to JSON-LD, and how Aries proposes to accomplish them.

    "},{"location":"concepts/0047-json-ld-compatibility/#motivation","title":"Motivation","text":"

    JSON-LD is a familiar body of conventions that enriches the expressive power of plain JSON. It is natural for people who arrive in the DID Communication (DIDComm) ecosystem to wonder whether we are using JSON-LD--and if so, how. We need a coherent answer that clarifies our intentions and that keeps us true to those intentions as the ecosystem evolves.

    "},{"location":"concepts/0047-json-ld-compatibility/#tutorial","title":"Tutorial","text":"

    The JSON-LD spec is a recommendation work product of the W3C RDF Working Group. Since it was formally recommended as version 1.0 in 2014, the JSON for Linking Data Community Group has taken up not-yet-standards-track work on a 1.1 update.

    JSON-LD has significant gravitas in identity circles. It gives to JSON some capabilities that are sorely needed to model the semantic web, including linking, namespacing, datatyping, signing, and a strong story for schema (partly through the use of JSON-LD on schema.org).

    However, JSON-LD also comes with some conceptual and technical baggage. It can be hard for developers to master its subtleties; it requires very flexible parsing behavior after built-in JSON support is used to deserialize; it references a family of related specs that have their own learning curve; the formality of its test suite and libraries may get in the way of a developer who just wants to read and write JSON and \"get stuff done.\"

    In addition, the problem domain of DIDComm is somewhat different from the places where JSON-LD has the most traction. The sweet spot for DIDComm is small, relatively simple JSON documents where code behavior is strongly bound to the needs of a specific interaction. DIDComm needs to work with extremely simple agents on embedded platforms. Such agents may experience full JSON-LD support as an undue burden when they don't even have a familiar desktop OS. They don't need arbitrary semantic complexity.

    If we wanted to use email technology to send a verifiable credential, we would model the credential as an attachment, not enrich the schema of raw email message bodies. DIDComm invites a similar approach.

    "},{"location":"concepts/0047-json-ld-compatibility/#goal","title":"Goal","text":"

    The DIDComm messaging effort that began in the Indy community wants to benefit from the accessibility of ordinary JSON, but leave an easy path for more sophisticated JSON-LD-driven patterns when the need arises. We therefore set for ourselves this goal:

    Be compatible with JSON-LD, such that advanced use cases can take advantage of it where it makes sense, but impose no dependencies on the mental model or the tooling of JSON-LD for the casual developer.

    "},{"location":"concepts/0047-json-ld-compatibility/#what-the-casual-developer-needs-to-know","title":"What the Casual Developer Needs to Know","text":"

    That's it.

    "},{"location":"concepts/0047-json-ld-compatibility/#details","title":"Details","text":"

    Compatibility with JSON-LD was evaluated against version 1.1 of the JSON-LD spec, current in early 2019. If material changes in the spec are forthcoming, a new analysis may be worthwhile. Our current understanding follows.

    "},{"location":"concepts/0047-json-ld-compatibility/#type","title":"@type","text":"

    The type of a DIDComm message, and its associated route or handler in dispatching code, is given by the JSON-LD @type property at the root of a message. JSON-LD requires this value to be an IRI. DIDComm DID references are fully compliant. Instances of @type on any node other than a message root have JSON-LD meaning, but no predefined relevance in DIDComm.

    "},{"location":"concepts/0047-json-ld-compatibility/#id","title":"@id","text":"

    The identifier for a DIDComm message is given by the JSON-LD @id property at the root of a message. JSON-LD requires this value to be an IRI. DIDComm message IDs are relative IRIs, and can be converted to absolute form as described in RFC 0217: Linkable Message Paths. Instances of @id on any node other than a message root have JSON-LD meaning, but no predefined relevance in DIDComm.

    "},{"location":"concepts/0047-json-ld-compatibility/#context","title":"@context","text":"

    This is JSON-LD\u2019s namespacing mechanism. It is active in DIDComm messages, but can be ignored for simple processing, in the same way namespaces in XML are often ignored for simple tasks.

    Every DIDComm message has an associated @context, but we have chosen to follow the procedure described in section 6 of the JSON-LD spec, which focuses on how ordinary JSON can be interpreted as JSON-LD by communicating @context out of band.

    DIDComm messages communicate the context out of band by specifying it in the protocol definition (e.g., RFC) for the associated message type; thus, the value of @type indirectly gives the relevant @context. In advanced use cases, @context may appear in a DIDComm message, supplementing this behavior.

    "},{"location":"concepts/0047-json-ld-compatibility/#ordering","title":"Ordering","text":"

    JSON-LD specifies that the order of items in arrays is NOT significant, and notes (correctly) that this is the opposite of the standard assumption for plain JSON. This makes sense when viewed through the lens of JSON-LD\u2019s role as a transformation of RDF.

    Since we want to violate as few assumptions as possible for a developer with general knowledge of JSON, DIDComm messages reverse this default, making arrays an ordered construct, as if all DIDComm message @contexts contained something like:

    \"each field\": { \"@container\": \"@list\"}\n
    To contravene the default, use a JSON-LD construction like this in @context:

    \"myfield\": { \"@container\": \"@set\"}\n
    "},{"location":"concepts/0047-json-ld-compatibility/#decorators","title":"Decorators","text":"

    Decorators are JSON fragments that can be included in any DIDComm message. They enter the formally defined JSON-LD namespace via a JSON-LD fragment that is automatically imputed to every DIDComm message:

    \"@context\": {\n  \"@vocab\": \"https://github.com/hyperledger/aries-rfcs/\"\n}\n

    All decorators use the reserved prefix char ~ (tilde). For more on decorators, see the Decorator RFC.

    "},{"location":"concepts/0047-json-ld-compatibility/#signing","title":"Signing","text":"

    JSON-LD is associated but not strictly bound to a signing mechanism, LD-Signatures. It\u2019s a good mechanism, but it comes with some baggage: you must canonicalize, which means you must resolve every \u201cterm\u201d (key name) to its fully qualified form by expanding contexts before signing. This raises the bar for JSON-LD sophistication and library dependencies.

    The DIDComm community is not opposed to using LD Signatures for problems that need them, but has decided not to adopt the mechanism across the board. There is another signing mechanism that is far simpler, and adequate for many scenarios. We\u2019ll use whichever scheme is best suited to circumstances.

    "},{"location":"concepts/0047-json-ld-compatibility/#type-coercion","title":"Type Coercion","text":"

    DIDComm messages generally do not need this feature of JSON-LD, because there are well understood conventions around date-time datatypes, and individual RFCs that define each message type can further clarify such subtleties. However, it is available on a message-type-definition basis (not ad hoc).

    "},{"location":"concepts/0047-json-ld-compatibility/#node-references","title":"Node References","text":"

    JSON-LD lets one field reference another. See example 93 (note that the ref could have just been \u201c#me\u201d instead of the fully qualified IRI). We may need this construct at some point in DIDComm, but it is not in active use yet.

    "},{"location":"concepts/0047-json-ld-compatibility/#internationalization-and-localization","title":"Internationalization and Localization","text":"

    JSON-LD describes a mechanism for this. It has approximately the same features as the one described in Aries RFC 0043, with a few exceptions:

    Because of these misalignments, the DIDComm ecosystem plans to use its own solution to this problem.

    "},{"location":"concepts/0047-json-ld-compatibility/#additional-json-ld-constructs","title":"Additional JSON-LD Constructs","text":"

    The following JSON-LD keywords may be useful in DIDComm at some point in the future: @base, @index, @container (cf @list and @set), @nest, @value, @graph, @prefix, @reverse, @version.

    "},{"location":"concepts/0047-json-ld-compatibility/#drawbacks","title":"Drawbacks","text":"

    By attempting compatibility but only lightweight usage of JSON-LD, we are neither all-in on JSON-LD, nor all-out. This could cause confusion. We are making the bet that most developers won't need to know or care about the details; they'll simply learn that @type and @id are special, required fields on messages. Designers of protocols will need to know a bit more.

    "},{"location":"concepts/0047-json-ld-compatibility/#rationale-and-alternatives","title":"Rationale and alternatives","text":""},{"location":"concepts/0047-json-ld-compatibility/#unresolved-questions","title":"Unresolved questions","text":""},{"location":"concepts/0047-json-ld-compatibility/#implementations","title":"Implementations","text":"

    The following lists the implementations (if any) of this RFC. Please do a pull request to add your implementation. If the implementation is open source, include a link to the repo or to the implementation within the repo. Please be consistent in the \"Name\" field so that a mechanical processing of the RFCs can generate a list of all RFCs supported by an Aries implementation.

    Name / Link Implementation Notes"},{"location":"concepts/0049-repudiation/","title":"Aries RFC 0049: Repudiation","text":""},{"location":"concepts/0049-repudiation/#summary","title":"Summary","text":"

    Explain DID Communication's perspective on repudiation, and how this influences the DIDComm approach to digital signatures.

    "},{"location":"concepts/0049-repudiation/#motivation","title":"Motivation","text":"

    A very common mistake among newcomers to cryptography is to assume that digital signatures are the best way to prove the origin of data. While it is true that digital signatures can be used in this way, over-signing creates a digital exhaust that can lead to serious long-term privacy problems. We do use digital signatures, but we want to be very deliberate about when and why--and by default, we want to use a more limited technique called authenticated encryption. This doc explains the distinction and its implications.

    "},{"location":"concepts/0049-repudiation/#tutorial","title":"Tutorial","text":"

    If Carol receives a message that purports to come from Alice, she may naturally ask:

    Do I know that this really came from Alice?

    This is a fair question, and an important one. There are two ways to answer it:

    Both of these approaches can answer Carol's question, but they differ in who can trust the answer. If Carol knows Alice is the sender, but can't prove it to anybody else, then we say the message is repudiable; if Carol can prove the origin to others, then we say the message is non-repudiable.

    The repudiable variant is accomplished with a technique called authenticated encryption.

    The non-repudiable variant is accomplished with digital signatures.

    "},{"location":"concepts/0049-repudiation/#how-authenticated-encryption-works","title":"How Authenticated Encryption Works","text":"

    Repudiable sending may sound mysterious, but it's actually quite simple. Alice and Carol can negotiate a shared secret and trust one another not to leak it. Thereafter, if Alice sends Carol a message that uses the shared secret (e.g., it's encrypted by a negotiated symmetric encryption key), then Carol knows the sender must be Alice. However, she can't prove it to anyone, because Alice's immediate counter-response could be, \"Carol could have encrypted this herself. She knows the key, too.\" Notice that this only works in a pairwise channel.
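    The pairwise pattern above can be sketched with PyNaCl, the Python binding to libsodium (the library this RFC cites for authenticated encryption). This is an illustration only, not DIDComm wire format; the message text is invented:

    ```python
    # Authenticated encryption sketch: Alice encrypts for Carol using a
    # shared secret derived from Alice's private key + Carol's public key.
    from nacl.public import PrivateKey, Box

    # Alice and Carol each generate a keypair and exchange public keys.
    alice_sk = PrivateKey.generate()
    carol_sk = PrivateKey.generate()

    # Alice's Box derives the shared secret and encrypts (nonce is
    # generated automatically and prepended to the ciphertext).
    alice_box = Box(alice_sk, carol_sk.public_key)
    ciphertext = alice_box.encrypt(b"meet me at noon")

    # Carol decrypts with the mirror-image Box. Successful decryption
    # convinces Carol the sender was Alice -- but because Carol holds
    # the same shared secret, she could have produced this ciphertext
    # herself, so she cannot prove authorship to any third party.
    carol_box = Box(carol_sk, alice_sk.public_key)
    plaintext = carol_box.decrypt(ciphertext)
    assert plaintext == b"meet me at noon"
    ```

    Note the repudiability falls directly out of the symmetry: both parties can compute the same key, so either could have authored the message.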

    "},{"location":"concepts/0049-repudiation/#signatures","title":"Signatures","text":"

    Non-repudiable messages are typically accomplished with digital signatures. With signatures, everyone can examine a signature to verify its provenance.

    Fancy signature schemes such as ring signatures may represent intermediate positions, where the fact that a signature was provided by a member of a group is known--but not which specific member did the signing.
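    The non-repudiable alternative can be sketched with PyNaCl's Ed25519 signing API (illustrative only; the message text is invented):

    ```python
    # Digital signature sketch: anyone holding the verify key -- not just
    # the intended recipient -- can check provenance.
    from nacl.signing import SigningKey

    signing_key = SigningKey.generate()
    signed = signing_key.sign(b"I, Alice, owe Carol $100")

    # Verification recovers the message if (and only if) the signature
    # is valid. Carol can hand the signed blob to a third party, who can
    # verify it independently -- this is what makes it non-repudiable.
    verify_key = signing_key.verify_key
    message = verify_key.verify(signed)
    assert message == b"I, Alice, owe Carol $100"
    ```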

    "},{"location":"concepts/0049-repudiation/#why-and-when-to-use-each-strategy","title":"Why and When To Use Each Strategy","text":"

    A common mistake is to assume that digital signatures should be used everywhere because they give the most guarantees. This is a misunderstanding of who needs which guarantees under which conditions.

    If Alice tells a secret to Carol, who should decide whether the secret is reshared--Alice, or Carol?

    In an SSI paradigm, the proper, desirable default is that a sender of secrets should retain the ability to decide if their secrets are shareable, not give that guarantee away.

    If Alice sends a repudiable message, she gets a guarantee that Carol can't reshare it in a way that damages Alice. On the other hand, if she sends a message that's digitally signed, she has no control over where Carol shares the secret and proves its provenance. Hopefully Carol has Alice's best interests at heart, and has good judgment and solid cybersecurity...

    There are certainly cases where non-repudiation is appropriate. If Alice is entering into a borrower:lender relationship with Carol, Carol needs to prove to third parties that Alice, and only Alice, incurred the legal obligation.

    DIDComm supports both modes of communication. However, properly modeled interactions tend to favor repudiable messages; non-repudiation must be a deliberate choice. For this reason, we assume repudiable until an explicit signature is required (in which case the sign() crypto primitive is invoked). This matches the physical world, where most communication is casual and does not carry the weight of legal accountability--and should not.

    "},{"location":"concepts/0049-repudiation/#unknown-recipients","title":"Unknown Recipients","text":"

    Imagine that Alice wants to broadcast a message. She doesn't know who will receive it, so she can't use authenticated encryption. Yet she wants anyone who receives it to know that it truly comes from her.

    In this situation, digital signatures are required. Note, however, that Alice is trading some privacy for her ability to publicly prove message origin.

    "},{"location":"concepts/0049-repudiation/#reference","title":"Reference","text":"

    Authenticated encryption is not something we invented. It is well described in the documentation for libsodium. It is implemented there, and also in the pure javascript port, TweetNacl.

    "},{"location":"concepts/0049-repudiation/#drawbacks","title":"Drawbacks","text":"

    The main reason not to emphasize authenticated encryption over digital signatures is that we seem to encounter a steady impedance from people who are signature-oriented. It is hard and time-consuming to reset expectations. However, we have concluded that the gains in privacy are worth the effort.

    "},{"location":"concepts/0049-repudiation/#implementations","title":"Implementations","text":"

    The following lists the implementations (if any) of this RFC. Please do a pull request to add your implementation. If the implementation is open source, include a link to the repo or to the implementation within the repo. Please be consistent in the \"Name\" field so that a mechanical processing of the RFCs can generate a list of all RFCs supported by an Aries implementation.

    Name / Link Implementation Notes"},{"location":"concepts/0050-wallets/","title":"Aries RFC 0050: Wallets","text":""},{"location":"concepts/0050-wallets/#summary","title":"Summary","text":"

    Specify the external interfaces of identity wallets in the Indy ecosystem, as well as some background concepts, theory, tradeoffs, and internal implementation guidelines.

    "},{"location":"concepts/0050-wallets/#motivation","title":"Motivation","text":"

    Wallets are a familiar component metaphor that SSI has adopted from the world of cryptocurrencies. The translation isn't perfect, though; crypto wallets have only a subset of the features that an identity wallet needs. This causes problems, as coders may approach wallets in Indy with assumptions that are narrower than our actual design target.

    Since wallets are a major vector for hacking and cybersecurity issues, casual or fuzzy wallet requirements are a recipe for frustration or disaster. Divergent and substandard implementations could undermine security more broadly. This argues for as much design guidance and implementation help as possible.

    Wallets are also a unit of identity portability--if an identity owner doesn't like how her software is working, she should be able to exercise her self- sovereignty by taking the contents of her wallet to a new service. This implies that wallets need certain types of interoperability in the ecosystem, if they are to avoid vendor lock-in.

    All of these reasons--to clarify design scope, to provide uniform high security, and to guarantee interop--suggest that we need a formal RFC to document wallet architecture.

    "},{"location":"concepts/0050-wallets/#tutorial","title":"Tutorial","text":"

    (For a slide deck that gives a simplified overview of all the content in this RFC, please see http://bit.ly/2JUcIiT. The deck also includes a link to a recorded presentation, if you prefer something verbal and interactive.)

    "},{"location":"concepts/0050-wallets/#what-is-an-identity-wallet","title":"What Is an Identity Wallet?","text":"

    Informally, an identity wallet (preferably not just \"wallet\") is a digital container for data that's needed to control a self-sovereign identity. We borrow this metaphor from physical wallets:

    Notice that we do not carry around in a physical wallet every document, key, card, photo, piece of currency, or credential that we possess. A wallet is a mechanism of convenient control, not an exhaustive repository. A wallet is portable. A wallet is worth safeguarding. Good wallets are organized so we can find things easily. A wallet has a physical location.

    What does this suggest about identity wallets?

    "},{"location":"concepts/0050-wallets/#types-of-sovereign-data","title":"Types of Sovereign Data","text":"

    Before we give a definitive answer to that question, let's take a detour for a moment to consider digital data. Actors in a self-sovereign identity ecosystem may own or control many different types of data:

    ...and much more. Different subsets of data may be worthy of different protection efforts:

    The data can also show huge variety in its size and in its richness:

    Because of the sensitivity difference, the size and richness difference, joint ownership, and different needs for access in different circumstances, we may store digital data in many different locations, with different backup regimes, different levels of security, and different cost profiles.

    "},{"location":"concepts/0050-wallets/#whats-out-of-scope","title":"What's Out of Scope","text":""},{"location":"concepts/0050-wallets/#not-a-vault","title":"Not a Vault","text":"

    This variety suggests that an identity wallet as a loose grab-bag of all our digital \"stuff\" will give us a poor design. We won't be able to make good tradeoffs that satisfy everybody; some will want rigorous, optimized search; others will want to minimize storage footprint; others will be concerned about maximizing security.

    We reserve the term vault to refer to the complex collection of all an identity owner's data:

    Note that a vault can contain an identity wallet. A vault is an important construct, and we may want to formalize its interface. But that is not the subject of this spec.

    "},{"location":"concepts/0050-wallets/#not-a-cryptocurrency-wallet","title":"Not A Cryptocurrency Wallet","text":"

    The cryptocurrency community has popularized the term \"wallet\"--and because identity wallets share with crypto wallets both high-tech crypto and a need to store secrets, it is tempting to equate these two concepts. However, an identity wallet can hold more than just cryptocurrency keys, just as a physical wallet can hold more than paper currency. Also, identity wallets may need to manage hundreds of millions of relationships (in the case of large organizations), whereas most crypto wallets manage a small number of keys:

    "},{"location":"concepts/0050-wallets/#not-a-gui","title":"Not a GUI","text":"

    As used in this spec, an identity wallet is not a visible application, but rather a data store. Although user interfaces (superb ones!) can and should be layered on top of wallets, from Indy's perspective the wallet itself consists of a container and its data; its friendly face is a separate construct. We may casually refer to an application as a \"wallet\", but what we really mean is that the application provides an interface to the underlying wallet.

    This is important because if a user changes which app manages his identity, he should be able to retain the wallet data itself. We are aiming for a better portability story than browsers offer (where if you change browsers, you may be able to export+import your bookmarks, but you have to rebuild all sessions and logins from scratch).

    "},{"location":"concepts/0050-wallets/#personas","title":"Personas","text":"

    Wallets have many stakeholders. However, three categories of wallet users are especially impactful on design decisions, so we define a persona for each.

    "},{"location":"concepts/0050-wallets/#alice-individual-identity-owner","title":"Alice (individual identity owner)","text":"

    Alice owns several devices, and she has an agent in the cloud. She has a thousand relationships--some with institutions, some with other people. She has a couple hundred credentials. She owns three different types of cryptocurrency. She doesn\u2019t issue or revoke credentials--she just uses them. She receives proofs from other entities (people and orgs). Her main tool for exercising a self-sovereign identity is an app on a mobile device.

    "},{"location":"concepts/0050-wallets/#faber-intitutional-identity-owner","title":"Faber (institutional identity owner)","text":"

    Faber College has an on-prem data center as well as many resources and processes in public and private clouds. It has relationships with a million students, alumni, staff, former staff, applicants, business partners, suppliers, and so forth. Faber issues credentials and must manage their revocation. Faber may use crypto tokens to sell and buy credentials and proofs.

    "},{"location":"concepts/0050-wallets/#the-org-book-trust-hub","title":"The Org Book (trust hub)","text":"

    The Org Book holds credentials (business licenses, articles of incorporation, health permits, etc) issued by various government agencies, about millions of other business entities. It needs to index and search credentials quickly. Its data is public. It serves as a reference for many relying parties--thus its trust hub role.

    "},{"location":"concepts/0050-wallets/#use-cases","title":"Use Cases","text":"

    The specific use cases for an identity wallet are too numerous to fully list, but we can summarize them as follows:

    As an identity owner (any of the personas above), I want to manage identity and its relationships in a way that guarantees security and privacy:

    "},{"location":"concepts/0050-wallets/#managing-secrets","title":"Managing Secrets","text":"

    Certain sensitive things require special handling. We would never expect to casually lay an Ebola Zaire sample on the counter in our bio lab; rather, it must never leave a special controlled isolation chamber.

    Cybersecurity in wallets can be greatly enhanced if we take a similar tack with high-value secrets. We prefer to generate such secrets in their final resting place, possibly using a seed if we need determinism. We only use such secrets in their safe place, instead of passing them out to untrusted parties.

    TPMs, HSMs, and so forth follow these rules. Indy\u2019s current wallet interface does, too. You can\u2019t get private keys out.

    "},{"location":"concepts/0050-wallets/#composition","title":"Composition","text":"

    The foregoing discussions about cybersecurity, the desirability of design guidance and careful implementation, and wallet data that includes but is not limited to secrets motivate the following logical organization of identity wallets in Indy:

    The world outside a wallet interfaces with the wallet through a public interface provided by indy-sdk, and implemented only once. This is the block labeled encryption, query (wallet core) in the diagram. The implementation in this layer guarantees proper encryption and secret-handling. It also provides some query features. Records (items) to be stored in a wallet are referenced by a public handle if they are secrets. This public handle might be a public key in a key pair, for example. Records that are not secrets can be returned directly across the API boundary.

    Underneath, this common wallet code in libindy is supplemented with pluggable storage--a technology that provides persistence and query features. This pluggable storage could be a file system, an object store, an RDBMS, a NoSQL DB, a graph DB, a key-value store, or almost anything similar. The pluggable storage is registered with the wallet layer by providing a series of C-callable functions (callbacks). The storage layer doesn't have to worry about encryption at all; by the time data reaches it, it is encrypted robustly, and the layer above the storage takes care of translating queries to and from encrypted form for external consumers of the wallet.

    "},{"location":"concepts/0050-wallets/#tags-and-queries","title":"Tags and Queries","text":"

    Searchability in wallets is facilitated with a tagging mechanism. Each item in a wallet can be associated with zero or more tags, where a tag is a key=value pair. Items can be searched based on the tags associated with them, and tag values can be strings or numbers. With a good inventory of tags in a wallet, searching can be robust and efficient--but there is no support for joins, subqueries, and other RDBMS-like constructs, as this would constrain the type of storage plugin that could be written.

    An example of the tags on a wallet item that is a credential might be:

      item-name = \"My Driver's License\"\n  date-issued = \"2018-05-23\"\n  issuer-did = \"ABC\"\n  schema = \"DEF\"\n

    Tag names and tag values are both case-sensitive.

    Because tag values are normally encrypted, most tag values can only be tested using the $eq, $neq or $in operators (see Wallet Query Language, next). However, it is possible to force a tag to be stored in the wallet as plain text by naming it with a special prefix, ~ (tilde). This enables operators like $gt, $lt, and $like. Such tags lose their security guarantees but provide for richer queries; it is up to applications and their users to decide whether the tradeoff is appropriate.

    "},{"location":"concepts/0050-wallets/#wallet-query-language","title":"Wallet Query Language","text":"

    Wallets can be searched and filtered using a simple, JSON-based query language. We call this Wallet Query Language (WQL). WQL is designed to require no fancy parsing by storage plugins, and to be easy enough for developers to learn in just a few minutes. It is inspired by MongoDB's query syntax, and can be mapped to SQL, GraphQL, and other query languages supported by storage backends, with minimal effort.

    Formal definition of WQL language is the following:

    query = {subquery}\nsubquery = {subquery, ..., subquery} // means subquery AND ... AND subquery\nsubquery = $or: [{subquery},..., {subquery}] // means subquery OR ... OR subquery\nsubquery = $not: {subquery} // means NOT (subquery)\nsubquery = \"tagName\": tagValue // means tagName == tagValue\nsubquery = \"tagName\": {$neq: tagValue} // means tagName != tagValue\nsubquery = \"tagName\": {$gt: tagValue} // means tagName > tagValue\nsubquery = \"tagName\": {$gte: tagValue} // means tagName >= tagValue\nsubquery = \"tagName\": {$lt: tagValue} // means tagName < tagValue\nsubquery = \"tagName\": {$lte: tagValue} // means tagName <= tagValue\nsubquery = \"tagName\": {$like: tagValue} // means tagName LIKE tagValue\nsubquery = \"tagName\": {$in: [tagValue, ..., tagValue]} // means tagName IN (tagValue, ..., tagValue)\n
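    The operator semantics of the grammar above can be made concrete with a minimal, purely illustrative matcher (not part of any Aries codebase). It evaluates a WQL query against a plaintext tag dict; a real wallet evaluates most comparisons against encrypted tag values, and only ~-prefixed plaintext tags support the ordering and $like operators:

    ```python
    import re

    def wql_match(query, tags):
        """Return True if the tag dict satisfies the WQL query.
        Keys at the same level are implicitly AND-ed, per the grammar."""
        for key, cond in query.items():
            if key == "$or":
                if not any(wql_match(sub, tags) for sub in cond):
                    return False
            elif key == "$not":
                if wql_match(cond, tags):
                    return False
            elif isinstance(cond, dict):
                op, val = next(iter(cond.items()))
                tag = tags.get(key)
                if op == "$neq":
                    ok = tag != val
                elif op == "$in":
                    ok = tag in val
                elif op == "$like":
                    # SQL LIKE semantics: '%' is a wildcard.
                    pattern = "^" + ".*".join(re.escape(p) for p in val.split("%")) + "$"
                    ok = tag is not None and re.match(pattern, tag) is not None
                elif tag is None:
                    ok = False
                elif op == "$gt":
                    ok = tag > val
                elif op == "$gte":
                    ok = tag >= val
                elif op == "$lt":
                    ok = tag < val
                elif op == "$lte":
                    ok = tag <= val
                else:
                    raise ValueError("unknown WQL operator: " + op)
                if not ok:
                    return False
            else:
                if tags.get(key) != cond:  # "tagName": tagValue means equality
                    return False
        return True
    ```

    With this sketch, the sample queries in the next sections evaluate as you would expect, e.g. `wql_match({"~subject": {"$like": "Acme%"}}, {"~subject": "Acme Corp"})` is true.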
    "},{"location":"concepts/0050-wallets/#sample-wql-query-1","title":"Sample WQL Query 1","text":"

    Get all credentials where subject like \u2018Acme%\u2019 and issue_date > last week. (Note here that the name of the issue date tag begins with a tilde, telling the wallet to store its value unencrypted, which makes the $gt operator possible.)

    {\n  \"~subject\": {\"$like\": \"Acme%\"},\n  \"~issue_date\": {\"$gt\": \"2018-06-01\"}\n}\n
    "},{"location":"concepts/0050-wallets/#sample-wql-query-2","title":"Sample WQL Query 2","text":"

    Get all credentials about me where schema in (a, b, c) and issuer in (d, e, f).

    {\n  \"schema_id\": {\"$in\": [\"a\", \"b\", \"c\"]},\n  \"issuer_id\": {\"$in\": [\"d\", \"e\", \"f\"]},\n  \"holder_role\": \"self\"\n}\n
    "},{"location":"concepts/0050-wallets/#encryption","title":"Encryption","text":"

    Wallets need very robust encryption. However, they must also be searchable, and the encryption must be equally strong regardless of which storage technology is used. We want to be able to hide data patterns in the encrypted data, such that an attacker cannot see common prefixes on keys, or common fragments of data in encrypted values. And we want to rotate the key that protects a wallet without having to re-encrypt all its content. This suggests that a trivial encryption scheme, where we pick a symmetric key and encrypt everything with it, is not adequate.

    Instead, wallet encryption takes the following approach:

    The 7 \"column\" keys are concatenated and encrypted with a wallet master key, then saved into the metadata of the wallet. This allows the master key to be rotated without re-encrypting all the items in the wallet.
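    The rotation property can be sketched as follows. This is an illustration only: it uses PyNaCl's SecretBox (XSalsa20-Poly1305) as a stand-in for the wallet's actual ChaCha20-Poly1305 construction, and names like `column_keys` and `metadata` are chosen for clarity, not taken from libindy:

    ```python
    from nacl.secret import SecretBox
    from nacl.utils import random

    # Column keys encrypt item fields; the master key encrypts only the
    # concatenated column keys, which are stored in wallet metadata.
    column_keys = [random(SecretBox.KEY_SIZE) for _ in range(7)]
    master_key = random(SecretBox.KEY_SIZE)
    metadata = SecretBox(master_key).encrypt(b"".join(column_keys))

    # Items are encrypted under a column key, never under the master key.
    item_ct = SecretBox(column_keys[0]).encrypt(b"my credential")

    # Rotating the master key: decrypt the small key bundle, re-encrypt
    # it under the new master. Item ciphertexts are untouched.
    bundle = SecretBox(master_key).decrypt(metadata)
    new_master = random(SecretBox.KEY_SIZE)
    metadata = SecretBox(new_master).encrypt(bundle)

    # Items still decrypt with the same (unchanged) column keys.
    first_column_key = bundle[:SecretBox.KEY_SIZE]
    assert SecretBox(first_column_key).decrypt(item_ct) == b"my credential"
    ```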

    Today, all encryption is done using ChaCha20-Poly1305, with HMAC-SHA256. This is a solid, secure encryption algorithm, well tested and widely supported. However, we anticipate the desire to use different cipher suites, so in the future we will make the cipher suite pluggable.

    The way the individual fields are encrypted is shown in the following diagram. Here, data is shown as if stored in a relational database with tables. Wallet storage may or may not use tables, but regardless of how the storage distributes and divides the data, the logical relationships and the encryption shown in the diagram apply.

    "},{"location":"concepts/0050-wallets/#pluggable-storage","title":"Pluggable Storage","text":"

    Although the Indy infrastructure will provide only one wallet implementation, it will allow different storage backends to be plugged in to cover different use cases. The default storage shipped with libindy will be SQLite-based and well suited to agents running on edge devices. The register_wallet_storage API endpoint will allow Indy developers to register a custom storage implementation as a set of handlers.

    A storage implementation does not need any special security features. It stores data that was already encrypted by libindy (or data that needs no encryption/protection, in the case of unencrypted tag values). It searches data in whatever form it is persisted, without any translation. It returns data as persisted, and lets the common wallet infrastructure in libindy decrypt it before returning it to the user.
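    The real registration happens through C-callable handlers, but the division of labor can be sketched in a few lines of Python (the class and method names here are hypothetical, chosen only to illustrate the idea): the plugin persists and matches opaque, already-encrypted records, and never decrypts anything.

    ```python
    class DictStorage:
        """Hypothetical storage-plugin sketch: stores records exactly as
        handed to it. The wallet core has already encrypted ids, values,
        and (most) tags before they arrive here."""

        def __init__(self):
            self._records = {}

        def add_record(self, type_, id_, value, tags):
            self._records[(type_, id_)] = (value, dict(tags))

        def get_record(self, type_, id_):
            return self._records[(type_, id_)]

        def search_records(self, type_, tag_query):
            # Equality queries on encrypted tags work by comparing opaque
            # bytes -- the plugin needs no knowledge of the plaintext.
            return [rid for (rtype, rid), (_v, tags) in self._records.items()
                    if rtype == type_ and all(tags.get(k) == v
                                              for k, v in tag_query.items())]

    store = DictStorage()
    store.add_record("cred", b"enc-id-1", b"enc-value", {b"enc-tag": b"enc-val"})
    assert store.search_records("cred", {b"enc-tag": b"enc-val"}) == [b"enc-id-1"]
    ```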

    "},{"location":"concepts/0050-wallets/#secure-enclaves","title":"Secure Enclaves","text":"

    Secure enclaves are purpose-built to manage, generate, and securely store cryptographic material. Enclaves can be either specially designed hardware (e.g. HSM, TPM) or trusted execution environments (TEEs) that isolate code and data from the operating system (e.g. Intel SGX, AMD SEV, ARM TrustZone). Enclaves can take over common cryptographic operations that wallets perform (e.g. encryption, signing). Some secrets cannot be stored in the wallet itself--for example, the key that encrypts the wallet, or keys that must be backed up--and they also cannot live solely in an enclave, because keys stored in enclaves cannot be extracted. Enclaves can still protect such secrets via a mechanism called wrapping.

    "},{"location":"concepts/0050-wallets/#enclave-wrapping","title":"Enclave Wrapping","text":"

    Suppose I have a secret, X, that needs maximum protection. However, I can\u2019t store X in my secure enclave because I need to use it for operations that the enclave can\u2019t do for me; I need direct access. So how do I extend enclave protections to encompass my secret?

    I ask the secure enclave to generate a key, Y, that will be used to protect X. Y is called a wrapping key. I give X to the secure enclave and ask that it be encrypted with wrapping key Y. The enclave returns X\u2019 (ciphertext of X, now called a wrapped secret), which I can leave on disk with confidence; it cannot be decrypted to X without involving the secure enclave. Later, when I want to decrypt, I give wrapped secret X\u2019 to the secure enclave and ask it to give me back X by decrypting with wrapping key Y.
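    The wrap/unwrap flow can be sketched as follows. The `ToyEnclave` class is entirely hypothetical: a PyNaCl SecretBox stands in for the enclave's internal cipher, and the point of the sketch is only the access pattern, namely that the wrapping key Y never leaves the object:

    ```python
    from nacl.secret import SecretBox
    from nacl.utils import random

    class ToyEnclave:
        """Stand-in for a secure enclave: the wrapping key Y is generated
        inside and never exposed; callers see only wrapped ciphertext."""

        def __init__(self):
            self._wrapping_key = random(SecretBox.KEY_SIZE)  # Y, never extractable

        def wrap(self, secret):
            return SecretBox(self._wrapping_key).encrypt(secret)   # X -> X'

        def unwrap(self, wrapped):
            return SecretBox(self._wrapping_key).decrypt(wrapped)  # X' -> X

    enclave = ToyEnclave()
    x = b"wallet master key material"   # secret X, needed in the clear sometimes
    x_wrapped = enclave.wrap(x)         # X' can sit on disk with confidence
    assert enclave.unwrap(x_wrapped) == x
    ```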

    You could ask whether this really increases security. If you can get into the enclave, you can wrap or unwrap at will.

    The answer is that an unwrapped secret is protected by only one thing--whatever ACLs exist on the filesystem or storage where it resides. A wrapped secret is protected by two things--the ACLs and the enclave. OS access may breach either one, but pulling a hard drive out of a device will not breach the enclave.

    "},{"location":"concepts/0050-wallets/#paper-wallets","title":"Paper Wallets","text":"

    It is possible to persist wallet data to physical paper (or, for that matter, to etched metal or other physical media) instead of a digital container. Such data has attractive storage properties (e.g., may survive natural disasters, power outages, and other challenges that would destroy digital data). Of course, by leaving the digital realm, the data loses its accessibility over standard APIs.

    We anticipate that paper wallets will play a role in backup and recovery, and possibly in enabling SSI usage by populations that lack easy access to smartphones or the internet. Our wallet design should be friendly to such usage, but physical persistence of data is beyond the scope of Indy's plugin storage model and thus not explored further in this RFC.

    "},{"location":"concepts/0050-wallets/#backup-and-recovery","title":"Backup and Recovery","text":"

    Wallets need a backup and recovery feature, and also a way to export data and import it. Indy's wallet API includes an export function and an import function that may be helpful in such use cases. Today, the export is unfiltered--all data is exported. The import is also all-or-nothing and must be to an empty wallet; it is not possible to import selectively or to update existing records during import.

    A future version of import and export may add filtering, overwrite, and progress callbacks. It may also allow supporting or auxiliary data (other than what the wallet directly persists) to be associated with the export/import payload.

    For technical details on how export and import work, please see the internal design docs.

    "},{"location":"concepts/0050-wallets/#reference","title":"Reference","text":""},{"location":"concepts/0050-wallets/#rationale-and-alternatives","title":"Rationale and alternatives","text":"

    We could implement wallets purely as they are already built in the cryptocurrency world. This would give us great security (except for cloud-based crypto wallets), and perhaps moderately good usability.

    However, it would also mean we could not store credentials in wallets. Indy would then need an alternate mechanism to scan some sort of container when trying to satisfy a proof request. And it would mean that a person's identity would not be portable via a single container; rather, if you wanted to take your identity to a new place, you'd have to copy all crypto keys in your crypto wallet, plus copy all your credentials using some other mechanism. It would also fragment the places where you could maintain an audit trail of your SSI activities.

    "},{"location":"concepts/0050-wallets/#prior-art","title":"Prior art","text":"

    See comment about crypto wallets, above.

    "},{"location":"concepts/0050-wallets/#unresolved-questions","title":"Unresolved questions","text":""},{"location":"concepts/0050-wallets/#implementations","title":"Implementations","text":"

    The following lists the implementations (if any) of this RFC. Please do a pull request to add your implementation. If the implementation is open source, include a link to the repo or to the implementation within the repo. Please be consistent in the \"Name\" field so that a mechanical processing of the RFCs can generate a list of all RFCs supported by an Aries implementation.

    Name / Link Implementation Notes Indy SDK Most agents that implement wallets get their wallet support from Indy SDK. These are not listed separately."},{"location":"concepts/0051-dkms/","title":"Aries RFC 0051: Decentralized Key Management","text":""},{"location":"concepts/0051-dkms/#summary","title":"Summary","text":"

    Describes a general approach to key management in a decentralized, self-sovereign world. We expect Aries to embody the principles described here; this doc is likely to color numerous protocols and ecosystem features.

    "},{"location":"concepts/0051-dkms/#motivation","title":"Motivation","text":"

    A decentralized key management system (DKMS) is an approach to cryptographic key management where there is no central authority. DKMS leverages the security, immutability, availability, and resiliency properties of distributed ledgers to provide highly scalable key distribution, verification, and recovery.

    Key management is vital to exercising sovereignty in a digital ecosystem, and decentralization is a vital principle as well. Therefore, we need a coherent and comprehensive statement of philosophy and architecture on this vital nexus of topics.

    "},{"location":"concepts/0051-dkms/#tutorial","title":"Tutorial","text":"

    The bulk of the content for this RFC is located in the official architecture documentation -- dkms-v4.md; readers are encouraged to go there to learn more. Here we present only the highest-level background context, for those who may be unaware of some basics.

    "},{"location":"concepts/0051-dkms/#background-concepts","title":"Background Concepts","text":""},{"location":"concepts/0051-dkms/#key-types","title":"Key Types","text":"

    DKMS uses the following key types:

    1. Master keys: Keys that are not cryptographically protected. They are distributed manually or initially installed and protected by procedural controls and physical or electronic isolation.
    2. Key-encrypting keys: Symmetric or public keys used for key transport or storage of other keys.
    3. Data keys: Used to provide cryptographic operations on user data (e.g., encryption, authentication).

    The keys at one level are used to protect items at a lower level. Consequently, special measures are used to protect master keys, including severely limiting access and use, hardware protection, and providing access to the key only under shared control.
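One common way to realize such a hierarchy (a sketch, not the mechanism prescribed by this RFC) is to derive lower-level keys from higher-level ones with an HMAC-based KDF; the labels and variable names here are hypothetical.

```python
import hmac
import hashlib
import secrets

# Level 1: master key -- distributed manually, protected procedurally
master_key = secrets.token_bytes(32)

def derive(parent, label):
    """Derive a child key from a parent key via HMAC-SHA256 (one common
    pattern; actual CKMS/DKMS implementations vary)."""
    return hmac.new(parent, label, hashlib.sha256).digest()

# Level 2: key-encrypting key, protected by the master key
kek = derive(master_key, b"key-encrypting-key/1")

# Level 3: data keys, protected by the KEK; one per purpose
data_key_enc = derive(kek, b"data-key/encryption")
data_key_auth = derive(kek, b"data-key/authentication")

assert data_key_enc != data_key_auth  # distinct keys per purpose
```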

    "},{"location":"concepts/0051-dkms/#key-loss","title":"Key Loss","text":"

    Key loss means the owner no longer controls the key and can assume there is no further risk of compromise. Examples include devices rendered unable to function by water damage, electrical failure, physical breakage, fire, hardware failure, acts of God, etc.

    "},{"location":"concepts/0051-dkms/#compromise","title":"Compromise","text":"

    Key compromise means that private keys and/or master keys have become or can become known either passively or actively.

    "},{"location":"concepts/0051-dkms/#recovery","title":"Recovery","text":"

    In decentralized identity management, recovery is important since identity owners have no \u201chigher authority\u201d to turn to for recovery.

    1. Offline recovery uses physical media or removable digital media to store recovery keys.
    2. Social recovery employs entities trusted by the identity owner, called \"trustees\", who store recovery data on an identity owner's behalf\u2014typically in the trustees' own agent(s).

    These methods are not exclusive and should be combined with key rotation and revocation for proper security.
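Social recovery along these lines is commonly built on Shamir secret sharing (the Reference section below points to "Shamir Secret"). A minimal sketch over a prime field, with hypothetical parameters: a recovery secret is split into n shares so that any k trustees can reconstruct it, but fewer than k learn nothing.

```python
import secrets

# Minimal Shamir secret sharing over GF(p). Illustrative only.
P = 2**127 - 1  # a Mersenne prime, large enough for a 16-byte secret

def split(secret, k, n):
    """Split `secret` into n shares; any k reconstruct it."""
    coeffs = [secret] + [secrets.randbelow(P) for _ in range(k - 1)]
    def f(x):
        acc = 0
        for c in reversed(coeffs):  # Horner evaluation of the polynomial
            acc = (acc * x + c) % P
        return acc
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 recovers the constant term."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = (num * -xj) % P
                den = (den * (xi - xj)) % P
        secret = (secret + yi * num * pow(den, -1, P)) % P
    return secret

secret = secrets.randbelow(P)
shares = split(secret, k=3, n=5)           # one share per trustee
assert reconstruct(shares[:3]) == secret   # any 3 trustees can recover
assert reconstruct(shares[2:]) == secret
```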

    "},{"location":"concepts/0051-dkms/#reference","title":"Reference","text":"
    1. Design and architecture
    2. Public Registry for Agent Authorization Policy. An identity owner creates a policy on the ledger that defines its agents and their authorizations. Agents, while acting on behalf of the identity owner, need to prove that they are authorized. More details
    3. Shamir Secret
    4. Trustee Protocols
    "},{"location":"concepts/0051-dkms/#drawbacks-rationale-and-alternatives-prior-art-unresolved-questions","title":"Drawbacks, Rationale and alternatives, Prior art, Unresolved Questions","text":"

    The material that's normally in these sections of a RFC appears in the official architecture documentation -- dkms-v4.md.

    "},{"location":"concepts/0051-dkms/#implementations","title":"Implementations","text":"

    The following lists the implementations (if any) of this RFC. Please do a pull request to add your implementation. If the implementation is open source, include a link to the repo or to the implementation within the repo. Please be consistent in the \"Name\" field so that a mechanical processing of the RFCs can generate a list of all RFCs supported by an Aries implementation.

    Name / Link Implementation Notes Indy SDK partial: backup Connect.Me partial: backup, sync to cloud"},{"location":"concepts/0051-dkms/agent-authz-policy-ledger-interactions/","title":"Agent Authz policy (changes for ledger)","text":"

    Objective: Prove agents are authorized to provide proof of claims and authorize and de-authorize other agents

    "},{"location":"concepts/0051-dkms/agent-authz-policy-ledger-interactions/#assumptions","title":"Assumptions","text":"
    1. The ledger maintains a global accumulator that holds commitments sent by the agents.
    2. The global accumulator is maintained by each node, so every node knows the accumulator private key.
    3. Agent auth policy txns are stored in the identity ledger.
    4. Each auth policy is uniquely identified by a policy address I.
    5. One agent can belong to several authz policies, thus several different I's.
    6. An agent can have several authorizations. The following is the list of authorizations:
    7. PROVE
    8. PROVE_GRANT
    9. PROVE_REVOKE
    10. ADMIN
    "},{"location":"concepts/0051-dkms/agent-authz-policy-ledger-interactions/#transactions","title":"Transactions","text":""},{"location":"concepts/0051-dkms/agent-authz-policy-ledger-interactions/#agent_authz","title":"AGENT_AUTHZ","text":"

    An authz policy is created/updated by an AGENT_AUTHZ transaction. A transaction creating a new authz policy:

    {\n    identifier: <transaction sender's verification key>\n    signature: <signature created by the sender's public key>,\n    req_id: <a nonce>,\n    operation: {\n        type: AGENT_AUTHZ,\n        address: <policy address, I>,\n        verkey: <optional, verification key of the agent>,\n        authorization: <optional, a bitset>,\n        commitment: <optional>\n    }\n} \n
    address: The policy address; this is a unique identifier of an authz policy. It is a large number (size/range TBD). If the ledger has never seen the provided policy address, it considers the transaction the creation of a new authz policy; otherwise it is considered an update of the existing policy identified by the address.

    verkey: An ed25519 verkey of the agent to which the authorization corresponds. This is optional when a new policy is being created, as identifier is sufficient. This verkey should be kept different from any DID verkey to avoid correlation.

    authorization: A bitset indicating which authorizations are being given to the agent; it is ignored when creating a new policy (the ledger does not know I). The various bits indicate different authorizations:

    0 None (revoked)\n1 ADMIN (all)\n2 PROVE\n3 PROVE_GRANT\n4 PROVE_REVOKE\n5 \"Reserved for future\"\n6 \"Reserved for future\"\n7  ... \n   ... \n

    While creating a new policy, this field's value is ignored and the creator agent has all authorizations. For any subsequent policy transaction, the ledger checks whether the sender (author, to be precise, since anyone can send a transaction once a signature has been done) of the transaction has the authorization to make it, e.g., the author of the txn must have PROVE_GRANT if it is giving a PROVE authorization to another agent. Future Work: When we support m-of-n authorization, verkey would be a map stating the policy and the verkeys
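A sketch of how such a bitset check might work, using the bit positions from the table above (helper names are hypothetical):

```python
# Bit positions from the authorization table (bit 0 = none/revoked)
ADMIN, PROVE, PROVE_GRANT, PROVE_REVOKE = 1, 2, 3, 4

def has_auth(bitset, bit):
    """ADMIN implies every authorization; otherwise test the single bit."""
    return bool(bitset & (1 << ADMIN)) or bool(bitset & (1 << bit))

granter = (1 << PROVE) | (1 << PROVE_GRANT)
assert has_auth(granter, PROVE_GRANT)       # may grant PROVE to others
assert not has_auth(granter, PROVE_REVOKE)  # but may not revoke it
assert has_auth(1 << ADMIN, PROVE_REVOKE)   # ADMIN has all authorizations
```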

    commitment: This is a number (size/range TBD) given by the agent when it is being given a PROVE authorization; thus this field is only needed when a policy is being created or an agent is being given the PROVE authorization. Upon receiving this commitment, the ledger checks whether the commitment is prime, and if it is, updates the global accumulator with it. Efficient primality-testing algorithms like BPSW or ECPP can be used, but the exact algorithm is yet to be decided. If the commitment is not prime (in the case of creation or update of a policy address), the transaction is rejected. The ledger also rejects the transaction if it has already seen the commitment as part of another transaction. In the case of creation of a new policy or an agent being given the PROVE authorization, the ledger responds with the accumulator value after the update with this commitment.
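The primality check and accumulator update can be sketched as follows. The modulus, seed, and helper names are hypothetical, and the accumulator is a toy RSA-style construction; since the RFC leaves the exact primality algorithm (BPSW, ECPP, ...) undecided, a Miller-Rabin probable-prime test stands in here.

```python
import secrets

def is_probable_prime(n, rounds=20):
    """Miller-Rabin probabilistic primality test (stand-in; the exact
    algorithm -- e.g. BPSW or ECPP -- is yet to be decided)."""
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13):
        if n % p == 0:
            return n == p
    d, r = n - 1, 0
    while d % 2 == 0:
        d //= 2
        r += 1
    for _ in range(rounds):
        a = 2 + secrets.randbelow(n - 3)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(r - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False
    return True

# Toy RSA-style accumulator: adding a prime commitment c is acc^c mod N.
N = 1009 * 1013  # illustrative modulus; real deployments use ~2048-bit N
seen = set()     # nodes reject duplicate commitments

def add_commitment(acc, c):
    if not is_probable_prime(c):
        raise ValueError("commitment must be prime: transaction rejected")
    if c in seen:
        raise ValueError("duplicate commitment: transaction rejected")
    seen.add(c)
    return pow(acc, c, N)  # ledger replies with the updated accumulator

acc = add_commitment(2, 101)  # accepted: 101 is prime and unseen
```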

    "},{"location":"concepts/0051-dkms/agent-authz-policy-ledger-interactions/#get_agent_authz","title":"GET_AGENT_AUTHZ","text":"

    This query is sent by any client to check what the authz policy of any address I is

    {\n    ...,\n    operation: {\n        type: GET_AGENT_AUTHZ,\n        address: <policy address, I>,\n    }\n} \n

    The ledger replies with all the agents, their associated authorizations and the commitments of the address I.

    "},{"location":"concepts/0051-dkms/agent-authz-policy-ledger-interactions/#get_agent_authz_accum","title":"GET_AGENT_AUTHZ_ACCUM","text":"

    This query is sent by anyone to get the value of the accumulator.

    {\n    ...,\n    operation: {\n        type: GET_AGENT_AUTHZ_ACCUM,\n    accum_id: <id of either the provisioned agents accumulator or the revoked agent accumulator>\n    }\n} \n
    The ledger returns the global accumulator with the given id. Both accumulators are add-only; the client checks that the commitment is present in one accumulator AND not present in the other.
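This client-side check can be sketched with a toy RSA-style accumulator, where membership of a commitment c is shown by a witness w satisfying w^c = acc (mod N). All values and names below are hypothetical.

```python
N = 1009 * 1013  # toy modulus; real deployments use a ~2048-bit RSA modulus

def verify_membership(acc, c, witness):
    """c is in the accumulator iff witness^c equals acc modulo N."""
    return pow(witness, c, N) == acc

# If acc was built as g^(c1*c2) mod N, the witness for c1 is g^c2 mod N.
g, c1, c2 = 2, 101, 103
provisioned_acc = pow(g, c1 * c2, N)  # accumulator of provisioned agents
witness_c1 = pow(g, c2, N)

# The client accepts the agent only if its commitment is in the
# provisioned accumulator AND no witness exists for it in the revoked one.
assert verify_membership(provisioned_acc, c1, witness_c1)
```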

    "},{"location":"concepts/0051-dkms/agent-authz-policy-ledger-interactions/#data-structures","title":"Data structures","text":""},{"location":"concepts/0051-dkms/agent-authz-policy-ledger-interactions/#ledger","title":"Ledger","text":"

    Each authz transaction goes in the identity ledger.

    "},{"location":"concepts/0051-dkms/agent-authz-policy-ledger-interactions/#state-trie","title":"State trie.","text":"

    The state stores:

    1. Accumulator: The accumulator is stored in the trie at name <special byte denoting an authz prove accumulator>, with the accumulator's value as the value.
    2. Policies: The state stores one name for each policy; the name is <special byte denoting an authz policy>:<policy address>, and the value at this name is a hash. The hash is computed by deterministically serializing (using RLP encoding from Ethereum, which we already use) this data structure:

    [\n  [<agent verkey1>, <authorization bitset>, [commitment>]],\n  [<agent verkey2>, <authorization bitset>, [commitment>]],\n  [<agent verkey3>, <authorization bitset>, [commitment>]],\n]\n

    The hash of the above can then be used to look up (it is not; more on this later) the exact authorization policy in a separate name-value store. This is done to keep the database backing the state (trie) smaller.
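A minimal sketch of this serialization step, assuming a hand-rolled RLP encoder (bytes and nested lists only) and hypothetical policy values:

```python
import hashlib

def rlp_encode(item):
    """Minimal RLP encoder (bytes and nested lists only), enough to
    deterministically serialize the policy structure above."""
    if isinstance(item, bytes):
        if len(item) == 1 and item[0] < 0x80:
            return item  # single small byte encodes as itself
        return _len_prefix(len(item), 0x80) + item
    payload = b"".join(rlp_encode(x) for x in item)
    return _len_prefix(len(payload), 0xC0) + payload

def _len_prefix(length, offset):
    if length < 56:
        return bytes([offset + length])
    l_bytes = length.to_bytes((length.bit_length() + 7) // 8, "big")
    return bytes([offset + 55 + len(l_bytes)]) + l_bytes

# Hypothetical policy: [[verkey, authorization bitset, [commitment]], ...]
policy = [
    [b"agent-verkey-1", bytes([0b00000100]), [b"\x00\x65"]],  # PROVE
    [b"agent-verkey-2", bytes([0b00000010]), []],             # ADMIN
]
state_value = hashlib.sha256(rlp_encode(policy)).digest()
# The trie stores only this hash at <special byte>:<policy address>
assert len(state_value) == 32
```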

    "},{"location":"concepts/0051-dkms/agent-authz-policy-ledger-interactions/#caches","title":"Caches","text":"

    There is an agent_authz cache used for optimisations. The cache is a name-value store (LevelDB) and offers constant lookup time for lookup by name.

    1. Policy values: The authorization of each agent per policy. The value for each key is the RLP encoding of a list of at most 2 items: an authorization bitset, with each bit representing a different auth, and a commitment, which is optional and relevant only when the agent has the PROVE authorization.

    {\n  <policy address 1><delimiter><agent verkey 1>: <authorization bitset>:<commitment>,\n  <policy address 1><delimiter><agent verkey 2>: <authorization bitset>:<commitment>,\n  <policy address 1><delimiter><agent verkey 3>: <authorization bitset>:<commitment>,\n  <policy address 2><delimiter><agent verkey 1>: <authorization bitset>:<commitment>,\n  <policy address 2><delimiter><agent verkey 2>: <authorization bitset>:<commitment>,\n  ....\n}\n
    These names are used by the nodes during processing of any transaction.
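The cache layout above can be sketched with a plain dict standing in for the name-value store (LevelDB); all key values and helper names here are hypothetical.

```python
# Keys follow <policy address><delimiter><agent verkey>; values pack the
# authorization bitset plus an optional commitment.
DELIM = ":"
cache = {}

def cache_put(address, verkey, auth_bitset, commitment=None):
    value = [auth_bitset] + ([commitment] if commitment is not None else [])
    cache[f"{address}{DELIM}{verkey}"] = value

def cache_get(address, verkey):
    # Constant-time lookup by name, as with LevelDB
    return cache.get(f"{address}{DELIM}{verkey}")

cache_put("policy-addr-1", "agent-verkey-1", 0b00000100, commitment=101)
cache_put("policy-addr-1", "agent-verkey-2", 0b00000010)  # no commitment

assert cache_get("policy-addr-1", "agent-verkey-1") == [0b00000100, 101]
assert cache_get("policy-addr-1", "agent-verkey-2") == [0b00000010]
```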

    1. Accumulator value: Value of each accumulator is stored corresponding to the special byte indicating the global accumulator.
      {\n  <special_byte>: <accumulator value>,\n}\n

    During processing of any write transaction, the node updates the ledger, state, and caches after the txn is successful; but for querying (by clients as well as for its own needs, like validation), it uses only the caches, since they are more efficient than the state trie. The state trie is only used for state proofs.

    1. TODO: Maintaining a set of commitments: Each node maintains a set of seen commitments and does not allow duplicate commitments. It is kept in a key-value store with constant lookup time for commitment lookup.
    "},{"location":"concepts/0051-dkms/agent-authz-policy-ledger-interactions/#code-organisation","title":"Code organisation.","text":"

    These changes would be implemented as a separate plugin. The plugin will not introduce new ledger or state but will introduce the cache described above. The plugin will introduce a new request handler which will subclass the DomainRequestHandler. The plugin's new request handler will introduce 1 write_type and 2 query_types and methods to handle those.

    "},{"location":"concepts/0051-dkms/dkms-v4/","title":"DKMS (Decentralized Key Management System) Design and Architecture V4","text":"

    2019-03-29

    Authors: Drummond Reed, Jason Law, Daniel Hardman, Mike Lodder

    Contributors: Christopher Allen, Devin Fisher, Nathan George, Lovesh Harchandani, Dmitry Khovratovich, Corin Kochenower, Brent Zundel

    Advisors: Stephen Wilson

    STATUS: This design and architecture for a decentralized key management system (DKMS) has been developed by Evernym Inc. under a contract with the U.S. Department of Homeland Security Science & Technology Directorate. This fourth draft is being released on 29 Mar 2019 to begin an open public review and comment process in preparation for DKMS to be submitted to a standards development organization such as OASIS for formal standardization.

    Acknowledgements:

    Table of Contents

    1. Introduction
    2. Design Goals and Requirements
    3. High Level Architecture
    4. Ledger Architecture
    5. Key Management Architecture
    6. Recovery Methods
    7. Recovery From Key Loss
    8. Recovery From Key Compromise
    9. DKMS Protocol
    10. Protocol Flows
    11. Open Issues and Future Work
    12. Future Standardization
    "},{"location":"concepts/0051-dkms/dkms-v4/#1-introduction","title":"1. Introduction","text":""},{"location":"concepts/0051-dkms/dkms-v4/#11-overview","title":"1.1. Overview","text":"

    DKMS (Decentralized Key Management System) is a new approach to cryptographic key management intended for use with blockchain and distributed ledger technologies (DLTs) where there are no centralized authorities. DKMS inverts a core assumption of conventional PKI (public key infrastructure) architecture, namely that public key certificates will be issued by centralized or federated certificate authorities (CAs). With DKMS, the initial \"root of trust\" for all participants is any distributed ledger or decentralized protocol that supports a new form of root identity record called a DID (decentralized identifier).

    A DID is a globally unique identifier that is generated cryptographically and self-registered with the identity owner\u2019s choice of a DID-compatible distributed ledger or decentralized protocol so no central registration authority is required. Each DID points to a DID document\u2014a JSON or JSON-LD object containing the associated public verification key(s) and addresses of services such as off-ledger agent(s) supporting secure peer-to-peer interactions with the identity owner. For more on DIDs, see the DID Primer. For more on peer-to-peer interactions, see the DID Communication explainer.

    Since no third party is involved in the initial registration of a DID and DID document, it begins as \"trustless\". From this starting point, trust between DID-identified peers can be built up through the exchange of verifiable credentials\u2014credentials about identity attributes that include cryptographic proof of authenticity of authorship. These proofs can be verified by reference to the issuer\u2019s DID and DID document. For more about verifiable credentials, see the Verifiable Credentials Primer.

    This decentralized web of trust model leverages the security, immutability, availability, and resiliency properties of distributed ledgers to provide highly scalable key distribution, verification, and recovery. This inversion of conventional public key infrastructure (PKI) into decentralized PKI (DPKI) removes centralized gatekeepers, making the benefits of PKI accessible to everyone. However this lack of centralized authorities for DKMS shifts the majority of responsibility for key management directly to participating identity owners. This demands the decentralized equivalent of the centralized cryptographic key management systems (CKMS) that are the current best practice in most enterprises. The purpose of this document is to specify a design and architecture that fulfills this market need.

    "},{"location":"concepts/0051-dkms/dkms-v4/#12-market-need","title":"1.2. Market Need","text":"

    X.509 public key certificates, as used in the TLS/SSL protocol for HTTPS secure Web browsing, have become the most widely adopted PKI in the world. However this system requires that all certificates be obtained from a relatively small list of trusted authorities\u2014and that any changes to these certificates also be approved by someone in this chain of trust.

    This creates political and structural barriers to establishing and updating authoritative data. This friction is great enough that only a small fraction of Internet users are currently in position to use public/private key cryptography for their own identity, security, privacy, and trust management. This inability for people and organizations to interact privately as independent, verifiable peers on their own terms has many consequences:

    1. It forces individuals and smaller organizations to rely on large federated identity providers and certificate authorities who are in a position to dictate security, privacy and business policies.

    2. It restricts the number of ways in which peers can discover each other and build new trust relationships\u2014which in turn limits the health and resiliency of the digital economy.

    3. It discourages the use of modern cryptography for increased security and privacy, weakening our cybersecurity infrastructure.

    Decentralized technologies such as distributed ledgers and edge protocols can remove these barriers and make it much easier to share and verify public keys. This enables each entity to manage its own authoritative key material without requiring approval from other parties. Furthermore, those changes can be seen immediately by the entity\u2019s peers without requiring them to change their software or \"certificate store\".

    Maturing DLTs and protocols will bring DPKI into the mainstream\u2014a combination of DIDs for decentralized identification and DKMS for decentralized key management. DPKI will provide a simple, secure, way to generate strong public/private key pairs, register them for easy discovery and verification, and rotate and retire them as needed to maintain strong security and privacy.

    "},{"location":"concepts/0051-dkms/dkms-v4/#13-benefits","title":"1.3. Benefits","text":"

    DKMS architecture and DPKI provides the following major benefits:

    1. No single point of failure. With DKMS, there is no central CA or other registration authority whose failure can jeopardize large swaths of users.

    2. Interoperability. DKMS will enable any two identity owners and their applications to perform key exchange and create encrypted P2P connections without reliance on proprietary software, service providers, or federations.

    3. Portability. DKMS will enable identity owners to avoid being locked into any specific implementation of a DKMS-compatible wallet, agent, or agency. Identity owners should\u2014with the appropriate security safeguards\u2014be able to use the DKMS protocol itself to move the contents of their wallet (though not necessarily the actual cryptographic keys) between compliant DKMS implementations.

    4. Resilient trust infrastructure. DKMS incorporates all the advantages of distributed ledger technology for decentralized access to cryptographically verifiable data. It then adds on top of it a distributed web of trust where any peer can exchange keys, form connections, and issue/accept verifiable credentials from any other peer.

    5. Key recovery. Rather than app-specific or domain-specific key recovery solutions, DKMS can build robust key recovery directly into the infrastructure, including agent-automated encrypted backup, DKMS key escrow services, and social recovery of keys, for example by backing up or sharding keys across trusted DKMS connections and agents.

    "},{"location":"concepts/0051-dkms/dkms-v4/#2-design-goals-and-requirements","title":"2. Design Goals and Requirements","text":""},{"location":"concepts/0051-dkms/dkms-v4/#21-conventional-ckms-requirements-nist-800-130-analysis","title":"2.1. Conventional CKMS Requirements: NIST 800-130 Analysis","text":"

    As a general rule, DKMS requirements are a derivation of CKMS requirements, adjusted for the lack of centralized authorities or systems for key management operations. Evernym\u2019s DKMS team and subcontractors performed an extensive analysis of the applicability of conventional CKMS requirements to DKMS using NIST Special Publication 800-130: A Framework for Designing Cryptographic Key Management Systems. For a summary of the results, see:

    The most relevant special requirements are highlighted in the following sections.

    "},{"location":"concepts/0051-dkms/dkms-v4/#22-decentralization","title":"2.2. Decentralization","text":"

    The DKMS design MUST NOT assume any reliance on a centralized authority for the system as a whole. The DKMS design MUST assume all participants are independent actors identified with DIDs conformant with the Decentralized Identifiers (DID) specification but otherwise acting in their own decentralized security and privacy domains. The DKMS design MUST support options for decentralized key recovery.

    What distinguishes DKMS from conventional CKMS is the fact that the entire design assumes decentralization: outside of the \"meta-policies\" established by the DKMS specification itself, there is no central authority to dictate policies that apply to all users. So global DKMS infrastructure must achieve interoperability organically based on a shared set of specifications, just like the Internet.

    Note that the need to maintain decentralization is most acute when it comes to key recovery: the advantages of decentralization are nullified if key recovery mechanisms reintroduce centralization.

    "},{"location":"concepts/0051-dkms/dkms-v4/#23-privacy-and-pseudonymity","title":"2.3. Privacy and Pseudonymity","text":"

    The DKMS design MUST NOT introduce new means of correlating participants by virtue of using the DKMS standards. The DKMS design SHOULD increase privacy and security by enabling the use of pseudonyms, selective disclosure, and encrypted private channels of communication.

    Conventional PKI and CKMS rarely have anti-correlation as a primary requirement. DKMS should ensure that participants will have more, not less, control over their privacy as well as their security. This facet of DKMS requires a vigilant application of all the principles of Privacy by Design.

    "},{"location":"concepts/0051-dkms/dkms-v4/#24-usability","title":"2.4. Usability","text":"

    DIDs and DKMS components intended to be used by individual identity owners MUST be safely usable without any special training or knowledge of cryptography or key management.

    In many ways this follows from decentralization: in a DKMS, there is no central authority to teach everyone how to use it or require specific user training. It must be automated and intuitive to a very high degree, similar to the usability achieved by modern encrypted OTT messaging products like WhatsApp, iMessage, and Signal.

    According to the BYU Internet Security Research Lab, this level of usability is a necessary property of any successfully deployed system. \"We spent the 1990s building and deploying security that wasn\u2019t really needed, and now that it\u2019s actually desirable, we\u2019re finding that nobody can use it\" [Guttman and Grigg, IEEE Security and Privacy, 2005]. The DKMS needs to be able to support a broad spectrum of applications, with both manual and automatic key management, in order to satisfy the numerous security and usability requirements of those applications.

    Again, this requirement is particularly acute when it comes to key recovery. Because there is no central authority to fall back on, the key recovery options must not only be anticipated and implemented in advance, but they must be easy enough for a non-technical user to employ while still preventing exploitation by an attacker.

    "},{"location":"concepts/0051-dkms/dkms-v4/#25-automation","title":"2.5. Automation","text":"

    To maximize usability, the DKMS design SHOULD automate as many key management functions as possible while still meeting security and privacy requirements.

    This design principle follows directly from the usability requirement, and also from the inherent complexity of maintaining the security, privacy, and integrity of cryptographic primitives combined with the general lack of knowledge of most Internet users about any of these subjects.

    "},{"location":"concepts/0051-dkms/dkms-v4/#26-key-derivation","title":"2.6. Key Derivation","text":"

    In DKMS design it is NOT RECOMMENDED to copy private keys directly between wallets, even over encrypted connections. It is RECOMMENDED to use derived keys whenever possible to enable agent-specific and device-specific revocation.

    This design principle is based on security best practices, and also the growing industry experience with the BIP32 standard for management of the large numbers of private keys required by Bitcoin and other cryptocurrencies. However DKMS architecture can also accomplish this goal in other ways, such as using key signing keys (\"key endorsement\").
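A BIP32-flavoured sketch of such device-specific derivation (simplified: real BIP32 derives elliptic-curve keypairs with serialized indices; here HMAC-SHA512 derives symmetric keys, and all values are hypothetical):

```python
import hmac
import hashlib

def derive_child(parent_key, parent_chain_code, index):
    """Hardened-style child derivation: each agent/device gets a child
    key derived from the parent, so one device's key can be revoked
    without touching its siblings or copying the parent key around."""
    data = b"\x00" + parent_key + index.to_bytes(4, "big")
    digest = hmac.new(parent_chain_code, data, hashlib.sha512).digest()
    return digest[:32], digest[32:]  # (child key, child chain code)

master_key = b"\x01" * 32
master_chain = b"\x02" * 32

phone_key, phone_chain = derive_child(master_key, master_chain, 0)
laptop_key, laptop_chain = derive_child(master_key, master_chain, 1)
assert phone_key != laptop_key  # device-specific keys from one parent
```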

    "},{"location":"concepts/0051-dkms/dkms-v4/#27-delegation-and-guardianship","title":"2.7. Delegation and Guardianship","text":"

    The DKMS design MUST enable key management to be delegated by one identity owner to another, including the DID concept of delegation.

    Although DKMS infrastructure enables \"self-sovereign identity\"\u2014digital identifiers and identity wallets that are completely under the control of an identity owner and cannot be taken away by a third-party\u2014not all individuals have the ability to be self-sovereign. They may be operating at a physical, economic, or network disadvantage that requires another identity owner (individual or org) to act as an agent on their behalf.

    Other identity owners may simply prefer to have others manage their keys for purposes of convenience, efficiency, or safety. In either case, this means DKMS architecture needs to incorporate the concept of delegation as defined in the Decentralized Identifiers (DID) specification and in the Sovrin Glossary.

    "},{"location":"concepts/0051-dkms/dkms-v4/#28-portability","title":"2.8. Portability","text":"

    The DKMS design MUST enable an identity owner\u2019s DKMS-compliant key management capabilities to be portable across multiple DKMS-compliant devices, applications, and service providers.

    While the NIST 800-130 specifications have an entire section on interoperability, those requirements are focused primarily on interoperability of CKMS components with each other and with external CKMS systems. They do not encompass the need for a decentralized identity owner to be able to port their key management capabilities from one CKMS device, application, or service provider to another.

    This is the DID and DKMS equivalent of telephone number portability, and it is critical not only for the general acceptance of DKMS infrastructure, but to support the ability of DID owners to act with full autonomy and independence. As with telephone number portability, it also helps ensure a robust and competitive marketplace for DKMS-compliant products and services. (NOTE: Note that \"portability\" here refers to the ability of a DID owner to use the same DID across multiple devices, software applications, service providers, etc. It does not mean that a particular DID that uses a particular DID method is portable across different distributed ledgers. DID methods are ledger-specific.)

    "},{"location":"concepts/0051-dkms/dkms-v4/#29-extensibility","title":"2.9. Extensibility","text":"

    The DKMS design SHOULD be capable of being extended to support new cryptographic algorithms, keys, data structures, and modules, as well as new distributed ledger technologies and other security and privacy innovations.

    Section 7 of NIST 800-130 includes several requirements for conventional CKMS to be able to transition to newer and stronger cryptographic algorithms, but it does not go as far as is required for DKMS infrastructure, which must be capable of adapting to evolving Internet security and privacy infrastructure as well as rapid advances in distributed ledger technologies.

    It is worth noting that the DKMS specifications will not themselves include a trust framework (also called a governance framework); rather, one or more trust frameworks can be layered over them to formalize certain types of extensions. This provides a flexible and adaptable method of extending DKMS to meet the needs of specific communities.

    "},{"location":"concepts/0051-dkms/dkms-v4/#210-simplicity","title":"2.10. Simplicity","text":"

    Given the inherent complexity of key management, the DKMS design SHOULD aim to be as simple and interoperable as possible by pushing complexity to the edges and to extensions.

    Simplicity and elegance of design are common traits of most successful decentralized systems, starting with the packet-based design of the Internet itself. The less complex a system is, the easier it is to debug, evaluate, and adapt to future changes. Especially in light of the highly comprehensive scope of NIST 800-130, this requirement highlights a core difference with conventional CKMS design: the DKMS specification should NOT try to do everything, e.g., enumerate every possible type of key or role of user or application, but let those be defined locally in a way that is interoperable with the rest of the system.

    "},{"location":"concepts/0051-dkms/dkms-v4/#211-open-system-and-open-standard","title":"2.11. Open System and Open Standard","text":"

    The DKMS design MUST be an open system based on open, royalty-free standards.

    While many CKMS systems are deployed using proprietary technology, the baseline DKMS infrastructure must, like the Internet itself, be an open, royalty-free system. It may, of course, have many proprietary extensions and solutions built on top of it.

    "},{"location":"concepts/0051-dkms/dkms-v4/#3-high-level-architecture","title":"3. High-Level Architecture","text":"

    At a high level, DKMS architecture consists of three logical layers:

    1. The DID layer is the foundational layer consisting of DIDs registered and resolved via distributed ledgers and/or decentralized protocols.

    2. The cloud layer consists of server-side agents and wallets that provide a means of communicating and mediating between the DID layer and the edge layer. This layer enables encrypted peer-to-peer communications for exchange and verification of DIDs, public keys, and verifiable credentials.

    3. The edge layer consists of the local devices, agents, and wallets used directly by identity owners to generate and store most private keys and perform most key management operations.

    Figure 1 is an overview of this three-layer architecture:

    Figure 1: The high-level three-layer DKMS architecture

    Figure 2 is a more detailed picture of the relationship between the different types of agents and wallets in DKMS architecture.

    Figure 2: Diagram of the types of agents and connections in DKMS architecture.

    "},{"location":"concepts/0051-dkms/dkms-v4/#31-the-did-decentralized-identifier-layer","title":"3.1. The DID (Decentralized Identifier) Layer","text":"

    The foundation for DKMS is laid by the DID specification. DIDs can work with any decentralized source of truth such as a distributed ledger or edge protocol for which a DID method\u2014a way of creating, reading, updating, and revoking a DID\u2014has been specified. As globally unique identifiers, DIDs are patterned after URNs (Uniform Resource Names): colon-delimited strings consisting of a scheme name followed by a DID method name followed by a method-specific identifier. Here is an example DID that uses the Sovrin DID method:

    did:sov:21tDAKCERh95uGgKbJNHYp
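    The colon-delimited structure can be parsed mechanically. A minimal sketch in Python (the `parse_did` helper is hypothetical, not drawn from any DID library):

```python
def parse_did(did):
    """Split a DID into its method name and method-specific identifier."""
    parts = did.split(":", 2)   # the identifier itself may contain colons
    if len(parts) != 3 or parts[0] != "did" or not parts[1] or not parts[2]:
        raise ValueError("not a valid DID: " + repr(did))
    return parts[1], parts[2]

method, ident = parse_did("did:sov:21tDAKCERh95uGgKbJNHYp")
assert (method, ident) == ("sov", "21tDAKCERh95uGgKbJNHYp")
```

    Note that `split(":", 2)` deliberately stops after the method name, since some DID methods place additional colons inside the method-specific identifier.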

    Each DID method specification defines:

    1. The specific source of truth against which the DID method operates;

    2. The format of the method-specific identifier;

    3. The CRUD operations (create, read, update, delete) for DIDs and DID documents on that ledger.

    DID resolver code can then be written to perform these CRUD operations on the target system with respect to any DID conforming to that DID method specification. Note that some distributed ledger technologies (DLTs) and distributed networks are better suited to DIDs than others. The DID specification itself is neutral with regard to DLTs; it is anticipated that those DLTs best suited for the purpose of DIDs will see the highest adoption rates.

    From a digital identity perspective, the primary problem that DIDs and DID documents solve is the need for a universally available, decentralized root of trust that any application or system can rely upon to discover and verify credentials about the DID subject. Such a solution enables us to move \"beyond federation\" into a world where any peer can enter into trusted interactions with any other peer, just as the Internet enabled any two peers to connect and communicate.

    "},{"location":"concepts/0051-dkms/dkms-v4/#32-the-cloud-layer-cloud-agents-and-cloud-wallets","title":"3.2. The Cloud Layer: Cloud Agents and Cloud Wallets","text":"

    While the DID specification covers the bottom layer of a decentralized public key infrastructure, the DKMS spec will concentrate on the two layers above it. The first of these, the cloud layer, is the server-side infrastructure that mediates between the ultimate peers\u2014the edge devices used directly by identity owners\u2014and the DID layer.

    While not strictly necessary from a pure logical point-of-view, in practice this server-side DKMS layer plays a similar role in DID infrastructure as email servers play in SMTP email infrastructure or Web servers play in Web infrastructure. Like email or Web servers, cloud agents and cloud wallets are designed to be available 24 x 7 to send and receive communications on behalf of their identity owners. They are also designed to perform communications, encryption, key management, data management, and data storage and backup processes that are not typically feasible for edge devices given their typical computational power, bandwidth, storage capacity, reliability and/or availability.

    Cloud agents and wallets will typically be hosted by a service provider called an agency. Agencies could be operated by any type of service provider\u2014ISPs, telcos, search engines, social networks, banks, utility companies, governments, etc. A third party agency is not a requirement of DKMS architecture\u2014any identity owner can also host their own cloud agents.

    From an architectural standpoint, it is critical that the cloud layer be designed so that it does not \"recentralize\" any aspect of DKMS. In other words, even if an identity owner chooses to use a specific DKMS service provider for a specific set of cloud agent functions, the identity owner should be able to substitute another DKMS service provider at a later date and retain complete portability of her DKMS keys, data and metadata.

    Another feature of the cloud layer is that cloud agents can use DIDs and DID documents to automatically negotiate mutually authenticated secure connections with each other using DID Communication, a protocol being designed for this purpose.

    "},{"location":"concepts/0051-dkms/dkms-v4/#33-the-edge-layer-edge-agents-and-edge-wallets","title":"3.3. The Edge Layer: Edge Agents and Edge Wallets","text":"

    The edge layer is vital to DKMS because it is where identity owners interact directly with computing devices, operating systems, and applications. This layer consists of DKMS edge agents and edge wallets that are under the direct control of identity owners. When designed and implemented correctly, edge devices, agents, and wallets can also be the safest place to store private keys and other cryptographic material. They are the least accessible for network intrusion, and even a successful attack on any single client device would yield the private data for only a single user or at most a small family of users.

    Therefore, the edge layer is where most DKMS private keys and link secrets are generated and where most key operations and storage are performed. To meet the security and privacy requirements, DKMS architecture makes the following two assumptions:

    1. A DKMS agent is always installed in an environment that includes a secure element or Trusted Platform Module (for simplicity, this document will use the term \"secure element\" or \u201cSE\u201d for this module).

    2. Private keys used by the agent never leave the secure element.

    By default edge agents are always paired with a corresponding cloud agent due to the many DKMS operations that a cloud agent enables, including communications via the DKMS protocol to other edge and cloud agents. However this is not strictly necessary. As shown in Figure 1, edge agents could also communicate directly, peer-to-peer, via a protocol such as Bluetooth, NFC, or another mesh network protocol. Edge agents may also establish secure connections with cloud agents or with others using DID Communication.

    "},{"location":"concepts/0051-dkms/dkms-v4/#34-verifiable-credentials","title":"3.4. Verifiable Credentials","text":"

    By themselves, DIDs are \"trustless\", i.e., they carry no more inherent trust than an IP address. The primary difference is that they provide a mechanism for resolving the DID to a DID document containing the necessary cryptographic keys and endpoints to bootstrap secure communications with the associated agent.

    To achieve a higher level of trust, DKMS agents may exchange digitally signed credentials called verifiable credentials. Verifiable credentials are being standardized by the W3C Working Group of the same name. The purpose is summarized in the charter:

    It is currently difficult to express banking account information, education qualifications, healthcare data, and other sorts of machine-readable personal information that has been verified by a 3rd party on the Web. These sorts of data are often referred to as verifiable credentials. The mission of the Verifiable Credentials Working Group is to make expressing, exchanging, and verifying credentials easier and more secure on the Web.

    The following diagram from the W3C Verifiable Claims Working Group illustrates the primary roles in the verifiable credential ecosystem and the close relationship between DIDs and verifiable credentials.

    Figure 3: The W3C Verifiable Credentials ecosystem

    Note that what is being verified in a verifiable credential is the signature of the credential issuer. The strength of the actual credential depends on the degree of trust the verifier has in the issuer. For example, if a bank issues a credential saying that the subject of the credential has a certain credit card number, a merchant can rely on the credential if the merchant has a high degree of trust in the bank.

    The Verifiable Claims Working Group is standardizing both the format of credentials and of digital signatures on the credentials. Different digital signature formats require different cryptographic key material. For example, credentials that use a zero-knowledge signature format such as Camenisch-Lysyanskaya (CL) signatures require a \"master secret\" or \u201clink secret\u201d that enables the prover (the identity owner) to make proofs about the credential without revealing the underlying data or signatures in the credential (or the prover's DID with respect to the credential issuer). This allows for \"credential presentations\" that are unlinkable to each other. Link secrets are another type of cryptographic key material that must be stored in DKMS wallets.

    "},{"location":"concepts/0051-dkms/dkms-v4/#4-ledger-architecture","title":"4. Ledger Architecture","text":"

    A fundamental feature of DIDs and DKMS is that they will work with any modern blockchain, distributed ledger, distributed database, or distributed file system capable of supporting a DID method (which has a relatively simple set of requirements\u2014see the DID specification). For simplicity, this document will refer to all of these systems as \"ledgers\".

    There are a variety of ledger designs and governance models as illustrated in Figure 4.

    Figure 4: Blockchain and distributed ledger governance models

    Public ledgers are available for anyone to access, while private ledgers have restricted access. Permissionless ledgers allow anyone to run a validator node of the ledger (a node that participates in the consensus protocol), and thus require proof-of-work, proof-of-stake, or other protections against Sybil attacks. Permissioned ledgers restrict who can run a validator node, and thus can typically operate at a higher transaction rate.

    For decentralized identity management, a core requirement of DIDs and DKMS is that they can interoperate with any of these ledgers. However for privacy and scalability reasons, certain types of ledgers play specific roles in DKMS architecture.

    "},{"location":"concepts/0051-dkms/dkms-v4/#41-public-ledgers","title":"4.1. Public Ledgers","text":"

    Public ledgers, whether permissionless or permissioned, are crucial to DKMS infrastructure because they provide an open global root of trust. To the extent that a particular public ledger has earned the public\u2019s trust that it is strong enough to withstand attacks, tampering, or censorship, it is in a position to serve as a strong, universally-available root of trust for DIDs and the DID documents necessary for decentralized key management.

    Such a publicly available root of trust is particularly important for:

    1. Public DIDs (also called \"anywise DIDs\") that need to be recognized as trust anchors by a large number of verifiers.

    2. Schema and credential definitions needed for broad semantic interoperability of verifiable credentials.

    3. Revocation registries needed for revocation of verifiable credentials that use proofs.

    4. Policy registries needed for authorization and revocation of DKMS agents (see section 9.2).

    5. Anchoring transactions posted for verification or coordination purposes by smart contracts or other ledgers, including microledgers (below).

    "},{"location":"concepts/0051-dkms/dkms-v4/#42-private-ledgers","title":"4.2. Private Ledgers","text":"

    Although public ledgers may also be used for private DIDs\u2014DIDs that are intended for use only by a restricted audience\u2014this requires that their DID documents be carefully provisioned and managed to avoid any information that can be used for attack or correlation. This threat is lessened if private DIDs are registered and managed on a private ledger that has restricted access. However the larger the ledger, the more it will require the same precautions as a public ledger.

    "},{"location":"concepts/0051-dkms/dkms-v4/#43-microledgers","title":"4.3. Microledgers","text":"

    From a privacy perspective\u2014and particularly for compliance with privacy regulations such as the EU General Data Protection Regulation (GDPR)\u2014the ideal identifier is a pairwise pseudonymous DID. This DID (and its corresponding DID document) is only known to the two parties in a relationship.

    Because pairwise pseudonymous DID documents contain the public keys and service endpoints necessary for the respective DKMS agents to connect and send encrypted, signed messages to each other, there is no need for pairwise pseudonymous DIDs to be registered on a public ledger or even a conventional private ledger. Rather they can use microledgers.

    A microledger is essentially identical to a conventional private ledger except it has only as many nodes as it has parties to the relationship. The same cryptographic steps are used:

    1. Transactions are digitally signed by authorized private key(s).

    2. Transactions are cryptographically ordered and tamper evident.

    3. Transactions are replicated efficiently across agents using simple consensus protocols. These protocols, and the microledgers that provide their persistent state, constitute a root of trust for the relationship.
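    The three properties above can be sketched as a hash-chained transaction log. For brevity this sketch substitutes HMAC for the digital signatures a real microledger would use, and all names and keys are illustrative:

```python
import hashlib, hmac, json

def append_tx(chain, payload, signing_key):
    """Append a signed, hash-linked transaction to the chain."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    body = {"payload": payload, "prev": prev_hash}
    serialized = json.dumps(body, sort_keys=True).encode()
    tx = dict(body)
    tx["sig"] = hmac.new(signing_key, serialized, hashlib.sha256).hexdigest()
    tx["hash"] = hashlib.sha256(serialized).hexdigest()
    chain.append(tx)
    return tx

def verify_chain(chain, signing_key):
    """Check ordering links and signatures; any tampering breaks verification."""
    prev = "0" * 64
    for tx in chain:
        body = {"payload": tx["payload"], "prev": tx["prev"]}
        serialized = json.dumps(body, sort_keys=True).encode()
        expected = hmac.new(signing_key, serialized, hashlib.sha256).hexdigest()
        if tx["prev"] != prev or not hmac.compare_digest(tx["sig"], expected):
            return False
        prev = hashlib.sha256(serialized).hexdigest()
    return True

key = b"shared-relationship-key"   # stand-in for the parties' signing keys
chain = []
append_tx(chain, {"op": "add_key", "key": "abc"}, key)
append_tx(chain, {"op": "rotate_key", "key": "def"}, key)
assert verify_chain(chain, key)

chain[0]["payload"]["op"] = "tampered"   # altering history breaks the chain
assert not verify_chain(chain, key)
```

    Because each transaction commits to the hash of its predecessor, modifying any earlier transaction invalidates every later link, which is what makes the log tamper evident.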

    Microledgers are effectively permissionless because anyone can operate one in cooperation with anyone else\u2014only the parties to the microledger relationship need to agree. If there is a danger of the parties to the microledger getting \"out of sync\" (e.g., if an attacker has compromised one party's agents such that the party's state is deadlocked, or one party's agents have all been lost so that the party is unable to receive a change-of-state from the other), the party\u2019s agents can register a dead drop point. This is a pre-established endpoint and keys both parties can use to re-sync their microledgers and restore their connection.

    Microledgers play a special role in DKMS architecture because they are used to maintain pairwise pseudonymous connections between DKMS agents. The use of microledgers also helps enormously with the problems of scale\u2014they can significantly reduce the load on public ledgers by moving management of pairwise pseudonymous DIDs and DID documents directly to DKMS agents.

    The protocols associated with microledgers include:

    Today, the only known example of this approach is the did:peer method. It is possible that alternative implementations will emerge.

    "},{"location":"concepts/0051-dkms/dkms-v4/#5-key-management-architecture","title":"5. Key Management Architecture","text":"

    DKMS adheres to the principle of key separation where keys for different purposes should be cryptographically separated. This avoids use of the same key for multiple purposes. Keys are classified based on usage and the nature of information being protected. Any change to a key requires that the relevant DID method ensure that the change comes from the identity owner or her authorized delegate. All requests by unauthorized entities must be ignored or flagged by the DKMS agent. If anyone else can change any key material, the security of the system is compromised.

    DKMS architecture addresses what keys are needed, how they are used, where they should be stored and protected, how long they should live, and how they are revoked and/or recovered when lost or compromised.

    "},{"location":"concepts/0051-dkms/dkms-v4/#51-key-types-and-key-descriptions","title":"5.1. Key Types and Key Descriptions","text":"

    NIST 800-130 framework requirement 6.1 requires a CKMS to specify and define each key type used. The following key layering and policies can be applied.

    1. Master keys:

      1. Keys at the highest level, in that they themselves are not cryptographically protected. They are distributed manually or initially installed and protected by procedural controls and physical or electronic isolation.

      2. MAY be used for deriving other keys;

      3. MUST NOT ever be stored in cleartext.

      4. SHOULD never be stored in a single encrypted form, but only:

        1. Saved in secure offline storage;

        2. Secured in highly secure encrypted vaults, such as a secure element, TPM, or TEE.

        3. Distributed using a technique such as Shamir secret sharing;

        4. Derived from secure multiparty computation.

        5. Saved somewhere that requires secure interactions to access (which could mean slower retrieval times).

      5. SHOULD be used only for creating signatures as proof of delegation for other keys.

      6. MUST be forgotten immediately after use\u2013securely erased from memory, disk, and every location that accessed the key in plain text.

    2. Key encrypting keys

      1. Symmetric or public keys used for key transport or storage of other keys.

      2. MAY themselves be secured under other keys.

      3. If they are not ephemeral, they SHOULD be stored in secure access-controlled devices, used in those devices and never exposed.

    3. Data keys

      1. Used to provide cryptographic operations on user data (e.g., encryption, authentication). These are generally short-term symmetric keys; however, asymmetric signature private keys may also be considered data keys, and these are usually longer-term keys.

      2. SHOULD be dedicated to specific roles, such as authentication, securing communications, protecting storage, proving authorized delegation, constructing credentials, or generating proofs.

    The keys at one layer are used to protect items at a lower level. This constraint is intended to make attacks more difficult, and to limit exposure resulting from compromise of a specific key. For example, compromise of a key-encrypting-key (of which a master key is a special case) affects all keys protected thereunder. Consequently, special measures are used to protect master keys, including severely limiting access and use, hardware protection, and providing access to the key only under shared control.
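    One way to realize this layering is one-way derivation, where each child key is computed from its parent so that compromise of a lower layer cannot expose the keys above it. A conceptual sketch using HMAC (the labels and key values are illustrative; a production design would use a vetted KDF such as HKDF):

```python
import hashlib, hmac

def derive(parent_key, label):
    """Derive a child key from a parent key via a one-way function."""
    return hmac.new(parent_key, label.encode(), hashlib.sha256).digest()

master = b"\x01" * 32                        # held in a secure element, never exported
kek = derive(master, "key-encrypting-key")   # protects lower-level keys
data_key = derive(kek, "data/session-1")     # short-term key for user data

# Compromise of the KEK exposes everything derived under it,
# but not the master key, because HMAC cannot be inverted.
assert data_key != derive(master, "data/session-1")
assert len(data_key) == 32
```

    The design choice here mirrors the text: special measures protect only the small number of keys at the top of the hierarchy, while lower-level keys can be regenerated or rotated cheaply.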

    In addition to key layering hierarchy, keys may be classified based on temporal considerations:

    1. Long-term keys. These include master keys, often key-encrypting keys, and keys used to facilitate key agreement.

    2. Short-term keys. These include keys established by key transport or key agreement, often used as data keys or session keys for a single communications session.

    In general, communications applications involve short-term keys, while data storage applications require longer-term keys. Long-term keys typically protect short-term keys.

    The following policies apply to key descriptions:

    1. Any DKMS-compliant key SHOULD use a DID-compliant key description.

    2. This key description MUST be published at least in the governing DID method specification.

    3. This key description SHOULD be aggregated in the Key Description Registry maintained by the W3C Credentials Community Group.

    DKMS key management must encompass the keys needed by different DID methods as well as different verifiable credentials exchange protocols and signature formats. The following list includes the initial key types required by the Sovrin DID Method Spec and the Sovrin protocol for verifiable credentials exchange:

    1. Link secret: (one per entity) A high-entropy 256-bit integer included in every credential in blinded form. Used for proving credentials were issued to the same logical identity. A logical identity only has one link secret. The first DKMS agent provisioned by an identity owner creates this value and stores it in an encrypted wallet or in a secure element if available. Agents that receive credentials and present proofs must know this value. It can be transferred over secure channels between agents as necessary. If the link secret is changed, credentials issued with the new link secret value cannot be correlated with credentials using the old link secret value.

    2. DID keys: (one per relationship per agent) Ed25519 keys used for non-repudiation signing and verification for DIDs. Each agent manages their own set of DID keys.

    3. Agent policy keys: (one per agent) Ed25519 key pairs used with the agent policy registry. See section 9.2. The public key is stored with the agent policy registry. Transactions made to the policy registry are signed by the private key. The keys are used in zero-knowledge during proof presentation to show the agent is authorized by the identity owner to present the proof. Unauthorized agents MUST NOT be trusted by verifiers.

    4. Agent recovery keys: (a fraction per trustee) Ed25519 keys. A public key is stored by the agent and used for encrypting backups. The private key is saved to an offline medium or split into shares and given to trustees. To encrypt a backup, an ephemeral X25519 key pair is created where the ephemeral private key is used to perform a Diffie-Hellman agreement with the public recovery key to create a wallet encryption key. The private ephemeral key is forgotten and the ephemeral public key is stored with the encrypted wallet backup. To decrypt a backup, the private recovery key performs a Diffie-Hellman agreement with the ephemeral public key to create the same wallet encryption key.

    5. Wallet encryption keys: (one per wallet segment) 256 bit symmetric keys for encrypting wallets and backups. The key is generated by an agent then wrapped using secure enclaves (preferred) or derived from user inputs like strong passwords (see section 5.2). It MUST NOT be stored directly in secure enclaves when portability is a requirement.

    6. Wallet permission keys: (one per permission) Symmetric keys or Ed25519 key pairs that allow fine-grained permissions over various data stored in the wallet, e.g., wallet read-only access, credential group write access, or write-all access.
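    The backup encryption scheme described for agent recovery keys (item 4) can be sketched with plain Diffie-Hellman key agreement. The group below (p = 2^255 - 19, g = 2) is a toy stand-in chosen only to keep the example self-contained and runnable; a real implementation would use X25519 as the text describes:

```python
import hashlib, secrets

P = 2**255 - 19   # illustrative prime modulus; not a hardened DH group
G = 2

def keypair():
    """Generate a Diffie-Hellman private/public key pair."""
    priv = secrets.randbelow(P - 2) + 1
    return priv, pow(G, priv, P)

def shared_key(priv, other_pub):
    """Hash the DH shared secret down to a 256-bit wallet encryption key."""
    secret = pow(other_pub, priv, P)
    return hashlib.sha256(secret.to_bytes(32, "big")).digest()

# Setup: the private half of the recovery key pair goes offline or to trustees.
recovery_priv, recovery_pub = keypair()

# Encrypting a backup: make an ephemeral key pair, agree with the recovery
# PUBLIC key, then forget the ephemeral private key and store only the
# ephemeral public key next to the encrypted backup.
eph_priv, eph_pub = keypair()
wallet_key = shared_key(eph_priv, recovery_pub)
del eph_priv

# Recovery: the recovery PRIVATE key agrees with the stored ephemeral public key.
recovered_key = shared_key(recovery_priv, eph_pub)
assert recovered_key == wallet_key
```

    The point of the ephemeral key pair is that the agent can encrypt new backups at any time using only the recovery public key, while decryption requires the offline (or trustee-held) private half.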

    "},{"location":"concepts/0051-dkms/dkms-v4/#52-key-generation","title":"5.2. Key Generation","text":"

    NIST 800-130 framework requirement 6.19 requires that a CKMS design shall specify the key-generation methods to be used in the CKMS for each type of key. The following policies can be applied.

    1. For any key represented in a DID document, the generation method MUST be included in the key description specification.

    2. Any parameters necessary to understand the generated key MUST be included in the key description.

    3. The key description SHOULD NOT include any metadata that enables correlation across key pairs.

    4. DKMS key types SHOULD use derivation functions that simplify and standardize key recovery.

    A secure method for key creation is to use a seed value combined with a derivation algorithm. Key derivation functions (KDF), pseudo random number generators (PRNG), and Bitcoin\u2019s BIP32 standard for hierarchical deterministic (HD) keys are all examples of key creation using a seed value with a derivation function or mapping.

    Hardware-based key generation (e.g., in HSMs or TPMs) is usually more secure, as such devices typically incorporate additional entropy sources, such as white noise and temperature, that are harder to corrupt.

    If KDFs or PRNGs are used, a passphrase, biometric input, or social data from multiple users combined with random salt SHOULD be used as the input to create the seed. Alternately a QR code or words from a list such as the PGP word list can be used. In either case, the input MUST NOT be stored anywhere connected to the Internet.
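    A minimal sketch of passphrase-based seed derivation using PBKDF2 from the Python standard library (the passphrase, salt size, and iteration count are illustrative; current guidance should set the actual parameters):

```python
import hashlib, secrets

def derive_seed(passphrase, salt=None):
    """Derive a 256-bit key-generation seed from a passphrase and random salt."""
    if salt is None:
        salt = secrets.token_bytes(16)   # must be kept alongside the backup
    # Iteration count is illustrative; tune it to current KDF guidance.
    seed = hashlib.pbkdf2_hmac("sha256", passphrase.encode(), salt, 600_000)
    return seed, salt

seed, salt = derive_seed("correct horse battery staple")
# The same passphrase and salt deterministically reproduce the seed.
assert derive_seed("correct horse battery staple", salt)[0] == seed
assert len(seed) == 32
```

    Determinism is the property that matters for recovery: as long as the owner can reproduce the inputs, the seed, and hence every key derived from it, can be regenerated.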

    "},{"location":"concepts/0051-dkms/dkms-v4/#53-multi-device-management","title":"5.3. Multi-Device Management","text":"

    Each device hosts an edge agent and edge wallet. All keys except for the link secret are unique per device. This allows for fine-grained (e.g., per relationship) control of authorized devices, as well as remote revocation. As part of the process for provisioning an edge agent, owners must choose what capabilities to grant. Capabilities must be flexible so owners can add or remove them depending on their needs.

    Wallet permissions SHOULD be controlled using keys that grant fixed permissions. One example of such a system is Cryptree.

    It is recommended that private keys never be reused across agents. If a secret is shared across agents, then there must be a way to remotely revoke the agent using a distributed ledger such that the secret is rendered useless on that agent. The DKMS architecture uses ledgers and diffused trust to enable fine-grained control over individual keys and entire devices. An agent policy registry located on a ledger allows an owner to define agent authorizations and control over those authorizations (see 9.2 Policy Registries). When an agent is added to or removed from an authorized pool, agents must notify each other, with a cloud agent acting as the synchronization hub, in order to warn identity owners of unauthorized or malicious agents.

    Techniques like distributed hash tables or gossip protocols SHOULD be employed to keep device data synchronized.

    "},{"location":"concepts/0051-dkms/dkms-v4/#54-key-portability-and-migration","title":"5.4. Key Portability and Migration","text":"

    As mentioned in section 2.8, portability of DKMS wallets and keys is an important requirement\u2014if agencies or other service providers could \"lock-in\" identity owners, DIDs and DKMS would no longer be decentralized. Thus the DKMS protocol MUST support identity owners migrating their edge agents and cloud agents to the agency of their choice (including self-hosting). Agency-to-agency migration is not fully defined in this version of DKMS architecture, but it will be specified in a future version. See section 11.

    "},{"location":"concepts/0051-dkms/dkms-v4/#6-recovery-methods","title":"6. Recovery Methods","text":"

    In key management, key recovery specifies how keys are reconstituted in case of loss or compromise. In decentralized identity management, recovery is even more important since identity owners have no \"higher authority\" to turn to for recovery.

    In this version of DKMS architecture, two recovery methods are recommended:

    1. Offline recovery uses physical media or removable digital media to store recovery keys.

    2. Social recovery employs \"trustees\" who store encrypted recovery data on an identity owner's behalf\u2014typically in the trustee's own agent(s).

    These methods are not exclusive, i.e., both can be employed for additional safety.

    Both methods operate against encrypted backups of the identity owner\u2019s digital identity wallet. Backups are encrypted by the edge agent with a backup recovery key. See section 5.1. While such backups may be stored in many locations, for simplicity this version of DKMS architecture assumes that cloud agents will provide an automated backup service for their respective edge agents.

    Future versions of this specification MAY specify additional recovery methods, including remote biometric recovery and recovery cooperatives.

    "},{"location":"concepts/0051-dkms/dkms-v4/#61-offline-recovery","title":"6.1. Offline Recovery","text":"

    Offline recovery is the conventional form of backup. It can be performed using many different methods. In DKMS architecture, the standard strategy is to store an encrypted backup of the identity owner\u2019s wallet at the owner\u2019s cloud agent, and then store a private backup recovery key offline. The private backup recovery key can be printed to a paper wallet as one or more QR codes or text strings. It can also be saved to a file on a detachable media device such as a removable disk, hardware wallet or USB key.

    The primary downside to offline recovery is that the identity owner must not only safely store the offline copy, but also remember its location and be able to access it when it is needed for recovery.

    "},{"location":"concepts/0051-dkms/dkms-v4/#62-social-recovery","title":"6.2. Social Recovery","text":"

    Social recovery has two advantages over offline recovery:

    1. The identity owner does not have to create an offline backup\u2014the social recovery setup process can be accomplished entirely online.

    2. The identity owner does not have to safely store and remember the location of the offline backup.

    However it is not a panacea:

    1. The identity owner still needs to remember her trustees.

    2. Social recovery opens the opportunity, however remote, for an identity owner\u2019s trustees to collude to take over the identity owner\u2019s digital identity wallet.

    A trustee is any person, institution, or service that agrees to assist an identity owner during recovery by (1) securely storing recovery material (called a \"share\") until a recovery is needed, and (2) positively identifying the identity owner and the authenticity of a recovery request before authorizing release of their shares.

    This second step is critical. Trustees MUST strongly authenticate an identity owner during recovery so as to detect if an attacker is trying to exploit them to steal a key or secret. Software should aid in ensuring the authentication is strong, for example, confirming the trustee actually conversed with Alice, as opposed to getting an email from her.

    For social recovery, agents SHOULD split keys into shares and distribute them to trustees instead of sending each trustee a full copy. When recovery is needed, trustees can be contacted and the key will be recovered once enough shares have been received. An efficient and secure threshold secret sharing scheme, like Shamir's Secret Sharing, SHOULD be used to generate the shares and recombine them. The number of trustees to use is the decision of the identity owner; however, it is RECOMMENDED to use at least three, with a threshold of at least two.
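    As a concrete illustration of threshold sharing, here is a minimal Shamir scheme over a prime field. This is a sketch for exposition only; production systems use vetted implementations, typically over byte-oriented fields such as GF(256) (e.g., SLIP-0039):

```python
import secrets

PRIME = 2**127 - 1   # a Mersenne prime; the secret must be smaller than this

def split(secret, n, k):
    """Split `secret` into n shares, any k of which can reconstruct it."""
    coeffs = [secret] + [secrets.randbelow(PRIME) for _ in range(k - 1)]
    def f(x):
        return sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME
    return [(x, f(x)) for x in range(1, n + 1)]

def combine(shares):
    """Lagrange interpolation at x = 0 recovers the secret."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % PRIME
                den = den * (xi - xj) % PRIME
        secret = (secret + yi * num * pow(den, -1, PRIME)) % PRIME
    return secret

secret = 0x1234567890ABCDEF
shares = split(secret, n=3, k=2)            # three trustees, threshold two
assert combine(shares[:2]) == secret        # any two shares suffice
assert combine([shares[0], shares[2]]) == secret
```

    Fewer than k shares reveal nothing about the secret, which is why the scheme tolerates a minority of trustees being lost or compromised.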

    The shares may be encrypted by a key derived from a KDF or PRNG whose input is something only the identity owner knows, has, or is or any combination of these.
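    A sketch of deriving such a share-encryption key from something only the identity owner knows (a passphrase), using PBKDF2 from the Python standard library. The passphrase, iteration count, and salt handling here are illustrative assumptions, not fixed by the architecture.

    ```python
    import hashlib
    import secrets

    passphrase = "correct horse battery staple"  # hypothetical owner secret
    salt = secrets.token_bytes(16)               # stored alongside the shares
    # Derive a 256-bit key; the iteration count is an assumption for the sketch.
    key = hashlib.pbkdf2_hmac("sha256", passphrase.encode(), salt, 600_000)
    assert len(key) == 32
    ```

    Because the derivation is deterministic for a given salt, the owner can re-derive the same key during recovery from the passphrase alone.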

    Figure 5: Key sharing using Shamir Secret Sharing

    As the adoption interest in decentralized identity grows, social recovery has become a major focus of additional research and development in the industry. For example, at the Rebooting the Web of Trust #8 conference held in Barcelona 1-3 March 2019, six papers on the topic were submitted (note that several of these also have extensive bibliographies):

    1. A New Approach to Social Key Recovery by Christopher Allen and Mark Friedenbach

    2. Security Considerations of Shamir's Secret Sharing by Peg

    3. Implementing of Threshold Schemes by Daan Sprenkels

    4. Social Key Recovery Design and Implementation by Hank Chiu, Hankuan Yu, Justin Lin & Jon Tsai

    5. SLIP-0039: Shamir's Secret-Sharing for Mnemonic Codes by The TREZOR Team

    In addition, two new papers on the topic were started at the conference and are still in development at the time of publication:

    1. Shamir Secret Sharing Best Practices by Christopher Allen et al.

    2. Evaluating Social Schemes for Recovering Control of an Identifier by Sean Gilligan, Peg, Adin Schmahmann, and Andrew Hughes

    "},{"location":"concepts/0051-dkms/dkms-v4/#7-recovery-from-key-loss","title":"7. Recovery From Key Loss","text":"

    Key loss as defined in this document means the owner can assume there is no further risk of compromise. Such scenarios include devices unable to function due to water, electricity, breaking, fire, hardware failure, acts of God, etc.

    "},{"location":"concepts/0051-dkms/dkms-v4/#71-agent-policy-key-loss","title":"7.1. Agent Policy Key Loss","text":"

    Loss of an agent policy key means the agent can no longer prove its authorization and cannot make updates to the agent policy registry on the ledger. Identity owners SHOULD have backup agent policy keys that can revoke the current active agent policy key from the agent policy registry and issue a new agent policy key to the replacement agent.

    "},{"location":"concepts/0051-dkms/dkms-v4/#72-did-key-loss","title":"7.2. DID Key Loss","text":"

    Loss of a DID key means the agent can no longer authenticate over the channel and cannot rotate the key. This key MUST be recoverable from the encrypted backup.

    "},{"location":"concepts/0051-dkms/dkms-v4/#73-link-secret-loss","title":"7.3. Link Secret Loss","text":"

    Loss of the link secret means the owner can no longer generate proofs for the verifiable credentials in her possession or be issued credentials under the same identity. The link secret MUST be recoverable from the encrypted backup.

    "},{"location":"concepts/0051-dkms/dkms-v4/#74-credential-loss","title":"7.4. Credential Loss","text":"

    Loss of credentials requires the owner to contact his credential issuers, reauthenticate, and request the issuers revoke existing credentials, if recovery from a backup is not possible. Credentials SHOULD be recoverable from the encrypted backup.

    "},{"location":"concepts/0051-dkms/dkms-v4/#75-relationship-state-recovery","title":"7.5. Relationship State Recovery","text":"

    Recovery of relationship state due to any of the above key-loss scenarios is enabled via the dead drop mechanism.

    "},{"location":"concepts/0051-dkms/dkms-v4/#8-recovery-from-key-compromise","title":"8. Recovery From Key Compromise","text":"

    Key compromise means that private keys and/or master keys have become or can become known either passively or actively.

    1. \"Passively\" means the identity owner is not aware of the compromise. An attacker may be eavesdropping or have remote communications with the agent but has not provided direct evidence of intrusion or malicious activity, such as impersonating the identity owner or committing fraud.

    2. \"Actively\" means the identity owner knows her keys have been exposed. For example, the owner is locked out of her own devices and/or DKMS agents and wallets, or becomes aware of abuse or fraud.

    To protect from either, there are techniques available: rotation, revocation, and quick recovery. Rotation helps to limit a passive compromise, while revocation and quick recovery help to limit an active one.

    "},{"location":"concepts/0051-dkms/dkms-v4/#81-key-rotation","title":"8.1. Key Rotation","text":"

    Keys SHOULD be changed periodically to limit tampering. When keys are rotated, the previous keys are revoked and new ones are added. It is RECOMMENDED for keys to expire for the following reasons:

    "},{"location":"concepts/0051-dkms/dkms-v4/#82-key-revocation","title":"8.2. Key Revocation","text":"

    DKMS keys MUST be revocable. Verifiers MUST be able to determine the revocation status of a DKMS key. It is not good enough to simply forget a key because that does not protect against key compromise. Control over who can update a revocation list MUST be enforced so attackers cannot maliciously revoke user keys. (Note that a key revoked by an attacker reveals that the attacker knows a secret.)
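    The two requirements above — queryable revocation status and enforced control over updates — can be sketched with a hypothetical registry class (the names and API here are illustrative, not the ledger's actual interface):

    ```python
    class RevocationRegistry:
        """Revocation list whose updates are restricted to controller keys."""

        def __init__(self, controllers: set):
            self.controllers = set(controllers)
            self.revoked = set()

        def revoke(self, key_id: str, requested_by: str):
            # Enforce who may update the list, so attackers cannot
            # maliciously revoke user keys.
            if requested_by not in self.controllers:
                raise PermissionError("not authorized to update revocation list")
            self.revoked.add(key_id)

        def is_revoked(self, key_id: str) -> bool:
            # Verifiers MUST be able to determine revocation status.
            return key_id in self.revoked

    reg = RevocationRegistry(controllers={"owner-policy-key"})
    reg.revoke("AB1-vk", requested_by="owner-policy-key")
    assert reg.is_revoked("AB1-vk")

    try:
        reg.revoke("AB2-vk", requested_by="attacker")
    except PermissionError:
        pass
    assert not reg.is_revoked("AB2-vk")
    ```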

    "},{"location":"concepts/0051-dkms/dkms-v4/#83-agent-policy-key-compromise","title":"8.3. Agent Policy Key Compromise","text":"

    Compromise of an agent\u2019s policy key means an attacker can use the agent to impersonate the owner for proof presentation and make changes to the agent policy registry. Owners must be able to revoke any of their devices to prevent impersonation. For example, if the owner knows her device has been stolen, she will want to revoke all device permissions so even if the thief manages to break into the agent the DKMS data value is limited. Identity owners SHOULD have backup agent policy keys that are authorized to revoke the compromised key from the agent policy registry and issue a new agent policy key to the replacement agent.

    "},{"location":"concepts/0051-dkms/dkms-v4/#84-did-key-compromise","title":"8.4. DID Key Compromise","text":"

    Compromise of a DID key means an attacker can use the channel to impersonate the owner as well as potentially lock the owner out from further use if the attacker rotates the key before the owner realizes what has happened. This attack surface is minimized if keys are rotated on a regular basis. An identity owner MUST also be able to trigger a rotation manually upon discovery of a compromise. Owners SHOULD implement a diffuse trust model among multiple agents where a single compromised agent is not able to revoke a key because more than one agent is required to approve the action.

    "},{"location":"concepts/0051-dkms/dkms-v4/#85-link-secret-compromise","title":"8.5. Link Secret Compromise","text":"

    Compromise of the owner link secret means an attacker may impersonate the owner when receiving verifiable credentials or use existing credentials for proof presentation. Note that unless the attacker is also able to use an agent that has \"PROVE\" authorization, the verifier will be able to detect an unauthorized agent. At this point the owner SHOULD revoke her credentials and request for them to be reissued with a new link secret.

    "},{"location":"concepts/0051-dkms/dkms-v4/#86-credential-compromise","title":"8.6. Credential Compromise","text":"

    Compromise of a verifiable credential means an attacker has learned the attributes of the credential. Unless the attacker also manages to compromise the link secret and an authorized agent, he is not able to assert the credential, so the only loss is control of the underlying data.

    "},{"location":"concepts/0051-dkms/dkms-v4/#87-relationship-state-recovery","title":"8.7. Relationship State Recovery","text":"

    Recovery of relationship state due to any of the above key-compromise scenarios is enabled via the dead drop mechanism.

    "},{"location":"concepts/0051-dkms/dkms-v4/#9-dkms-protocol","title":"9. DKMS Protocol","text":""},{"location":"concepts/0051-dkms/dkms-v4/#91-microledger-transactions","title":"9.1. Microledger Transactions","text":"

    DKMS architecture uses microledgers to represent the state of the authorized keys in a relationship. Just as with conventional ledgers, the structure is such that the parties to a relationship can verify it at any moment in time, as can a third party for auditing purposes. Microledgers are used between two parties where each party signs transactions using their DID keys. This allows changes to DID keys to be propagated in a secure manner where each transaction is signed with an existing key authorized in earlier transactions.
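    A minimal sketch of that structure: a hash-chained event log where every transaction must be signed by a key already authorized in an earlier transaction. HMAC stands in for Ed25519 signatures here so the sketch runs on the standard library alone; real agents sign with asymmetric DID keys, and the class and field names are illustrative assumptions.

    ```python
    import hashlib
    import hmac
    import json
    import secrets

    class Microledger:
        def __init__(self, genesis_key_id: str, genesis_key: bytes):
            self.authorized = {genesis_key_id: genesis_key}
            self.events = []
            self._append({"op": "genesis", "key_id": genesis_key_id}, genesis_key_id)

        def _append(self, payload: dict, signer_id: str):
            prev = self.events[-1]["hash"] if self.events else "0" * 64
            body = json.dumps({"prev": prev, **payload}, sort_keys=True).encode()
            # HMAC as a stand-in signature; a real agent uses Ed25519.
            sig = hmac.new(self.authorized[signer_id], body, "sha256").hexdigest()
            self.events.append({"prev": prev, **payload, "signer": signer_id,
                                "sig": sig,
                                "hash": hashlib.sha256(body).hexdigest()})

        def add_key(self, new_id: str, new_key: bytes, signer_id: str):
            if signer_id not in self.authorized:
                raise PermissionError("signer is not an authorized key")
            self._append({"op": "add_key", "key_id": new_id}, signer_id)
            self.authorized[new_id] = new_key

        def rotate(self, old_id: str, new_id: str, new_key: bytes):
            # The new key is authorized by the old one, then the old key
            # is revoked by the newly authorized key.
            self.add_key(new_id, new_key, signer_id=old_id)
            self._append({"op": "revoke_key", "key_id": old_id}, new_id)
            del self.authorized[old_id]

    k1, k2 = secrets.token_bytes(32), secrets.token_bytes(32)
    ledger = Microledger("AB1", k1)
    ledger.rotate("AB1", "AB2", k2)   # rotation is signed by the old key
    assert "AB1" not in ledger.authorized and "AB2" in ledger.authorized
    ```

    Because each event embeds the hash of its predecessor, either party (or an auditor) can replay the chain and verify that every key change was authorized by a key that was valid at that point in the history.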

    "},{"location":"concepts/0051-dkms/dkms-v4/#92-policy-registries","title":"9.2. Policy Registries","text":"

    Each Identity Owner creates an authorization policy on the ledger. The policy allows an agent to have some combination of authorizations. This is a public record, but no information needs to be shared with any other party. Its purpose is to allow for management of device authorization in a flexible way, by allowing for agents to prove in zero knowledge that they are authorized by the identity owner.

    When an agent is granted PROVE authorization, by adding a commitment to the agent's secret value to the PROVE section of the authorization policy, the ledger adds the second commitment to the global prover registry. When an agent loses its PROVE authorization, the ledger removes the associated commitment from the prover registry. The ledger can enforce sophisticated owner-defined rules like requiring multiple signatures to authorize updates to the Policy.

    An agent can now prove in zero knowledge that it is authorized because the ledger maintains a global registry for all agents with PROVE authorization for all identity owners. An agent can prove that its secret value and the policy address in which that value is given PROVE authorization are part of the global policy registry without revealing the secret value, or the policy address. By using a zero knowledge proof, the global policy registry does not enable correlation of any specific identity owner.

    "},{"location":"concepts/0051-dkms/dkms-v4/#93-authenticated-encryption","title":"9.3. Authenticated Encryption","text":"

    The use of DIDs and microledgers allows communication between agents to use authenticated encryption. Agents use their DID verification keys for authenticating each other whenever a communication channel is established. Microledgers allow DID keys to have rooted mutual authentication for any two parties with a DID. In the sequence diagrams in section 10, all agent-to-agent communications that use authenticated encryption are indicated by bold blue arrows.

    "},{"location":"concepts/0051-dkms/dkms-v4/#94-recovery-connection","title":"9.4. Recovery connection","text":"

    Each Identity Owner begins a recovery operation by requesting their respective recovery information from trustees. After a trustee has confirmed the request originated with the identity owner and not a malicious party, a recovery connection is formed. This special type of connection is meant only for recovery purposes. Recovery connections are decommissioned when the minimum number of recovery shares have been received and the original encrypted wallet data has been restored. Identity owners can then resume normal connections because their keys have been recovered. Trustees SHOULD only send recovery shares to identity owners over a recovery connection.

    "},{"location":"concepts/0051-dkms/dkms-v4/#95-dead-drops","title":"9.5. Dead Drops","text":"

    In scenarios where two parties to a connection move agencies (and thus service endpoints) at the same time, or one party's agents have been compromised such that it can no longer send or receive relationship state changes, there is a need for recovery not just of keys and agents, but of the state of the relationship. These scenarios may include malicious compromise of agents by an attacker such that neither the party nor the attacker controls enough agents to meet the thresholds set in the DID Document or the Authorization Policy, or complete loss of all agents due to some catastrophic event.

    In some cases, relationship state may be recoverable via encrypted backup of the agent wallets. In the event that this is not possible, the parties can make use of a dead drop to recover their relationship state.

    A dead drop is established and maintained as part of a pairwise relationship. The dead drop consists of a service endpoint and the public keys needed to verify the package that may be retrieved from that endpoint. The keys needed for the dead drop are derived from a combination of a Master key and the pairwise DID of the relationship that is being recovered.
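    The derivation of dead-drop key material from a master key and the pairwise DID can be sketched with HKDF (RFC 5869), built here on the Python standard library. The specific salt/info layout and the sample inputs are illustrative assumptions.

    ```python
    import hashlib
    import hmac

    def hkdf(key: bytes, salt: bytes, info: bytes, length: int = 32) -> bytes:
        """HKDF-SHA256 per RFC 5869: extract then expand."""
        prk = hmac.new(salt, key, hashlib.sha256).digest()   # extract
        okm, block, counter = b"", b"", 1
        while len(okm) < length:                             # expand
            block = hmac.new(prk, block + info + bytes([counter]),
                             hashlib.sha256).digest()
            okm += block
            counter += 1
        return okm[:length]

    master_key = b"\x01" * 32           # hypothetical master key
    pairwise_did = b"did:peer:ABDID"    # pairwise DID of the relationship
    dd_key = hkdf(master_key, salt=pairwise_did, info=b"dead-drop")
    assert len(dd_key) == 32

    # The same inputs always derive the same key, so a party who retains
    # only the master key can recompute the dead-drop keys for any
    # relationship after total loss of agents.
    assert dd_key == hkdf(master_key, salt=pairwise_did, info=b"dead-drop")
    ```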

    "},{"location":"concepts/0051-dkms/dkms-v4/#10-protocol-flows","title":"10. Protocol Flows","text":"

    This section contains the UML sequence diagrams for all standard DKMS key management operations that use the DKMS protocol. Diagrams are listed in logical order of usage but may be reviewed in any order. Cross-references to reusable protocol sequences are represented as notes in blue. Other comments are in yellow.

    Table 1 is a glossary of the DKMS key names and types used in these diagrams.

    Apx-sv: Agent Policy Secret Value for agent x
    Apx-svc: Agent Policy Secret Value Commitment for agent x
    Apx-ac: Agent Policy Address Commitment for agent x
    AAx-ID: Alice's Agent to Agent Identifier for agent x
    AAx-vk: Alice's Agent to Agent Public Verification Key for agent x
    AAx-sk: Alice's Agent to Agent Private Signing Key for agent x
    ABDID: Alice\u2019s DID for connection with Bob
    ABx: Alice\u2019s key pair for connection with Bob for agent x
    ABx-vk: Alice\u2019s Public Verification Key for connection with Bob for agent x
    ABx-sk: Alice\u2019s Private Signing Key for connection with Bob for agent x
    AWx-k: Wallet Encryption Key for agent x
    ALS: Alice's Link Secret

    Table 1: DKMS key names used in this section

    "},{"location":"concepts/0051-dkms/dkms-v4/#101-edge-agent-start","title":"10.1. Edge Agent Start","text":"

    An identity owner\u2019s experience with DKMS begins with her first installation of a DKMS edge agent. This startup routine is reused by many other protocol sequences because it is needed each time an identity owner installs a new DKMS edge agent.

    The first step after successful installation is to prompt the identity owner to indicate whether he/she already has a DKMS identity wallet or is instantiating one for the first time. If the owner already has a wallet, the owner is prompted to determine if the new edge agent installation is for the purpose of adding a new edge agent, or recovering from a lost or compromised edge agent. Each of these options references another protocol pattern.

    "},{"location":"concepts/0051-dkms/dkms-v4/#102-provision-new-agent","title":"10.2. Provision New Agent","text":"

    Any time a new agent is provisioned\u2014regardless of whether it is an edge agent or a cloud agent\u2014the same sequence of steps are necessary to set up the associated wallet and secure communications with the new agent.

    As noted in section 3.3, DKMS architecture recommends that a DKMS agent be installed in an environment that includes a secure element. So the first step is for the edge agent to set up the credential the identity owner will use to unlock the secure element. On modern smartphones this will typically be a biometric, but it could be a PIN, passcode, or other factor, or a combination of factors.

    The edge agent then requests the secure element to create the key pairs necessary to establish the initial agent policies and to secure agent-to-agent communications. The edge agent also generates an ID to uniquely identify the agent across the identity owner\u2019s set of DKMS agents.

    Finally the edge agent requests the secure element to create a wallet encryption key and then uses it to encrypt the edge wallet.

    "},{"location":"concepts/0051-dkms/dkms-v4/#103-first-edge-agent","title":"10.3. First Edge Agent","text":"

    The first time a new identity owner installs an edge agent, it must also set up the DKMS components that enable the identity owner to manage multiple separate DIDs and verifiable credentials as if they were from one logically unified digital identity. It must also lay the groundwork for the identity owner to install additional DKMS agents on other devices, each of which will maintain its own DKMS identity wallet while still enabling the identity owner to act as if they were all part of one logically unified identity wallet.

    Link secrets are defined in section 5.1 and policy registries in section 9.2. The edge agent first needs to generate and store the link secret in the edge wallet. It then needs to generate the policy registry address and store it in the edge wallet. Now it is ready to update the agent policy registry.

    "},{"location":"concepts/0051-dkms/dkms-v4/#104-update-agent-policy-registry","title":"10.4. Update Agent Policy Registry","text":"

    As explained in section 9.2, an agent policy registry is the master control point that an identity owner uses to authorize and revoke DKMS agent proof authorization (edge or cloud).

    Each time the identity owner takes an action to add, revoke, or change the permissions for an agent, the policy registry is updated. For example, at the end of the protocol sequence in section 10.3, the action is to write the first policy registry entries that authorize the first edge agent.

    "},{"location":"concepts/0051-dkms/dkms-v4/#105-add-cloud-agent","title":"10.5. Add Cloud Agent","text":"

    The final step in first-time setup of an edge agent is creation of the corresponding cloud agent. As explained in section 3.3, the default in DKMS architecture is to always pair an edge agent with a corresponding cloud agent due to the many different key management functions this combination can automate.

    The process of registering a cloud agent begins with the edge agent contacting the agency agent. For purposes of this document, we will assume that the edge agent has a relationship with one or more agencies, and has a trusted method (such as a pre-installed DID) for establishing a secure connection using authenticated encryption.

    The target agency first returns a request for the consent required from the identity owner to register the cloud agent together with a request for the authorizations to be granted to the cloud agent. By default, cloud agents have no authorizations other than those granted by the identity owner. This enables identity owners to control what tasks a cloud agent may or may not perform on the identity owner\u2019s behalf.

    Once the identity owner has returned consent and the selected authorizations, the agency agent provisions the new cloud agent and registers the cloud agent\u2019s service endpoint using the agency\u2019s routing extension. Note that this service endpoint is used only in agent-to-agent communications that are internal to the identity owner\u2019s own agent domain. Outward-facing service endpoints are assigned as part of adding connections with their own DIDs.

    Once these tasks are performed, the results are returned to the edge agent and stored securely in the edge wallet.

    "},{"location":"concepts/0051-dkms/dkms-v4/#106-add-new-edge-agent","title":"10.6. Add New Edge Agent","text":"

    Each time an identity owner installs a new edge agent after their first edge agent, the process must initialize the new agent and grant it the necessary authorizations to begin acting on the identity owner\u2019s behalf.

    Provisioning of the new edge agent (Edge Agent 2) starts by the identity owner installing the edge agent software (section 10.2) and then receiving instructions about how to provision the new edge agent from an existing edge agent (Edge Agent 1). Note that Edge Agent 1 must have the authorization to add a new edge agent (not all edge agents have such authorization). The identity owner must also select the authorizations the edge agent will have (DKMS agent developers will compete to make such policy choices easy and intuitive for identity owners).

    There are multiple options for how the Edge Agent 2 may receive authorization from Edge Agent 1. One common method is for Edge Agent 1 to display a QR code or other machine-readable code scanned by Edge Agent 2. Another way is for Edge Agent 1 to provide a passcode or passphrase that the identity owner types into Edge Agent 2. Another method is sending an SMS or email with a helper URL. In all methods the ultimate result is that Edge Agent 2 must be able to connect via authenticated encryption with Edge Agent 1 in order to verify the connection and pass the new agent-to-agent encryption keys that will be used for secure communications between the two agents.

    Once this is confirmed by both agents, Edge Agent 1 will then use the Update Agent Policy Registry sequence (section 10.4) to add authorizations to the policy registry for Edge Agent 2.

    Once that is confirmed, provisioning of Edge Agent 2 is completed when Edge Agent 1 sends Edge Agent 2 the link secret and any verifiable credentials that the identity owner has authorized it to handle, which Edge Agent 2 securely stores in its wallet.

    "},{"location":"concepts/0051-dkms/dkms-v4/#107-add-connection-to-public-did","title":"10.7. Add Connection to Public DID","text":"

    The primary purpose of DIDs and DKMS is to enable trusted digital connections. One of the most common use cases is when an identity owner needs to create a connection to an entity that has a public DID, for example any website that wants to support trusted decentralized identity connections with its users (for registration, authentication, verifiable credentials exchange, secure communications, etc.)

    Note that this sequence is entirely about agent-to-agent communications between DKMS agents to create a shared microledger and populate it with the pairwise pseudonymous DIDs that Alice and Org assign to each other together with the public keys and service endpoints they need to enable their agents to use authenticated encryption.

    First Alice\u2019s edge agent creates the key pair and DID that it will assign to Org and uses those to initialize a new microledger. It then sends a request for Alice\u2019s cloud agent to add its own key pair that Alice authorizes to act on that DID. These are returned to Alice\u2019s edge agent who adds them to the microledger.

    Next Alice\u2019s edge agent creates and sends a connection invitation to Alice\u2019s cloud agent. Alice\u2019s cloud agent resolves Org\u2019s DID to its DID document to discover the endpoint for Org\u2019s cloud agent (this resolution step is not shown in the diagram above). It then forwards the invitation to Org\u2019s cloud agent who in turn forwards it to the system operating as Org\u2019s edge agent.

    Org\u2019s edge agent performs the mirror image of the same steps Alice\u2019s edge agent took to create its own DID and key pair for Alice, adding those to the microledger, and authorizing its cloud agent to act on its behalf in this new relationship.

    When that is complete, Org\u2019s edge agent returns its microledger updates via authenticated encryption to its cloud agent which forwards them to Alice\u2019s cloud agent and finally to Alice\u2019s edge agent. This completes the connection and Alice is notified of success.

    "},{"location":"concepts/0051-dkms/dkms-v4/#108-add-connection-to-private-did-provisioned","title":"10.8. Add Connection to Private DID (Provisioned)","text":"

    The other common use case for trusted connections is private peer-to-peer connections between two parties that do not initially connect via one or the other\u2019s public DIDs. These connections can be initiated any way that one party can share a unique invitation address, i.e., via a URL sent via text, email, or posted on a blog, website, LinkedIn profile, etc.

    The flow in this sequence diagram is very similar to the flow in section 10.7 where Alice is connecting to a public organization. The only difference is that rather than beginning with Alice\u2019s edge agent knowing a public DID for the Org, Alice\u2019s edge agent knows Bob\u2019s invitation address. This is a service, typically provided by an agency, that enables Bob\u2019s cloud agent to accept connection invitations (typically with appropriate spam protections and other forms of connection invitation filtering).

    The end result is the same as in section 10.7: Alice and Bob have established a shared microledger with the pairwise pseudonymous DIDs and the public keys and endpoints they need to maintain their relationship. Note that with DIDs and DKMS, this is the first connection that Alice and Bob can maintain for life (and beyond) that is not dependent on any centralized service provider or registry. And this connection is available for Alice and Bob to use with any application they wish to authorize.

    "},{"location":"concepts/0051-dkms/dkms-v4/#109-add-connection-to-private-did-unprovisioned","title":"10.9. Add Connection to Private DID (Unprovisioned)","text":"

    This sequence is identical to section 10.8 except that Bob does not yet have a DKMS agent or wallet. So it addresses what is necessary for Alice to invite Bob to both start using a DKMS agent and to form a connection with Alice at the same time.

    The only difference between this sequence diagram and section 10.8 is the invitation delivery process. In 10.8, Bob already has a cloud agent, so the invitation can be delivered to an invitation address established at the hosting agency. In this sequence, Bob does not yet have a cloud agent, so the invitation must be: a) anchored at a helper URL (typically provided by an agency), and b) delivered to Bob via some out-of-band means (typically an SMS, email, or other medium that can communicate a helper URL).

    When Bob receives the invitation, Bob clicks on the URL to go to the helper page and receive instructions about the invitation and how he can download a DKMS edge agent. He follows the instructions, installs the edge agent, which in turn provisions Bob\u2019s cloud agent. When provisioning is complete, Bob\u2019s edge agent retrieves Alice\u2019s connection invitation from the helper URL. Since Bob is now fully provisioned, the rest of the sequence proceeds identically to section 10.8.

    "},{"location":"concepts/0051-dkms/dkms-v4/#1010-rotate-did-keys","title":"10.10. Rotate DID Keys","text":"

    As described in section 8.1, key rotation is a core security feature of DKMS. This diagram illustrates the protocol for key rotation.

    Key rotation may be triggered by expiration of a key or by another event such as agent recovery. The process begins with the identity owner\u2019s edge agent generating its own new keys. If keys also need to be rotated in the cloud agent, the edge agent sends a key change request.

    The identity owner\u2019s agent policy may require that key rotation be authorized by two or more edge agents. If so, the first edge agent generates a one time passcode or QR code that the identity owner can use to authorize the key rotation at the second edge agent. Once the passcode is verified, the second edge agent signs the key rotation request and sends it to the first edge agent.

    Once the necessary authorizations have been received, the first edge agent writes the changes to the microledger for that DID. It then sends the updates to the microledger to the cloud agent for the other party to the DID relationship (Bob), who forwards it to Bob\u2019s edge agent. Bob\u2019s edge agent verifies the updates and adds the changes to its copy of the microledger.

    Bob\u2019s edge agent then needs to broadcast the changes to Bob\u2019s cloud agent and any other edge agent that Bob has authorized to interact with Alice. Once this is done, Alice and Bob are \"in sync\" with the rotated keys, and their connection is at full strength.

    "},{"location":"concepts/0051-dkms/dkms-v4/#1011-delete-connection","title":"10.11. Delete Connection","text":"

    In decentralized identity, identity owners are always in control of their relationships. This means either party to a connection can terminate the relationship by deleting it. This diagram illustrates Alice deleting the connection she had with Bob.

    All that is required to delete a connection is for the edge agent to add a DISABLE event to the microledger she established with Bob. As always, this change is propagated to Alice\u2019s cloud agent and any other edge agents authorized to interact with the DID she assigned to Bob.

    Note that, just like in the real world, it is optional for Alice to notify Bob of this change in the state of their relationship. If she chooses to do so, her edge agent will propagate the DISABLE event to Bob\u2019s copy of the microledger. If, when, and how Bob is notified by his edge agent(s) depends on Bob\u2019s notification policies.

    "},{"location":"concepts/0051-dkms/dkms-v4/#1012-revoke-edge-agent","title":"10.12. Revoke Edge Agent","text":"

    Key revocation is also a required feature of DKMS architecture as discussed in section 8.2. Revocation of keys for a specific DID is accomplished either through rotation of those keys (section 10.10) or deletion of the connection (section 10.11). However in certain cases, an identity owner may need to revoke an entire edge agent, effectively disabling all keys managed by that agent. This is appropriate if a device is lost, stolen, or suspected of compromise.

    Revoking an edge agent is done from another edge agent that is authorized to revoke agents. If a single edge agent is authorized, the process is straightforward. The revoking edge agent sends a signed request to the policy registry address (section 9.2) on the ledger holding the policy registry. The ledger performs the update. The revoking edge agent then \"removes\" the keys for the revoked edge agent by disabling them.

    As a best practice, this event also should trigger key rotation by the edge agent.

    Note that an identity owner may have a stronger revocation policy, such as requiring two edge agents to authorize revocation of another edge agent. This sequence is very similar to requiring two edge agents to authorize a key rotation as described in section 10.10. However it could also cause Alice to be locked out of her edge agents if an attacker can gain control of enough devices. In this case Alice could use one of her recovery options (sections 10.16 and 10.17).

    "},{"location":"concepts/0051-dkms/dkms-v4/#1013-recovery-setup","title":"10.13. Recovery Setup","text":"

    As discussed in section 6, recovery is a paramount feature of DKMS\u2014in decentralized key management, there is no \"forgot password\" button (and if there were, it would be a major security vulnerability). So it is particularly important that it be easy and natural for an identity owner to select and configure recovery options.

    The process begins with Alice\u2019s edge agent prompting Alice to select among the two recovery options described in section 6: offline recovery and social recovery. Her edge agent then creates a key pair for backup encryption, encrypts a backup of her edge wallet, and stores it with her cloud agent.

    If Alice chooses social recovery, the next step is for Alice to add trustees as described in section 10.14. Once the trustee has accepted Alice\u2019s invitation, Alice\u2019s edge agent creates and shares a recovery data share for each trustee. This is a shard of a file containing a copy of her backup encryption key, her link secret, and the special recovery endpoint that was set up by her cloud agent when the recovery invitation was created (see section 10.14).

    Alice\u2019s edge agent sends this recovery data share to her cloud agent who forwards it to the cloud agent for each of her trustees. Each cloud agent securely stores the share so its identity owner is ready to help Alice recover should the need arise. (See sections 10.17 and 10.18 for the actual social recovery process.)

    If Alice chooses offline recovery, her edge agent first creates a \"paper wallet\", which typically consists of a QR code or string of text that encodes the same data as in a recovery data share. Her edge agent then displays that paper wallet data to Alice for printing and storing in a safe place. Note that one of the primary usability challenges with offline recovery methods is that it relies on Alice:

    1. Following through with storage of the paper wallet.

    2. Properly securing storage of the paper wallet over long periods of time.

    3. Remembering the location of the paper wallet over long periods of time.

    To some extent these can be addressed if the edge agent periodically reminds the identity owner to verify that his/her paper wallet is securely stored in a known location.

    "},{"location":"concepts/0051-dkms/dkms-v4/#1014-add-trustee","title":"10.14. Add Trustee","text":"

    The secret to implementing social recovery in DKMS is using DKMS agents to automate the process of securely storing, sharing, and recovering encrypted backups of DKMS wallets with several of the identity owner\u2019s connections. In DKMS architecture, these connections are currently called trustees. (Note: this is a placeholder term pending further usability research on the best name for this new role.)

    Trustees are selected by the identity owner based on the owner\u2019s trust. For each trustee, the edge agent requests the cloud agent to create a trustee invitation. The cloud agent generates and registers with the agency a unique URL that will be used only for this purpose. The edge agent then creates a recovery data share (defined in 10.13) and shards it as defined by the identity owner\u2019s recovery policy.

    At this point there are two options for delivering the trustee invitation depending on whether the identity owner already has a connection with the trustee or not. If a connection exists, the edge agent sends the invitation to the cloud agent who forwards it to the trustee\u2019s cloud agent who forwards it to an edge agent who notifies the trustee of the invitation.

    If a connection does not exist, the recovery invitation is delivered out of band in a process very similar to adding a connection to a private DID (sections 10.8 and 10.9).

    Once the trustee accepts the invitation, the response is returned to identity owner\u2019s edge agent to complete the recovery setup process (section 10.13).

    "},{"location":"concepts/0051-dkms/dkms-v4/#1015-update-recovery-setup","title":"10.15. Update Recovery Setup","text":"

    With DKMS infrastructure, key recovery is a lifelong process. A DKMS wallet filled with keys, DIDs, and verifiable credentials is an asset constantly increasing in value. Thus it is critical that identity owners be able to update their recovery methods as their circumstances, devices, and connections change.

    For social recovery, an identity owner may wish to add new trustees or delete existing ones. Whenever this happens, the owner\u2019s edge agent must recalculate new recovery data shares to shard among the new set of trustees. This is a two-step process: the new share must first be sent to all trustees in the new set and an acknowledgement must be received from all of them. Once that is done, the edge agent can send a commitment message to all trustees in the new set to complete the process.
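    The send-then-commit pattern above can be sketched as follows. This is an illustrative sketch only: `send_to_trustee` is a hypothetical transport callback (a real agent would route these messages through its cloud agent), and the message type names here are assumptions, not part of any defined protocol:

    ```python
    def update_trustees(new_shares: dict, send_to_trustee) -> bool:
        """Phase 1: distribute new shares and collect acknowledgements.
        Phase 2: send the commitment only if every trustee acknowledged."""
        acks = {}
        for trustee, share in new_shares.items():
            # hypothetical message type for delivering a recalculated share
            acks[trustee] = send_to_trustee(trustee, {"type": "SHARE_UPDATE", "share": share})
        if not all(acks.values()):
            return False  # at least one trustee did not acknowledge; abort
        for trustee in new_shares:
            # commit message completes the two-step process
            send_to_trustee(trustee, {"type": "SHARE_COMMIT"})
        return True
    ```

    The two phases ensure no trustee commits to a new share set unless every trustee in the new set holds its share.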

    Updating offline recovery data is simply a matter of repeating the process of creating and printing out a paper wallet. An edge agent can automatically inform its identity owner of the need to do this when circumstances require it as well as automatically remind its owner to keep such offline information safe and accessible.

    "},{"location":"concepts/0051-dkms/dkms-v4/#1016-offline-recovery","title":"10.16. Offline Recovery","text":"

    One advantage of the offline recovery process is that it can be performed very quickly by the identity owner because it has no dependencies on outside parties.

    The identity owner simply initiates recovery on a newly installed edge agent. The edge agent prompts to scan the paper wallet (or input the text). From this data, it extracts the special recovery endpoint registered in the recovery setup process (section 10.13) and the backup decryption key. It then requests the encrypted backup from the recovery endpoint (which routes to the identity owner\u2019s cloud agent), decrypts it, restores the edge wallet, and replaces the agent keys with new keys. The final steps are to update the agent policy registry and, as a best practice, rotate all DID keys.

    "},{"location":"concepts/0051-dkms/dkms-v4/#1017-social-recovery","title":"10.17. Social Recovery","text":"

    Social recovery, while more complex than offline recovery, is also more automated, flexible, and resilient. The secret to making it easy and intuitive for identity owners is using DKMS agents to automate every aspect of the process except for the most social step: verification of the actual identity of the identity owner by trustees.

    Social recovery, like offline recovery, begins with the installation of a fresh edge agent. The identity owner selects the social recovery option and is prompted for the contact data her edge agent and cloud agent will need to send special new connection requests to her trustees. These special connection requests are then issued as described in section 10.8.

    These special connection requests are able to leverage the same secure DKMS infrastructure as the original connections while at the same time carrying the metadata needed for the trustee\u2019s edge agent to recognize it is a recovery request. At that point, the single most important step in social recovery happens: the trustee verifying that it is really Alice making the recovery request, and not an impersonator using social engineering.

    Once the trustee is satisfied with the verification, the edge agent prompts the trustee to perform the next most important step: select the existing connection with Alice so that the trustee edge agent knows which connection is trying to recover. Only the trustee\u2014a human being\u2014can be trusted to make this association.

    At this point, the edge agent can correlate the old connection to Alice with the new connection to Alice, so it knows which recovery data share to select (see section 10.13). It can then decrypt the recovery data share with the identity owner\u2019s private key, extract the recovery endpoint, and re-encrypt the recovery data share with the public key of Alice\u2019s new edge agent.

    Now the trustee\u2019s edge agent is ready to return the recovery data share to Alice\u2019s new cloud agent via the recovery endpoint. The cloud agent forwards it to Alice\u2019s new edge agent. Once Alice\u2019s new edge agent has the required set of recovery data shares, it decrypts and assembles them. It then uses that recovery data to complete the same final steps as offline recovery described in section 10.16.

    "},{"location":"concepts/0051-dkms/dkms-v4/#11-open-issues-and-future-work","title":"11. Open Issues and Future Work","text":"
    1. DID specification. The DKMS specification has major dependencies on the DID specification which is still in progress at the W3C Credentials Community Group. Although we do not expect the resulting specification to fall short of DKMS requirements, we cannot be specific about certain details of how DKMS will interact with DIDs until that specification is finalized. However the strong market interest in DIDs led the Credentials Community Group to author an extensive DID Use Cases document and submit a Decentralized Identifier Working Group charter to the W3C for consideration as a full Working Group.

    2. DID methods. The number of DID methods has grown substantially as shown by the unofficial DID Method Registry maintained by the W3C Credentials Community Group. Because different DID methods may support different levels of assurance about DKMS keys, more work may be required to assess the role of different ledgers as a decentralized source of truth and the requirements of each ledger for the hosting of DIDs and DID documents.

    3. Verifiable credentials interoperability. The W3C Verifiable Claims Working Group is currently preparing its 1.0 Candidate Recommendation. As verifiable credentials mature, we need to say more about how different DKMS wallets and agents from different vendors can support interoperable verifiable credentials, including those with zero-knowledge credentials and proofs. Again, this may need to extend to an adjacent protocol.

    4. DKMS wallet and agent portability. As mentioned in section 5.4, this aspect of the DKMS protocol is not fully specified and needs to be addressed in a subsequent version. This area of work is particularly active in the Hyperledger Indy Agent development community. A recent \"connectathon\" hosted by the Sovrin Foundation had 32 developers testing agent-to-agent protocol interoperability among 9 different code bases.

    5. Secure elements, TPMs, and TEEs. Since DKMS is highly dependent on secure elements, more work is needed to specify how a device can communicate or verify its own security capabilities or its ability to attest to authentication factors for the identity owner.

    6. Biometrics. While they can play a special role in the DKMS architecture because of their ability to intrinsically identify a unique individual, this same quality means a privacy breach of biometric attributes could be disastrous because they may be unrecoverable. So determining the role of biometrics and biometric service providers is a major area of future work.

    7. Spam and DDOS attacks. There are several areas where this must be considered, particularly in relation to connection requests (section 10.7).

    8. DID phishing. DKMS can only enable security; it cannot by itself prevent a malicious actor or agency from sending malicious invitations to form malicious connections that appear to be legitimate connection invitations (section 10.9).

    9. Usability testing. Although early research on the usability of DKMS wallets and agents was carried out by BYU Internet Security Research Lab, much more work remains to be done to develop the highly repeatable \"user ceremonies\" necessary for DKMS to succeed in the mass market.

    "},{"location":"concepts/0051-dkms/dkms-v4/#12-future-standardization","title":"12. Future Standardization","text":"

    It is the recommendation of the authors that the work described in this document be carried forward to full Internet standardization. We believe OASIS is a strong candidate for this work due to its hosting of the Key Management Interoperability Protocol (KMIP) at the KMIP Technical Committee since 2010. Please contact the authors if you are interested in contributing to organizing an open standard effort for DKMS.

    "},{"location":"concepts/0051-dkms/shamir_secret/","title":"Shamir secret API (indy-crypto and indy-sdk)","text":"

    Objective: indy-crypto exposes the low level API for generating and reconstructing secrets. indy-sdk uses the underlying indy-crypto and exposes an API to shard a JSON message, store the shards and reconstitute the secret.

    "},{"location":"concepts/0051-dkms/shamir_secret/#indy-crypto","title":"Indy-crypto","text":"
    1. shard_secret(secret: bytes, m: u8, n: u8, sign_shares: Option<bool>) -> Result<Vec<Share>, IndyCryptoError>. Splits the bytes of the secret secret into n different shares; m-of-n shares are required to reconstitute the secret. If sign_shares is provided, all shards are signed.
    2. recover_secret(shards: Vec<Share>, verify_signatures: Option<bool>) -> Result<Vec<u8>, IndyCryptoError>. Recovers the secret from the given shards. If verify_signatures is given, the signatures are verified.
    "},{"location":"concepts/0051-dkms/shamir_secret/#indy-sdk","title":"Indy-sdk","text":"
    1. shard_JSON(msg: String, m: u8, n: u8, sign_shares: Option<bool>) -> Result<Vec<String>, IndyError> Takes the message as a JSON string, serialises it to bytes, and passes the bytes to shard_secret of indy-crypto. The serialisation has to be deterministic, i.e. the same JSON should always serialise to the same bytes every time. Each resulting Share given by indy-crypto is converted to JSON before returning.
    2. shard_JSON_with_wallet_data(wallet_handle: i32, msg: String, wallet_keys:Vec<&str>, m: u8, n: u8, sign_shares: Option<bool>) -> Result<Vec<String>, IndyError> Takes the message as a JSON string, updates the JSON with key-values from wallet given by handle wallet_handle, keys present in the vector wallet_keys and passes the resulting JSON to shard_JSON.
    3. recover_secret(shards: Vec<String>, verify_signatures: Option<bool>) -> Result<String, IndyError> Takes a collection of shards each encoded as JSON, deserialises them into Shares and passes them to recover_secret from indy-crypto. It converts the resulting secret back to JSON before returning it.
    4. shard_JSON_and_store_shards(wallet_handle: i32, msg: String, m: u8, n: u8, sign_shares: Option<bool>) -> Result<String, IndyError> Shards the given JSON using shard_JSON and store shards as a JSON array (each shard is an object in itself) in the wallet given by wallet_handle. Returns the wallet key used to store the shards.
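    The m-of-n behavior these APIs expose is standard Shamir secret sharing: a secret becomes the constant term of a random degree-(m-1) polynomial, each share is one point on that polynomial, and any m points recover the constant term by Lagrange interpolation. A toy Python sketch for intuition only (over a small prime field, ignoring the signing options; indy-crypto's actual implementation and `Share` type differ):

    ```python
    import random

    PRIME = 2**127 - 1  # a Mersenne prime, large enough for a demo secret

    def _eval_poly(coeffs, x):
        # Horner's method, modulo PRIME; coeffs[0] is the secret
        acc = 0
        for c in reversed(coeffs):
            acc = (acc * x + c) % PRIME
        return acc

    def shard_secret(secret: int, m: int, n: int):
        """Split `secret` into n shares; any m of them reconstitute it."""
        coeffs = [secret] + [random.randrange(PRIME) for _ in range(m - 1)]
        return [(x, _eval_poly(coeffs, x)) for x in range(1, n + 1)]

    def recover_secret(shares):
        """Lagrange interpolation at x = 0 over the prime field."""
        secret = 0
        for i, (xi, yi) in enumerate(shares):
            num, den = 1, 1
            for j, (xj, _) in enumerate(shares):
                if i != j:
                    num = (num * -xj) % PRIME
                    den = (den * (xi - xj)) % PRIME
            secret = (secret + yi * num * pow(den, -1, PRIME)) % PRIME
        return secret
    ```

    With fewer than m shares, the constant term is information-theoretically undetermined, which is what makes the scheme safe to distribute among trustees.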
    "},{"location":"concepts/0051-dkms/trustee_protocols/","title":"Trustee Setup Protocol","text":"

    Objective: Provide the messages and data formats so an identity owner can choose, update, and remove trustees and their delegated capabilities.

    "},{"location":"concepts/0051-dkms/trustee_protocols/#assumptions","title":"Assumptions","text":"
    1. An identity owner selects a connection to become a trustee
    2. Trustees can be granted various capabilities by identity owners
      1. Safeguarding a recovery share. This will be the most common
      2. Revoke an authorized agent on behalf of an identity owner
      3. Provision a new agent on behalf of an identity owner
      4. Be an administrator for managing identity owner agents
    3. Trustees agree to any new specified capabilities before any action is taken
    4. Trustees will safeguard recovery shares. Their app will encrypt the share and not expose it to anyone else
    5. Trustees authenticate out-of-band an identity owner when a recovery event occurs
    6. The Trustees' app should only send a recovery share to an identity owner after they have been authenticated
    7. All messages will use a standard DIDComm Envelope.
    "},{"location":"concepts/0051-dkms/trustee_protocols/#messages-and-structures","title":"Messages and Structures","text":"

    Messages are formatted as JSON. All binary encodings use base64url. All messages include the following fields:

    1. version \\<string>: The semantic version of the message data format.
    2. type \\<string>: The message type.
    "},{"location":"concepts/0051-dkms/trustee_protocols/#capabilty_offer","title":"CAPABILTY_OFFER","text":"

    Informs a connection that the identity owner wishes to make them a trustee. The message includes information about what capabilities the identity owner has chosen to grant a trustee and how long the offer is valid. This message adds the following fields

    expires \\<string>: 64-bit unsigned big-endian integer. The number of seconds elapsed between January 1, 1970 UTC and the time the offer will expire if no request message is received. This value is purely informative.\\ capabilities \\<list[string]>: A list of capabilities that the trustee will be granted. They can include

    1. RECOVERY_SHARE: The trustee will be given a recovery share
    2. REVOKE_AUTHZ: The trustee can revoke agents
    3. PROVISION_AUTHZ: The trustee can provision new agents
    4. ADMIN_AUTHZ: The trustee is an administrator of agents
    {\n  \"version\": \"0.1\",\n  \"type\": \"CAPABILITY_OFFER\",\n  \"capabilities\": [\"RECOVERY_SHARE\", \"REVOKE_AUTHZ\", \"PROVISION_AUTHZ\"],\n  \"expires\": 1517428815\n}\n
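    A recipient might use the informative expires field to decide whether an offer is still actionable. A hypothetical helper sketch (`offer_is_valid` is not part of the protocol; it only illustrates the epoch-seconds semantics described above):

    ```python
    import time

    def offer_is_valid(offer: dict, now: float = None) -> bool:
        """Return True if a CAPABILITY_OFFER has not yet expired.
        `expires` counts seconds since the Unix epoch, per the field description."""
        if now is None:
            now = time.time()
        return offer.get("type") == "CAPABILITY_OFFER" and now < offer["expires"]
    ```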
    "},{"location":"concepts/0051-dkms/trustee_protocols/#capabilty_request","title":"CAPABILTY_REQUEST","text":"

    Sent to an identity owner in response to a CAPABILITY_OFFER message. The message includes information about which capabilities the trustee has agreed to. This message adds the following fields

    for_id \\<string>: The nonce sent in the TRUSTEE_OFFER message.\\ capabilities \\<object[string,string]>: A name value object that contains the trustee's response for each privilege.\\ authorizationKeys \\<list[string]>: The public keys that the trustee will use to verify her actions with the authz policy registry on behalf of the identity owner.

    {\n  \"version\": \"0.1\",\n  \"type\": \"CAPABILITY_REQUEST\",\n  \"authorizationKeys\": [\"Rtna123KPuQWEcxzbNMjkb\"],\n  \"capabilities\": [\"RECOVERY_SHARE\", \"REVOKE_AUTHZ\"]\n}\n
    "},{"location":"concepts/0051-dkms/trustee_protocols/#capability_response","title":"CAPABILITY_RESPONSE","text":"

    Sends the identity owner policy address and/or recovery data and metadata to a recovery trustee. A trustee should send a confirmation message that this message was received.

    address \\<string>: The identity owner's policy address. Only required if the trustee has a key in the authz policy registry.\\ share \\<object>: The actual recovery share data in the format given in the next section. Only required if the trustee has the RECOVERY_SHARE privilege.

    {\n  \"version\": \"0.1\",\n  \"type\": \"CAPABILITY_RESPONSE\",\n  \"address\": \"b3AFkei98bf3R2s\",\n  \"share\": {\n    ...\n  }\n}\n
    "},{"location":"concepts/0051-dkms/trustee_protocols/#trust_ping","title":"TRUST_PING","text":"

    Authenticates a party to the identity owner for out of band communication.

    challenge \\<object>: A message that a party should respond to so the identity owner can be authenticated. Contains a question field for the other party to answer and a list of valid_responses.

    {\n  \"version\": \"0.1\",\n  \"type\": \"TRUST_PING\",\n  \"challenge\": {\n    ...\n  }\n}\n

    challenge will look like the example below but allows for future changes as needed.\\ question \\<string>: The question for the other party to answer.\\ valid_responses \\<list[string]>: A list of valid responses that the party can give in return.

    {\n    \"question\": \"Are you on a call with CULedger?\",\n    \"valid_responses\": [\"Yes\", \"No\"]\n}\n
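    The responding party should only answer with one of the listed valid_responses. A hypothetical helper sketch (`build_pong` is illustrative, not a defined API) that enforces this when constructing the reply:

    ```python
    def build_pong(ping: dict, answer: str) -> dict:
        """Answer a TRUST_PING; only answers listed in valid_responses are allowed."""
        if answer not in ping["challenge"]["valid_responses"]:
            raise ValueError("answer is not among valid_responses")
        # shape matches the TRUST_PONG message format
        return {"version": "0.1", "type": "TRUST_PONG", "answer": {"answerValue": answer}}
    ```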
    "},{"location":"concepts/0051-dkms/trustee_protocols/#trust_pong","title":"TRUST_PONG","text":"

    The response message for the TRUST_PING message.

    {\n  \"version\": \"0.1\",\n  \"type\": \"TRUST_PONG\",\n  \"answer\": {\n    \"answerValue\": \"Yes\"\n  }\n}\n
    "},{"location":"concepts/0051-dkms/trustee_protocols/#key_heartbeat_request","title":"KEY_HEARTBEAT_REQUEST","text":"

    Future_Work: Verifies that a trustee/agent has and is using the public keys that were given to the identity owner.

    authorizationKeys \\<list[string]>: Public keys the identity owner knows that belong to the trustee/agent.

    {\n  \"version\": \"0.1\",\n  \"type\": \"KEY_HEARTBEAT_REQUEST\",\n  \"authorizationKeys\": [\"Rtna123KPuQWEcxzbNMjkb\"]\n}\n
    "},{"location":"concepts/0051-dkms/trustee_protocols/#key_heartbeat_response","title":"KEY_HEARTBEAT_RESPONSE","text":"

    Future_Work: The updated keys sent back from the trustee/agent

    "},{"location":"concepts/0051-dkms/trustee_protocols/#recovery_share_response","title":"RECOVERY_SHARE_RESPONSE","text":"

    Future_Work: After an identity owner receives a challenge from a trustee, an application prompts her to complete the challenge. This message contains her response.

    for_id \\<string>: The nonce sent in the RECOVERY_SHARE_CHALLENGE message.\\ response \\<object>: The response from the identity owner.

    {\n  \"version\": \"0.1\",\n  \"type\": \"RECOVERY_SHARE_RESPONSE\",\n  \"response\": {\n    ...\n  }\n}\n

    response will look like the example below but allows for future changes as needed.

    {\n  \"pin\": \"3qA5h7\"\n}\n
    "},{"location":"concepts/0051-dkms/trustee_protocols/#recovery-share-data-structure","title":"Recovery Share Data Structure","text":"

    Recovery shares are formatted in JSON with the following fields:

    1. version \\<string>: The semantic version of the recovery share data format.
    2. source_did \<string>: The identity owner DID that sent this share to the trustee.
    3. tag \\<string>: A value used to verify that all the shares are for the same secret. The identity owner compares this to every share to make sure they are the same.
    4. shareValue \\<string>: The share binary value.
    5. hint \\<object>: Hint data that contains the following fields:
      1. trustees \\<list[string]>: A list of all the recovery trustee names associated with this share. These names are only significant to the identity owner. Helps to aid in recovery by providing some metadata for the identity owner and the application.
      2. threshold \\<integer>: The minimum number of shares needed to recover the key. Helps to aid in recovery by providing some metadata for the identity owner and the application.
    {\n  \"version\": \"0.1\",\n  \"source_did\": \"did:sov:asbdfa32135\",\n  \"tag\": \"ze4152Bsxo90\",\n  \"shareValue\": \"abcdefghijkmnopqrstuvwxyzABCDEFGHJKLMNPQRSTUVWXYZ123456789\",\n  \"hint\": {\n    \"threshold\": 3,\n    \"trustees\": [\"Mike L\", \"Lovesh\", \"Corin\", \"Devin\", \"Drummond\"]\n  }\n}\n
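    The tag and hint.threshold fields let an edge agent sanity-check a collected set of shares before attempting reconstruction. A hypothetical helper sketch (`check_shares` is illustrative only):

    ```python
    def check_shares(shares: list) -> None:
        """Verify all shares belong to the same secret (matching tags) and that
        enough of them are present to meet the hint's threshold."""
        tags = {s["tag"] for s in shares}
        if len(tags) != 1:
            raise ValueError("shares carry mismatched tags: %s" % tags)
        threshold = shares[0]["hint"]["threshold"]
        if len(shares) < threshold:
            raise ValueError("need %d shares, have %d" % (threshold, len(shares)))
    ```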
    "},{"location":"concepts/0074-didcomm-best-practices/","title":"Aries RFC 0074: DIDComm Best Practices","text":""},{"location":"concepts/0074-didcomm-best-practices/#summary","title":"Summary","text":"

    Identifies some conventions that are generally accepted as best practice by developers of DIDComm software. Explains their rationale. This document is a recommendation, not normative.

    "},{"location":"concepts/0074-didcomm-best-practices/#motivation","title":"Motivation","text":"

    By design, DIDComm architecture is extremely flexible. Besides letting DIDComm adapt well to many platforms, programming languages, and idioms, this flexibility lets us leave matters of implementation style in the hands of developers. We don't want framework police trying to enforce rigid paradigms.

    However, some best practices are worth documenting. There is tribal knowledge in the community that represents battle scars. Collaboration is fostered if learning curves don't have to proliferate. Therefore, we offer the following guidelines.

    "},{"location":"concepts/0074-didcomm-best-practices/#tutorial","title":"Tutorial","text":""},{"location":"concepts/0074-didcomm-best-practices/#normative-language","title":"Normative language","text":"

    RFCs about protocols and DIDComm behaviors follow commonly understood conventions about normative language, including words like \"MUST\", \"SHOULD\", and \"MAY\". These conventions are documented in IETF's RFC 2119. Existing documents that were written before we clarified our intention to follow these conventions are grandfathered but should be updated to conform.

    "},{"location":"concepts/0074-didcomm-best-practices/#names","title":"Names","text":"

    Names show up in lots of places in our work. We name RFCs, concepts defined in those RFCs, protocols, message types, keys in JSON, and much more.

    The two most important best practices with names are:

    These are so common-sense that we won't argue them. But a few other points are worthy of comment.

    "},{"location":"concepts/0074-didcomm-best-practices/#snake_case-and-variants","title":"snake_case and variants","text":"

    Nearly all code uses multi-word tokens as names. Different programming ecosystems have different conventions for managing them: camelCase, TitleCase, snake_case, kabob-case, SHOUT_CASE, etc. We want to avoid a religious debate about these conventions, and we want to leave developers the freedom to choose their own styles. However, we also want to avoid random variation that makes it hard to predict the correct form. Therefore, we try to stay idiomatic in the language we're using, and many of our tokens are defined to compare case-insensitive with punctuation omitted, so the differences melt away. This is the case with protocol names and message type names, for example; it means that you should interpret \"TicTacToe\" and \"tic-tac-toe\" and \"ticTacToe\" as being the same protocol. If you are writing a java function for it, by all means use \"ticTacToe\"; if you are writing CSS, by all means use \"tic-tac-toe\".
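    The comparison rule above (case-insensitive with punctuation omitted) can be implemented by canonicalizing both names first. An illustrative sketch; `canonical_protocol_name` is a hypothetical helper, not a defined API:

    ```python
    import re

    def canonical_protocol_name(name: str) -> str:
        # Lowercase and strip punctuation so different casing styles compare equal
        return re.sub(r"[^a-z0-9]", "", name.lower())
    ```

    Under this canonicalization, "TicTacToe", "tic-tac-toe", and "ticTacToe" all compare as the same protocol name.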

    The community tries to use snake_case in JSON key names, even though camelCase is slightly more common. This is not a hard-and-fast rule; in particular, a few constructs from DID Docs leak into DIDComm, and these use the camelCase style that those specs expect. However, it was felt that snake_case was mildly preferable because it didn't raise the questions about acronyms that camelCase does (is it \"zeroOutRAMAlgorithm\", \"zeroOutRamAlgorithm\", or \"zeroOutRAMalgorithm\"?).

    The main rule to follow with respect to case is: Use the same convention as the rest of the code around you, and in JSON that's intended to be interoperable, use snake_case unless you have a good reason not to. Definitely use the same case conventions as the other keys in the same JSON schema.

    "},{"location":"concepts/0074-didcomm-best-practices/#pluralization","title":"Pluralization","text":"

    The names of JSON items that represent arrays should be pluralized whenever possible, while singleton items should not.

    "},{"location":"concepts/0074-didcomm-best-practices/#terminology-and-notation","title":"Terminology and Notation","text":"

    Use terms correctly and consistently.

    The Sovrin Glossary V2 is considered a definitive source of terms. We will probably move it over to Aries at some point as an officially sponsored artifact of this group. RFC 0006: SSI Notation is also a definitive reference.

    RFCs in general should make every effort to define new terms only when needed, to be clear about the concepts they are labeling, and use prior work consistently. If you find a misalignment in the terminology or notation used by RFCs, please open a github issue.

    "},{"location":"concepts/0074-didcomm-best-practices/#terseness-and-abbreviations","title":"Terseness and abbreviations","text":"

    We like obvious abbreviations like \"ipaddr\" and \"inet\" and \"doc\" and \"conn\". We also formally define abbreviations or acronyms for terms and then use the short forms as appropriate.

    However, we don't value terseness so much that we are willing to give up clarity. Abbreviating \"wallet\" as \"wal\" or \"agent\" as \"ag\" is quirky and discouraged.

    "},{"location":"concepts/0074-didcomm-best-practices/#rfc-naming","title":"RFC naming","text":"

    RFCs that define a protocol should be named in the form <do-something>-protocol, where <do-something> is a verb phrase like issue-credential, or possibly a noun phrase like did-exchange; the name should make the theme of the protocol obvious. The intent is to be clear; a protocol name like \"connection\" is too vague because you can do lots of things with connections.

    Protocol RFCs need to be versioned thoughtfully. However, we do not put version numbers in a protocol RFC's folder name. Rather, the RFC folder contains all versions of the protocol, with the latest version documented in README.md, and earlier versions documented in subdocs named according to version, as in version-0.9.md or similar. The main README.md should contain a section of links to previous versions. This allows the most natural permalink for a protocol to be a link to the current version, but it also allows us to link to previous versions explicitly if we need to.

    RFCs that define a decorator should be named in the form <decorator name>-decorator, as in timing-decorator or trace-decorator.

    "},{"location":"concepts/0074-didcomm-best-practices/#json","title":"JSON","text":"

    JSON is a very flexible data format. This can be nice, but it can also lead to data modeled in ways that cause a lot of bother for some programming languages. Therefore, we recommend the following choices.

    "},{"location":"concepts/0074-didcomm-best-practices/#no-variable-type-arrays","title":"No Variable Type Arrays","text":"

    Every element in an array should be the same data type. This is helpful for statically and strongly typed programming languages that want arrays of something more specific than a base Object class. A violating example:

    [\n   {\n    \"id\":\"324234\",\n    \"data\":\"1/3/2232\"\n   },\n   {\n    \"x_pos\":3251,\n    \"y_pos\":11,\n    \"z_pos\":55\n   }\n]\n
    Notice that the first object and the second object in the array have no structure in common.

    Although the benefit of this convention is especially obvious for some programming languages, it is helpful in all languages to keep parsing logic predictable and to reduce branching code paths.

    "},{"location":"concepts/0074-didcomm-best-practices/#dont-treat-objects-as-associative-arrays","title":"Don't Treat Objects as Associative Arrays","text":"

    Many loosely typed programming languages conflate the concept of an associative array (dict, map) with the concept of object. In python, for example, an object is just a dict with some syntactic sugar, and python's JSON serialization handles the two interchangeably when serializing.

    This makes it tempting to do the same thing in JSON. An unhappy example:

    {\n    \"usage\": {\n        \"194.52.101.254\": 34,\n        \"73.183.146.222\": 55,\n        \"149.233.52.170\": 349\n    }\n}\n

    Notice that the keys of the usage object are unbounded; as the set of IP addresses grows, the set of keys in usage grows as well. JSON is an \"object notation\", and {...} is a JSON object, NOT a JSON associative array, but this type of modeling ignores that. If we model data this way, we'll end up with an \"object\" that could have dozens, hundreds, thousands, or millions of keys with identical semantics but different names. That's not how objects are supposed to work.

    Note as well that the keys here, such as \"194.52.101.254\", are not appropriate identifiers in most programming languages. This means that unless deserialization code maps the keys to keys in an associative array (dict, map), it will not be able to handle the data at all. Also, this way to model the data assumes that we know how lookups will be done (in this case, ipaddr\u2192number); it doesn't leave any flexibility for other access patterns.

    A better way to model this type of data is as a JSON array, where each item in the array is a tuple of known field types with known field names. This is only slightly more verbose. It allows deserialization to map to one or more lookup data structures per preference, and is handled equally well in strongly, statically typed programming languages and in loosely typed languages:

    {\n    \"usage\": [\n        { \"ip\": \"194.52.101.254\", \"num\": 34 },\n        { \"ip\": \"73.183.146.222\", \"num\": 55 },\n        { \"ip\": \"149.233.52.170\", \"num\": 349 }\n    ]\n}\n
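    For instance, in Python the ipaddr\u2192number lookup is a one-line comprehension over the array above, and other access patterns (num\u2192ip, sorted-by-usage) are just as easy to build:

    ```python
    usage = [
        {"ip": "194.52.101.254", "num": 34},
        {"ip": "73.183.146.222", "num": 55},
        {"ip": "149.233.52.170", "num": 349},
    ]
    # Each consumer builds whatever lookup structure it needs from the array
    by_ip = {entry["ip"]: entry["num"] for entry in usage}
    ```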
    "},{"location":"concepts/0074-didcomm-best-practices/#numeric-field-properties","title":"Numeric Field Properties","text":"

    JSON numeric fields are very flexible. As Wikipedia notes in its discussion of JSON numeric primitives:

    Number: a signed decimal number that may contain a fractional part and may use exponential\nE notation, but cannot include non-numbers such as NaN. The format makes no distinction\nbetween integer and floating-point. JavaScript uses a double-precision floating-point format\nfor all its numeric values, but other languages implementing JSON may encode numbers\ndifferently.\n

    Knowing that something is a number may be enough in javascript, but in many other programming languages, more clarity is helpful or even required. If the intent is for the number to be a non-negative or positive-only integer, say so when your field is defined in a protocol. If you know the valid range, give it. Specify whether the field is nullable.

    Per the first guideline above about names, name your numeric fields in a way that makes it clear they are numbers: \"references\" is a bad name in this respect (it could be a hyperlink, an array, a string, etc.), whereas \"reference_count\" or \"num_of_refs\" is much better.
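As a sketch of the kind of constraint a protocol definition should make explicit, here is a validator for a hypothetical `reference_count` field declared as a non-nullable, non-negative integer (the field name and rules are illustrative, not from any specific protocol):

```python
def validate_reference_count(value):
    """Validate a hypothetical 'reference_count' field declared as a
    non-nullable, non-negative integer."""
    # bool is a subclass of int in Python, so reject it explicitly.
    if not isinstance(value, int) or isinstance(value, bool):
        raise ValueError("reference_count must be an integer")
    if value < 0:
        raise ValueError("reference_count must be non-negative")
    return value
```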

    "},{"location":"concepts/0074-didcomm-best-practices/#date-time-conventions","title":"Date Time Conventions","text":"

    Representing date- and time-related data in JSON is a source of huge variation, since the datatype for the data isn't obvious even before it's serialized. A quick survey of source code across industries and geos shows that dates, times, and timestamps are handled with great inconsistency outside JSON as well. Some common storage types include:

    Of course, many of these datatypes have special rules about their relationship to timezones, which further complicates matters. And timezone handling is notoriously inconsistent, all on its own.

    Some common names for the fields that store these times include:

    The intent of this RFC is NOT to eliminate all diversity. There are good reasons why these different datatypes exist. However, we would like DIDComm messages to use broadly understood naming conventions that clearly communicate date- and time-related semantics, so that where there is diversity, it's because of different use cases, not just chaos.

    By convention, DIDComm field suffixes communicate datatype and semantics for date- and time-related ideas, as described below. As we've stressed before, conventions are recommendations only. However:

    1. It is strongly preferred that developers not ignore these perfectly usable conventions unless they have a good reason (e.g., a need to measure the age of the universe in seconds in scientific notation, or a need for ancient dates in a genealogy or archeology use case).

    2. Developers should never contradict the conventions. That is, if a developer sees a date- or time-related field that appears to match what's documented here, the assumption of alignment ought to be safe. Divergence should use new conventions, not redefine these.

    Field names like \"expires\" or \"lastmod\" are deprecated, because they don't say enough about what to expect from the values. (Is \"expires\" a boolean? Or is it a date/time? If the latter, what is its granularity and format?)

    "},{"location":"concepts/0074-didcomm-best-practices/#_date","title":"_date","text":"

    Used for fields that have only date precision, no time component. For example, birth_date or expiration_date. Such fields should be represented as strings in ISO 8601 format (yyyy-mm-dd). They should contain a timezone indicator if and only if it's meaningful (see Timezone Offset Notation).

    "},{"location":"concepts/0074-didcomm-best-practices/#_time","title":"_time","text":"

    Used for fields that identify a moment with both date and time precision. For example, arrival_time might communicate when a train reaches the station. The datatype of such fields is a string in ISO 8601 format (yyyy-mm-ddTHH:MM:SS.xxx...) using the Gregorian calendar, and the timezone defaults to UTC. However: * Precision can vary from minute to microsecond or greater. * It is strongly recommended to use the \"Z\" suffix to make UTC explicit: \"2018-05-27 18:22Z\" * The capital 'T' that separates date from time in ISO 8601 can freely vary with a space. (Many datetime formatters support this variation, for greater readability.) * If local time is needed, Timezone Offset Notation is used.
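A minimal parser for a `*_time` field might look like the following (a sketch, assuming Python 3.7+ `datetime.fromisoformat`; the helper name is illustrative). It accepts both the \"Z\" suffix and the space-for-'T' variation described above, and defaults naive values to UTC per the convention:

```python
from datetime import datetime, timezone

def parse_time_field(value):
    """Parse a *_time field per the conventions above; not exhaustive."""
    # Normalize the space/'T' separator variation and the 'Z' suffix.
    normalized = value.replace(" ", "T").replace("Z", "+00:00")
    dt = datetime.fromisoformat(normalized)
    if dt.tzinfo is None:
        # The convention defaults timestamps without an offset to UTC.
        dt = dt.replace(tzinfo=timezone.utc)
    return dt

arrival_time = parse_time_field("2018-05-27 18:22Z")
```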

    "},{"location":"concepts/0074-didcomm-best-practices/#_sched","title":"_sched","text":"

    Holds a string that expresses appointment-style schedules such as \"the first Thursday of each month, at 7 pm\". The format of these strings is recommended to follow ISO 8601's Repeating Intervals notation where possible. Otherwise, the format of such strings may vary; the suffix doesn't stipulate a single format, but just the semantic commonality of scheduling.

    "},{"location":"concepts/0074-didcomm-best-practices/#_clock","title":"_clock","text":"

    Describes wall time without reference to a date, as in 13:57. Uses ISO 8601 formatted strings and a 24-hour cycle, not AM/PM.

    "},{"location":"concepts/0074-didcomm-best-practices/#_t","title":"_t","text":"

    Used just like _time, but for unsigned integer seconds since Jan 1, 1970 (with no opinion about whether it's a 32-bit or 64-bit value). Thus, a field that captures a last-modified timestamp for a file, as a number of seconds since Jan 1, 1970, would be lastmod_t. This suffix was chosen for resonance with Posix's time_t datatype, which has similar semantics.

    "},{"location":"concepts/0074-didcomm-best-practices/#_tt","title":"_tt","text":"

    Used just like _time and _t, but for 100-nanosecond intervals since Jan 1, 1601. This matches the semantics of the Windows FILETIME datatype.
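The relationship between `_t` and `_tt` values is a fixed conversion: `_tt` counts 100-nanosecond intervals, and the two epochs differ by the well-known constant 11,644,473,600 seconds (1601-01-01 to 1970-01-01). The helper names below are illustrative:

```python
# Seconds between the FILETIME epoch (1601-01-01) and the Unix epoch
# (1970-01-01); a well-known constant in FILETIME conversions.
FILETIME_EPOCH_OFFSET_SECS = 11644473600

def tt_to_t(tt_value):
    """Convert a *_tt value (100-ns intervals since 1601) to a *_t value
    (whole seconds since 1970). Illustrative helper, not from the RFC."""
    return tt_value // 10_000_000 - FILETIME_EPOCH_OFFSET_SECS

def t_to_tt(t_value):
    """Convert a *_t value to a *_tt value."""
    return (t_value + FILETIME_EPOCH_OFFSET_SECS) * 10_000_000
```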

    "},{"location":"concepts/0074-didcomm-best-practices/#_sec-or-subunits-of-seconds-_milli-_micro-_nano","title":"_sec or subunits of seconds (_milli, _micro, _nano)","text":"

    Used for fields that tell how long something took. For example, a field describing how long a system waited before retry might be named retry_milli. Normally, this field would be represented as a non-negative integer.

    "},{"location":"concepts/0074-didcomm-best-practices/#_dur","title":"_dur","text":"

    Tells duration (elapsed time) in friendly, calendar-based units as a string, using the conventions of ISO 8601's Duration concept. Y = year, M = month, W = week, D = day, H = hour, M = minute, S = second, with time components preceded by 'T': \"P3Y2M5DT11H\" = 3 years, 2 months, 5 days, 11 hours. The 'T' also resolves the ambiguity between months and minutes: \"PT1M3S\" = 1 minute, 3 seconds, whereas \"P1MT3S\" = 1 month, 3 seconds.
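A minimal parser for `_dur` strings can be built from a single regular expression over the date part (Y/M/W/D) and the T-prefixed time part (H/M/S). This is a sketch: fractional values and negative durations are out of scope, and the function name is illustrative.

```python
import re

# Date components, then an optional T-prefixed time part, per ISO 8601.
_DUR_RE = re.compile(
    r"^P(?:(\d+)Y)?(?:(\d+)M)?(?:(\d+)W)?(?:(\d+)D)?"
    r"(?:T(?:(\d+)H)?(?:(\d+)M)?(?:(\d+)S)?)?$"
)

def parse_dur(value):
    """Parse a *_dur field into its component counts."""
    m = _DUR_RE.match(value)
    if not m or not any(m.groups()):
        raise ValueError("not an ISO 8601 duration: " + value)
    keys = ("years", "months", "weeks", "days", "hours", "minutes", "seconds")
    return {k: int(v) if v else 0 for k, v in zip(keys, m.groups())}
```

Note how the regex keeps months and minutes apart: the same letter 'M' lands in a different capture group depending on whether it appears before or after 'T'.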

    "},{"location":"concepts/0074-didcomm-best-practices/#_when","title":"_when","text":"

    For vague or imprecise dates and date ranges. Fragments of ISO 8601 are preferred, as in \"1939-12\" for \"December 1939\". The token \"to\" is reserved for inclusive ranges, and the token \"circa\" is reserved to make fuzziness explicit, with \"CE\" and \"BCE\" also reserved. Thus, Cleopatra's birth_when might be \"circa 69 BCE\", and the timing of the Industrial Revolution might have a happened_when of \"circa 1760 to 1840\".

    "},{"location":"concepts/0074-didcomm-best-practices/#timezone-offset-notation","title":"Timezone Offset Notation","text":"

    Most timestamping can and should be done in UTC, and should use the \"Z\" suffix to make the Zero/Zulu/UTC timezone explicit.

    However, sometimes the local time and the UTC time for an event are both of interest. This is common with news events that are tied to a geo, as with the time that an earthquake is felt at its epicenter. When this is the case, rather than use two fields, it is recommended to use timezone offset notation (the \"+08:00\" in \"2018-05-27T18:22+08:00\"). Except for the \"Z\" suffix of UTC, timezone name notation is deprecated, because timezones can change their definitions according to the whim of local lawmakers, and because resolving the names requires an expensive dictionary lookup. Note that this convention is exactly how ISO 8601 handles the timezone issue.
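The point that one offset-annotated field carries both pieces of information can be shown directly: the local wall time is read as-is, and the UTC instant is recovered from the offset (using the earthquake-style example timestamp from this section):

```python
from datetime import datetime, timezone, timedelta

# One field carries both values: local time is read directly, and the
# UTC instant is recovered from the "+08:00" offset.
felt_at = datetime.fromisoformat("2018-05-27T18:22+08:00")

local_hour = felt_at.hour                      # 18, local wall time
utc_instant = felt_at.astimezone(timezone.utc)  # 10:22 UTC
```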

    "},{"location":"concepts/0074-didcomm-best-practices/#blobs","title":"Blobs","text":"

    In general, blobs are encoded as base64url strings in DIDComm.
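A sketch of the round trip (the text above specifies base64url but does not mandate padding, so a tolerant consumer is assumed here; `decode_blob` is an illustrative helper, not an Aries API):

```python
import base64

blob = b"\x00\xffbinary payload"

# base64url encoding for a DIDComm blob field.
encoded = base64.urlsafe_b64encode(blob).decode("ascii")

def decode_blob(s):
    # Restore padding before decoding, in case the sender stripped it.
    return base64.urlsafe_b64decode(s + "=" * (-len(s) % 4))
```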

    "},{"location":"concepts/0074-didcomm-best-practices/#unicode","title":"Unicode","text":"

    UTF-8 is our standard way to represent unicode strings in JSON and all other contexts. For casual definition, this is sufficient detail.

    For advanced use cases, it may be necessary to understand subtleties like Unicode normalization forms and canonical equivalence. We generally assume that we can compare strings for equality and sort order using a simple binary algorithm. This is approximately but (in some corner cases) not exactly the same as assuming that text is in NFC normalization form with no case folding expectations and no extraneous surrogate pairs. Where more precision is required, the definition of DIDComm message fields should provide it.
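The corner case mentioned above is easy to demonstrate: two strings that render identically can differ at the binary level until normalized to NFC.

```python
import unicodedata

# Visually identical: precomposed 'é' vs 'e' plus a combining acute accent.
precomposed = "caf\u00e9"
decomposed = "cafe\u0301"

# Simple binary comparison treats them as different strings...
binary_equal = precomposed == decomposed          # False
# ...which NFC normalization reconciles when precision is required.
nfc_equal = unicodedata.normalize("NFC", decomposed) == precomposed  # True
```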

    "},{"location":"concepts/0074-didcomm-best-practices/#hyperlinks","title":"Hyperlinks","text":"

    This repo is designed to be browsed as HTML. Browsing can be done directly through github, but we may publish the content using Github Pages and/or ReadTheDocs. As a result, some hyperlink hygiene is observed to make the content as useful as possible:

    These rules are enforced by a unit test that runs code/check_links.py. To run it, go to the root of the repo and run pytest code -- or simply invoke the check_links script directly. Normally, check_links does not test external hyperlinks on the web, because it is too time-consuming; if you want that check, add --full as a command-line argument.

    "},{"location":"concepts/0074-didcomm-best-practices/#security-considerations","title":"Security Considerations","text":""},{"location":"concepts/0074-didcomm-best-practices/#replay-attacks","title":"Replay attacks","text":"

    It should be noted that when defining a protocol that has domain-specific requirements around preventing replay attacks, an @id property SHOULD be required. Because the @id field is most commonly set to a UUID, it usually provides the same randomness a nonce would in preventing replay attacks. This means that care is needed in processing the @id field, however, to ensure the @id value hasn't been used before. In some cases, nonces must also be unpredictable; in that case, greater review should be given to how the @id field is used in the domain-specific protocol. Additionally, where the @id field is not adequate, it's recommended that an additional nonce field be required by the domain-specific protocol specification.
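The "hasn't been used before" check amounts to tracking seen @id values. A minimal sketch, assuming a protocol that requires @id (a production guard would bound the set with a retention window and persist it across restarts):

```python
class ReplayGuard:
    """Reject messages whose @id has already been processed."""

    def __init__(self):
        self._seen = set()

    def check(self, message):
        msg_id = message.get("@id")
        if msg_id is None:
            raise ValueError("@id is required for replay protection")
        if msg_id in self._seen:
            raise ValueError("replayed @id: " + msg_id)
        self._seen.add(msg_id)
```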

    "},{"location":"concepts/0074-didcomm-best-practices/#reference","title":"Reference","text":""},{"location":"concepts/0074-didcomm-best-practices/#drawbacks","title":"Drawbacks","text":"

    The main concern with this type of RFC is that it will produce more heat than light -- that is, that developers will debate minutiae instead of getting stuff done. We hope that the conventions here feel reasonable and lightweight enough to avoid that.

    "},{"location":"concepts/0074-didcomm-best-practices/#rationale-and-alternatives","title":"Rationale and alternatives","text":""},{"location":"concepts/0074-didcomm-best-practices/#unresolved-questions","title":"Unresolved questions","text":""},{"location":"concepts/0074-didcomm-best-practices/#implementations","title":"Implementations","text":"

    The following lists the implementations (if any) of this RFC. Please do a pull request to add your implementation. If the implementation is open source, include a link to the repo or to the implementation within the repo. Please be consistent in the \"Name\" field so that a mechanical processing of the RFCs can generate a list of all RFCs supported by an Aries implementation.

    Name / Link Implementation Notes"},{"location":"concepts/0094-cross-domain-messaging/","title":"Aries RFC 0094: Cross-Domain Messaging","text":""},{"location":"concepts/0094-cross-domain-messaging/#summary","title":"Summary","text":"

    There are two layers of messages that combine to enable interoperable self-sovereign identity DIDcomm (formerly called Agent-to-Agent) communication. At the highest level are Agent Messages - messages sent between Identities to accomplish some shared goal. For example, establishing a connection between identities, issuing a Verifiable Credential from an Issuer to a Holder or even the simple delivery of a text Instant Message from one person to another. Agent Messages are delivered via the second, lower layer of messaging - encryption envelopes. An encryption envelope is a wrapper (envelope) around an Agent Message to enable the secure delivery of a message from one Agent directly to another Agent. An Agent Message going from its Sender to its Receiver may be passed through a number of Agents, and an encryption envelope is used for each hop of the journey.

    This RFC addresses Cross Domain messaging to enable interoperability. This is one of a series of related RFCs that address interoperability, including DIDDoc Conventions, Agent Messages and Encryption Envelope. Those RFCs should be considered together in understanding DIDcomm messaging.

    In order to send a message from one Identity to another, the sending Identity must know something about the Receiver's domain - the Receiver's configuration of Agents. This RFC outlines how a domain MUST present itself to enable the Sender to know enough to be able to send a message to an Agent in the domain. In support of that, a DIDcomm protocol (currently consisting of just one Message Type) is introduced to route messages through a network of Agents in both the Sender and Receiver's domain. This RFC provides the specification of the \"Forward\" Agent Message Type - an envelope that indicates the destination of a message without revealing anything about the message.

    The goal of this RFC is to define the rules that domains MUST follow to enable the delivery of Agent messages from a Sending Agent to a Receiver Agent in a secure and privacy-preserving manner.

    "},{"location":"concepts/0094-cross-domain-messaging/#motivation","title":"Motivation","text":"

    The purpose of this RFC and its related RFCs is to define a layered messaging protocol such that we can ignore the delivery of messages as we discuss the much richer Agent Messaging types and interactions. That is, we can assume that there is no need to include in an Agent message anything about how to route the message to the Receiver - it just magically happens. Alice (via her App Agent) sends a message to Bob, and (because of implementations based on this series of RFCs) we can ignore how the actual message got to Bob's App Agent.

    Put another way - these RFCs are about envelopes. They define a way to put a message - any message - into an envelope, put it into an outbound mailbox and have it magically appear in the Receiver's inbound mailbox in a secure and privacy-preserving manner. Once we have that, we can focus on letters and not how letters are sent.

    Most importantly for Agent to Agent interoperability, this RFC clearly defines the assumptions necessary to deliver a message from one domain to another - e.g. what exactly does Alice have to know about Bob's domain to send Bob a message?

    "},{"location":"concepts/0094-cross-domain-messaging/#tutorial","title":"Tutorial","text":""},{"location":"concepts/0094-cross-domain-messaging/#core-messaging-goals","title":"Core Messaging Goals","text":"

    These are vital design goals for this RFC:

    1. Sender Encapsulation: We SHOULD minimize what the Receiver has to know about the domain (routing tree or agent infrastructure) of the Sender in order for them to communicate.
    2. Receiver Encapsulation: We SHOULD minimize what the Sender has to know about the domain (routing tree or agent infrastructure) of the Receiver in order for them to communicate.
    3. Independent Keys: Private signing keys SHOULD NOT be shared between agents; each agent SHOULD be separately identifiable for accounting and authorization/revocation purposes.
    4. Need To Know Information Sharing: Information made available to intermediary agents between the Sender and Receiver SHOULD be minimized to what is needed to perform the agent's role in the process.
    "},{"location":"concepts/0094-cross-domain-messaging/#assumptions","title":"Assumptions","text":"

    The following are assumptions upon which this RFC is predicated.

    "},{"location":"concepts/0094-cross-domain-messaging/#terminology","title":"Terminology","text":"

    The following terms are used in this RFC with the following meanings:

    "},{"location":"concepts/0094-cross-domain-messaging/#diddoc","title":"DIDDoc","text":"

    The term \"DIDDoc\" is used in this RFC as it is defined in the DID Specification:

    A DID can be resolved to get its corresponding DIDDoc by any Agent that needs access to the DIDDoc. This is true whether talking about a DID on a Public Ledger, or a pairwise DID (using the did:peer method) persisted only to the parties of the relationship. In the case of pairwise DIDs, it's the (implementation specific) domain's responsibility to ensure such resolution is available to all Agents requiring it within the domain.

    "},{"location":"concepts/0094-cross-domain-messaging/#messages-are-private","title":"Messages are Private","text":"

    Agent Messages sent from a Sender to a Receiver SHOULD be private. That is, the Sender SHOULD encrypt the message with a public key for the Receiver. Any agent in between the Sender and Receiver will know only to whom the message is intended (by DID and possibly keyname within the DID), not anything about the message.

    "},{"location":"concepts/0094-cross-domain-messaging/#the-sender-knows-the-receiver","title":"The Sender Knows The Receiver","text":"

    This RFC assumes that the Sender knows the Receiver's DID and, within the DIDDoc for that DID, the keyname to use for the Receiver's Agent. How the Sender knows the DID and keyname to send the message is not defined within this RFC - that is a higher level concern.

    The Receiver's DID MAY be a public or pairwise DID, and MAY be on a Public Ledger or only shared between the parties of the relationship.

    "},{"location":"concepts/0094-cross-domain-messaging/#example-domain-and-diddoc","title":"Example: Domain and DIDDoc","text":"

    The following is an example of an arbitrary pair of domains that will be helpful in defining the requirements in this RFC.

    In the diagram above:

    "},{"location":"concepts/0094-cross-domain-messaging/#bobs-did-for-his-relationship-with-alice","title":"Bob's DID for his Relationship with Alice","text":"

    Bob\u2019s domain has 3 devices he uses for processing messages - two phones (4 and 5) and a cloud-based agent (6). However, in Bob's relationship with Alice, he ONLY uses one phone (4) and the cloud-based agent (6). Thus the key for device 5 is left out of the DIDDoc (see below).

    Note that the keyname for the Routing Agent (3) is called \"routing\". This is an example of the kind of convention needed to allow the Sender's agents to know the keys for Agents with a designated role in the receiving domain - as defined in the DIDDoc Conventions RFC.

    {\n  \"@context\": \"https://w3id.org/did/v1\",\n  \"id\": \"did:sov:1234abcd\",\n  \"publicKey\": [\n    {\"id\": \"routing\", \"type\": \"RsaVerificationKey2018\",  \"owner\": \"did:sov:1234abcd\",\"publicKeyPem\": \"-----BEGIN PUBLIC X\u2026\"},\n    {\"id\": \"4\", \"type\": \"RsaVerificationKey2018\",  \"owner\": \"did:sov:1234abcd\",\"publicKeyPem\": \"-----BEGIN PUBLIC 9\u2026\"},\n    {\"id\": \"6\", \"type\": \"RsaVerificationKey2018\",  \"owner\": \"did:sov:1234abcd\",\"publicKeyPem\": \"-----BEGIN PUBLIC A\u2026\"}\n  ],\n  \"authentication\": [\n    {\"type\": \"RsaSignatureAuthentication2018\", \"publicKey\": \"did:sov:1234abcd#4\"}\n  ],\n  \"service\": [\n    {\n      \"id\": \"did:example:123456789abcdefghi;did-communication\",\n      \"type\": \"did-communication\",\n      \"priority\" : 0,\n      \"recipientKeys\" : [ \"did:example:1234abcd#4\" ],\n      \"routingKeys\" : [ \"did:example:1234abcd#3\" ],\n      \"serviceEndpoint\" : \"did:example:xd45fr567794lrzti67;did-communication\"\n    }\n  ]\n}\n

    For the purposes of this discussion we are defining the message flow to be:

    1 \u2192 2 \u2192 8 \u2192 9 \u2192 3 \u2192 4

    However, that flow is arbitrary and only one hop is actually required:

    "},{"location":"concepts/0094-cross-domain-messaging/#encryption-envelopes","title":"Encryption Envelopes","text":"

    An encryption envelope is used to transport any Agent Message from one Agent directly to another. In our example message flow above, there are five encryption envelopes sent, one for each hop in the flow. The separate Encryption Envelope RFC covers those details.

    "},{"location":"concepts/0094-cross-domain-messaging/#agent-message-format","title":"Agent Message Format","text":"

    An Agent Message defines the format of messages processed by Agents. Details about the general form of Agent Messages can be found in the Agent Messages RFC.

    This RFC specifies (below) the \"Forward\" message type, a part of the \"Routing\" family of Agent Messages.

    "},{"location":"concepts/0094-cross-domain-messaging/#did-diddoc-and-routing","title":"DID, DIDDoc and Routing","text":"

    A DID owned by the Receiver is resolvable by the Sender as a DIDDoc using either a Public Ledger or using pairwise DIDs based on the did:peer method. The related DIDcomm DIDDoc Conventions RFC defines the required contents of a DIDDoc created by the receiving entity. Notably, the DIDDoc given to the Sender by the Receiver specifies the required routing of the message through an optional set of mediators.

    "},{"location":"concepts/0094-cross-domain-messaging/#cross-domain-interoperability","title":"Cross Domain Interoperability","text":"

    A key goal for interoperability is that we want other domains to know just enough about the configuration of a domain to which they are delivering a message, but no more. The following walks through those minimum requirements.

    "},{"location":"concepts/0094-cross-domain-messaging/#required-the-did-and-diddoc","title":"Required: The DID and DIDDoc","text":"

    As noted above, the Sender of an Agent to Agent Message has the DID of the Receiver, and knows the key(s) from the DIDDoc to use for the Receiver's Agent(s).

    Example: Alice wants to send a message from her phone (1) to Bob's phone (4). She has Bob's B:did@A:B, the DID/DIDDoc Bob created and gave to Alice to use for their relationship. Alice created A:did@A:B and gave that to Bob, but we don't need to use that in this example. The content of the DIDDoc for B:did@A:B is presented above.

    "},{"location":"concepts/0094-cross-domain-messaging/#required-end-to-end-encryption-of-the-agent-message","title":"Required: End-to-End encryption of the Agent Message","text":"

    The Agent Message from the Sender SHOULD be hidden from all Agents other than the Receiver. Thus, it SHOULD be encrypted with the public key of the Receiver. Based on our assumptions, the Sender can get the public key of the Receiver agent because they know the DID#keyname string, can resolve the DID to the DIDDoc and find the public key associated with DID#keyname in the DIDDoc. In our example above, that is the key associated with \"did:sov:1234abcd#4\".

    Most Sender-to-Receiver messages will be sent between parties that have shared pairwise DIDs (using the did:peer method). When that is true, the Sender will (usually) AuthCrypt the message. If that is not the case, or for some other reason the Sender does not want to AuthCrypt the message, AnonCrypt will be used. In either case, the Indy-SDK pack() function handles the encryption.

    If there are mediators specified in the DID service endpoint for the Receiver agent, the Sender must wrap the message for the Receiver in a 'Forward' message for each mediator. It is assumed that the Receiver can determine the from DID based on the to DID (or the Sender's verkey) using their pairwise relationship.

    {\n  \"@type\" : \"https://didcomm.org/routing/1.0/forward\",\n  \"@id\": \"54ad1a63-29bd-4a59-abed-1c5b1026e6fd\",\n  \"to\"   : \"did:sov:1234abcd#4\",\n  \"msg\"  : { json object from <pack(AgentMessage,valueOf(did:sov:1234abcd#4), privKey(A.did@A:B#1))> }\n}\n
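The per-mediator wrapping described above can be sketched as follows. `encrypt_for` stands in for the Indy-SDK pack() call and is an assumed callable, not a real API; the routing keys are assumed to be ordered innermost (closest to the Receiver) first.

```python
import uuid

def wrap_for_mediators(packed_msg, recipient_key, routing_keys, encrypt_for):
    """Wrap an already-pack()'d Agent Message in one 'forward' per mediator.

    encrypt_for(key, message) stands in for pack(); routing_keys are
    assumed ordered innermost-first. Illustrative sketch only.
    """
    to, payload = recipient_key, packed_msg
    for mediator_key in routing_keys:
        payload = encrypt_for(mediator_key, {
            "@type": "https://didcomm.org/routing/1.0/forward",
            "@id": str(uuid.uuid4()),
            "to": to,
            "msg": payload,
        })
        to = mediator_key
    return payload
```

Each mediator can thus decrypt only its own layer, learning the next `to` value and nothing about the inner message.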

    Notes

    The bullet above about the unpack() function returning the signer's public key deserves some additional attention. The Receiver of the message knows from the \"to\" field the DID to which the message was sent. From that, the Receiver is expected to be able to determine the DID of the Sender, and from that, access the Sender's DIDDoc. However, knowing the DIDDoc is not enough to know from whom the message was sent - which key was used to send the message, and hence, which Agent controls the sending private key. This information MUST be made known to the Receiver (from unpack()) when AuthCrypt is used so that the Receiver knows which key was used to send the message and can, for example, use that key in responding to the arriving message.

    The Sender can now send the Forward Agent Message on its way via the first of the encryption envelopes. In our example, the Sender sends the Agent Message to 2 (in the Sender's domain), who in turn sends it to 8. That, of course, is arbitrary - the Sender's domain could have any configuration of Agents for outbound messages. The Agent Message above is passed unchanged, with each Agent able to see the @type, to and msg fields as described above. This continues until the outer forward message gets to the Receiver's first mediator or the Receiver's agent (if there are no mediators). Each agent decrypts the received encryption envelope and either forwards it (if a mediator) or processes it (if the Receiver Agent). Per the Encryption Envelope RFC, between Agents the Agent Message is pack()'d and unpack()'d as appropriate or required.

    The diagram below shows an example use of the forward messages to encrypt the message all the way to the Receiver with two mediators in between - a shared domain endpoint (aka https://agents-r-us.com) and a routing agent owned by the receiving entity.

    "},{"location":"concepts/0094-cross-domain-messaging/#required-cross-domain-encryption","title":"Required: Cross Domain Encryption","text":"

    While within a domain the Agents MAY choose to use encryption or not when sending messages from Agent to Agent, encryption MUST be used when sending a message into the Receiver's domain. The endpoint agent unpack()'s the encryption envelope and processes the message - usually a forward. Note that within a domain, the agents may use arbitrary relays for messages, unknown to the Sender. How the agents within the domain know where to send the message is implementation-specific - likely some sort of dynamic DID-to-Agent routing table. If the path to the receiving agent includes mediators, the message must go through those mediators in order (for example, through 3 in our example) as the message being forwarded has been encrypted for the mediators.

    "},{"location":"concepts/0094-cross-domain-messaging/#required-mediators-process-forward-messages","title":"Required: Mediators Process Forward Messages","text":"

    When a mediator (eventually) receives the message, it determines that it is the target of the (current) outer forward Agent Message and so decrypts the message's msg value to reveal the inner \"Forward\" message. Mediators use their (implementation-specific) knowledge to map from the to field to deliver the message to the physical endpoint of the next agent to process the message on its way to the Receiver.

    "},{"location":"concepts/0094-cross-domain-messaging/#required-the-receiver-app-agent-decryptsprocesses-the-agent-message","title":"Required: The Receiver App Agent Decrypts/Processes the Agent Message","text":"

    When the Receiver Agent receives the message, it determines it is the target of the forward message, decrypts the payload and processes the message.

    "},{"location":"concepts/0094-cross-domain-messaging/#exposed-data","title":"Exposed Data","text":"

    The following summarizes the information needed by the Sender's agents:

    The DIDDoc will have a public key entry for each additional Agent message Receiver and each mediator.

    In many cases, the entry for the endpoint agent should be a public DID, as it will likely be operated by an agency (for example, https://agents-r-us.com) rather than by the Receiver entity (for example, a person). By making that a public DID in that case, the agency can rotate its public key(s) for receiving messages in a single operation, rather than having to notify each identity owner and in turn having them update the public key in every pairwise DID that uses that endpoint.

    "},{"location":"concepts/0094-cross-domain-messaging/#data-not-exposed","title":"Data Not Exposed","text":"

    Given the sequence specified above, the following data is NOT exposed to the Sender's agents:

    "},{"location":"concepts/0094-cross-domain-messaging/#message-types","title":"Message Types","text":"

    The following Message Types are defined in this RFC.

    "},{"location":"concepts/0094-cross-domain-messaging/#corerouting10forward","title":"Core:Routing:1.0:Forward","text":"

    The core message type \"forward\", version 1.0 of the \"routing\" family is defined in this RFC. An example of the message is the following:

    {\n  \"@type\" : \"https://didcomm.org/routing/1.0/forward\",\n  \"@id\": \"54ad1a63-29bd-4a59-abed-1c5b1026e6fd\",\n  \"to\"   : \"did:sov:1234abcd#4\",\n  \"msg\"  : { json object from <pack(AgentMessage,valueOf(did:sov:1234abcd#4), privKey(A.did@A:B#1))> }\n}\n

    The to field is required and takes one of two forms:

    The first form is used when sending forward messages across one or more agents that do not need to know the details of a domain. The Receiver of the message is the designated Routing Agent in the Receiver Domain, as it controls the key used to decrypt messages sent to the domain, but not to a specific Agent.

    The second form is used when the precise key (and hence, the Agent controlling that key) is used to encrypt the Agent Message placed in the msg field.

    The msg field holds the output of the Indy-SDK pack() function, which encrypts the Agent Message to be forwarded. The Sender calls pack() with the arguments suitable to AnonCrypt or AuthCrypt the message. The pack() and unpack() functions are described in more detail in the Encryption Envelope RFC.

    "},{"location":"concepts/0094-cross-domain-messaging/#reference","title":"Reference","text":"

    See the other RFCs referenced in this document:

    "},{"location":"concepts/0094-cross-domain-messaging/#drawbacks","title":"Drawbacks","text":"

    N/A

    "},{"location":"concepts/0094-cross-domain-messaging/#rationale-and-alternatives","title":"Rationale and alternatives","text":"

    A number of discussions were held about this RFC. In those discussions, the rationale for the RFC evolved into the text, and the alternatives were eliminated. See prior versions of the superseded HIPE (in status section, above) for details.

    A suggestion was made that the following optional parameters could be defined in the \"routing/1.0/forward\" message type:

    The optional parameters have been left off for now, but could be added in this RFC or to a later version of the message type.

    "},{"location":"concepts/0094-cross-domain-messaging/#prior-art","title":"Prior art","text":"

    N/A

    "},{"location":"concepts/0094-cross-domain-messaging/#unresolved-questions","title":"Unresolved questions","text":"

    N/A

    "},{"location":"concepts/0094-cross-domain-messaging/#implementations","title":"Implementations","text":"

    The following lists the implementations (if any) of this RFC. Please do a pull request to add your implementation. If the implementation is open source, include a link to the repo or to the implementation within the repo. Please be consistent in the \"Name\" field so that a mechanical processing of the RFCs can generate a list of all RFCs supported by an Aries implementation.

    Name / Link Implementation Notes"},{"location":"concepts/0103-indirect-identity-control/","title":"Aries RFC 0103: Indirect Identity Control","text":""},{"location":"concepts/0103-indirect-identity-control/#summary","title":"Summary","text":"

    Compares and contrasts three forms of indirect identity control that have much in common and that should be explored together: delegation, guardianship, and controllership. Recommends mechanisms that allow identity technology to model each with flexibility, precision, and safety. These recommendations can be applied to many decentralized identity and credentialing ecosystems--not just to the ones best known in Hyperledger circles.

    "},{"location":"concepts/0103-indirect-identity-control/#motivation","title":"Motivation","text":"

    In most situations, we expect identity owners to directly control their own identities. This is the ideal that gives \"self-sovereign identity\" its name. However, control is not so simple in many situations:

    We need to understand how such situations color the interactions we have in an identity ecosystem.

    "},{"location":"concepts/0103-indirect-identity-control/#tutorial","title":"Tutorial","text":"

    Although the Sovrin Foundation advocates a specific approach to verifiable credentials, its glossary offers a useful analysis of indirect identity control that applies to any approach. Appendix C of the Sovrin Glossary V2 defines three forms of indirect identity control relationship--delegation, guardianship, controllership--matching the three bulleted examples above. Reviewing that document is highly recommended. It is the product of careful collaboration by experts in many fields, includes useful examples, and is clear and thorough.

    Here, we will simply reproduce two diagrams as a summary:

    Note: The type of delegation described in Appendix C, and the type we focus on in this doc, is one that crosses identity boundaries. There is another type that happens within an identity, as Alice delegates work to her various agents. For the time being, ignore this intra-identity delegation; it is explored more carefully near the end of the Delegation Details doc.

    "},{"location":"concepts/0103-indirect-identity-control/#commonalities","title":"Commonalities","text":"

    All of these forms of identity control share the issue of indirectness. All of them introduce risks beyond the ones that dominate in direct identity management. All of them complicate information flows and behavior. And they are inter-related; guardians and controllers often need to delegate, delegates may become controllers, and so forth.

    The solutions for each ought to have much in common, too--and that is the case. These forms of indirect identity control use similarly structured credentials in similar ways, in the context of similarly structured trust frameworks. Understanding and implementing support for one of them should give developers and organizations a massive headstart in implementing the others.

    Before we provide details about solutions, let's explore what's common and unique about each of the three forms of indirect identity control.

    "},{"location":"concepts/0103-indirect-identity-control/#compare-and-contrast","title":"Compare and Contrast","text":""},{"location":"concepts/0103-indirect-identity-control/#delegation","title":"Delegation","text":"

    Delegation can be either transparent or opaque, depending on whether it's obvious to an external party that a delegate is involved. A lawyer who files a court motion in their own name, but on behalf of a client, is a transparent delegate. A nurse who transcribes a doctor's oral instructions may be performing record-keeping as an opaque delegate, if the nurse is unnamed in the record.

    Transparent delegation is safer and provides a better audit trail than opaque delegation. It is closer to the ethos of self-sovereign identity. However, opaque delegation is a fact of life; sometimes a CEO wants her personal assistant to send a note or meeting invitation in a way that impersonates her rather than explicitly represents her.

    Delegation needs constraints. These can take many forms, such as:

    "},{"location":"concepts/0103-indirect-identity-control/#constraints","title":"Constraints","text":"

    Delegation needs to be revokable.

    Delegates should not mix identity data for themselves with data that may belong to the delegator.

    The rules of how delegation works need to be spelled out in a trust framework.

    Sometimes, the indirect authority of a delegate should be recursively extensible (allow sub-delegation). Other times, this may be inappropriate.

    Use cases and other specifics of delegation are explored in greater depth in the Delegation Details doc.

    "},{"location":"concepts/0103-indirect-identity-control/#guardianship","title":"Guardianship","text":"

    Guardianship has all the bolded properties of delegation: transparent or opaque styles, constraints, revocation, the need to not mix identity data, the need for a trust framework, and the potential for recursive extensibility. It also adds some unique considerations.

    Since guardianship does not always derive from dependent consent (that is, the dependent is often unable to exercise sovereignty), the dependent in a guardianship relationship is particularly vulnerable to abuse from within.

    Because of this risk, guardianship is the most likely of the three forms of indirect control to require an audit trail and to involve legal formalities. Its trust frameworks are typically the most nuanced and complex.

    Guardianship is also the form of indirect identity control with the most complications related to privacy.

    Guardianship must have a rationale -- a justification that explains why the guardian has that status. Not all rationales are equally strong; a child lacking an obvious parent may receive a temporary guardian, but this guardian's status could change if a parent is found. Having a formal rationale allows conflicting guardianship claims to be adjudicated.

    Either the guardian role or specific guardianship duties may be delegated. An example of the former is when a parent leaves on a long, dangerous trip, and appoints a grandparent to be guardian in their absence. An example of the latter is when a parent asks a grandparent to drive a child to the school to sign up for the soccer team. When the guardian role is delegated, the result is a new guardian. When only guardianship duties are delegated, this is simple delegation and ceases to be guardianship.

    Use cases and other specifics of guardianship are explored in greater depth in the Guardianship Details doc.

    "},{"location":"concepts/0103-indirect-identity-control/#controllership","title":"Controllership","text":"

    Controllership shares nearly all bolded features with delegation. It is usually transparent, because things are generally known not to be identity owners in their interactions, and are assumed not to control themselves.

    Like guardianship, controllership has a rationale. Usually, it is rooted in property ownership, but occasionally it might derive from court appointment. Also like guardianship, either the role or specific duties of controllership may be delegated. When controllership involves animals instead of machines, it may have risks of abuse and complex protections and trust frameworks.

    Unlike guardianship, controlled things usually require minimal privacy. However, things that constantly identify their controller(s) in a correlatable fashion may undermine the privacy of controllers in ways that are unexpected.

    Use cases and other specifics of controllership are explored in greater depth in the Controllership Details doc.

    "},{"location":"concepts/0103-indirect-identity-control/#solution","title":"Solution","text":"

    We recommend that all three forms of indirect identity control be modeled with some common ingredients:

    Here, \"proxy\" is used as a generic cover term for all three forms of indirect identity control. Each ingredient has a variant for each form (e.g., delegate credential, guardian credential, controller credential), and they have minor differences. However, they work so similarly that they'll be described generically, with differences noted where necessary.

    "},{"location":"concepts/0103-indirect-identity-control/#proxy-trust-framework","title":"Proxy Trust Framework","text":"

    A proxy trust framework is a published, versioned document (or collection of documents) that's accessible by URI. Writing one doesn't have to be a massive undertaking; see the sample guardianship trust framework for a simple example.

    It should answer at least the following questions:

    1. What is the trust framework's formal name, version, and URI? (The name cannot include a / character due to how it's paired with version in credential type fields. The version must follow semver rules.)

    2. In what geos and legal jurisdictions is it valid?

    3. On what rationales are proxies appointed? (For guardianship, these might include values like kinship and court_order. Each rationale needs to be formally defined, named, and published at a URI, because proxy credentials will reference them. This question is mostly irrelevant to delegation, where the rationale is always an action of the delegator.)

    4. What are the required and recommended behaviors of a proxy (holder), issuer, and verifier? How will this be enforced?

    5. What permissions vis-a-vis the proxied identity govern proxy actions? (For a delegate, these might include values like sign, pay, or arrange_travel. For a guardian, these might include values like financial, medical, do_not_resuscitate, foreign_travel, or new_relationships. Like rationales, permissions need to be formally defined and referencable by URI.)

    6. What are possible constraints on a proxy? (Constraints are bound to particular proxies, whereas a permission model is bound to the identity that the proxy is controlling; this distinction will make more sense in an example. Some constraints might include geo_radius, jurisdiction, biometric_consent_freshness, and so forth. These values also need to be formally defined and referencable by URI.)

    7. What auditing mechanisms are required, recommended, or allowed?

    8. What appeal mechanisms are required or supported?

    9. What proxy challenge procedures are best practice?

    10. What freshness rules are used for revocation testing and offline mode?
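The naming rules in question 1 are mechanical enough to check in code. Below is a minimal sketch (the helper name is illustrative, not part of any Aries API); it accepts two-part versions such as \"1.0\" because that is the form used by the sample trust frameworks in this RFC:

```python
import re

# Accept semver-style versions, including the two-part "1.0" form used in
# this RFC's examples. (Illustrative helper, not part of any Aries API.)
SEMVER = re.compile(r"^\d+\.\d+(\.\d+)?([-+][0-9A-Za-z.\-]+)?$")

def check_framework_identity(name: str, version: str) -> bool:
    # The name is paired with the version in credential type fields using
    # "/" as a separator, so "/" is forbidden in the name itself.
    if "/" in name:
        return False
    return bool(SEMVER.match(version))
```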

    "},{"location":"concepts/0103-indirect-identity-control/#proxy-credential","title":"Proxy Credential","text":"

    A proxy credential conforms to the Verifiable Credential Data Model 1.0. It can use any style of proof or data format (JSON-LD, JWT, Sovrin ZKP, etc). It is recognizable as a proxy credential by the following characteristics:

    1. Its @context field, besides including the \"https://www.w3.org/2018/credentials/v1\" required of all VCs, also includes a reference to this spec: \"https://github.com/hyperledger/aries-rfcs/concepts/0103-indirect-identity-control\".

    2. Its type field contains, in addition to \"VerifiableCredential\", a string in the format:

      ...where form is one of the letters D (for Delegation), G (for Guardianship), or C (for Controllership), trust framework is the name that a Proxy Trust Framework formally declares for itself, tfver is its version, and variant is a specific schema named in the trust framework. A regex that matches this pattern is: Proxy\.([DGC])/([^/]+)/(\d+[^/]*)/(.+), and an example of a matching string is: Proxy.G/UNICEF Vulnerable Populations Trust Framework/1.0/ChildGuardian.

    3. The metadata fields for the credential include trustFrameworkURI (the value of which is a URI linking to the relevant trust framework), auditURI (the value of which is a URI linking to a third-party auditing service, and which may be constrained or empty as specified in the trust framework), and appealURI (the value of which is a URI linking to an arbitration or adjudication authority for the credential, and which may be constrained or empty as specified in the trust framework).

    4. The credentialSubject section of the credential describes a subject called holder and a subject called proxied. The holder is the delegate, guardian, or controller; the proxied is the delegator, dependent, or controlled thing.

    5. credentialSubject.holder.type must be a URI pointing to a schema for credentialSubject.holder as defined in the trust framework. The schema must include the following fields:

      • role: A string naming the role that the holder plays in the permissioning scheme of the dependent. These roles must be formally defined in the trust framework. For example, a guardian credential might identify the holder (guardian) as playing the next_of_kin role, and this next_of_kin role might be granted a subset of all permissions that are possible for the dependent's identity. A controllership credential for a drone might identify the holder (controller) as playing the pilot role, which has different permissions from the maintenance_crew role.

      • rationaleURI: Required for guardian credentials, optional for the other types. This links to a formal definition in the trust framework of a justification for holding identity control status. For guardians, the rationaleURI might point to a definition of the blood_relative or tribal_member rationale, for example. For controllers, the rationaleURI might point to a definition of legal_appointment or property_owner.

      The schema may also include zero or more credentialSubject.holder.constraint.* fields. These fields would be used to limit the time, place, or circumstances in which the proxy may operate.

    6. credentialSubject.proxied.type must be a URI pointing to a schema for credentialSubject.proxied as defined in the trust framework. The schema must include a permissions field. This field contains an array of SGL rules, each of which is a JSON object in the form:

      {\"grant\": privileges, \"when\": condition}\n

      A complete example for a guardianship use case is provided in the SGL tutorial.

    7. Depending on the proof technology, the credential MAY or MUST contain additional fields under credentialSubject.holder that describe the holder (e.g., the holder's name, DID, biometric, etc.). If the credential is based on ZKP/link secret technologies, these fields may be unnecessary, because the holder can bind their proxy credential to other credentials that prove who they are. If not, then the credential MUST contain such fields.

    8. The credential MUST contain additional fields under credentialSubject.proxied that describe the proxied identity (e.g., a dependent's name or biometric; a pet's RFID tag; a drone's serial number).
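Putting points 1-8 together, the sketch below assembles a minimal guardianship credential as a plain Python dict and checks its type entry against the regex from point 2. All URIs and field values here are illustrative placeholders, not normative.

```python
import re

# Regex from point 2 for recognizing proxy credential type strings.
PROXY_TYPE = re.compile(r"Proxy\.([DGC])/([^/]+)/(\d+[^/]*)/(.+)")

# Minimal skeleton; every URI and value below is a placeholder.
credential = {
    "@context": [
        "https://www.w3.org/2018/credentials/v1",
        "https://github.com/hyperledger/aries-rfcs/concepts/0103-indirect-identity-control",
    ],
    "type": [
        "VerifiableCredential",
        "Proxy.G/UNICEF Vulnerable Populations Trust Framework/1.0/ChildGuardian",
    ],
    "trustFrameworkURI": "https://example.org/tf/1.0",  # placeholder
    "auditURI": "",   # may be empty, where the trust framework allows it
    "appealURI": "",
    "credentialSubject": {
        "holder": {
            "type": "https://example.org/tf/1.0/holder-schema",  # placeholder
            "role": "next_of_kin",
        },
        "proxied": {
            "type": "https://example.org/tf/1.0/proxied-schema",  # placeholder
            "permissions": [
                {"grant": ["medical"], "when": {"roles": "next_of_kin"}},
            ],
        },
    },
}

# Exactly one entry in "type" should match the proxy pattern.
proxy_types = [t for t in credential["type"] if PROXY_TYPE.fullmatch(t)]
```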

    "},{"location":"concepts/0103-indirect-identity-control/#proxy-challenge","title":"Proxy Challenge","text":"

    A proxy challenge is an interaction in which the proxy must justify the control they are exerting over the proxied identity. The heart of the challenge is a request for a verifiable presentation based on a proxy credential, followed by an evaluation of the evidence. This evaluation includes traditional credential verification, but also a comparison of a proxy's role (credentialSubject.holder.role) to permissions (credentialSubject.proxied.permissions), and a comparison of circumstances to constraints (credentialSubject.holder.constraints.*). It may also involve the creation of an audit trail, depending on the value of the auditURI field.
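The role-vs-permissions comparison above can be sketched with a deliberately simplified reading of SGL in which each rule's when condition names a single role. Real SGL conditions are richer (any/all/n combinators); this is only an illustration.

```python
# Simplified sketch: collect the privileges granted to a holder role by
# SGL-style rules of the form {"grant": [...], "when": {"roles": <role>}}.
def granted_privileges(holder_role: str, permissions: list) -> set:
    granted = set()
    for rule in permissions:
        if rule.get("when", {}).get("roles") == holder_role:
            granted.update(rule.get("grant", []))
    return granted
```

A verifier would then check that the attempted action appears in the granted set, and separately compare circumstances against any credentialSubject.holder.constraint.* fields.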

    During the verifiable presentation, the holder MUST disclose all of the following fields:

    In addition, the holder MUST prove that the proxy is the intended holder of the credential, to whatever standard is required by the trust framework. This can be done by disclosing additional fields under credentialSubject.holder, or by proving things about the holder in zero knowledge, if the credential supports ZKPs. In the latter case, proofs about the holder could also come from other credentials in the holder's possession, linked to the proxy credential through the link secret.

    The holder MUST also prove that the proxied identity is correct, to whatever standard is required by the trust framework. This can be done by disclosing additional fields under credentialSubject.proxied, or by proving things about the subject in zero knowledge.

    [TODO: discuss moments when proxy challenges may be vital; see https://docs.google.com/presentation/d/1aq45aUHTOK_WhFEICboXQrp7dalpLm9-MGg77Nsn50s/edit#slide=id.g59fffee7a0_0_39 ]

    [TODO: discuss offline mode, freshness, and revocation]

    "},{"location":"concepts/0103-indirect-identity-control/#reference","title":"Reference","text":"

    A complete sample of a guardianship trust framework and credential schema are attached for reference. Please also see the details about each form of indirect identity control:

    "},{"location":"concepts/0103-indirect-identity-control/#implementations","title":"Implementations","text":"

    The following lists the implementations (if any) of this RFC. Please do a pull request to add your implementation. If the implementation is open source, include a link to the repo or to the implementation within the repo. Please be consistent in the \"Name\" field so that a mechanical processing of the RFCs can generate a list of all RFCs supported by an Aries implementation.

    Name / Link Implementation Notes"},{"location":"concepts/0103-indirect-identity-control/controllership-details/","title":"Controllership Details","text":""},{"location":"concepts/0103-indirect-identity-control/delegation-details/","title":"Delegation Details","text":"

    Three basic approaches to delegation are possible:

    1. Delegate by expressing intent in a DID Doc.
    2. Delegate with verifiable credentials.
    3. Delegate by sharing a wallet.

    The alternative of delegating via the authorization section of a DID Doc (option #1) is unnecessarily fragile, cumbersome, redundant, and expensive to implement. The theory of delegation with DIDs and credentials has been explored thoughtfully in many places (see Prior Art and References). The emergent consensus is:

    "},{"location":"concepts/0103-indirect-identity-control/delegation-details/#use-cases","title":"Use Cases","text":"

    The following use cases are good tests of whether we're implementing delegation properly.

    "},{"location":"concepts/0103-indirect-identity-control/delegation-details/#1-thrift-bank-employees","title":"1. Thrift Bank Employees","text":"

    Thrift Bank wishes to issue employee credentials to its employees, giving them delegated authority to perform certain actions on behalf of the bank (e.g., open their till, unlock the front door, etc.). Thrift has a DID, but wishes to grant credential-issuing authority to its Human Resources Department (which has a separate DID). In turn, the HR department wishes to further delegate this authority to the Personnel Division. Inside the Personnel Division, three employees (Cathy, Stan, and Janet) will ultimately be responsible for issuing the employee credentials.

    "},{"location":"concepts/0103-indirect-identity-control/delegation-details/#2-u-rent-a-car","title":"2. U-Rent-a-Car","text":"

    U-Rent-a-Car is a multinational company that owns a large fleet of vehicles. Its national headquarters issues a credential, C1, to its regional office in Quebec, authorizing U-Rent-a-Car Quebec to delegate driving privileges to customers for cars owned by the parent company. Alice rents a car from U-Rent-a-Car Quebec, which issues her a driving privileges credential, C2. C2 gives Alice the privilege to drive the car from Monday through Friday of a particular week. Alice climbs in the car and uses her C2 credential to prove to the car (which acts as verifier) that she is an authorized driver. When she gets pulled over for speeding on Wednesday, she uses C2 to prove to the police that she is the authorized driver of the car.

    On Thursday night Alice goes to a fancy restaurant and uses valet parking. She issues credential C3 to the valet, allowing him to drive the car within 100 meters of the restaurant for the next 2 hours while she is at the restaurant. The valet uses this credential to drive the car to the parking garage.

    While Alice eats, law enforcement goes to U-Rent-a-Car Quebec with a search warrant for the car; they have discovered that the previous driver of the car was a criminal. They ask U-Rent-a-Car Quebec to revoke C2, because they don't want the car to be driven any more, in case evidence is accidentally destroyed.

    At the end of dinner, Alice asks the valet for her car to be returned. The valet goes to the car and attempts to open the door using C3. The car tests the validity of the delegation chain of C3, discovers that C2 has been revoked (making C3 invalid), and refuses to open the door. Alice has to take an Uber home, and law enforcement officials take possession of the car.
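The car's final check in this story can be sketched as a walk up the delegation chain: a credential is valid only if neither it nor any ancestor has been revoked. The data structures below are illustrative; a real verifier would also check proofs, constraints (such as the 100-meter radius), and expiry.

```python
# Each credential records the credential that authorized its issuance.
C1 = {"id": "C1", "parent": None}  # HQ -> U-Rent-a-Car Quebec
C2 = {"id": "C2", "parent": C1}    # Quebec office -> Alice
C3 = {"id": "C3", "parent": C2}    # Alice -> valet

def chain_valid(cred, revoked):
    """A credential is invalid if it, or anything above it, is revoked."""
    while cred is not None:
        if cred["id"] in revoked:
            return False
        cred = cred["parent"]
    return True
```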

    "},{"location":"concepts/0103-indirect-identity-control/delegation-details/#3-acme-departments","title":"3. Acme Departments","text":"

    Acme wants its HR department to issue Acme Employment Credentials, its Accounting department to issue Purchase Orders and Letters of Credit, its Marketing department to officially sign press releases, and so forth. All of these departments should be provably associated with Acme and acting under Acme\u2019s name in an official capacity.

    "},{"location":"concepts/0103-indirect-identity-control/delegation-details/#4-members-of-an-llc","title":"4. Members of an LLC","text":"

    Like #3, but simpler. Three or four people each need signing authority for the LLC, so the LLC delegates that authority to each of them.

    "},{"location":"concepts/0103-indirect-identity-control/delegation-details/#approaches-to-recursive-delegation","title":"Approaches to recursive delegation","text":"

    [TODO: 1. Root authority delegates directly at every level. 2. Follow the chain. 3. Embed the chain.]

    "},{"location":"concepts/0103-indirect-identity-control/delegation-details/#revocation","title":"Revocation","text":"

    [TODO]

    "},{"location":"concepts/0103-indirect-identity-control/delegation-details/#infra-identity-delegation","title":"Intra-identity Delegation","text":"

    TODO

    "},{"location":"concepts/0103-indirect-identity-control/delegation-details/#prior-art-and-references","title":"Prior Art and References","text":"

    All of the following sources have contributed valuable thinking about delegation:

    "},{"location":"concepts/0103-indirect-identity-control/guardianship-details/","title":"Guardianship Details","text":"

    For a complete walkthrough or demo of how guardianship works, see this demo script.

    "},{"location":"concepts/0103-indirect-identity-control/guardianship-details/#use-cases","title":"Use Cases","text":"

    See https://docs.google.com/presentation/d/1qUYQa7U1jczEFun3a7sB3lKHIprlwd7brfOU9hEJ34U/edit?usp=sharing

    "},{"location":"concepts/0103-indirect-identity-control/guardianship-details/#who-appoints-a-guardian-rationales","title":"Who appoints a guardian (rationales)","text":"

    See https://docs.google.com/presentation/d/1aq45aUHTOK_WhFEICboXQrp7dalpLm9-MGg77Nsn50s/edit#slide=id.g59fffee7a0_0_0

    "},{"location":"concepts/0103-indirect-identity-control/guardianship-details/#transparent-vs-opaque","title":"Transparent vs. Opaque","text":"

    See https://docs.google.com/presentation/d/1aq45aUHTOK_WhFEICboXQrp7dalpLm9-MGg77Nsn50s/edit#slide=id.g59fffee7a0_0_46

    "},{"location":"concepts/0103-indirect-identity-control/guardianship-details/#modes-of-guardianship","title":"Modes of Guardianship","text":"

    Holding-Based, Impersonation, Doc-based

    See https://docs.google.com/presentation/d/1aq45aUHTOK_WhFEICboXQrp7dalpLm9-MGg77Nsn50s/edit#slide=id.g59fffee7a0_0_265

    See also https://docs.google.com/presentation/d/1aq45aUHTOK_WhFEICboXQrp7dalpLm9-MGg77Nsn50s/edit#slide=id.g59fffee7a0_0_280, https://docs.google.com/presentation/d/1aq45aUHTOK_WhFEICboXQrp7dalpLm9-MGg77Nsn50s/edit#slide=id.g59fffee7a0_0_295, https://docs.google.com/presentation/d/1aq45aUHTOK_WhFEICboXQrp7dalpLm9-MGg77Nsn50s/edit#slide=id.g59fffee7a0_0_307

    "},{"location":"concepts/0103-indirect-identity-control/guardianship-details/#guardians-and-wallets","title":"Guardians and Wallets","text":"

    [TODO: refine the \"wallets\" term.] See https://docs.google.com/presentation/d/1aq45aUHTOK_WhFEICboXQrp7dalpLm9-MGg77Nsn50s/edit#slide=id.g59fffee7a0_0_365

    "},{"location":"concepts/0103-indirect-identity-control/guardianship-details/#guardians-and-delegation","title":"Guardians and Delegation","text":"

    TODO

    "},{"location":"concepts/0103-indirect-identity-control/guardianship-details/#privacy-considerations","title":"Privacy Considerations","text":""},{"location":"concepts/0103-indirect-identity-control/guardianship-details/#diffuse-trust","title":"Diffuse Trust","text":""},{"location":"concepts/0103-indirect-identity-control/guardianship-sample/schema/","title":"Sample Guardianship Schema","text":"

    This document presents a sample schema for a guardian credential appropriate to the IRC-as-guardian-of-Mya-in-a-refugee-camp use case. It is accompanied by a sample trust framework.

    The raw schema is here:

    For general background on guardianship and its associated credentials, see this slide presentation.

    "},{"location":"concepts/0103-indirect-identity-control/guardianship-sample/schema/#how-to-use","title":"How to Use","text":"

    The schema documented here could be passed as the attrs arg to the indy_issuer_create_schema() method in libindy. The \"1.0\" in this document's name refers to the fact that we are using Indy 1.0-style schemas; we aren't trying to use the rich schema constructs that will be available to us when the \"schema 2.0\" effort is mature.

    The actual JSON you would need to pass to the indy_issuer_create_schema() method is given in the attached schema.json file. In code, place that file's content in a string variable and pass the variable as the attrs arg. You might use values like \"Red Cross Vulnerable Populations Guardianship Cred\" and \"1.0\" as the name and version args to that same function. Note that creating the schema and writing it to the ledger are separate steps; you can see an example of how to make both calls by looking at the \"Save Schema and Credential Definition\" How-To in Indy SDK.

    See the accompanying trust framework for an explanation of individual fields.

    "},{"location":"concepts/0103-indirect-identity-control/guardianship-sample/trust-framework/","title":"Sample Guardianship Trust Framework","text":"

    This document describes a sample trust framework for guardianship appropriate to the IRC-as-guardian-of-Mya-in-a-refugee-camp use case. It is accompanied by a sample schema for a guardian credential.

    For general background on guardianship and its associated credentials, see this slide presentation.

    The trust framework shown here is a reasonable starting point, and it demonstrates the breadth of issues well. However, it probably would need significantly more depth to provide enough guidance for developers writing production software, and to be legally robust in many different jurisdictions.

    "},{"location":"concepts/0103-indirect-identity-control/guardianship-sample/trust-framework/#name-version-author","title":"Name, Version, Author","text":"

    This is the \"Sovrin ID4All Vulnerable Populations Guardianship Trust Framework\", version \"1.0\". The trust framework is abbreviated in credential names and elsewhere as \"SIVPGTF\". It is maintained by the Sovrin ID4All Working Group. Credentials using the schema described here are known as gcreds.

    "},{"location":"concepts/0103-indirect-identity-control/guardianship-sample/trust-framework/#scope","title":"Scope","text":"

    The trust framework applies to situations where NGOs like the International Red Cross/Red Crescent, UNICEF, or Doctors Without Borders are serving large populations of vulnerable refugees, both children and adults, in formal camps. It assumes that the camps have at least modest, intermittent access to telecommunications, and that they operate with at least tacit approval from relevant legal authorities. It may not provide enough guidance or protections in situations involving active combat, or in legal jurisdictions where the rule of law is very tenuous.

    "},{"location":"concepts/0103-indirect-identity-control/guardianship-sample/trust-framework/#rationales-for-guardianship","title":"Rationales for Guardianship","text":"

    In this framework, guardianship is based on one or more of the following formally defined rationales:

    "},{"location":"concepts/0103-indirect-identity-control/guardianship-sample/trust-framework/#identifying-a-guardian","title":"Identifying a guardian","text":"

    This framework assumes that credentials will use ZKP technology. Thus, no holder attributes are embedded in a gcred except for the holder's blinded link secret. During a guardian challenge, the holder should include appropriate identifying evidence based on ZKP credential linking.

    "},{"location":"concepts/0103-indirect-identity-control/guardianship-sample/trust-framework/#identifying-a-dependent","title":"Identifying a dependent","text":"

    This framework defines the following formal ways to identify a dependent in a gcred:

    These fields should appear in all gcreds. First name should be the name that the dependent acknowledges and answers to, not necessarily the legal first name. Last name may be empty if it is unknown. Birth date may be approximate. Photo is required and must be a color photo of at least 800x800 pixel resolution, taken at the time the guardian credential is issued, showing the dependent only, in good light. At least one of iris and fingerprint is strongly recommended, but neither is required.

    "},{"location":"concepts/0103-indirect-identity-control/guardianship-sample/trust-framework/#permissions","title":"Permissions","text":"

    Guardians may be assigned some or all of the following formally defined permissions in this trust framework:

    "},{"location":"concepts/0103-indirect-identity-control/guardianship-sample/trust-framework/#constraints","title":"Constraints","text":"

    A guardian's ability to control the dependent may be constrained in the following formal ways by guardian credentials that use this trust framework:

    "},{"location":"concepts/0103-indirect-identity-control/guardianship-sample/trust-framework/#boundary","title":"Boundary","text":"

    A guardian can only operate within named boundaries, such as the boundaries of a country, province, city, military command, river, etc. Boundaries are specified as a localized, comma-separated list of strings, where each locale section begins with a | (pipe) character, followed by an ISO639 language code, followed by a : (colon) character, followed by the data. All localized values must describe the same constraints; if one locale's description is more permissive than another's, the most restrictive interpretation must be used. An example might be:

    \"constraints.boundaries\": \"|en: West side of Euphrates river, within Baghdad city limits\n    |es: lado oeste del r\u00edo Eufrates, dentro del centro de Bagdad\n    |fr: c\u00f4t\u00e9 ouest de l'Euphrate, dans les limites de la ville de Bagdad\n    |ar: \u0627\u0644\u062c\u0627\u0646\u0628 \u0627\u0644\u063a\u0631\u0628\u064a \u0645\u0646 \u0646\u0647\u0631 \u0627\u0644\u0641\u0631\u0627\u062a \u060c \u062f\u0627\u062e\u0644 \u062d\u062f\u0648\u062f \u0645\u062f\u064a\u0646\u0629 \u0628\u063a\u062f\u0627\u062f\"\n
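A parser for this localized format can be sketched in a few lines (an illustration of the syntax above, not a normative implementation):

```python
# Split a localized constraint value into {language_code: text}.
# Each section starts with "|", an ISO 639 code, and ":"; internal
# whitespace (including wrapped lines) is normalized to single spaces.
def parse_localized(value: str) -> dict:
    sections = {}
    for part in value.split("|")[1:]:
        lang, _, text = part.partition(":")
        sections[lang.strip()] = " ".join(text.split())
    return sections
```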
    "},{"location":"concepts/0103-indirect-identity-control/guardianship-sample/trust-framework/#point-of-origin-and-radius","title":"Point of Origin and Radius","text":"

    The constraints.point_of_origin and constraints.radius_km fields are an additional or alternative way to specify a geographical constraint. They must be used together. Point of origin is a string that may use latitude/longitude notation (e.g., \"@40.4043328,-111.7761829,15z\") or a landmark. Landmarks must be localized as described previously. Radius is an integer measured in kilometers.

    \"constraints.point_of_origin\": \"|en: Red Crescent Sunrise Camp\"\n\"constraints.radius_km\": 10\n
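When the point of origin is given in latitude/longitude form, enforcing the radius is a distance computation. Below is a sketch using the haversine formula; landmark-style origins would first need to be resolved to coordinates, which is out of scope here.

```python
import math

def within_radius(origin, point, radius_km):
    """True if point is within radius_km of origin (both (lat, lon) in degrees)."""
    lat1, lon1 = map(math.radians, origin)
    lat2, lon2 = map(math.radians, point)
    a = (math.sin((lat2 - lat1) / 2) ** 2
         + math.cos(lat1) * math.cos(lat2) * math.sin((lon2 - lon1) / 2) ** 2)
    distance_km = 2 * 6371 * math.asin(math.sqrt(a))  # mean Earth radius
    return distance_km <= radius_km
```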
    "},{"location":"concepts/0103-indirect-identity-control/guardianship-sample/trust-framework/#jurisdictions","title":"Jurisdictions","text":"

    This is a comma-separated list of legal jurisdictions where the guardianship applies. It is also localized:

    \"constraints.jurisdictions\": \"|en: EU, India, Bangladesh\"\n
    "},{"location":"concepts/0103-indirect-identity-control/guardianship-sample/trust-framework/#trigger-and-circumstances","title":"Trigger and Circumstances","text":"

    These are human-friendly descriptions of circumstances that must apply in order to make the guardian's status active. Circumstances may be used in conjunction with a trigger. It is vital that the wording of these fields be carefully chosen to minimize ambiguity; carelessness could invite abuse. Note that each of these fields could be used separately. A trigger by itself would unconditionally confer guardianship status; circumstances without a trigger would require re-evaluation with every guardianship challenge and might be used as long as an adult is unconscious or diagnosed with dementia, or while traveling with a child, for example.

    \"constraints.trigger\": \"|en: Death of parent\"\n\"constraints.circumstances\": \"|en: While a parent or adult sibling is unavailable, and no\n    new guardian has been adjudicated.\n    |ar: \u0641\u064a \u062d\u064a\u0646 \u0623\u0646 \u0623\u062d\u062f \u0627\u0644\u0648\u0627\u0644\u062f\u064a\u0646 \u0623\u0648 \u0627\u0644\u0623\u0634\u0642\u0627\u0621 \u0627\u0644\u0628\u0627\u0644\u063a\u064a\u0646 \u063a\u064a\u0631 \u0645\u062a\u0648\u0641\u0631 \u060c \u0648\u0644\u064a\u0633\n         \u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0\u00a0 \u0627\u0644\u0648\u0635\u064a \u0627\u0644\u062c\u062f\u064a\u062f \u062a\u0645 \u0627\u0644\u0641\u0635\u0644 \u0641\u064a\u0647.\"\n
    "},{"location":"concepts/0103-indirect-identity-control/guardianship-sample/trust-framework/#timing","title":"Timing","text":"

    These allow calendar restrictions. Both start time and end time are expressed as ISO8601 timestamps in UTC timezone, but can be limited to day- instead of hour-and-minute-precision (in which case timezone is irrelevant). Start time is inclusive, whereas end time is exclusive (as soon as the date and time equals or exceeds end time, the guardianship becomes invalid). Either value can be used by itself, in addition to being used in combination.

    \"constraints.startTime\": \"2019-07-01T18:00\"\n\"constraints.endTime\": \"2019-08-01\"\n
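The inclusive-start / exclusive-end rule above can be checked with a few lines of code. A sketch (function names are illustrative) that accepts both day-precision and minute-precision values:

```python
from datetime import datetime, timezone

def parse_ts(value):
    """Accept ISO 8601 at day- or minute-precision; UTC assumed."""
    fmt = "%Y-%m-%d" if len(value) == 10 else "%Y-%m-%dT%H:%M"
    return datetime.strptime(value, fmt).replace(tzinfo=timezone.utc)

def guardianship_active(now, start=None, end=None):
    """Start is inclusive; end is exclusive. Either bound may be absent."""
    if start is not None and now < parse_ts(start):
        return False
    if end is not None and now >= parse_ts(end):
        return False
    return True

now = datetime(2019, 7, 15, tzinfo=timezone.utc)
print(guardianship_active(now, "2019-07-01T18:00", "2019-08-01"))  # True
```

Note that the instant exactly equal to the end time already fails the check, matching the rule that the guardianship becomes invalid as soon as the time equals or exceeds end time.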
    "},{"location":"concepts/0103-indirect-identity-control/guardianship-sample/trust-framework/#auditing","title":"Auditing","text":"

    It is strongly recommended that an audit trail be produced any time a guardian performs any action on behalf of the dependent, except for school and necessaries. Auditable events are reported by generating a JSON document in the following format:

    {\n    \"@type\": \"SIVPGTF audit/1.0\",\n    \"event_time\": \"2019-07-25T18:03:26\",\n    \"event_place\": \"@40.4043328,-111.7761829,15z\",\n    \"challenger\": \"amy.smith@redcross.org\",\n    \"witness\": \"fred.jones@redcross.org\",\n    \"guardian\": \"Farooq Abdul Sami\",\n    \"rationale\": \"natural parent\",\n    \"dependent\": \"Isabel Sami, DOB 2009-05-21\",\n    \"event\": \"enroll in class, receive books\",\n    \"justifying_permissions\": \"school, necessaries\",\n    \"evidence\": // base64-encoded photo of Farooq and Isabel\n}\n
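Software producing or ingesting these reports should reject records missing core fields. A minimal validator sketch; the required-field set shown is an illustrative assumption, not mandated by the framework:

```python
import json

# Assumed-required fields for an audit record (illustrative subset).
REQUIRED = {"@type", "event_time", "event_place", "guardian",
            "dependent", "event", "justifying_permissions"}

def validate_audit(record_json):
    """Parse an audit report and check that expected fields are present."""
    record = json.loads(record_json)
    missing = REQUIRED - record.keys()
    if missing:
        raise ValueError("audit record missing: %s" % sorted(missing))
    return record

record = validate_audit(json.dumps({
    "@type": "SIVPGTF audit/1.0",
    "event_time": "2019-07-25T18:03:26",
    "event_place": "@40.4043328,-111.7761829,15z",
    "guardian": "Farooq Abdul Sami",
    "dependent": "Isabel Sami, DOB 2009-05-21",
    "event": "enroll in class, receive books",
    "justifying_permissions": "school, necessaries",
}))
```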
    "},{"location":"concepts/0103-indirect-identity-control/guardianship-sample/trust-framework/#appeal","title":"Appeal","text":"

    NGO staff (who receive delegated authority from the NGO that acts as guardian), and a council of 5 grandmothers maintain a balance of powers. Decisions of either group may be appealed to the other. Conformant NGOs must identify a resource that can adjudicate an escalated appeal, and this resource must be independent in all respects--legal, financial, human, and otherwise--from the NGO. This resource must have contact information in the form of a phone number, web site, or email address, and the contact info must be provided in the guardian credential in the appeal_uri field.

    "},{"location":"concepts/0103-indirect-identity-control/guardianship-sample/trust-framework/#freshness-and-offline-operation","title":"Freshness and Offline Operation","text":"

    [TODO]

    "},{"location":"concepts/0103-indirect-identity-control/guardianship-sample/trust-framework/#revocation","title":"Revocation","text":"

    [TODO]

    "},{"location":"concepts/0103-indirect-identity-control/guardianship-sample/trust-framework/#best-practices","title":"Best Practices","text":""},{"location":"concepts/0104-chained-credentials/","title":"Aries RFC 0104: Chained Credentials","text":""},{"location":"concepts/0104-chained-credentials/#note-editable-images","title":"Note: editable images","text":"

    See here for original images used in this RFC.

    "},{"location":"concepts/0104-chained-credentials/#note-terminology-update","title":"Note: terminology update","text":"

    \"Chained credentials\" were previously called \"delegatable credentials.\" The new term is broader and more accurate. Delegation remains a use case for the mechanism, but is no longer its exclusive focus.

    "},{"location":"concepts/0104-chained-credentials/#summary","title":"Summary","text":"

    Describes a set of conventions, collectively called chained credentials, that allows data in a verifiable credential (VC) to be traced back to its origin while retaining its verifiable quality. This chaining alters trust dynamics. It means that issuers late in a chain can skip complex issuer setup, and do not need the same strong, globally recognizable reputation that's important for true roots of trust. It increases the usefulness of offline verification. It enables powerful delegation of privileges, which unlocks many new verifiable credential use cases.

    Chained credentials do not require any modification to the standard data model for verifiable credentials; rather, they leverage the data model in a simple, predictable way. Chaining conventions work (with some feature variations) for any W3C-conformant verifiable credential type, not just the ones developed inside Hyperledger.

    "},{"location":"concepts/0104-chained-credentials/#note-object-capabilities","title":"Note: object capabilities","text":"

    When chained credentials are used to delegate, the result is an object capabilities (OCAP) solution similar to ZCAP-LD in scope, features, and intent. However, such chained capabilities accomplish their goals a bit differently. See here for an explanation of the divergence and redundancy.

    "},{"location":"concepts/0104-chained-credentials/#note-sister-rfc","title":"Note: sister RFC","text":"

    This RFC complements Aries RFC 0103: Indirect Identity Control. That doc describes how delegation (and related control mechanisms like guardianship and controllership) can be represented in credentials and governed; this one describes an underlying infrastructure to enable such a model. The ZKP implementation of this RFC comes from Hyperledger Ursa and depends on cryptography described by Camenisch et al. in 2017.

    "},{"location":"concepts/0104-chained-credentials/#motivation","title":"Motivation","text":"

    There is a tension between the decentralization that we want in a VC ecosystem, and the way that trust tends to centralize because knowledge and reputation are unevenly distributed. We want anyone to be able to attest to anything they like--but we know that verifiers care very much about the reputation of the parties that make those attestations.

    We can say that verifiers will choose which issuers they trust. However, this places a heavy burden on them--verifiers can't afford to vet every potential issuer of credentials they might encounter. The result will be a tendency to accept credentials only from a short list of issuers, which leads back to centralization.

    This tendency also creates problems with delegation. If all delegation has to be validated through a few authorities, a lot of the flexibility and power of delegation is frustrated.

    We'd like a VC landscape where a tiny startup can issue an employment credential with holder attributes taken as seriously as one from a massive global conglomerate--and with no special setup by verifiers to trust them equally. And we'd like parents to be able to delegate childcare decisions to a babysitter on the spur of the moment--and have the babysitter be able to prove it when she calls an ambulance.

    "},{"location":"concepts/0104-chained-credentials/#tutorial","title":"Tutorial","text":""},{"location":"concepts/0104-chained-credentials/#data-provenance","title":"Data provenance","text":"

    Our confidence in data depends on the data's origin and chain of custody--its provenance.

    Journalists and academics cite sources. The highest quality sources explain how primary data was derived, and what inferences are reasonable to draw from it. Better sources, and better links to those sources, create better trust.

    With credentials, the direct reporter of data is the issuer--but the issuer is not always the data's source. When Acme's HR department issues an employment credential that includes Bob the employee's name, the source of Bob's name is probably government-issued ID, not Acme's subjective opinion. Acme is reporting data that originated elsewhere.

    Acme should cite its sources. Even when citations are unstructured and unsigned, they may still be helpful to humans. But we may be able to do better. If the provenance of an employee's name is verifiable in the same way as other credential data, then Acme's reputation with respect to that assertion becomes almost unimportant; the data's ability to foster trust is derived from the reputation of its true source, plus the algorithm that verifies that source.

    This matters.

    One of the challenges with traditional trust on the web is the all-or-nothing model of trust for certificate authorities. A website in an obscure corner of the globe uses an odd CA; browser manufacturers must debate whether that CA deserves to be on the list of globally trusted attesters. If yes, then any cert the CA issues will be silently believed; if no, then none will. UX pressure has often decided the debate in favor of trust by default; the result has been very long lists of trusted CAs, and a corresponding parade of junk certificates and abuse.

    Provenanced data helps verifiable credentials avoid the same conundrum. The set of original sources for a person's legal name is far smaller than the set of secondary entities that might issue credentials containing that data, so verifiers need only a short list of trusted sources for that data, no matter how many issuers they see. When they evaluate an employment credential, they will be able to see that the employee's name comes from a passport issued by the government, while the hire date is directly attested by the company. This lets the verifier nuance trust in granular and useful ways.

    "},{"location":"concepts/0104-chained-credentials/#delegation-as-provenance-of-authority","title":"Delegation as provenance of authority","text":"

    Delegation can be modeled as a data provenance issue, where the data in question is an authorization. Suppose Alice, the CEO of Thrift Bank, has the authority to do many tasks, and that one of them is to negotiate contracts. As the company grows, she decides that the company needs a role called \"Corporate Counsel\", and she hires Carl for the job. She wants to give Carl a credential that says he has the authority to negotiate contracts. The provenance of Carl's authority is Alice's own authority.

    Notice how parallel this diagram is to the previous one.

    "},{"location":"concepts/0104-chained-credentials/#chaining","title":"Chaining","text":"

    Both of the examples given above imagine a single indirection between a data source and the issuer who references it. But of course many use cases will be far more complex. Perhaps the government attests Bob's name; this becomes the basis for Bob's employer's attestation, which in turn becomes the basis for an attestation by the contractor that processes payroll for Bob's employer. Or perhaps authorization from Alice to corporate counsel gets further delegated. In either case, the result will be a data provenance chain:

    This is the basis for the chained credential mechanism that gives this RFC its name. Chained credentials contain information about the provenance of some or all of the data they embody; this allows a verifier to trace the data backward, possibly through several links, to its origin, and to evaluate trust on that basis.

    "},{"location":"concepts/0104-chained-credentials/#use-cases","title":"Use cases","text":"

    Many use cases exist for conveying provenance for the data inside verifiable credentials:

    "},{"location":"concepts/0104-chained-credentials/#acid-test","title":"Acid Test","text":"

    Although these situations sound different, their underlying characteristics are surprisingly similar--and so are those of other use cases we've identified. We therefore chose a single situation as being prototypical. If we address it well, our solution will embody all the characteristics we want. The situation is this:

    "},{"location":"concepts/0104-chained-credentials/#chain-of-provenance-for-authority-delegation","title":"Chain of Provenance for Authority (Delegation)","text":"

    The national headquarters of Ur Wheelz (a car rental company) issues a verifiable credential, C1, to its regional office in Houston, authorizing Ur Wheelz Houston to rent, maintain, sell, drive, and delegate driving privileges to customers, for certain cars owned by the national company.

    Alice rents a car from Ur Wheelz Houston. Ur Wheelz Houston issues a driving privileges credential, C2, to Alice. C2 gives Alice the privilege to drive the car on a particular week, within the state of Texas, and to further delegate that privilege. Alice uses her C2 credential to prove to the car (which is a fancy future car that acts as verifier) that she is an authorized driver; this is what unlocks the door.

    Alice gets pulled over for speeding on Wednesday and uses C2 to prove to the police that she is the authorized driver of the car.

    On Thursday night Alice goes to a fancy restaurant. She uses valet parking. She issues credential C3 to the valet, allowing him to drive the car within 100 meters of the restaurant, for the next 2 hours while she is at the restaurant. Alice chooses to constrain C3 so the valet cannot further delegate. The valet uses C3 to unlock and drive the car to the parking garage.

    "},{"location":"concepts/0104-chained-credentials/#revocation","title":"Revocation","text":"

    While Alice eats, law enforcement officers go to Ur Wheelz Houston with a search warrant for the car. They have discovered that the previous driver of the car was a criminal. They ask Ur Wheelz to revoke C2, because they don\u2019t want the car to be driven any more, in case evidence is accidentally destroyed.

    At the end of dinner, Alice goes to the valet and asks for her car to be returned. The valet goes to the car and attempts to open the door using C3. The car tests the validity of the delegation chain of C3, and discovers that C2 has been revoked, making C3 invalid. The car refuses to open the door. Alice has to take Uber to get home. Law enforcement takes possession of the car.

    "},{"location":"concepts/0104-chained-credentials/#how-chained-credentials-address-this-use-case","title":"How chained credentials address this use case","text":"

    A chained credential is a verifiable credential that contains provenanced data, linking it back to its source. In this case, the provenanced data is about authority, and each credential in the chain functions like a capability token, granting its holder privileges that derive from an upstream issuer's own authority.

    "},{"location":"concepts/0104-chained-credentials/#note-delegate-credentials","title":"Note: delegate credentials","text":"

    We call this subtype of chained credential a delegate credential. We'll try to describe the provenance chain in generic terms as much as possible, but the delegation problem domain will occasionally color our verbiage... All delegate credentials are chained; not all chained credentials are delegate credentials.

    The first entity in the provenance chain for authority (Ur Wheelz National, in our acid use case) is called the root attester, and is probably an institution configured for traditional credential issuance (e.g., with a public DID to which reputation attaches; in Indy, this entity also publishes a credential definition). All downstream entities in the provenance chain can participate without special setup. They need not have public DIDs or credential definitions. This is because the strength of the assertion does not depend on their reputation; rather, it depends on the robustness of the algorithm that walks the provenance chain back to its root. Only the root attester needs public reputation.

    "},{"location":"concepts/0104-chained-credentials/#note-contrast-with-acls","title":"Note: contrast with ACLs","text":"

    When chained credentials are used to convey authority (the delegate credential subtype), they are quite different from ACLs. ACLs map an identity to a list of permissions. Delegate credentials entitle their holder to whatever permissions the credential enumerates. Holding may or may not be transferrable. If it is not transferrable, then fraud prevention must be considered. If the credential isn't bound to a holder, then it's a bearer token and is an even more canonical OCAP.

    "},{"location":"concepts/0104-chained-credentials/#special-sauce","title":"Special Sauce","text":"

    A chained credential delivers these features by obeying some special conventions over and above the core requirements of an ordinary VC:

    1. It contains a special field named schema that is a base64url-encoded representation of its own schema. This makes the credential self-contained in the sense that it doesn't depend on a schema or credential definition defined by an external authority (though it could optionally embody one). This field is always disclosed in presentations.

    2. It contains a special field named provenanceProofs. The field is an array, where each member of the array is a tuple (also a JSON array). The first member of each tuple is a list of field names; the second member of each tuple is an embedded W3C verifiable presentation that proves the provenance of the values in those fields. In the case of delegate credentials, provenanceProofs is proving the provenance of a field named authorization.

      Using credentials C1, C2, and C3 from our example use case, the authorization tuple in provenanceProofs of C1 includes a presentation that proves, on the basis of a car title that's a traditional, non-provenanced VC, that Ur Wheelz National had the authority to delegate a certain set of privileges X to Ur Wheelz Houston. The authorization tuple in provenanceProofs of C2 proves that Ur Wheelz Houston had authority to delegate Y (a subset of the authority in X) to Alice, and also that Ur Wheelz Houston derived its authority from Ur Wheelz National, who had the authority to delegate X to Ur Wheelz Houston. Similarly, the authorization tuple in C3's provenanceProofs is an extension of the authorization tuple in C2's provenanceProofs\u2014now proving that Alice had the authority to delegate Z to the valet, plus all the other delegations in the upstream credentials.

      When a presentation is created from a chained credential, provenanceProofs is either disclosed (for non-ZKP proofs), or is used as evidence to prove the same thing (for ZKPs).

    3. It is associated (through a name in its type field array and through a URI in its trustFrameworkURI field) with a trust framework that describes provenancing rules. For general chained credentials, this is optional; for delegate credentials, it is required. The trust framework may partially describe the semantics of some schema variants for a family of chained credentials, as well as how provenance is attenuated or categorized. For example, a trust framework jointly published by Ur Wheelz and other car rental companies might describe delegate credential schemas for car owners, car rental offices, drivers, insurers, maintenance staff, and guest users of cars. It might specify that the permissions delegatable in these credentials include drive, maintain, rent, sell, retire, delegate-further, and so forth. The trust framework would do more than enumerate these values; it would define exactly what they mean, how they interact with one another, and what permissions are expected to be in force in various circumstances.

    4. The reputation of non-root holders in a provenance chain becomes irrelevant as far as credential trust is concerned--trust is based on an unbroken chain back to a root public attester, not on published, permanent characteristics of secondary issuers. Only the root attester needs to have a public DID. Other issuer keys and DIDs can be private and pairwise.

    5. If it is a delegate credential, it also meets all the requirements to be a proxy credential as described in Aries RFC 0103: Indirect Identity Control. Specifically:

      • It uses credentialSubject.holder.* fields to bind it to a particular holder, if applicable.

      • It uses credentialSubject.proxied.* fields to describe the upstream delegator to whatever extent is required.

      • It uses credentialSubject.holder.role and credentialSubject.proxied.permissions to grant permissions to the holder. See Delegating Permissions for more details.

      • It may use credentialSubject.holder.constraints.* to impose restrictions on how/when/under what circumstances the delegation is appropriate.
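The chain-walking step described in convention 2 can be sketched as a loop over nested provenanceProofs tuples. This is a structural sketch only: the verify_presentation callback, which would cryptographically check one embedded presentation and report its issuer plus any further-upstream credential, is an assumed stand-in, and the toy C1/C2/C3 objects below carry no real signatures.

```python
def verify_chain(credential, trusted_roots, verify_presentation):
    """Walk the provenanceProofs tuples for the 'authorization' field
    back to a trusted root attester. verify_presentation(pres) is an
    assumed callback returning (issuer, upstream_credential_or_None)."""
    current = credential
    while True:
        proofs = {tuple(fields): pres
                  for fields, pres in current.get("provenanceProofs", [])}
        pres = proofs.get(("authorization",))
        if pres is None:
            return False                      # chain broken before any root
        issuer, upstream = verify_presentation(pres)
        if upstream is None:                  # reached the root attester
            return issuer in trusted_roots
        current = upstream

# Toy stand-ins for C1, C2, C3 from the use case.
c1 = {"provenanceProofs": [[["authorization"], {"issuer": "Ur Wheelz National", "upstream": None}]]}
c2 = {"provenanceProofs": [[["authorization"], {"issuer": "Ur Wheelz Houston", "upstream": c1}]]}
c3 = {"provenanceProofs": [[["authorization"], {"issuer": "Alice", "upstream": c2}]]}
fake_verify = lambda pres: (pres["issuer"], pres["upstream"])

print(verify_chain(c3, {"Ur Wheelz National"}, fake_verify))  # True
```

The car (as verifier) only needs Ur Wheelz National in its trusted-roots set; Houston, Alice, and the valet need no prior reputation with it.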

    "},{"location":"concepts/0104-chained-credentials/#whats-not-different","title":"What's not different","text":"

    Proof of non-revocation uses the same mechanism as the underlying credentialing system. For ZKPs, this means that merkle tree or accumulator state is checked against the ledger or against any other source of truth that the root attester in the chain specifies; no conferring with upstream issuers is required. See ZKP Revocation in the reference section. For non-ZKP credentials, this probably means consulting revocation lists or similar.

    Offline mode works exactly the same way as it works for ordinary credentials, and with exactly the same latency and caching properties.

    Chained credentials may contain ordinary credential attributes that describe the holder or other subjects, including ZKP-style blinded link secrets. This allows chained credentials to be combined with other VCs in composite presentations.

    "},{"location":"concepts/0104-chained-credentials/#sample-credentials","title":"Sample credentials","text":"

    Here is JSON that might embody credentials C1, C2, and C3 from our use case. Note that these examples suppress a few details that seem uninteresting, and they also introduce some new concepts that are described more fully in the Reference section.

    "},{"location":"concepts/0104-chained-credentials/#c1-delegates-management-of-car-to-ur-wheelz-houston","title":"C1 (delegates management of car to Ur Wheelz Houston)","text":"
    {\n    \"@context\": [\"https://w3.org/2018/credentials/v1\", \"https://github.com/hyperledger/aries-rfcs/tree/main/concepts/0104-delegatable-credentials\"],\n    \"type\": [\"VerifiableCredential\", \"Proxy.D/CarRentalTF/1.0/subsidiary\"],\n    \"schema\": \"WwogICJAY29udGV4dCIsIC8vSlN... (clipped for brevity) ...ob2x\",\n    \"provenanceProofs\": [\n        [[\"authorization\"], {\n            // proof that Ur Wheelz National owns the car\n            }]\n    ],\n    // Optional. Might be used to identify the car in question.\n    \"credentialSubject.car.VIN\": \"1HGES26721L024785\",\n    \"credentialSubject.proxied.permissions\": {\n        \"grant\": [\"rent\", \"maintain\", \"sell\", \"drive\", \"delegate\"], \n        \"when\": { \"role\": \"regional_office\" } \n    },\n    // Optional. Binds the credential to a business name.\n    \"credentialSubject.holder.name\": \"Ur Wheelz Houston\",\n    // Optional. Binds the credential to the public DID of Houston office.\n    \"credentialSubject.holder.id\": \"did:example:12345\",\n    \"credentialSubject.holder.role\": \"regional_office\"\n}\n
    "},{"location":"concepts/0104-chained-credentials/#c2-delegates-permission-to-alice-to-drive-subdelegate","title":"C2 (delegates permission to Alice to drive, subdelegate)","text":"
    {\n    // @context, type, schema are similar to previous\n    \"provenanceProofs\": [\n        [[\"authorization\"], {\n            // proof that Ur Wheelz Houston could delegate\n            }]\n    ],\n    // Optional. Might be used to identify the car in question.\n    \"credentialSubject.car.VIN\": \"1HGES26721L024785\",\n    \"credentialSubject.proxied.permissions\": {\n        \"grant\": [\"drive\", \"delegate\"], \n        \"when\": { \"role\": \"renter\" } \n    },\n    // Optional. Binds the credential to the holder's name.\n    \"credentialSubject.holder.name\": \"Alice Jones\",\n    // Optional. Binds the credential to the holder's DID.\n    \"credentialSubject.holder.id\": \"did:example:12345\",\n    \"credentialSubject.holder.role\": \"renter\",\n    // Limit dates when delegation is active\n    \"credentialSubject.holder.constraints.startTime\": \"2020-05-20T14:00Z\",\n    \"credentialSubject.holder.constraints.endTime\": \"2020-05-27T14:00Z\",\n    // Provide a boundary within which delegation is active\n    \"credentialSubject.holder.constraints.boundary\": \"USA:TX\"\n}\n
    "},{"location":"concepts/0104-chained-credentials/#c3-delegates-permission-to-valet-to-drive","title":"C3 (delegates permission to valet to drive)","text":"
    {\n    // @context, type, schema are similar to previous\n    \"provenanceProofs\": [\n        [[\"authorization\"], {\n            // proof that Alice could delegate\n            }]\n    ],\n    // Optional. Might be used to identify the car in question.\n    \"credentialSubject.car.VIN\": \"1HGES26721L024785\",\n    \"credentialSubject.proxied.permissions\": {\n        \"grant\": [\"drive\"], \n        \"when\": { \"role\": \"valet\" } \n    },\n    // Optional. Binds the credential to the holder's name.\n    \"credentialSubject.holder.name\": \"Alice Jones\",\n    // Optional. Binds the credential to the holder's DID.\n    \"credentialSubject.holder.id\": \"did:example:12345\",\n    \"credentialSubject.holder.role\": \"valet\",\n    \"credentialSubject.holder.constraints.startTime\": \"2020-05-25T04:00Z\",\n    \"credentialSubject.holder.constraints.endTime\": \"2020-05-25T06:00Z\",\n    // Give a place where delegation is active.\n    \"credentialSubject.holder.constraints.pointOfOrigin\": \"@29.7690295,-95.5293445,12z\",\n    \"credentialSubject.holder.constraints.radiusKm\": 0.1\n}\n
    "},{"location":"concepts/0104-chained-credentials/#reference","title":"Reference","text":""},{"location":"concepts/0104-chained-credentials/#delegating-permissions","title":"Delegating Permissions","text":"

    In theory, we could just enumerate permissions in delegate credentials in a special VC field named permissions. To delegate the drive and delegate privileges to Alice, this would mean we'd need a credential field like this:

    {\n    // ... rest of credential fields ...\n\n    \"permissions\": [\"drive\", \"delegate\"]\n}\n

    Such a technique is adequate for many delegation use cases, and is more or less how ZCAP-LD works. However, it has two important limitations:

    To address these additional requirements, delegate credentials split the granting of permissions into two fields instead of one:

    1. The permission model that provides context for the credential is expressed in a special field named credentialSubject.proxied.permissions. This field contains an SGL rule that embodies the semantics of the delegation.
    2. The holder (delegate) is given a named role in that overall permission scheme in a special field named credentialSubject.holder.role. This role has to reference something from ...permissions.

    In our Ur Wheelz / Alice use case, the extra expressive power of these two fields is not especially interesting. The credential that Alice carries might look like this:

    {\n    // ... rest of credential fields ...\n\n    \"credentialSubject.proxied.permissions\": { \n        \"grant\": [\"drive\"], \n        \"when\": { \"role\": \"renter\" } \n    },\n    \"credentialSubject.holder.role\": [\"renter\"]\n}\n

    Since credentialSubject.holder.role says that Alice has the renter role, the grant of drive applies to her. We expect permissions to always apply directly to the holder in simple cases like this.

    But in the case of a corporation that wants to delegate signing privileges to 3 board members, the benefit of the two-field approach is clearer. Each board member gets a delegate credential that looks like this:

    {\n    // ... rest of credential fields ...\n\n    \"credentialSubject.proxied.permissions\": { \n        \"grant\": [\"sign\"], \n        \"when\": { \"role\": \"board\", \"n\": 3 } \n    },\n    \"credentialSubject.holder.role\": [\"board\"]\n}\n

    Now a verifier can say to one credential-holding board member, \"I see that you have part of the signing privilege. Can you find me two other board members who agree with this action?\"
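Evaluating such a rule against a set of presented credentials reduces to counting how many holders carry the named role and comparing against the threshold. A simplified sketch (a real SGL evaluator supports more operators than the role/n pair shown here):

```python
def satisfies(rule, principals):
    """Evaluate a simplified SGL-style rule against presented holders.
    Each principal is a set of roles taken from its credential's
    credentialSubject.holder.role field; 'n' defaults to 1."""
    when = rule.get("when", {})
    role, n = when.get("role"), when.get("n", 1)
    matching = sum(1 for roles in principals if role in roles)
    return matching >= n

board_rule = {"grant": ["sign"], "when": {"role": "board", "n": 3}}
# One board member alone cannot exercise the grant...
print(satisfies(board_rule, [{"board"}]))                        # False
# ...but three together can.
print(satisfies(board_rule, [{"board"}, {"board"}, {"board"}]))  # True
```

Alice's renter credential is the degenerate case: a single holder with the renter role satisfies a rule whose threshold defaults to 1.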

    "},{"location":"concepts/0104-chained-credentials/#privacy-considerations","title":"Privacy Considerations","text":"

    Non-ZKP-based chained credentials reveal the public identity of the immediate downstream holder to each issuer (delegator) -- and they reveal the public identity of all upstream members of the chain to the holder.

    ZKP-based chained credentials offer more granular choices. See ZKP Variants and their privacy implications below.

    "},{"location":"concepts/0104-chained-credentials/#embedded-schema","title":"Embedded schema","text":"

    Often, the schema of a chained credential might be decided (or created) by the issuer. In some cases, the schema might be decided by the delegatee or specified fully or partially in a trust framework.

    It is the responsibility of each issuer to ensure that the special schema attribute is present and that the credential matches it.

    "},{"location":"concepts/0104-chained-credentials/#zkp-revocation","title":"ZKP Revocation","text":"

    When a chained credential is issued, a unique credential id is assigned to it by its issuer and then the revocation registry is updated to track which credential id was issued by which issuer. During proof presentation, the prover proves in zero knowledge that its credential is not revoked. When a credential is to be revoked, the issuer of the credential sends a signed message to the revocation registry asking it to mark the credential id as revoked. Note that this allows only the issuer of the credential to revoke the credential and does not allow, for example, the delegator to revoke any credential that was issued by its delegatee. However, the same effect can be achieved by the verifier mandating that each credential in the chain of credentials is non-revoked: when an upstream issuer revokes the credential it issued, every downstream credential in the chain should be considered revoked.

    In practice, there are more attributes associated with the credential id in the revocation registry than just the public key. The registry also tracks the timestamps of issuance and revocation of the credential id, and the prover is able to prove in zero knowledge about those data points as well. We imagine revocation being implemented as a merkle tree with each leaf corresponding to a credential id, so for a binary tree of height 8, there are 2^8 = 256 leaves: leaf number 1 corresponds to credential id 1, leaf number 2 to credential id 2, and so on. The data at each leaf consists of the public key of the issuer, the issuance timestamp, and the revocation timestamp. We imagine using the Bulletproofs merkle tree gadget to perform such proofs, as planned for the upcoming version of anonymous credentials.
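The registry layout described above can be sketched without the zero-knowledge machinery: hash each leaf's (issuer key, issuance timestamp, revocation timestamp) triple, then fold pairs upward to a root. This is an illustrative model only; the leaf encoding and hash choice are assumptions, and real proofs would use the Bulletproofs gadget rather than revealing leaves.

```python
import hashlib

def leaf_hash(issuer_pubkey, issued_at, revoked_at):
    """One registry leaf: issuer public key plus issuance and
    revocation timestamps (0 = not yet revoked). Layout is illustrative."""
    data = (issuer_pubkey
            + issued_at.to_bytes(8, "big")
            + (revoked_at or 0).to_bytes(8, "big"))
    return hashlib.sha256(data).digest()

def merkle_root(leaves):
    """Root of a complete binary tree over a power-of-two leaf count."""
    level = list(leaves)
    while len(level) > 1:
        level = [hashlib.sha256(level[i] + level[i + 1]).digest()
                 for i in range(0, len(level), 2)]
    return level[0]

# Height 8 -> 2^8 = 256 leaves; leaf i tracks credential id i + 1.
leaves = [leaf_hash(b"issuer-key", 1563000000, None) for _ in range(256)]
root_before = merkle_root(leaves)
# Revoking credential id 6 sets leaf 5's revocation timestamp, changing the root.
leaves[5] = leaf_hash(b"issuer-key", 1563000000, 1564000000)
root_after = merkle_root(leaves)
print(root_before != root_after)  # True
```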

    "},{"location":"concepts/0104-chained-credentials/#zkp-variants-and-their-privacy-implications","title":"ZKP Variants and their privacy implications","text":"

    There are two general categories of chained anonymous credentials, distinguished by the tradeoff they make between privacy and efficiency. Choosing between them should depend on whether privacy between intermediate issuers is required.

The more efficient category provides privacy only from verifiers, not among the issuers. Suppose the holder, say Alice, requests a chained credential from the root attester, say Acme Corp., and then delegates it to a downstream issuer Bob, who further delegates to another downstream issuer Carol. Here Carol knows the identity (a public key) of Bob, and both Carol and Bob know the identity of Alice, but when Carol or Bob uses its credential to create a proof and send it to the verifier, the verifier only learns the identity of the root attester.

    Less efficient but more private schemes (isolating attestors more completely) also exist.

    The first academic paper in the following list describes a scheme which does not allow for privacy between attestors, but that is more efficient; the second and third papers make the opposite tradeoff.

    1. Practical UC-Secure Delegatable Credentials with Attributes and Their Application to Blockchain.
    2. Delegatable Attribute-based Anonymous Credentials from Dynamically Malleable Signatures
    3. Delegatable Anonymous Credentials from Mercurial Signatures

In the first scheme, each issuer passes on its received credentials to the issuer it is delegating to. In the above Acme Corp., Alice, Bob and Carol example, when Alice delegates to Bob, she gives Bob a new credential but also a copy of the credential she received from Acme Corp. And when Bob delegates to Carol, he gives Carol a new credential but also copies of the credential he got from Alice and the one Alice got from Acme Corp. The verifier, while getting a proof from, say, Carol, does not learn about Alice, Bob or Carol, but learns that there were 2 issuers between Acme Corp. and the proof presenter. It also learns the number of attributes in each credential in the chain of credentials.
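The chain-passing behaviour of the first scheme can be sketched as follows; the names and credential strings are purely illustrative.

```python
def delegate(received_chain, new_credential):
    """Scheme 1: the delegator passes its whole received credential
    chain, plus one newly issued credential, to the delegatee."""
    return received_chain + [new_credential]

acme_to_alice = delegate([], "cred(Acme->Alice)")
alice_to_bob = delegate(acme_to_alice, "cred(Alice->Bob)")
bob_to_carol = delegate(alice_to_bob, "cred(Bob->Carol)")

# The verifier learns the chain length (here, 2 intermediate issuers
# between Acme Corp. and the presenter) but not their identities.
assert len(bob_to_carol) == 3
```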

In the second and third schemes, during delegation the delegator gives the delegatee only one credential, derived from its own credential, and the delegatee randomizes its identity each time. The second scheme's efficiency is comparable to the first scheme's, but it has a trusted authority which can deanonymize any issuer given a proof created from that issuer's credential. This might be acceptable in cases where the PCF can safely be made the trusted authority and is not assumed to be colluding with the verifiers to deanonymize the users.

The third scheme has the additional limitation that non-root issuers cannot add any more attributes to the credential than the root issuer did.

    "},{"location":"concepts/0104-chained-credentials/#drawbacks","title":"Drawbacks","text":"

If the trust framework is not properly defined, malicious parties might be able to obtain credentials from delegators, leading to privilege escalation.

    "},{"location":"concepts/0104-chained-credentials/#rationale-and-alternatives","title":"Rationale and alternatives","text":"

A more expensive alternative to delegatable credentials is for the holder to get a credential directly from the root issuer. The expense is not just computational but operational too.

    "},{"location":"concepts/0104-chained-credentials/#prior-art","title":"Prior art","text":"

Delegatable anonymous credentials have been explored for over a decade; the first somewhat efficient construction came in 2009 from Belenkiy et al. in \"Randomizable proofs and delegatable anonymous credentials\". Although this was a significant efficiency improvement over previous work, it was still impractical. Chase et al. gave a conceptually novel construction of delegatable anonymous credentials in 2013 in \"Complex unary transformations and delegatable anonymous credentials\", but the resulting construction was essentially as inefficient as that of Belenkiy et al.

    "},{"location":"concepts/0104-chained-credentials/#implementations","title":"Implementations","text":"

    The following lists the implementations (if any) of this RFC. Please do a pull request to add your implementation. If the implementation is open source, include a link to the repo or to the implementation within the repo. Please be consistent in the \"Name\" field so that a mechanical processing of the RFCs can generate a list of all RFCs supported by an Aries implementation.

    Name / Link Implementation Notes"},{"location":"concepts/0104-chained-credentials/contrast-zcap-ld/","title":"Contrast zcap ld","text":""},{"location":"concepts/0104-chained-credentials/contrast-zcap-ld/#why-not-zcap-ld","title":"Why not ZCAP-LD?","text":"

    The object capability model is great, and ZCAP-LD is an interesting solution that exposes that goodness to the VC ecosystem. However, we had the following concerns when we first encountered its spec (originally entitled \"OCAP-LD\"):

    For these reasons, we spent some time working out a somewhat similar mechanism. We hope we can reconcile the two at some point. For now, though, this doc just describes our alternative path.

    "},{"location":"concepts/0167-data-consent-lifecycle/","title":"Aries RFC 0167: Data Consent Lifecycle","text":""},{"location":"concepts/0167-data-consent-lifecycle/#table-of-contents","title":"Table of Contents","text":""},{"location":"concepts/0167-data-consent-lifecycle/#summary","title":"Summary","text":"

This RFC illustrates a reference implementation for generating a consent proof for use with DLT (Distributed Ledger Technology). It presents a person-controlled data control architecture for consent proofs and supply-chain permissions, linked to a single consent proof.

    The objective of this RFC is to move this reference implementation, once comments are processed, to a working implementation RFC, demonstrating a proof of consent for DLT.

This RFC breaks down the key components needed to generate an explicit consent directive using a personal data processing notice (PDP-N) specification, which is provided with this RFC as a template for smart privacy. See Appendix - PDP - Notice Spec (DLC Extension for CR v2).

This reference RFC utilises a unified legal data control vocabulary for notification and consent records and receipts (see Appendix A), actively maintained by the W3C Data Privacy Vocabulary Control Community Group (DPV).

This RFC modularizes data capture to make the mappings interchangeable with overlays (OCA -Ref), to facilitate scaling of data control sets across contexts, domains and jurisdictions.

    "},{"location":"concepts/0167-data-consent-lifecycle/#motivation","title":"Motivation","text":"

A key challenge with privacy, personal data sharing and self-initiated consent is establishing trust; there is no trust in the personal-data-based economy. GDPR Article 25, Data Protection by Design and by Default, lists recommendations on how private data is to be processed. Here we list the technology changes required to implement that GDPR article. Note that the RFC focuses on formalizing the processing agreement associated with the consent, rather than on the informal consent dialogue.

Hyperledger Aries provides the perfect framework for managing personal data, especially personally identifiable information (PII), where necessary data is restricted to protect the identity of the individual or data subject. Currently, the privacy policy agreed to when signing up for a new service dictates how personal data is processed and for which purpose. There is no clear technology to hold a company accountable to its privacy policy. By using blockchain and the data consent receipt, accountability for a privacy policy can be achieved. The data consent is not limited to a single data controller (or institution) and data subject (or individual), but extends to the series of institutions that process data from the original data subject. The beauty of the proposal in this RFC is that accountability is extended to ALL parties using the data subject's personal data. When the data subject withdraws consent, the data consent receipt agreement is withdrawn, too.

GDPR lacks specifics regarding how technology should or can be used to enforce obligations. This RFC provides a viable alternative, with mechanisms to bring accountability while at the same time protecting personal data.

    "},{"location":"concepts/0167-data-consent-lifecycle/#overview","title":"Overview","text":"

    Three key components need to be in place:

    1. Schema bases/overlays

    2. Consent Lifecycle

    3. Wallet

Schema bases/overlays describe a standard approach to data capture that separates raw schema building blocks from additional semantic layers such as data-entry business logic and constraints, knowledge about data sensitivity, and so forth (refer to RFC 0013: Overlays for details). The data consent lifecycle covers the data consent receipt certificate, proof request and revocation. The wallet is where all data is stored, which requires a high level of security and control by the individual or institution. This RFC covers the consent lifecycle.

    The Concepts section below explains the RFC in GDPR terms. There is an attempt to align with the vocabulary in the W3C Data Privacy Vocabulary specification.

The consent lifecycle is based on self-sovereign identity (SSI) to ensure that the individual (data subject) has full control of their personal information. To help illustrate how SSI is applied, several use cases along with a reference implementation show the relation between the data subject, data controller and data processor.

    "},{"location":"concepts/0167-data-consent-lifecycle/#concepts","title":"Concepts","text":"

These are some concepts that are important to understand when reviewing this RFC.

Secondary Data Controller: The terms \"data subject\" and \"data controller\" (see GDPR Article 4, items 1 and 7) should be well understood. The data controller is responsible for the data that is shared beyond their control. A data controller which does not itself collect data but receives it from another controller is termed a 'secondary' data controller. Even though the secondary data controller is independent in its processing of personal data, GDPR requires the primary or original data controller to be responsible for sharing data under the given consent. The 3rd party becomes a secondary controller under the responsibility of the original data controller. It is important to note that if a 3rd party does not share the collected data back to the original data controller, then the 3rd party is considered an independent data controller (add reference to CJEU).

Opt-in / Opt-out: These terms describe a request to use personal data beyond the limits of the legitimate reasons for conducting a service. If, for example, the data is shared with a 3rd party, a consent or opt-in is required. At any point the data subject may withdraw the consent through an opt-out.

Expiration: The consent may have time limitations; it does not renew automatically and may need to be renewed. The data subject may have a yearly subscription, or for purposes of a trial, and there needs to be a mechanism to ensure the consent is limited to the duration of the service.

Storage limitation: PII data should not be stored indefinitely and needs a clear storage limitation. Storage limitation, as defined by GDPR, limits how long PII data is kept to fulfill the legitimate reasons of a service.

Processing TTL: Indy currently supports proofs limited to a specific point in time. For companies that collect data over time, checking a proof every minute is not a viable solution. The processing TTL allows data ingestion to continue for an extended period without requiring a new proof request. Examples will be given that explain the usage of the term.
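For illustration, a processing TTL check could be as simple as the following sketch; the helper name and parameters are hypothetical and not part of any Indy API.

```python
from datetime import datetime, timedelta

def needs_new_proof(last_verified: datetime,
                    validity_ttl: timedelta,
                    now: datetime) -> bool:
    """A new proof request is needed once the processing TTL since
    the last successful verification has elapsed."""
    return now - last_verified >= validity_ttl

# A one-month TTL: ingestion may continue without re-verification
# until the window elapses.
last = datetime(2024, 1, 1)
ttl = timedelta(days=30)
assert not needs_new_proof(last, ttl, datetime(2024, 1, 15))
assert needs_new_proof(last, ttl, datetime(2024, 2, 15))
```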

    "},{"location":"concepts/0167-data-consent-lifecycle/#use-cases","title":"Use Cases","text":"

    These are the use cases to help understand the implementation guide. A reference implementation will help in the development.

    1. Alice (data subject) gives data consent by accepting a privacy agreement.

    2. Acme (3rd party data controller) requests proof that data consent was given

    3. Alice terminates privacy agreement, thus withdrawing her data consent.

    Note: additional use cases may be developed based on contributions to this RFC.

    "},{"location":"concepts/0167-data-consent-lifecycle/#implementation-guidelines","title":"Implementation Guidelines","text":""},{"location":"concepts/0167-data-consent-lifecycle/#collect-personal-data","title":"Collect Personal Data","text":"

    These are the steps covered with collect personal data:

The [Blinding Identity Taxonomy] provides a comprehensive list of data points that are considered sensitive and shall be handled with a higher level of security.

This section will expand on the explanation of personally identifiable and quasi-identifiable terms.

    "},{"location":"concepts/0167-data-consent-lifecycle/#personal-data-processing-schema","title":"Personal Data Processing Schema","text":"

The personal data processing (PDP) schema captures attributes that define the conditions for collecting data and the conditions under which data may be shared or used.

    These are the PDP schema attributes:

Category Attribute Brief description Comment Data subset DID of associated schema or overlay Data object identifier All data objects Industry Scope [1] A predefined description of the industry scope of the issuer. All data objects Storage (raw) Expiration Date The definitive date on which data revocation throughout the chain of engaged private data lockers of all Data Controllers and sub-Data Controllers will automatically occur. In other words, when the PDP expires. Access-Window Limitation (Restricted-Time) How long data is kept in the system before being removed. Unlike the expiration date attribute, limitation indicates how long personal data may be used after the PDP expires. A request to be forgotten supersedes the limitation. Access-Window PII pseudonymization Data stored with pseudonymization. Conditions of access are given under the purpose attribute of the \"Access\" category. Encryption Method of pseudonymization Specify the algorithm used for performing pseudonymization that is acceptable. Encryption Geographic restriction The data storage has geolocation restrictions (country). Demarcation No share The data shall not be shared outside of the Data Controller's responsibility. When set, no 3rd party or Secondary Data Controller is allowed. Demarcation Access (1-n) Purpose The purpose for processing data shall be specified (refer to GDPR Article 4, clause 2, for details on processing). Applies to both a Data Controller and a Secondary Data Controller. Access-Window policyUrl Reference to a privacy policy URL that describes the policy in human-readable form. Access-Window Requires 3PP PDP [2] A PDP is required between Data Controller and Secondary Data Controller in the form of a code of conduct agreement. Access-Window Single Use The data is shared only for the purpose of completing the interaction at hand. \"Expiration-Date\" is set to the date of interaction completion. Access-Window PII anonymisation Data stored with no PII association. 
Encryption [3] Method of anonymisation Specify the algorithm used for performing anonymisation that is acceptable. Encryption Multi-attribute anonymisation Quasi-identifiable data may be combined to create a fingerprint of the data subject. When set, a method of multi-attribute anonymisation is applied to the data. Encryption Method of multi-attribute anonymisation Specify the algorithm used for performing anonymisation that is acceptable (K-anonymity). Encryption Ongoing Use The data is shared for repeated use by the recipient, with no end date or end conditions. However, the data subject may alter the terms of use in the future, and if the alteration in terms is unacceptable to the data controller, the data controller acknowledges that it will thereby incur a duty to delete. In other words, the controller uses the data at the ongoing sufferance of its owner. Access-Window Collection Frequency (Refresh) How frequently the data can be accessed. The collection may be limited to once a day or once an hour. The purpose of this attribute is to protect the data subject from behavioural profiling. Access-Window Validity TTL If collection is continuous, the validity TTL specifies when to perform a new verification. Verification checks whether the customer withdrew consent. Note this is a method for revocation. Access-Window No correlation No correlation is allowed for the subset. This means no external data, for example public records of the data subject, shall be combined. Correlation Inform correlation Correlation is disclosed to the data subject, including what data related to them was combined. Correlation Open correlation Correlation is open and does not need to be disclosed to the data subject. Correlation"},{"location":"concepts/0167-data-consent-lifecycle/#notes","title":"Notes","text":""},{"location":"concepts/0167-data-consent-lifecycle/#1","title":"1","text":"

    As the PDP schema may be the only compulsory linked schema specified in every schema metadata block, we have an opportunity to store the \"Framework Description\" - a description of the business framework of the issuer.

    Predefined values could be imported from the GICS \"Description\" entries, or, where missing, NECS \"Description\" entries, courtesy of filtration through the Global Industry Classification Standard (GICS) or New Economy Classification Standard (NECS) ontologies.

    The predefined values could be determined by the next highest level code to the stored GICS \"Sub-industry\" code (or NECS \"SubSector\" code) held in the associated metadata attribute of the primary schema base to add flexibility of choice for the Issuer.

    "},{"location":"concepts/0167-data-consent-lifecycle/#2","title":"2","text":"

    If a PDP is required between the Data Controller (Issuer) and sub-Data Controller, we should have a field(s) to store the Public DID (or Private Data Locker ID) of the sub-Data Controller(s). This will be vital to ensure auto-revocation from all associated private data lockers on the date of expiry.

    "},{"location":"concepts/0167-data-consent-lifecycle/#3","title":"3","text":"

    As the \"PII Attribute\" schema object is already in place for Issuer's to flag sensitive data according to the Blinding Identity Taxonomy (BIT), we already have a mechanism in place for PII. Once flagged, we can obviously encrypt sensitive data. Some considerations post PII flagging: (i.) In the Issuer's Private Data Locker : The default position should be to encrypt all sensitive elements. However, the issuer should be able to specify if any of the flagged sensitive elements should remain unencrypted in their private locker. (ii.) In a Public Data Store : all sensitive elements should always be encrypted

    "},{"location":"concepts/0167-data-consent-lifecycle/#example-schemas","title":"Example: Schemas","text":"

    When defining a schema there will be a consent schema associated with it.

SCHEMA = {\n    did: \"did:sov:3214abcd\",\n    name: 'Demographics',\n    description: \"Created by Faber\",\n    version: '1.0',\n    # MANDATORY KEYS\n    attr_names: {\n      brthd: Date,\n      ageic: Integer\n    },\n    consent: did:schema:27312381238123  # reference to consent schema\n    # Attributes flagged according to the Blinding Identity Taxonomy\n    # by the issuer of the schema\n    # OPTIONAL KEYS\n    frmsrc: \"DEM\"\n}\n

    The original schema will have a consent schema reference.

    CONSENT_SCHEMA = {\n    did: \"did:schema:27312381238123\",\n    name: 'Consent schema for consumer behaviour data',\n    description: \"Created by Faber\",\n    version: '1.0',\n    # MANDATORY KEYS\n    attr_names: {\n      expiration: Date,\n      limitation: Date,\n      dictatedBy: String,\n      validityTTL: Integer\n    }\n}\n

    The consent schema will have specific attributes for managing data.

Attribute Purpose Type expiration How long the consent is valid for Date limitation How long the data is kept Date dictatedBy Who sets expiration and limitation String validityTTL Duration the proof is valid for the purposes of data processing Integer

    The issuer may optionally define an overlay that sets the consent schema values without input from the data subject.

CONSENT_RECEIPT_OVERLAY = {\n  did: \"did:sov:5678abcd\",\n  type: \"spec/overlay/1.0/consent_entry\",\n  name: \"Consent receipt entry overlay for clinical trial\",\n  default_values: [\n    :expiration => 3 years,\n    :limitation => 2 years,\n    :dictatedBy => <reference to issuer>, # ??? Should the issuer's DID be used?\n    :validityTTL => 1 month\n    ]\n}\n

If some attributes are identified as sensitive based on the Blinding Identity Taxonomy, then a sensitivity overlay is created.

SENSITIVE_OVERLAY = {\n  did: \"did:sov:12idksjabcd\",\n  type: \"spec/overlay/1.0/bit\",\n  name: \"Sensitive data for private entity\",\n  attributes: [\n      :ageic\n  ]\n}\n

To finalise a consent, a proof schema is created which lists the schemas and overlays that were applied, and their values. The proof is kept off ledger in the wallet.

PROOF_SCHEMA = {\n    did: \"did:schema:12341dasd\",\n    name: 'Credential Proof schema',\n    description: \"Created by Rosche\",\n    version: '1.0',\n    # MANDATORY KEYS\n    attr_names: {\n      createdAt: DateTime,           # When the proof was created.\n      proof_key: \"<crypto asset>\",   # Cryptographic proof material.\n      # Include all the schema DIDs that were agreed upon\n      proof_of: [ \"did:sov:3214abcd\", \"did:sov:1234abcd\"]\n    }\n}\n
    "},{"location":"concepts/0167-data-consent-lifecycle/#blockchain-prerequisites","title":"Blockchain Prerequisites","text":"

    These are the considerations when setting up the ledger:

    "},{"location":"concepts/0167-data-consent-lifecycle/#data-consent-receipt-certificate","title":"Data Consent Receipt Certificate","text":"

    These are the steps covered with data consent receipt certificate:

    "},{"location":"concepts/0167-data-consent-lifecycle/#initial-agreement-of-privacy-agreement","title":"Initial agreement of privacy agreement","text":"

The following flow diagram shows the setup of the privacy agreement.

    "},{"location":"concepts/0167-data-consent-lifecycle/#proof-request","title":"Proof Request","text":"

    These are the steps covered with proof request:

The proof request serves multiple purposes, the main one being that the conditions of access are auditable. If a data controller encounters a situation where they need to show that the consent and the conditions of accessing the data are met, the proof request provides the evidence. The data subject also has more control over the proof request, and in situations where revocation of the certificate is not performed, this becomes an extra safeguard. An important aspect of the proof request is that it can be done without sharing any personal data.
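A minimal sketch of such an audit check, assuming a proof record shaped like the PROOF_SCHEMA example above; the helper name and field names are illustrative, and note that no personal data is inspected, only the agreed schema reference and the proof's age.

```python
from datetime import date, timedelta

def consent_proof_valid(proof: dict, required_schema_did: str,
                        validity_ttl_days: int, today: date) -> bool:
    """Audit a consent proof without touching personal data: check that
    the agreed schema DID is covered and the proof is still fresh."""
    fresh = today - proof["created_at"] <= timedelta(days=validity_ttl_days)
    return required_schema_did in proof["proof_of"] and fresh

proof = {
    "created_at": date(2024, 3, 1),
    # Schema DIDs agreed upon, as in the PROOF_SCHEMA example.
    "proof_of": ["did:sov:3214abcd", "did:sov:1234abcd"],
}
assert consent_proof_valid(proof, "did:sov:3214abcd", 30, date(2024, 3, 15))
assert not consent_proof_valid(proof, "did:sov:3214abcd", 30, date(2024, 6, 1))
```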

    "},{"location":"concepts/0167-data-consent-lifecycle/#performing-proof-request","title":"Performing Proof Request","text":"

The following flow diagram shows how a proof request is performed.

    "},{"location":"concepts/0167-data-consent-lifecycle/#certification-revocation","title":"Certification Revocation","text":"

    These are the steps covered with certification revocation:

    "},{"location":"concepts/0167-data-consent-lifecycle/#implementation-reference","title":"Implementation Reference","text":"

A Python Jupyter notebook is available as a reference implementation to help with implementation. The base for this example is the getting-started Jupyter notebook. To run the example, take the following steps.

1. Clone indy-sdk:

   git clone https://github.com/hyperledger/indy-sdk.git\n

2. Copy the following files to doc/getting-started:

   - consent-flow.ipynb
   - docker-compose.yml *

   Note * - Reason for changing the docker-compose.yml is to be able to view consent-flow.ipynb.

3. Start docker-compose:

   docker-compose up\n

4. Open the HTML link and run consent-flow.ipynb.
    "},{"location":"concepts/0167-data-consent-lifecycle/#reference","title":"Reference","text":"

    Provide guidance for implementers, procedures to inform testing, interface definitions, formal function prototypes, error codes, diagrams, and other technical details that might be looked up. Strive to guarantee that:

    "},{"location":"concepts/0167-data-consent-lifecycle/#annex-a-pdp-schema-mapping-to-kantara-consent-receipt","title":"Annex A: PDP Schema mapping to Kantara Consent Receipt","text":"

Kantara has defined a Consent Receipt with a list of mandatory and optional attributes. This annex maps the attributes to the PDP. Many of the attributes are supported through the ledger and are not directly included in the PDP.

    Note: The draft used for this annex was file \"Consent receipt annex for 29184.docx\".

    Kantara attribute Hyperledger Indy mapping Version Schema registration Jurisdiction Agent registration Consent Timestamp PDP signed certificate Collection Method - Consent Receipt ID PDP signed certificate Public Key Ledger Language Overlays PII Principal ID Schema/Agent registration PII Controller Agent registration On Behalf Agent registration (1) PII Controller Contract Agent registration (2) PII Controller Address Agent registration PII Controller Email Agent registration PII Controller Phone Agent registration PII Controller URL [OPTIONAL] - Privacy Policy PDP services PDP purposes PDP Purpose Category - Consent Type PDP PII Categories - Primary Purpose PDP Termination Ledger Third Party Name PDP Sensitive PII Schema base

    Notes

(1) An Agent may be of type Cloud Agent, which works on behalf of an Issuer (Data Controller). When an institution registers in the blockchain, it should make clear on whose behalf it is registering.

(2) The Controller Contact may change over time and is not a good reference to use when accepting a consent. If required, we suggest including it as part of Agent registration (or as a requirement).

    "},{"location":"concepts/0167-data-consent-lifecycle/#prior-art","title":"Prior art","text":""},{"location":"concepts/0167-data-consent-lifecycle/#etl-process","title":"ETL process","text":"

Current data processing of PII data is not based on blockchain. Data is processed through ETL routines (e.g. AWS API Gateway and Lambda) with a data warehouse (e.g. AWS Redshift). The enforcement of GDPR is based on adding configuration routines to enforce storage limitations. Most data warehouses do not implement pseudonymization and may instead opt for a very short storage limitation of a couple of months. The current practice is to collect as much data as possible, which goes against data minimisation.

    "},{"location":"concepts/0167-data-consent-lifecycle/#personal-data-terms-and-conditions","title":"Personal Data Terms and Conditions","text":"

The Customer Commons initiative (customercommons.org) has developed [terms and conditions] for personal data usage. The implementation of these terms and conditions will be tied to the schema and overlay definitions. The overlay will specify the conditions of sharing. For broader conditions, the schema will have new attributes for actual consent for data sharing. Hyperledger Aries and Customer Commons complement each other.

    "},{"location":"concepts/0167-data-consent-lifecycle/#unresolved-questions","title":"Unresolved questions","text":""},{"location":"concepts/0167-data-consent-lifecycle/#implementations","title":"Implementations","text":"

    The following lists the implementations (if any) of this RFC. Please do a pull request to add your implementation. If the implementation is open source, include a link to the repo or to the implementation within the repo. Please be consistent in the \"Name\" field so that a mechanical processing of the RFCs can generate a list of all RFCs supported by an Aries implementation.

Name / Link Implementation Notes"},{"location":"concepts/0167-data-consent-lifecycle/#plan","title":"Plan","text":""},{"location":"concepts/0167-data-consent-lifecycle/#todo","title":"ToDo","text":""},{"location":"concepts/0167-data-consent-lifecycle/#comments","title":"Comments","text":"Question From Date Answer Where is consent recorded? Harsh 2019-07-31 There are several types of consent, listed below; where the actual consent is recorded needs further clarification. Specialised Consent (legal), Generic Consent (legal), General Data Processing Consent"},{"location":"concepts/0207-credential-fraud-threat-model/","title":"0207: Credential Fraud Threat Model","text":""},{"location":"concepts/0207-credential-fraud-threat-model/#summary","title":"Summary","text":"

    Provides a model for analyzing and preventing fraud with verifiable credentials.

    "},{"location":"concepts/0207-credential-fraud-threat-model/#motivation","title":"Motivation","text":"

    Cybersecurity experts often view technology through the lens of a threat model that helps implementers methodically discover and remediate vulnerabilities.

    Verifiable credentials are a new technology that has enormous potential to shape the digital landscape. However, when used carelessly, they could bring to digital, remote interactions many of the same abuse possibilities that criminals have exploited for generations in face-to-face interactions.

    We need a base threat model for the specific subdiscipline of verifiable credentials, so implementations and deployments have a clear view of how vulnerabilities might arise, and how they can be eliminated. More specific threat models can build atop this general foundation.

    "},{"location":"concepts/0207-credential-fraud-threat-model/#tutorial","title":"Tutorial","text":""},{"location":"concepts/0207-credential-fraud-threat-model/#scope","title":"Scope","text":"

    Verifiable credentials are a way to establish trust. They provide value for login, authorization, reputation, and data sharing, and they enable an entire ecosystem of loosely cooperating parties that use different software, follow different business processes, and require different levels of assurance.

    This looseness and variety presents a challenge. Exhaustively detailing every conceivable abuse in such an ecosystem would be nearly as daunting as trying to model all risk on the internet.

    This threat model therefore takes a narrower view. We assume the digital landscape (e.g., the internet) as context, with all its vulnerabilities and mitigating best practices. We focus on just the ways that the risks and mitigations for verifiable credential fraud are unique.

    "},{"location":"concepts/0207-credential-fraud-threat-model/#definition","title":"Definition","text":"

    Fraud: intentional deception to secure unfair or unlawful gain, or to hurt a victim. Contrast hoax, which is deception for annoyance or entertainment. (paraphrase from Wikipedia)

    "},{"location":"concepts/0207-credential-fraud-threat-model/#relation-to-familiar-methods","title":"Relation to familiar methods","text":"

    There are many methods for constructing threat models, including STRIDE, PASTA, LINDDUN, CVSS, and so forth. These are excellent tools. We use insights from them to construct what's offered here, and we borrow some terminology. We recommend them to any issuer, holder, or verifier that wants to deepen their expertise. They are an excellent complement to this RFC.

    However, this RFC is an actual model, not a method. Also, early exploration of the threat space suggests that with verifiable credentials, patterns of remediation grow more obvious if we categorize vulnerabilities in a specialized way. Therefore, what follows is more than just the mechanical expansion of the STRIDE algorithm or the PASTA process.

    "},{"location":"concepts/0207-credential-fraud-threat-model/#data-flow-diagram","title":"Data Flow Diagram","text":"

    Data flows in a verifiable credential ecosystem in approximately the following way:

    Some verifiable credential models include an additional flow (arrow) directly from issuers to verifiers, if they call for revocation to be tested by consulting a revocation list maintained by the issuer.

    "},{"location":"concepts/0207-credential-fraud-threat-model/#key-questions","title":"Key Questions","text":"

    Fraud could be categorized in many ways--for example, by how much damage it causes, how easy it is to detect, or how common it is. However, we get predictive power and true insight when we focus on characteristics that lead to different risk profiles and different remediations. For verifiable credentials, this suggests a focus on the following 4 questions:

    1. Who is the perpetrator?
    2. Who is directly deceived?
    3. When is the deception committed?
    4. Where (on which fact) is the deception focused?

    We can think of these questions as orthogonal dimensions, where each question is like an axis that has many possible positions or answers. We will enumerate as many answers to these questions as we can, and assign each answer a formal name. Then we can use a terse, almost mathematical notation in the form (w + x + y + z) (where w is an answer to question 1, x is an answer to question 2, and so forth) to identify a fraud potential in 4-dimensional space. For example, a fraud where the holder fools the issuer at time of issuance about subject data might be given by the locus: (liar-holder + fool-issuer + issuance-time + bad-subject-claims).
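
The (w + x + y + z) notation can be sketched as a small data structure. The class and field names below are hypothetical illustrations; the answer names come from the example above:

```python
from typing import NamedTuple

class FraudLocus(NamedTuple):
    # A point in the 4-dimensional fraud space described above.
    perpetrator: str  # answer to question 1, e.g. 'liar-holder'
    deceived: str     # answer to question 2, e.g. 'fool-issuer'
    when: str         # answer to question 3, e.g. 'issuance-time'
    focus: str        # answer to question 4, e.g. 'bad-subject-claims'

    def __str__(self) -> str:
        # Render in the RFC's (w + x + y + z) notation.
        return '(' + ' + '.join(self) + ')'

locus = FraudLocus('liar-holder', 'fool-issuer', 'issuance-time', 'bad-subject-claims')
# str(locus) == '(liar-holder + fool-issuer + issuance-time + bad-subject-claims)'
```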

    What follows is an exploration of each question and a beginning set of associated answers. We provide at least one example of a situation that embodies each answer, notated with \u21e8. Our catalog is unlikely to be exhaustive; criminal creativity will find new expressions as we eliminate potential in the obvious places. However, these answers are complete enough to provide significant insight into the risks and remediations in the ecosystem.

    "},{"location":"concepts/0207-credential-fraud-threat-model/#1-who-is-the-perpetrator","title":"1. Who is the perpetrator?","text":""},{"location":"concepts/0207-credential-fraud-threat-model/#third-parties","title":"third parties","text":""},{"location":"concepts/0207-credential-fraud-threat-model/#combinations","title":"combinations","text":""},{"location":"concepts/0207-credential-fraud-threat-model/#2-who-is-directly-deceived","title":"2. Who is directly deceived?","text":"

    Combinations of the above?

    "},{"location":"concepts/0207-credential-fraud-threat-model/#3-when-is-the-deception-committed","title":"3. When is the deception committed?","text":""},{"location":"concepts/0207-credential-fraud-threat-model/#4-where-on-which-fact-is-the-deception-focused","title":"4. Where (on which fact) is the deception focused?","text":""},{"location":"concepts/0207-credential-fraud-threat-model/#identity","title":"identity","text":""},{"location":"concepts/0207-credential-fraud-threat-model/#claims","title":"claims","text":""},{"location":"concepts/0207-credential-fraud-threat-model/#context","title":"context","text":""},{"location":"concepts/0207-credential-fraud-threat-model/#reference","title":"Reference","text":"

    Provide guidance for implementers, procedures to inform testing, interface definitions, formal function prototypes, error codes, diagrams, and other technical details that might be looked up. Strive to guarantee that:

    "},{"location":"concepts/0207-credential-fraud-threat-model/#drawbacks","title":"Drawbacks","text":"

    Why should we not do this?

    "},{"location":"concepts/0207-credential-fraud-threat-model/#rationale-and-alternatives","title":"Rationale and alternatives","text":""},{"location":"concepts/0207-credential-fraud-threat-model/#prior-art","title":"Prior art","text":"

    Discuss prior art, both the good and the bad, in relation to this proposal. A few examples of what this can include are:

    This section is intended to encourage you as an author to think about the lessons from other implementers, and to provide readers of your proposal with a fuller picture. If there is no prior art, that is fine - your ideas are interesting to us whether they are brand new or adapted from other communities.

    Note that while precedent set by other communities is some motivation, it does not on its own motivate an enhancement proposal here. Please also take into consideration that Aries sometimes intentionally diverges from common identity features.

    "},{"location":"concepts/0207-credential-fraud-threat-model/#unresolved-questions","title":"Unresolved questions","text":""},{"location":"concepts/0207-credential-fraud-threat-model/#implementations","title":"Implementations","text":"

    The following lists the implementations (if any) of this RFC. Please do a pull request to add your implementation. If the implementation is open source, include a link to the repo or to the implementation within the repo. Please be consistent in the \"Name\" field so that a mechanical processing of the RFCs can generate a list of all RFCs supported by an Aries implementation.

    Name / Link Implementation Notes"},{"location":"concepts/0217-linkable-message-paths/","title":"Aries RFC 0217: Linkable Message Paths","text":""},{"location":"concepts/0217-linkable-message-paths/#summary","title":"Summary","text":"

    Describes how to hyperlink to specific elements of specific DIDComm messages.

    "},{"location":"concepts/0217-linkable-message-paths/#motivation","title":"Motivation","text":"

    It must be possible to refer to specific pieces of data in specific DIDComm messages. This allows a message later in a protocol to refer to data in a message that preceded it, which is useful for stitching together subprotocols, debugging, error handling, logging, and various other scenarios.

    "},{"location":"concepts/0217-linkable-message-paths/#tutorial","title":"Tutorial","text":"

    There are numerous approaches to the general problem of referencing/querying a piece of data in a JSON document. We have chosen JSPath as our solution to that part of the problem; see Prior Art for a summary of that option and a comparison to alternatives.

    What we need, over and above JSPath, is a URI-oriented way to refer to an individual message, so the rest of the referencing mechanism has a JSON document to start from.

    "},{"location":"concepts/0217-linkable-message-paths/#didcomm-message-uris","title":"DIDComm Message URIs","text":"

    A DIDComm message URI (DMURI) is a string that references a sent/received message, using standard URI syntax as specified in RFC 3986. It takes one of the following forms:

    1. didcomm://<thid>/<msgid>
    2. didcomm://./<msgid> or didcomm://../<msgid>
    3. didcomm:///<msgid> (note 3 slashes)
    4. didcomm://<sender>@<thid>/<senderorder>

    Here, <msgid> is replaced with the value of the @id property of a plaintext DIDComm message; <thid> is replaced with the ~thread.thid property, <sender> is replaced with a DID, and <senderorder> is replaced with a zero-based index (the Nth message emitted in the thread by that sender).

    Form 1 is called absolute form, and is the preferred form of DMURI to use when talking about messages outside the context of an active thread (e.g., in log files).

    Form 2 is called relative form, and is a convenient way for one message to refer to another within an ongoing interaction. It is relatively explicit and terse. It uses 1 or 2 dots to reference the current or parent thread, and then provides the message id with that thread as context. Referencing more distant parent threads is done with absolute form.

    Form 3 is called simple form. It omits the thread id entirely. It is maximally short and usually clear enough. However, it is slightly less preferred than forms 1 and 2 because it is possible that some senders might not practice good message ID hygiene that guarantees global message ID uniqueness. When that happens, a message ID could get reused, making this form ambiguous. The most recent message that is known to match the message id must be assumed.

    Form 4 is called ordered form. It is useful for referencing a message that was never received, making the message's internal @id property unavailable. It might be used to request a resend of a lost message that is uncovered by the gap detection mechanism in DIDComm's message threading.
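
The four forms above can be distinguished mechanically. The following is a non-normative sketch, assuming Python; the regex and group names are illustrative assumptions, not part of the RFC:

```python
import re

# Classifier for the four DMURI forms described above (illustrative only).
DMURI = re.compile(
    r'^didcomm://'
    r'(?:'
    r'(?P<sender>[^@/]+)@(?P<othid>[^/]+)/(?P<senderorder>\d+)'  # form 4: ordered
    r'|(?P<rel>\.\.?)/(?P<relmsgid>[^/]+)'                       # form 2: relative
    r'|/(?P<simplemsgid>[^/]+)'                                  # form 3: simple (3 slashes)
    r'|(?P<thid>[^/@]+)/(?P<msgid>[^/]+)'                        # form 1: absolute
    r')$'
)

def classify(uri: str) -> str:
    m = DMURI.match(uri)
    if not m:
        raise ValueError('not a DMURI: ' + uri)
    if m.group('senderorder') is not None:
        return 'ordered'
    if m.group('rel') is not None:
        return 'relative'
    if m.group('simplemsgid') is not None:
        return 'simple'
    return 'absolute'
```

For example, `classify('didcomm:///msg1')` returns `'simple'`, while a DID-qualified sender such as `didcomm://did:sov:abc@thread1/0` is classified as `'ordered'`.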

    Only parties who have sent or received messages can dereference DMURIs. However, the URIs should be transmittable through any number of third parties who do not understand them, without any loss of utility.

    "},{"location":"concepts/0217-linkable-message-paths/#combining-a-dmuri-with-a-jspath","title":"Combining a DMURI with a JSPath","text":"

    A JSPath is concatenated to a DMURI by using an intervening slash delimiter:

    didcomm:///e56085f9-4fe5-40a4-bf15-6438751b3ae8/.~timing.expires_time

    If a JSPath uses characters from RFC 3986's reserved characters list in a context where they have special meaning, they must be percent-encoded.
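
The concatenation and percent-encoding rule can be sketched as follows, assuming Python; as a conservative simplification, this sketch encodes all RFC 3986 reserved characters in the JSPath, not only those with special meaning in context:

```python
from urllib.parse import quote

def link_to(dmuri: str, jspath: str) -> str:
    # Join the DMURI and JSPath with a slash delimiter, percent-encoding
    # reserved characters in the JSPath. Unreserved characters such as
    # '.', '~', and '_' pass through unchanged.
    return dmuri + '/' + quote(jspath, safe='')

# Reproduces the example from the text:
# link_to('didcomm:///e56085f9-4fe5-40a4-bf15-6438751b3ae8', '.~timing.expires_time')
#   -> 'didcomm:///e56085f9-4fe5-40a4-bf15-6438751b3ae8/.~timing.expires_time'
```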

    "},{"location":"concepts/0217-linkable-message-paths/#reference","title":"Reference","text":"

    Provide guidance for implementers, procedures to inform testing, interface definitions, formal function prototypes, error codes, diagrams, and other technical details that might be looked up. Strive to guarantee that:

    "},{"location":"concepts/0217-linkable-message-paths/#drawbacks","title":"Drawbacks","text":"

    Why should we not do this?

    "},{"location":"concepts/0217-linkable-message-paths/#rationale-and-alternatives","title":"Rationale and alternatives","text":""},{"location":"concepts/0217-linkable-message-paths/#prior-art","title":"Prior art","text":""},{"location":"concepts/0217-linkable-message-paths/#unresolved-questions","title":"Unresolved questions","text":""},{"location":"concepts/0217-linkable-message-paths/#implementations","title":"Implementations","text":"

    The following lists the implementations (if any) of this RFC. Please do a pull request to add your implementation. If the implementation is open source, include a link to the repo or to the implementation within the repo. Please be consistent in the \"Name\" field so that a mechanical processing of the RFCs can generate a list of all RFCs supported by an Aries implementation.

    Name / Link Implementation Notes"},{"location":"concepts/0231-biometric-service-provider/","title":"Aries RFC 0231: Biometric Service Provider","text":""},{"location":"concepts/0231-biometric-service-provider/#summary","title":"Summary","text":"

    Biometric services for Identity Verification, Authentication, Recovery and other use cases referred to in Aries RFCs including DKMS.

    "},{"location":"concepts/0231-biometric-service-provider/#motivation","title":"Motivation","text":"

    Biometrics play a special role in many identity use cases because of their ability to intrinsically identify a unique individual, but their use depends on a variety of factors including liveness, matching accuracy, ease of acquisition, security and privacy. Use of biometrics is already well established in most countries for domestic and international travel, banking and law enforcement. In banking, know-your-customer (KYC) and anti-money laundering (AML) laws require some form of biometric(s) when establishing accounts.

    In this specification, we characterize the functions and schema that biometric service providers (BSPs) must implement to ensure a uniform interface to clients: wallets and agents. For example, current Automated Biometric Information Systems (ABIS) and other standards (IEEE 2410, FIDO) provide a subset of services but often require proprietary adaptors due to the fragmented history of the biometric market: different modalities (face, fingerprint, iris, etc.) require different functions, schema, and registration information. More recently, standards have begun to specify functions and schema across biometric modalities. This specification will adopt these approaches and treat biometric data within an encrypted envelope across modalities.

    "},{"location":"concepts/0231-biometric-service-provider/#tutorial","title":"Tutorial","text":"

    One goal of the Biometric Service Provider (BSP) specification is to allow for self-sovereign biometric credentials in a holder's wallet or cloud agent trusted by issuers and verifiers:

    An issuer may collect biometric information from a holder in order to issue credentials (biometric or not). Likewise, a verifier may require biometric matching against the holder's credentials for authentication. In either case, issuers, holders and verifiers may need to rely on 3rd party services to perform biometric matching functions for comparison to authoritative databases.

    "},{"location":"concepts/0231-biometric-service-provider/#basics","title":"Basics","text":"

    In general, biometrics are collected during registration from a person and stored for later comparisons. The registration data is called the Initial Biometric Vector (IBV). During subsequent sessions, a biometric reading is taken called the Candidate Biometric Vector (CBV) and \"matched\" to the IBV:

    Both the IBV and CBV must be securely stored on a mobile device or server often with the help of hardware-based encryption mechanisms such as a Trusted Execution Environment (TEE) or Hardware Security Module (HSM). The CBV is typically ephemeral and discarded (using secure erasure) following the match operation.

    If the IBV and/or CBV are used on a server, any exchange must use strong encryption between client and server if transmitted over public or private networks in case of interception. Failure to properly protect the collection, transmission, storage and processing of biometric data is a serious offense in most countries and violations are subject to severe fines and/or imprisonment.

    "},{"location":"concepts/0231-biometric-service-provider/#example-aadhaar","title":"Example: Aadhaar","text":"

    The Aadhaar system is an operational biometric system that provides identity proofing and identity verification services for over 1 billion people in India. Aadhaar is comprised of many elements with authentication as the most common use case:

    Authentication Service Agents (ASAs) are licensed by the Government of India to pass the verification request via secure channels to the Unique Identification Authority of India (UIDAI) data centre where IBVs are retrieved and matched to incoming CBVs from Authentication User Agencies (AUA) that broker user authentication sessions from point-of-sale (PoS) terminals:

    "},{"location":"concepts/0231-biometric-service-provider/#use-cases","title":"Use Cases","text":"

    A Biometric Service Provider (BSP) supports the following use cases. In each case, we distinguish whether the use case requires one-to-one (1:1) matching or one-to-many (1:N) matching:

    1. Device Unlocking - primarily introduced to solve the inconvenience of typing a password on a small mobile device, face and single-digit fingerprint matching were added to mobile devices to protect access to the device's resources. This is a 1:1 match operation.

    2. Authentication - the dominant use case for biometrics. Users must prove they sufficiently match the IBV created during registration in order to access local and remote resources including doors, cars, servers, etc. This is a 1:1 match operation.

    3. Identification - an unknown person presents for purposes of determining their identity against a database of registered persons. This is a 1:N match operation because the database(s) must be searched for all IBVs of matching identities.

    4. Identity Verification - a person claims a specific identity with associated metadata (e.g., name, address, etc.) and provides a CBV for match against that person's registered biometric data to confirm the claim. This is a 1:1 match operation.

    5. Identity Proofing - a person claims a specific identity with associated metadata (e.g., name, address, etc.) and provides a CBV for match against all persons in database(s) in order to determine the efficacy of their claims and any counter-claims. This is a 1:N match operation because the database(s) must be searched for all IBVs of matching identities.

    6. Deduplication - given a CBV, match against IBVs of all registered identities to determine if already present or not in the database(s). This is a 1:N matching operation.

    7. Fraud prevention - A match operation could return confidence score(s) (0..1) rather than a simple boolean. Confidence score(s) express the probability that the candidate is not an imposter and could be used in risk analysis engines. This may be a use case for BSP clients.

    8. Recovery - With biometric shards and secret sharing, a person can use their biometrics to recover lost private keys associated with a credential. This may be a use case for BSP clients.

    The previous diagram describing the IBV and CBV collection and matching during registration and presentation did not specify where the IBV is persisted or where the match operation is performed. In general, we can divide the use cases into 4 categories depending on where the IBV is persisted and where the match must occur:

    Mobile-Mobile: The IBV is stored on the mobile device and the match with the CBV occurs on the mobile device

    Mobile-Server: The IBV is stored on the mobile device, but the match occurs on a server

    Server-Mobile: The IBV is stored on a server, but the match occurs on a mobile device

    Server-Server: The IBV is stored on a server and the match occurs on a server

    "},{"location":"concepts/0231-biometric-service-provider/#use-case-1-identity-proofing","title":"Use case 1: Identity Proofing","text":""},{"location":"concepts/0231-biometric-service-provider/#use-case-2-recovery","title":"Use case 2: Recovery","text":""},{"location":"concepts/0231-biometric-service-provider/#reference","title":"Reference","text":"

    The NIST 800-63-3 publications are guidelines that establish levels of assurance (LOA) for identity proofing (Volume A), authentication (Volume B), and federation (Volume C). The Biometric Service Provider (BSP) specification deals primarily with identity proofing and authentication.

    A common misconception is that a biometric is like a password, but cannot be replaced upon loss or compromise. A biometric is private but not secret, whereas a password is secret and private. Used correctly, biometrics require presentation attack detection (PAD), also called liveness, to ensure that the sensor is presented with a live face, fingerprints, etc. of a subject rather than a spoof, i.e., a photo, fake fingertips, etc. Indeed, NIST 800-63-3B requires presence of a person in front of a witness for Identity Assurance Level 3 (IAL3) in identity proofing use cases. NIST characterizes the identity proofing process as follows:

    Remote use of biometrics is increasing as well to streamline on-boarding and recovery processes without having to present to an official. NIST 800-63-3A introduced remote identity proofing for IAL2 in 2017 with some form of PAD strongly recommended (by reference to NIST 800-63-3B). Typically, additional measures are combined with biometrics including knowledge-based authentication (KBA), risk scoring and document-based verification to reduce fraud.

    "},{"location":"concepts/0231-biometric-service-provider/#protection","title":"Protection","text":"

    Biometric data is highly sensitive and must be protected wherever and whenever it is collected, transmitted, stored and processed. In general, some simple rules of thumb include:

    "},{"location":"concepts/0231-biometric-service-provider/#issues","title":"Issues","text":""},{"location":"concepts/0231-biometric-service-provider/#drawbacks","title":"Drawbacks","text":"

    Biometrics are explicitly required in many global regulations including NIST (USA), Aadhaar (India), INE (Mexico), and RENIEC (Peru) but also standardized by international organizations for travel (IATA) and finance (FATF).

    "},{"location":"concepts/0231-biometric-service-provider/#rationale-and-alternatives","title":"Rationale and alternatives","text":"

    By addressing biometrics, we seek to provide explicit guidance to developers who will undoubtedly encounter them in many identity credentialing and authentication processes.

    "},{"location":"concepts/0231-biometric-service-provider/#prior-art","title":"Prior art","text":"

    Several biometric standards exist that provide frameworks for biometric services including the FIDO family of standards and IEEE 2410. Within each biometric modality, standards exist to encode representations of biometric information. For example, fingerprints can be captured as raw images in JPEG or PNG format but also represented as vectors of minutiae encoded in the WSQ format.

    "},{"location":"concepts/0231-biometric-service-provider/#unresolved-questions","title":"Unresolved questions","text":""},{"location":"concepts/0231-biometric-service-provider/#implementations","title":"Implementations","text":"

    The following lists the implementations (if any) of this RFC. Please do a pull request to add your implementation. If the implementation is open source, include a link to the repo or to the implementation within the repo. Please be consistent in the \"Name\" field so that a mechanical processing of the RFCs can generate a list of all RFCs supported by an Aries implementation.

    Name / Link Implementation Notes"},{"location":"concepts/0250-rich-schemas/","title":"RFC 0250: Rich Schema Objects","text":""},{"location":"concepts/0250-rich-schemas/#summary","title":"Summary","text":"

    A high-level description of the components of an anonymous credential ecosystem that supports rich schemas, W3C Verifiable Credentials and Presentations, and correspondingly rich presentation requests. Rich schemas are hierarchically composable graph-based representations of complex data. For these rich schemas to be incorporated into the Aries anonymous credential ecosystem, we also introduce such objects as mappings, encodings, presentation definitions and their associated contexts.

    Though the goal of this RFC is to describe how rich schemas may be used with anonymous credentials, it will be noted that many of the objects described here may be used to allow any credential system to make use of rich schemas.

    This RFC provides a brief description of each rich schema object. Future RFCs will provide greater detail for each individual object and will be linked to from this document. The further RFCs will contain examples for each object.

    "},{"location":"concepts/0250-rich-schemas/#motivation","title":"Motivation","text":""},{"location":"concepts/0250-rich-schemas/#standards-compliance","title":"Standards Compliance","text":"

    The W3C Verifiable Claims Working Group (VCWG) will soon be releasing a verifiable credential data model. This proposal introduces Aries anonymous credentials and presentations which are in compliance with that standard.

    "},{"location":"concepts/0250-rich-schemas/#interoperability","title":"Interoperability","text":"

    Compliance with the VCWG data model introduces the possibility of interoperability with other credentials that also comply with the standard. The verifiable credential data model specification is limited to defining the data structure of verifiable credentials and presentations. This includes defining extension points, such as \"proof\" or \"credentialStatus.\"

    The extensions themselves are outside the scope of the current specification, so interoperability beyond the data model layer will require shared understanding of the extensions used. Work on interoperability of the extensions will be an important aspect of maturing the data model specification and associated protocols.

    Additionally, the new rich schemas are compatible with or the same as existing schemas defined by industry standards bodies and communities of interest. This means that the rich schemas should be interoperable with those found on schema.org, for example. Schemas can also be readily defined for those organizations that have standards for data representation, but who do not have an existing formal schema representation.

    "},{"location":"concepts/0250-rich-schemas/#shared-semantic-meaning","title":"Shared Semantic Meaning","text":"

    The rich schemas and associated constructs are linked data objects that have an explicitly shared context. This allows for all entities in the ecosystem to operate with a shared vocabulary.

    Because rich schemas are composable, the potential data types that may be used for field values are themselves specified in schemas that are linked to in the property definitions. The shared semantic meaning gives greater assurance that the meaning of the claims in a presentation is in harmony with the semantics the issuer intended to attest when they signed the credential.

    "},{"location":"concepts/0250-rich-schemas/#improved-predicate-proofs","title":"Improved Predicate Proofs","text":"

    Introducing standard encoding methods for most data types will enable predicate proof support for floating point numbers, dates and times, and other assorted measurements. We also introduce a mapping object that ties intended encoding methods to each schema property that may be signed so that an issuer will have the ability to canonically specify how the data they wish to sign maps to the signature they provide.

    "},{"location":"concepts/0250-rich-schemas/#use-of-json-ld","title":"Use of JSON-LD","text":"

    Rich schema objects are designed primarily to benefit from the accessibility of ordinary JSON, but draw on more sophisticated JSON-LD-driven patterns when the need arises.

    Each rich schema object will specify the extent to which it supports JSON-LD functionality, and the extent to which JSON-LD processing may be required.

    "},{"location":"concepts/0250-rich-schemas/#what-the-casual-developer-needs-to-know","title":"What the Casual Developer Needs to Know","text":""},{"location":"concepts/0250-rich-schemas/#details","title":"Details","text":"

    Compatibility with JSON-LD was evaluated against version 1.1 of the JSON-LD spec, current in early 2019. If material changes in the spec are forthcoming, a new analysis may be worthwhile. Our current understanding follows.

    "},{"location":"concepts/0250-rich-schemas/#type","title":"@type","text":"

    The type of a rich schema object, or of an embedded object within a rich schema object, is given by the JSON-LD @type property. JSON-LD requires this value to be an IRI.

    "},{"location":"concepts/0250-rich-schemas/#id","title":"@id","text":"

    The identifier for a rich schema object is given by the JSON-LD @id property. JSON-LD requires this value to be an IRI.

    "},{"location":"concepts/0250-rich-schemas/#context","title":"@context","text":"

    This is JSON-LD\u2019s namespacing mechanism. It is active in rich schema objects, but can usually be ignored for simple processing, in the same way namespaces in XML are often ignored for simple tasks.

    Every rich schema object has an associated @context, but for many of them we have chosen to follow the procedure described in section 6 of the JSON-LD spec, which focuses on how ordinary JSON can be interpreted as JSON-LD.

    Contexts are JSON objects. They are the standard mechanism for defining shared semantic meaning among rich schema objects. Contexts allow schemas, mappings, presentations, etc. to use a common vocabulary when referring to common attributes, i.e. they provide an explicit shared semantic meaning.

    "},{"location":"concepts/0250-rich-schemas/#ordering","title":"Ordering","text":"

    JSON-LD specifies that the order of items in arrays is NOT significant, and notes that this is the opposite of the standard assumption for plain JSON. This makes sense when viewed through the lens of JSON-LD\u2019s role as a transformation of RDF, and is a concept supported by rich schema objects.

    "},{"location":"concepts/0250-rich-schemas/#tutorial","title":"Tutorial","text":"

    The object ecosystem for anonymous credentials that make use of rich schemas has a lot of familiar items: credentials, credential definitions, schemas, and presentations. Each of these objects has been changed, some slightly, some more significantly, in order to take advantage of the benefits of contextually rich linked schemas and W3C verifiable credentials. More information on each of these objects can be found below.

    In addition to the familiar objects, we introduce some new objects: contexts, mappings, encodings, and presentation definitions. These serve to bridge between our current powerful signatures and the rich schemas, as well as to take advantage of some of the new capabilities that are introduced.

    Relationship graph of rich schema objects

    "},{"location":"concepts/0250-rich-schemas/#verifiable-credentials","title":"Verifiable Credentials","text":"

    The Verifiable Claims Working Group of the W3C is working to publish a Verifiable Credentials data model specification. Put simply, the goal of the new data format for anonymous credentials is to comply with the W3C specification.

    The data model introduces some standard properties and a shared vocabulary so that different producers of credentials can better inter-operate.

    "},{"location":"concepts/0250-rich-schemas/#rich-schemas","title":"Rich Schemas","text":"

    The proposed rich schemas are JSON-LD objects. This allows credentials issued according to them to have a clear semantic meaning, so that the verifier can know what the issuer intended. They also support explicitly typed properties and semantic inheritance. For example, a schema for \"employee\" may inherit from the schema for \"person,\" and a schema may include other schemas as property types or extend another schema with additional properties.

    Rich schemas are objects that may be used by any verifiable credential system.

    "},{"location":"concepts/0250-rich-schemas/#mappings","title":"Mappings","text":"

    Rich schemas are complex, hierarchical, and possibly nested objects. The Camenisch-Lysyanskaya signature scheme used in anonymous credentials requires the attributes to be represented by an array of 256-bit integers. Converting data specified by a rich schema into a flat array of integers requires a mapping object.

    Mappings serve as a bridge between rich schemas and the flat array of signed integers. A mapping specifies the order in which attributes are transformed and signed. It consists of a set of graph paths and the encoding used for the attribute values specified by those graph paths. Each claim in a mapping has a reference to an encoding, and those encodings are defined in encoding objects.

    Mappings are written to a data registry so they can be shared by multiple credential definitions. They need to be discoverable. When a mapping has been created or selected by an issuer, it is made part of the credential definition.

    The mappings serve as a vital part of the verification process. The verifier, upon receipt of a presentation must not only check that the array of integers signed by the issuer is valid, but that the attribute values were transformed and ordered according to the mapping referenced in the credential definition.

    Note: The anonymous credential signature scheme introduced here is Camenisch-Lysyanskaya signatures. It is the use of this signature scheme in combination with rich schema objects that necessitates a mapping object. If another signature scheme is used which does not have the same requirements, a mapping object may not be necessary or a different mapping object may need to be defined.
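    To make the role of a mapping concrete, here is a minimal illustrative sketch (the entry shape `{"path": ..., "encoding": ...}` is a hypothetical simplification, not the normative mapping object format): the mapping fixes the order in which attribute values are pulled out of a nested document by graph path, encoded, and appended to the flat integer array to be signed.

    ```python
    def apply_mapping(mapping, document, encoders):
        """Flatten a nested rich-schema document into the ordered integer array to sign.

        `mapping` is an ordered list of {"path": "a/b/c", "encoding": name} entries
        (hypothetical shape, for illustration only); `encoders` maps encoding names
        to functions that turn an attribute value into an integer.
        """
        signed = []
        for entry in mapping:
            value = document
            for key in entry["path"].split("/"):  # walk the graph path
                value = value[key]
            signed.append(encoders[entry["encoding"]](value))
        return signed
    ```

    Because the verifier re-derives the same ordered array from the same mapping, any reordering or substitution of attribute values by the holder would cause signature verification to fail.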

    "},{"location":"concepts/0250-rich-schemas/#encodings","title":"Encodings","text":"

    All attribute values to be signed in an anonymous credential must be transformed into 256-bit integers in order to support the current Camenisch-Lysyanskaya signature scheme.

    The introduction of rich schemas and their associated range of possible attribute value data types require correspondingly rich encoding algorithms. The purpose of the encoding object is to specify the algorithm used to perform transformations for each attribute value data type. The encoding algorithms will also allow for extending the cryptographic schemes and various sizes of encodings (256-bit, 384-bit, etc.). The encoding algorithms will allow for broad use of predicate proofs, and avoid hashed values where they are not needed, as hashed values do not support predicate proofs.

    Encodings, at their heart, describe an algorithm for converting data from one format to another, in a deterministic way. They can therefore be used in myriad ways, not only for the values of attributes within anonymous credentials.
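    As a minimal sketch of such a deterministic encoding (assuming SHA-256 as the hash; the actual community-defined algorithms live in the encoding objects themselves): small non-negative integers pass through unchanged so they remain usable in predicate proofs, while all other values are hashed, which is one-way and therefore supports only equality (revealed) proofs.

    ```python
    import hashlib

    BOUND = 1 << 256  # encodings here target 256-bit integers

    def encode_attribute(value):
        """Deterministically encode an attribute value as a 256-bit integer.

        Integers already in range pass through unchanged, preserving order
        and thus predicate-proof support; everything else is hashed.
        """
        if isinstance(value, int) and 0 <= value < BOUND:
            return value
        digest = hashlib.sha256(str(value).encode("utf-8")).digest()
        return int.from_bytes(digest, "big")
    ```

    The same input always yields the same integer, so an issuer and verifier who agree on the encoding object can independently reproduce the signed values.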

    Encoding objects are written to a data registry. Encoding objects also allow for a means of extending the standard set of encodings.

    "},{"location":"concepts/0250-rich-schemas/#credential-definitions","title":"Credential Definitions","text":"

    Credential definitions provide a method for issuers to specify a schema and mapping object, and provide public key data for anonymous credentials they issue. This ties the schema and public key data values to the issuer. The verifier uses the credential definition to check the validity of each signed credential attribute presented to the verifier.

    "},{"location":"concepts/0250-rich-schemas/#presentation-definitions","title":"Presentation Definitions","text":"

    A presentation definition is the means whereby a verifier asks for data from a holder. It contains a set of named desired proof attributes with corresponding restrictions that limit the potential sources for the attribute data according to the desired source schema, issuer DID, credential definition, etc. A presentation definition also contains a similar set of requested predicate proofs, with named attributes and restrictions.

    It may be helpful to think of a presentation definition as the mirror image of a mapping object. Where a mapping object specifies the graph paths of the attributes to be signed, a presentation definition specifies the graph query that may be fulfilled by such graph paths. The presentation definition does not need to concern itself with specifying a particular mapping that contains the desired graph paths, any mapping that contains those graph paths may be acceptable. The fact that multiple graph paths might satisfy the query adds some complexity to the presentation definition. The query may also restrict the acceptable set of issuers and credential definitions and specify the desired predicates.

    A presentation definition is expressed using JSON-LD and may be stored in a data registry. This supports re-use, interoperability, and a much richer set of communication options. Multiple verifiers can use the same presentation definitions. A community may specify acceptable presentation definitions for its verifiers, and this acceptable set may be adopted by other communities. Credential offers may include the presentation definition the issuer would like fulfilled by the holder before issuing them a credential. Presentation requests may also be more simply negotiated by pointing to alternative acceptable presentation definitions. Writing a presentation definition to a data registry also allows it to be publicly reviewed for privacy and security considerations and gain or lose reputation.

    Presentation definitions specify the set of information that a verifier wants from a holder. This is useful regardless of the underlying credential scheme.

    "},{"location":"concepts/0250-rich-schemas/#presentations","title":"Presentations","text":"

    The presentation object that makes use of rich schemas is defined by the W3C Verifiable Credentials Data Model, and is known in the specification as a verifiable presentation. The verifiable presentation is defined as a way to present multiple credentials to a verifier in a single package.

    As with most rich schema objects, verifiable presentations will be useful for credential systems beyond anonymous credentials.

    The claims that make up a presentation are specified by the presentation definition. For anonymous credentials, the credentials from which these claims originate are used to create new derived credentials that only contain the specified claims and the cryptographic material necessary for proofs.

    The type of claims in derived credentials is also specified by the presentation definition. These types include revealed and predicate proof claims, for those credential systems which support them.

    The presentation contains the cryptographic material needed to support a proof that source credentials are all held by the same entity. For anonymous credentials, this is accomplished by proving knowledge of a link secret.

    A presentation refers to the presentation definition it fulfills. For anonymous credentials, it also refers to the credential definitions on the data registry associated with the source credentials. A presentation is not stored on a data registry.

    The following image illustrates the relationship between anonymous credentials and presentations:

    "},{"location":"concepts/0250-rich-schemas/#presentation-description","title":"Presentation Description","text":"

    There may be a number of ways a presentation definition can be used by a holder to produce a presentation, based on the graph queries and other restrictions in the presentation definition. A presentation description describes the source credentials and the process that was used to derive a presentation from them.

    "},{"location":"concepts/0250-rich-schemas/#reference","title":"Reference","text":"

    This document draws on a number of other documents, most notably the W3C verifiable credentials and presentation data model.

    The signature types used for anonymous credentials are the same as those currently used in Indy's anonymous credential and Fabric's idemix systems. Here is the paper that defines Camenisch-Lysyanskaya signatures. They are the source for Indy's AnonCreds protocol.

    "},{"location":"concepts/0250-rich-schemas/#drawbacks","title":"Drawbacks","text":""},{"location":"concepts/0250-rich-schemas/#rationale-and-alternatives","title":"Rationale and alternatives","text":"

    This design has the following benefits: - It complies with the upcoming Verifiable Credentials standard. - It allows for interoperability with existing schemas, such as those found on schema.org. - It adds security guarantees by providing means for validation of attribute encodings. - It allows for a broad range of value types to be used in predicate proofs. - It introduces presentation definitions that allow for proof negotiation, rich presentation specification, and an assurance that the presentation requested complies with security and privacy concerns. - It supports discoverability of schemas, mappings, encodings, presentation definitions, etc.

    "},{"location":"concepts/0250-rich-schemas/#unresolved-questions","title":"Unresolved questions","text":"

    This technology is intended for implementation at the SDK API level. It does not address UI tools for the creation or editing of these objects.

    Variable length attribute lists are only partially addressed using mappings. Variable lists of attributes may be specified by a rich schema, but the maximum number of attributes that may be signed as part of the list must be determined at the time of mapping creation.

    "},{"location":"concepts/0250-rich-schemas/#implementations","title":"Implementations","text":"

    The following lists the implementations (if any) of this RFC. Please do a pull request to add your implementation. If the implementation is open source, include a link to the repo or to the implementation within the repo. Please be consistent in the \"Name\" field so that a mechanical processing of the RFCs can generate a list of all RFCs supported by an Aries implementation.

    Name / Link Implementation Notes"},{"location":"concepts/0257-private-credential-issuance/","title":"Aries RFC 0257: Private Credential Issuance","text":""},{"location":"concepts/0257-private-credential-issuance/#summary","title":"Summary","text":"

    This document describes an approach that lets private individuals issue credentials without needing a public DID or credential definition on the ledger and, more importantly, without disclosing their identity to the credential receiver or the verifier. The idea is for the private individual to anchor their identity in a public entity (DID) such as an organization. The public entity issues a credential to the private individual that acts as a permission for the private individual to issue credentials on behalf of the public entity. To say it another way, the public entity is delegating the issuance capability to the private individual. The receiver of the delegated credential (from the private individual) does not learn the identity of the private individual, only that the public entity has allowed this private individual to issue credentials on its behalf. When such a credential is used for a proof, the verifier's knowledge of the issuer is the same as the credential receiver's: it only knows the identity of the public entity. This contrasts with the current anonymous credential scheme used by Aries, where the credential receiver and proof verifier know the identity of the credential issuer. Additionally, using the same cryptographic techniques, the private individual can delegate issuance rights further, if allowed by the public entity.

    "},{"location":"concepts/0257-private-credential-issuance/#motivation","title":"Motivation","text":"

    As they\u2019ve been implemented so far, verifiable credentials in general, and Indy-style credentials in particular, are not well suited to helping private individuals issue. Here are some use cases we don\u2019t address:

    "},{"location":"concepts/0257-private-credential-issuance/#recommendations","title":"Recommendations","text":"

    Alice wants to give Bob a credential saying that he did good work for her as a plumber.

    "},{"location":"concepts/0257-private-credential-issuance/#testimony","title":"Testimony","text":"

    Alice isn\u2019t necessarily recommending Bob, but she\u2019s willing to say that he was physically present at her house at 9 am on July 31.

    "},{"location":"concepts/0257-private-credential-issuance/#payment-receipts","title":"Payment receipts","text":"

    Bob, a private person selling a car, wants to issue a receipt to Alice, confirming that she paid him the price he was asking.

    "},{"location":"concepts/0257-private-credential-issuance/#agreements","title":"Agreements","text":"

    Alice wants to issue a receipt to Carol, acknowledging that she is taking custody of a valuable painting and accepting responsibility for its safety. Essentially, this is Alice formalizing her half of a contract between peers. Carol wants to issue a receipt to Alice, formalizing her agreement to the contract as well. Note that consent receipts, whether they be for data sharing or medical procedures, fall into this category, but the category is broader than consent.

    "},{"location":"concepts/0257-private-credential-issuance/#delegation","title":"Delegation","text":"

    Alice wants to let Darla, a babysitter, have the right to seek medical care for her children in Alice\u2019s absence.

    The reasons why these use cases aren\u2019t well handled are:

    "},{"location":"concepts/0257-private-credential-issuance/#issuers-are-publicly-disclosed","title":"Issuers are publicly disclosed.","text":"

    Alice would have to create a wholly public persona and DID for her issuer role--and all issuance she did with that DID would be correlatable. This endangers privacy. (Non-Indy credentials have exactly this same problem; there is nothing about ZKPs that makes this problem arise. But proponents of other credential ecosystems don't consider this risk a concern, so they may not think their credentialing solution has a problem.)

    "},{"location":"concepts/0257-private-credential-issuance/#issuance-requires-tooling-setup-and-ongoing-maintenance","title":"Issuance requires tooling, setup, and ongoing maintenance.","text":"

    An issuer needs to register a credential definition and a revocation registry on the ledger, and needs to maintain revocation status. This is an expensive hassle for private individuals. (Setup for credential issuance in non-ZKP ecosystems is also a problem, particularly for revocation. However, it may be more demanding for Indy due to the need for a credential definition and due to the more sophisticated revocation model.)

    "},{"location":"concepts/0257-private-credential-issuance/#tutorial","title":"Tutorial","text":""},{"location":"concepts/0257-private-credential-issuance/#delegatable-credentials-as-a-tool","title":"Delegatable credentials as a tool","text":"

    Delegatable Credentials are a useful tool that we can use to solve this problem. They function like special Object Capabilities (OCAP) tokens, and may offer the beginnings of a solution. They definitely address the delegation use cases, at least. Their properties include:

    "},{"location":"concepts/0257-private-credential-issuance/#applying-delegatable-credentials-to-other-use-cases","title":"Applying Delegatable Credentials to Other Use Cases","text":"

    Here is how we might apply delegatable credentials to the private-individuals-can-issue problem.

    A new kind of issuer is needed, called a private credential facilitator (PCF). The job of a PCF is to eliminate some of the setup and maintenance hassle for private individual issuers by acting as a root issuer in a delegatable credential chain.

    On demand, a PCF is willing to issue a personal trust root (PTR) credential to any individual who asks. A PTR is a delegatable credential that points to a delegation trust framework where particular delegation patterns and credential schemas are defined. The PTR grants all privileges in that trust framework to its holder. It may also contain fields that describe the holder in certain ways (e.g., the holder is named Alice, the holder has a particular birth date or passport number or credit card number, the holder has a blinded link secret with a certain value, etc), based on things that the individual holder has proved to the PCF. The PCF is not making any strong claim about holder attributes when it issues these PTR credentials; it's just adding a few attributes that can be easily re-proved by Alice in the future, and that can be used to reliably link the holder to more traditional credentials with higher bars for trust. In some ways the PCF acts like a notary by endorsing or passing along credential attributes that originated elsewhere.

    For example, Alice might approach a PCF and ask for a PTR that she can use as a homeowner who wishes to delegate certain privileges in her smart home to AirBnB guests. The PCF would (probably for a fee) ask Alice to prove her name, address, and home ownership with either verifiable or non-digital credentials, agree with Alice on a trust framework that's useful for AirBnB scenarios, and create a PTR for Alice that gives Alice all privileges for her home under that trust framework.

    With this PTR in hand, Alice can now begin to delegate or subdivide permissions in whatever way she chooses, without a public DID and without going through any issuer setup herself. She issues (delegates) credentials to each guest, allowing them to adjust the thermostat and unlock the front doors, but not to schedule maintenance on the furnace. Each delegated credential she issues traces its trust back to the PTR and from there, to the PCF.

    Alice can revoke any credential she has delegated in this way, without coordinating either upstream or downstream. The PCF she contracted with gave her access to do this by either configuring their own revocation registry on the ledger so it was writable by Alice's DID as well as their own, or by providing a database or other source of truth where revocation info could be stored and edited by any of its customers.

    This use of delegatable credentials is obvious, and helpful. But what's cooler and less obvious is that Alice can also use the PTR and delegatable credential mechanism to address non-delegation use cases. For example, she can issue a degenerate delegated credential to Bob the plumber, granting him zero privileges but attesting to Alice's 5-star rating for the job he did. Bob can use this credential to build his reputation, and can prove that each recommendation is unique because each such recommendation credential is bound to a different link secret, which in turn traces back to a unique human due to the PCF's vetting of Alice when Alice enrolled in the service. If Alice agrees to include information about herself in the recommendation credential, Bob can even display credential-based recommendations (and proofs derived therefrom) on his website, showing that recommendation A came from a woman named Alice who lived in postal code X, whereas recommendation B came from a man named Bob who lived in postal code Y.

    Let's consider another case, where an employee issues a delegated credential on the basis of a credential issued by the employer. Let's say the PCF is an employer. The PCF issues a PTR credential to each of its employees, which the employee can use to issue recommendation credentials to the various third-party service providers associated with the employer. While issuing a recommendation credential, the recommender (employee) proves that he holds a valid, non-revoked PTR credential from the PCF. The credential contains the id of the employee, the rating, and other data, and is signed by the employee's private key. The third-party service provider can discover the employee's public key from the employer's hosted database. The service provider can then use this credential to create proofs that reveal only the identity of the employer, not the employee. If the verifier wanted more protection, he could demand that the service provider verifiably encrypt the employee ID from the PTR credential for the employer, so that the employer could, in case of a dispute, deanonymize the employee by decrypting the encrypted employee ID.

    Alice can issue testimony credentials in the same way she issues recommendation credentials. And she can issue payment receipts the same way.

    "},{"location":"concepts/0257-private-credential-issuance/#more-about-reputation-management","title":"More about Reputation Management","text":"

    Reputation requires a tradeoff with privacy; we haven't figured out anonymous reputation yet. If Alice's recommendation of Bob as a plumber (or her testimony that Bob was at her house yesterday) is going to carry any weight, people who see it need to know that the credential used as evidence truly came from a woman named Alice--not from Bob himself. And they need to know that Alice couldn't distort reputation by submitting dozens of recommendations or eyewitness accounts herself.

    Therefore, issuance by private individuals should start by carefully answering this question:

    What characteristic(s) of the issuer will make this credential useful?

    The characteristics might include:

    Weighting factors are probably irrelevant to payment receipts and agreements; proofs in these use cases are about binary matching, not degree.

    All of our use cases for individual issuance care about distinguishing factors. Sometimes the distinguishing factors might be fuzzy (enough to tell that Alice-1 recommending Bob as a plumber is different from Alice-2, but not enough to strongly identify); other times they have to be exact. For payment receipts and agreements, the distinguishing factors need to be strongly identifying, whereas for recommendations or testimony, fuzzier distinguishing factors might suffice.

    Distinguishing factors and weighting factors should be embedded in each delegated credential, to the degree that they will be needed in downstream use to facilitate reputation. In some cases, we may want to use verifiable encryption to embed some of them. This would allow Alice to give an eyewitness testimony credential to Bob, to still remain anonymous from Bob, but to prove to Bob at the time of private issuance that Alice's strong personal identifiers are present, and could be revealed by Alice's PCF (or a designated 3rd party) if Bob comes up with a compelling reason.

    "},{"location":"concepts/0257-private-credential-issuance/#reference","title":"Reference","text":""},{"location":"concepts/0257-private-credential-issuance/#todo","title":"TODO","text":""},{"location":"concepts/0257-private-credential-issuance/#drawbacks","title":"Drawbacks","text":""},{"location":"concepts/0257-private-credential-issuance/#todo_1","title":"TODO","text":""},{"location":"concepts/0257-private-credential-issuance/#rationale-and-alternatives","title":"Rationale and alternatives","text":""},{"location":"concepts/0257-private-credential-issuance/#todo_2","title":"TODO","text":""},{"location":"concepts/0257-private-credential-issuance/#prior-art","title":"Prior art","text":""},{"location":"concepts/0257-private-credential-issuance/#todo_3","title":"TODO","text":""},{"location":"concepts/0257-private-credential-issuance/#unresolved-questions","title":"Unresolved questions","text":""},{"location":"concepts/0257-private-credential-issuance/#todo_4","title":"TODO","text":""},{"location":"concepts/0257-private-credential-issuance/#implementations","title":"Implementations","text":"

    The following lists the implementations (if any) of this RFC. Please do a pull request to add your implementation. If the implementation is open source, include a link to the repo or to the implementation within the repo. Please be consistent in the \"Name\" field so that a mechanical processing of the RFCs can generate a list of all RFCs supported by an Aries implementation.

    Name / Link Implementation Notes"},{"location":"concepts/0268-unified-didcomm-agent-deeplinking/","title":"Aries RFC 0268: Unified DIDCOMM Deeplinking","text":""},{"location":"concepts/0268-unified-didcomm-agent-deeplinking/#summary","title":"Summary","text":"

    A set of specifications for mobile agents to standardize around to provide better interoperable support for DIDCOMM compliant messages. Standards for the way agents interpret these encoded messages allow increased user choice when picking agents.

    This RFC lists a series of standards which must be followed by an Aries compatible agent for it to be considered interoperable with other agents.

    "},{"location":"concepts/0268-unified-didcomm-agent-deeplinking/#motivation","title":"Motivation","text":"

    As more and more mobile agents come to market, the user base for these wallets becomes increasingly fragmented. As one of the core tenets of SSI is interoperability, we want to ensure that messages passed to users from these wallets are in formats that any wallet can digest. We also want the onboarding experience for new users to be as seamless and unified as possible.

    "},{"location":"concepts/0268-unified-didcomm-agent-deeplinking/#tutorial","title":"Tutorial","text":"

    Alice wants to invite Bob to connect with her. Alice sends Bob an invitation link generated by her Mobile Agent (a wallet provided by ACME Corp).

    The invitation url takes the form of: \"www.acmecorp.com/invite?d_m=XXXXX\" where the text following the query parameter \"d_m\" is the base64url-encoded invitation.

    Bob receives this link and opens it on his phone. Since he doesn't have an Aries wallet, he gets directed to the webpage \"acmecorp.com/invite\", where there's a list of wallets for each platform that he can pick from. The page also lists the official ACME Corp wallet.

    Bob decides to download the ACME Corp wallet and clicks on the link again. Because the ACME Corp wallet registered 'www.acmecorp.com' as its deeplink, Bob gets prompted to open it in the ACME Corp app.

    Alice sends a similar invite to Charlie. Charlie uses a wallet distributed by Open Corp. Open Corp does not have the \"acmecorp.com\" URI registered as their deeplink, because they do not own that domain.

    When Charlie lands on that page, along with the offer for wallets is a QR Code with the encoded invitation and a button that states \"Open in App\". This button launches the didcomm:// custom protocol, which is registered by all Aries compatible wallets (in the same way e-mail apps all register mailto:).

    Pressing the button prompts Charlie's phone to open the app that can handle didcomm://, which happens to be the wallet app by Open Corp.

    In both instances, Alice does not need to worry about what wallet the counterparty is using and can send DIDComm messages with the assurance that the counterparty will have an onboarding experience waiting for them even if they don't have a wallet already.

    "},{"location":"concepts/0268-unified-didcomm-agent-deeplinking/#reference","title":"Reference","text":""},{"location":"concepts/0268-unified-didcomm-agent-deeplinking/#uri-registration","title":"URI Registration","text":"

    Each mobile agent should register its own URI to open in app. These URIs should point to a landing invitation page.

    An example of such a URI page/invitation: \"www.spaceman.id/invite?d_m=\". In this case, if the recipient of this URL has an app that has registered spaceman.id as its domain (likely a wallet published by spaceman.id), then it will open the invitation in the app. If the recipient does not have the app installed, a page opens in their mobile browser with suggestions for DIDCOMM compliant wallets.

    "},{"location":"concepts/0268-unified-didcomm-agent-deeplinking/#invitation-page","title":"Invitation Page","text":"

    The Invitation page users land on must have a list of DIDCOMM compliant agents for each platform (iOS, Android). A list of these can be found here:

    The Invitation page must also show the encoded message as a scannable QR code and have a button (\"Open in App\") to manually launch the didcomm:// protocol. The QR code helps interactions between web and mobile agent wallets.

    The button to manually launch the didcomm:// protocol allows other Aries wallets on the phone to handle the message, even if they haven't registered that specific URI. Alternatively, a library can be used to automatically launch the didcomm:// prefix when the webpage is opened.

    The Invitation page should also run a URL shortener service. This would make it easier to pass messages between services without needing to pass massive strings around. It also prevents polluting closed-source proprietary services with links.

    "},{"location":"concepts/0268-unified-didcomm-agent-deeplinking/#deeplink-prefix","title":"Deeplink Prefix","text":"

    There must exist a common prefix for mobile agents to register for DIDCOMM messages. This vastly improves interoperability between agents and messages, as they can be opened by any wallet. As the messages are all of the DIDCOMM family, we think the prefix best suited is didcomm://

    All mobile agents should register didcomm:// as affiliated with their app on both iOS and Android. This will enable users to be prompted to use their wallet when they receive a DIDCOMM message.

    "},{"location":"concepts/0268-unified-didcomm-agent-deeplinking/#message-requirements","title":"Message Requirements","text":"

    We propose changing the query parameter usually used to pass the message from 'c_i' (which stands for connection invite) to the more inclusive 'd_m' (which stands for DIDCOMM message).

    Furthermore, messages must be base64url-encoded serialized JSON, stripped of any excess whitespace and as small as possible.
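    A sketch of building and parsing such a URL with Python's standard library (the base URL and the message contents below are placeholders, not normative values):

    ```python
    import base64
    import json
    from urllib.parse import parse_qs, urlencode, urlparse

    def build_invitation_url(base_url, message):
        """Serialize a DIDComm message compactly, base64url-encode it, attach as d_m."""
        compact = json.dumps(message, separators=(",", ":"))  # strip excess whitespace
        encoded = base64.urlsafe_b64encode(compact.encode("utf-8")).decode("ascii")
        return base_url + "?" + urlencode({"d_m": encoded})

    def parse_invitation_url(url):
        """Recover the DIDComm message from the d_m query parameter."""
        encoded = parse_qs(urlparse(url).query)["d_m"][0]
        return json.loads(base64.urlsafe_b64decode(encoded))
    ```

    The receiving wallet only needs the query parameter, so the same d_m payload works whether the link is opened via a registered domain deeplink, the didcomm:// scheme, or a QR code scan.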

    "},{"location":"concepts/0268-unified-didcomm-agent-deeplinking/#drawbacks","title":"Drawbacks","text":"

    This puts extra work on wallet developers to ensure a good experience.

    On iOS only one app can be registered to handle didcomm:// at a time; the first one to be installed will prevent others from using this custom scheme.

    "},{"location":"concepts/0268-unified-didcomm-agent-deeplinking/#rationale-and-alternatives","title":"Rationale and alternatives","text":"

    This allows each wallet to define their own invite page (or use an existing page provided by the community) while providing a common protocol scheme (didcomm://) for all applications.

    If we don't do this, there's a chance that wallet applications become unable to communicate with each other effectively during the onboarding process, leading to fragmentation, much like in the IM world.

    "},{"location":"concepts/0268-unified-didcomm-agent-deeplinking/#prior-art","title":"Prior Art","text":""},{"location":"concepts/0268-unified-didcomm-agent-deeplinking/#unresolved-questions","title":"Unresolved Questions","text":""},{"location":"concepts/0268-unified-didcomm-agent-deeplinking/#implementations","title":"Implementations","text":"

    The following lists the implementations (if any) of this RFC. Please do a pull request to add your implementation. If the implementation is open source, include a link to the repo or to the implementation within the repo. Please be consistent in the \"Name\" field so that a mechanical processing of the RFCs can generate a list of all RFCs supported by an Aries implementation.


    Name / Link Implementation Notes"},{"location":"concepts/0270-interop-test-suite/","title":"0270: Interop Test Suite","text":""},{"location":"concepts/0270-interop-test-suite/#summary","title":"Summary","text":"

    Describes the goals, scope, and interoperability contract of the Aries Interop Test Suite. Does NOT serve as a design doc for the test suite code, or as a developer guide explaining how the test suite can be run; see the test suite codebase for that.

    "},{"location":"concepts/0270-interop-test-suite/#motivation","title":"Motivation","text":"

    The Aries Interop Test Suite makes SSI interoperability publicly and objectively measurable. It is a major deliverable of the Aries project as a whole--not a minor detail that only test zealots care about. It's important that the entire SSI community understand what it offers, how it works, what its results mean, and how it should be used.

    "},{"location":"concepts/0270-interop-test-suite/#tutorial","title":"Tutorial","text":"

    Interoperability is a buzzword in the SSI/decentralized identity space. We all want it.

    Without careful effort, though, interoperability is subjective and slippery. If products A and B implement the same spec, or if they demo cooperation in a single workflow, does that mean they can be used together? How much? For how long? Across which release boundaries? With what feature caveats?

    We need a methodology that gives crisp answers to questions like these--and it needs to be more efficient than continuously exercising every feature of every product against every feature of every other product.

    However, it's important to temper our ambitions. Standards, community specs, and reference implementations exist, and many of them come with tests or test suites of their own. Products can test themselves with these tools, and with custom tests written by their dev staffs, and make rough guesses about interoperability. The insight we're after is a bit different.

    "},{"location":"concepts/0270-interop-test-suite/#goals","title":"Goals","text":"

    What we need is a tool that achieves these goals:

    1. Evaluate practical interoperability of agents.

      Other software that offers SSI features should also be testable. Here, such components are conflated with agents for simplicity, but it's understood that the suite targets protocol participants no matter what their technical classification.

    Focus on remote interactions that deliver business value: high-level protocols built atop DIDComm, such as credential issuance, proving, and introducing, where each participant uses different software. DID methods, ledgers, crypto libraries, credential implementations, and DIDComm infrastructure should have separate tests that are out of scope here. None of these generate deep insight into whether packaged software is interoperable enough to justify purchase decisions; that's the gap we need to plug.

    2. Describe results in a formal, granular, reproducible way that supports comparison between agents A and B, and between A at two different points in time or in two different configurations.

      This implies a structured report, as well as support for versioning of the suite, the agents under test, and the results.

    3. Track the collective community state of the art, so measurements are comprehensive and up-to-date, and so new ideas automatically encounter pressure to be vetted for interoperability.

      The test suite isn't a compliance tool, and it should be unopinionated about what's important and what's not. However, it should embody a broad list of testable features--certainly, ones that are standard, and often, ones that are still maturing.

    "},{"location":"concepts/0270-interop-test-suite/#dos-and-donts","title":"Dos and Don'ts","text":"

    Based on the preceding context, the following rules guide our understanding of the test suite scope:

    "},{"location":"concepts/0270-interop-test-suite/#general-approach","title":"General Approach","text":"

    We've chosen to pursue these goals by maintaining a modular interop test suite as a deliverable of the Aries project. The test suite is an agent in its own right, albeit an agent with deliberate misbehaviors, a security model unsuitable for production deployment, an independent release schedule, and a desire to use every possible version of every protocol.

    Currently the suite lives in the aries-protocol-test-suite repo, but the location and codebase could change without invalidating this RFC; the location is an implementation detail.

    "},{"location":"concepts/0270-interop-test-suite/#contract-between-suite-and-agent-under-test","title":"Contract Between Suite and Agent Under Test","text":"

    The contract between the test suite and the agents it tests is:

    "},{"location":"concepts/0270-interop-test-suite/#suite-will","title":"Suite will...","text":"
    1. Be packaged for local installation.

      Packaging could take various convenient forms. Those testing an agent install the suite in an environment that they control, where their agent is already running, and then configure the suite to talk to their agent.

    2. Evaluate the agent under test by engaging in protocol interactions over a frontchannel, and control the interactions over a backchannel.

      Note: Initially, this doc stipulated that both channels should use DIDComm over HTTP. This has triggered some dissonance. If an agent doesn't want to talk HTTP, should it have to, just to be tested? If an agent wants to be controlled over a RESTful interface, shouldn't it be allowed to do that? Answers to the preceding two questions have been proposed (use a generic adapter to transform the protocol, but don't make the test suite talk on a different frontchannel; unless all agents expose the same RESTful interface, the only thing we can count on is that agents will have DIDComm support, and the only methodology we have for uniform specification is to describe a DIDComm-based protocol, so yes, the backchannel should be DIDComm). These two incompatible opinions are both alive and well in the community, and we are not yet converging on a consensus. The actual implementation of the frontchannel and backchannel therefore remains a bit muddy right now. Perhaps matters will clarify as we think longer and/or as we gain experience with implementation.

      Over the frontchannel, the test suite and the agent under test look like ordinary agents in the ecosystem; any messages sent over this channel could occur in the wild, with no clue that either party is in testing mode.

      The backchannel is the place where testing mode manifests. It lets the agent's initial state be set and reset with precision, guarantees its choices at forks in a workflow, eliminates any need for manual interaction, and captures notifications from the agent about errors. Depending on the agent under test, this backchannel may be very simple, or more complex. For more details, see Backchannel below.

      Agents that interact over other transports on either channel can use transport adapters provided by the test suite, or write their own. HTTP is the least common denominator transport into which any other transports are reinterpreted. Adapting is the job of the agent developer, not the test suite--but the suite will try to make this as easy as possible.

    3. Not probe for agent features. Instead, it will just run whatever subset of its test inventory is declared relevant by the agent under test.

      This lets simple agents do simple integrations with the test suite, and avoid lots of needless error handling on both sides.

    4. Use a set of predefined identities and a set of starting conditions that all agents under test must be able to recognize on demand; these are referenced on the backchannel in control messages. See Predefined Inventory below.

    5. Run tests in arbitrary orders and combinations, but only run one test at a time.

      Some agents may support lots of concurrency, but the test suite should not assume that all agents do.

    6. Produce an interop profile for the agent under test, with respect to the tested features, for every successful run of the test suite.

      A \"successful\" run is one where the test suite runs to completion and believes it has valid data; it has nothing to do with how many tests are passed by the agent under test. The test suite will not emit profiles for unsuccessful runs.

      Interop profiles emitted by the test suite are the artifacts that should be hyperlinked in the Implementation Notes section of protocol RFCs. They could also be published (possibly in a prettified form) in release notes, distributed as a product or documentation artifact, or returned as an attachment with the disclose message of the Discover Features protocol.

    7. Have a very modest footprint in RAM and on disk, so running it in Docker containers, VMs, and CI/CD pipelines is practical.

    8. Run on modern desktop and server operating systems, but not necessarily on embedded or mobile platforms. However, since it interacts with the agent under test over a remote messaging technology, it should be able to test agents running on any platform that's capable of interacting over HTTP or over a transport that can be adapted to HTTP.

    9. Enforce reasonable timeouts unless configured not to do so (see note about user interaction below).

    "},{"location":"concepts/0270-interop-test-suite/#agent-under-test-will","title":"Agent under test will...","text":"
    1. Provide a consistent name for itself, and a semver-compatible version, so test results can be compared across test suite runs.

    2. Use the test suite configuration mechanism to make a claim about the tests that it believes are relevant, based on the features and roles it implements.

    3. Implement a distinction between test mode and non-test mode, such that:

      • Test mode causes the agent to expose and use a backchannel--but the backchannel does not introduce a risk of abuse in production mode.

      • Test mode either causes the agent to need no interaction with a user (preferred), or is combined with test suite config that turns off timeouts (not ideal but may be useful for debugging and mobile agents). This is necessary so the test suite can be automated, or so unpredictable timing on user interaction doesn't cause spurious results.

      The mechanism for implementing this mode distinction could be extremely primitive (conditional compilation, cmdline switches, config file, different binaries). It simply has to preserve ordinary control in the agent under test when it's in production, while ceding some control to the test suite as the suite runs.

    4. Faithfully create the start conditions implied by named states from the Predefined Inventory, when requested on the backchannel.

    5. Accurately report errors on the backchannel.
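    The test-mode distinction in item 3 above could be as simple as a command-line switch. A minimal, hypothetical sketch (the flag name and behavior are illustrative, not prescribed by this RFC):

    ```python
    import argparse

    # Hypothetical sketch: a command-line switch that gates the backchannel,
    # so it exists only when the agent is explicitly started in test mode.
    parser = argparse.ArgumentParser(description="agent under test")
    parser.add_argument("--test-mode", action="store_true",
                        help="expose the backchannel and disable user prompts")
    args = parser.parse_args(["--test-mode"])  # simulated argv for illustration

    backchannel_enabled = args.test_mode
    assert backchannel_enabled  # in production mode this stays False
    ```

    Conditional compilation, a config file, or separate binaries would serve equally well, as long as ordinary control is preserved in production.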

    "},{"location":"concepts/0270-interop-test-suite/#reference","title":"Reference","text":""},{"location":"concepts/0270-interop-test-suite/#releasing-and-versioning","title":"Releasing and Versioning","text":"

    Defining a release and versioning scheme is important, because the test suite's version is embedded in every interop profile it generates, and people who read test suite output need to reason about whether the results from two different test suites are comparable. By picking the right conventions, we can also avoid a lot of complexity and maintenance overhead.

    The test suite releases implicitly with every merged commit, and is versioned in a semver-compatible way as follows:

    The major version should change rarely, after significant community debate. The minor version should update on a weekly or monthly sort of timeframe as protocols accumulate and evolve in the community--with near-zero release effort by contributors to the test suite. The patch version is updated automatically with every commit. This is a very light process, but it still allows the test suite on Monday and the test suite on Friday to report versions like 1.39.5e22189 and 1.40.c5d8aaf, to know which version of the test suite is later, to know that both versions implement the same contract, and to know that the later version is backwards-compatible with the earlier one.
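    The ordering rules for this scheme can be sketched in a few lines. The function names here are hypothetical; only the MAJOR.MINOR.COMMITHASH shape comes from the scheme above:

    ```python
    # Hypothetical sketch of the versioning rules: MAJOR and MINOR are ordered
    # integers, while the patch component is an unordered commit hash.

    def parse_suite_version(version: str) -> tuple[int, int, str]:
        """Split a suite version like '1.39.5e22189' into its parts."""
        major, minor, commit = version.split(".", 2)
        return int(major), int(minor), commit

    def comparable(a: str, b: str) -> bool:
        """Two runs are comparable when they share the same major version."""
        return parse_suite_version(a)[0] == parse_suite_version(b)[0]

    def later(a: str, b: str) -> str:
        """Return the later of two comparable versions (hashes are unordered)."""
        pa, pb = parse_suite_version(a), parse_suite_version(b)
        return a if pa[:2] >= pb[:2] else b

    assert comparable("1.39.5e22189", "1.40.c5d8aaf")
    assert later("1.39.5e22189", "1.40.c5d8aaf") == "1.40.c5d8aaf"
    ```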

    "},{"location":"concepts/0270-interop-test-suite/#test-naming-and-grouping","title":"Test Naming and Grouping","text":"

    Tests in the test suite are named in a comma-separated form that groups them by protocol, version, role, and behavior, in that order. For example, a test of the holder role in version 1.0 of the issue-credential protocol, that checks to see if the holder sends a proper ack at the end, might be named:

    issue-credential,1.0,holder,sends-final-ack\n

    Because of punctuation, this format cannot be reflected in function names in code, and it also will probably not be reflected in file names in the test suite codebase. However, it provides useful grouping behavior when sorted, and it is convenient for parsing. It lets agents under test declare patterns of relevant tests with wildcards. An agent that supports credential issuance but not holding, and that only supports version 1.1 of the issue-credential protocol, can tell the test suite what's relevant with:

    issue-credential,1.1,issuer,*\n
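    Because the names are plain strings, standard glob matching is enough to select the relevant subset. A minimal sketch (the test names besides those above are invented for illustration):

    ```python
    from fnmatch import fnmatchcase

    # Hypothetical sketch: filtering the suite's test inventory by the wildcard
    # pattern an agent under test declares in its configuration.
    tests = [
        "issue-credential,1.0,holder,sends-final-ack",
        "issue-credential,1.1,issuer,offers-credential",
        "issue-credential,1.1,holder,sends-final-ack",
    ]

    pattern = "issue-credential,1.1,issuer,*"
    relevant = [t for t in tests if fnmatchcase(t, pattern)]
    # relevant == ["issue-credential,1.1,issuer,offers-credential"]
    ```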
    "},{"location":"concepts/0270-interop-test-suite/#interop-profile","title":"Interop Profile","text":"

    The results of a test suite run are represented in a JSON object that looks like this:

    {\n    \"@type\": \"Aries Test Suite Interop Profile v1\",\n    \"suite_version\": \"1.39.5e22189\",\n    \"under_test_name\": \"Aries Static Agent Python\",\n    \"under_test_version\": \"0.9.3\",\n    \"test_time\": \"2019-11-23T18:59:06\", // when test suite launched\n    \"results\": [\n        {\"name\": \"issue-credential,1.0,holder,ignores-spurious-response\", \"pass\": false },\n        {\"name\": \"issue-credential,1.0,holder,sends-final-ack\", \"pass\": true }\n    ]\n}\n
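    Because the profile is plain JSON, consumers can summarize a run with ordinary tooling. A hypothetical sketch (the comment field from the example above is dropped, since JSON itself does not allow comments):

    ```python
    import json

    # Hypothetical sketch: summarizing the pass rate from an interop profile.
    profile_json = """
    {
        "@type": "Aries Test Suite Interop Profile v1",
        "suite_version": "1.39.5e22189",
        "under_test_name": "Aries Static Agent Python",
        "under_test_version": "0.9.3",
        "test_time": "2019-11-23T18:59:06",
        "results": [
            {"name": "issue-credential,1.0,holder,ignores-spurious-response", "pass": false},
            {"name": "issue-credential,1.0,holder,sends-final-ack", "pass": true}
        ]
    }
    """

    profile = json.loads(profile_json)
    passed = sum(1 for r in profile["results"] if r["pass"])
    total = len(profile["results"])
    print(f'{profile["under_test_name"]} {profile["under_test_version"]}: {passed}/{total} passed')
    # prints "Aries Static Agent Python 0.9.3: 1/2 passed"
    ```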
    "},{"location":"concepts/0270-interop-test-suite/#backchannel","title":"Backchannel","text":"

    While the concept of a backchannel has been accepted by the community, there is not alignment with the definition of the backchannel provided here. Rather than maintaining this section as related work in the community evolves the concept, we're adding this note to say \"this section will likely change.\" Once backchannel implementations stabilize with a core definition, we'll refine this section as appropriate.

    The backchannel between test suite and agent under test is managed as a standard DIDComm protocol. The identifier for the message family is X. The messages include:

    "},{"location":"concepts/0270-interop-test-suite/#predefined-inventory","title":"Predefined Inventory","text":"

    TODO: link to the predefined identity for the test suite created by Daniel B, plus the RFC about other predefined DIDs. Any and all of these should be named as possible existing states in the KMS. Other initial states:

    "},{"location":"concepts/0270-interop-test-suite/#unresolved-questions","title":"Unresolved questions","text":""},{"location":"concepts/0270-interop-test-suite/#implementations","title":"Implementations","text":"

    The following lists the implementations (if any) of this RFC. Please do a pull request to add your implementation. If the implementation is open source, include a link to the repo or to the implementation within the repo. Please be consistent in the \"Name\" field so that a mechanical processing of the RFCs can generate a list of all RFCs supported by an Aries implementation.

    Implementation Notes may need to include a link to test results.

    Name / Link Implementation Notes"},{"location":"concepts/0289-toip-stack/","title":"0289: The Trust Over IP Stack","text":""},{"location":"concepts/0289-toip-stack/#summary","title":"Summary","text":"

    This Aries concept RFC introduces a complete architecture for Internet-scale digital trust that integrates cryptographic trust at the machine layer with human trust at the business, legal, and social layers.

    "},{"location":"concepts/0289-toip-stack/#motivations","title":"Motivations","text":"

    The importance of interoperability for the widespread adoption of an information network architecture has been proven by the dramatic rise to dominance of the Internet [1]. A key driver of that rise was the open source implementation of the TCP/IP stack in Version 4.2 of the Berkeley Software Distribution (BSD) of UNIX [2]. This widely-adopted open source implementation of the TCP/IP stack offered the capability for any two peer devices to form a connection and exchange data packets regardless of their local network. In addition, secure protocol suites such as the Secure Sockets Layer (SSL), and its modern version, Transport Layer Security (TLS), have been protecting Internet transactions since 1995.

    Without a doubt, implementations of the TCP/IP stack, followed by SSL/TLS, have driven a tremendous amount of innovation over the last 30 years. However, although protocols such as TLS offer world-class security, the architecture over which they have been built leaves a significant and widely-recognized gap: a means for any peer to establish trust over these digital connections. For example, while TLS does allow a user to trust she is accessing the right website, it does not offer a usable way for the user to log in, or prove her identity, to the website. This gap has often been referred to as \"the Internet's missing identity layer\" [3].

    The purpose of this Aries Concept RFC is to fill this gap by defining a standard information network architecture that developers can implement to establish trusted relationships over digital communications networks.

    "},{"location":"concepts/0289-toip-stack/#architectural-layering-of-the-trust-over-ip-stack","title":"Architectural Layering of the Trust over IP Stack","text":"

    Since the ultimate purpose of an \"identity layer\" is not actually to identify entities, but to facilitate the trust they need to interact, co-author John Jordan coined the term Trust over IP (ToIP) for this stack. Figure 1 is a diagram of its four layers:

    Figure 1: The ToIP stack

    Note that it is actually a \"dual stack\": two parallel stacks encompassing both technology and governance. This reflects the fact that digital trust cannot be achieved by technology alone, but only by humans and technology working together.

    Important: The ToIP stack does not define specific governance frameworks. Rather it is a metamodel for how to design and implement digital governance frameworks that can be universally referenced, understood, and consumed in order to facilitate transitive trust online. This approach to defining governance makes it easier for humans\u2014and the software agents that represent us at Layer Two\u2014to make trust decisions both within and across trust boundaries.

    The ToIP Governance Stack plays a special role in ToIP architecture. See the descriptions of the specialized governance frameworks at each layer and also the special section on Scaling Digital Trust.

    "},{"location":"concepts/0289-toip-stack/#layer-one-public-utilities-for-decentralized-identifiers-dids","title":"Layer One: Public Utilities for Decentralized Identifiers (DIDs)","text":"

    The ToIP stack is fundamentally made possible by new advancements in cryptography and distributed systems, including blockchains and distributed ledgers. Their high availability and cryptographic verifiability enable strong roots of trust that are decentralized so they will not serve as single points of failure.

    "},{"location":"concepts/0289-toip-stack/#dids","title":"DIDs","text":"

    Adapting these decentralized systems to be the base layer of the ToIP stack required a new type of globally unique identifier called a Decentralized Identifier (DID). Starting with a research grant from the U.S. Department of Homeland Security Science & Technology division, the DID specification [4] and the DID Primer [5] were contributed to the W3C Credentials Community Group in June 2017. In September 2019 the W3C launched the DID Working Group to complete the job of turning DIDs into a full W3C standard [6].

    DIDs are defined by an RFC 3986-compliant URI scheme designed to provide four core properties:

    1. Permanence. A DID effectively functions as a Uniform Resource Name (URN) [7], i.e., once assigned to an entity (called the DID subject), a DID is a persistent identifier for that entity that should never be reassigned to another entity.
    2. Resolvability. A DID resolves to a DID document\u2014a data structure (encoded in JSON or other syntaxes) describing the public key(s) and service endpoint(s) necessary to engage in trusted interactions with the DID subject.
    3. Cryptographic verifiability. A DID document contains the cryptographic material that enables the DID subject to prove cryptographic control of the DID.
    4. Decentralization. Because a DID is cryptographically generated and verified, it does not require a centralized registration authority such as those needed for phone numbers, IP addresses, or domain names today.

    Figure 2 shows the resemblance between DID syntax and URN syntax (RFC 8141).

    Figure 2: How DID syntax resembles URN syntax
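    Like a URN, a DID is a colon-delimited URI, so its three parts separate mechanically. A minimal sketch, using the `did:example` DID from the W3C specification (the function name is hypothetical):

    ```python
    # Hypothetical sketch: splitting a DID into its scheme, method name, and
    # method-specific string, per the generic did:<method>:<id> syntax.

    def parse_did(did: str) -> tuple[str, str]:
        scheme, method, method_specific_id = did.split(":", 2)
        if scheme != "did":
            raise ValueError(f"not a DID: {did}")
        return method, method_specific_id

    method, msid = parse_did("did:example:123456789abcdefghi")
    assert method == "example"
    assert msid == "123456789abcdefghi"
    ```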

    "},{"location":"concepts/0289-toip-stack/#did-methods","title":"DID Methods","text":"

    Like the URN specification, the DID specification also defines a generic URI scheme which is in turn used for defining other specific URI schemes. With DIDs, these are called DID methods. Each DID method is defined by its own DID method specification that must include:

    1. The target system (technically called a verifiable data registry) against which the DID method operates. In the ToIP stack this is called a utility. Note that a utility is not required to be implemented as a blockchain or distributed ledger. DID methods can be designed to work with any type of distributed database, file system, or other system that can anchor a cryptographic root of trust.
    2. The DID method name.
    3. The syntax of the DID method-specific string.
    4. The CRUD (Create, Read, Update, Delete) operations for DIDs and DID documents that conform to the specification.

    DIDs have already proved to be a popular solution to decentralized PKI (public key infrastructure) [8]. Over 40 DID methods have already been registered in the informal DID Method Registry [9] hosted by the W3C Credentials Community Group (which the W3C DID Working Group is planning to incorporate into a formal registry as one of its deliverables). The CCG DID Method Registry currently includes methods for:

    "},{"location":"concepts/0289-toip-stack/#utility-governance-frameworks","title":"Utility Governance Frameworks","text":"

    A Layer One public utility may choose any governance model suited to the constraints of its business model, legal model, and technical architecture. This is true whether the public utility is operated as a blockchain, distributed ledger, or decentralized file store, or whether it is permissioned, permissionless, or any hybrid. (Note that even permissionless blockchain networks still have rules\u2014formal or informal\u2014governing who can update the code.)

    All ToIP architecture requires is that the governance model conform to the requirements of the ToIP Governance Stack to support both interoperability and transitive trust. This includes transparent identification of the governance authority, the governance framework, and participant nodes or operators; transparent discovery of nodes and/or service endpoints; and transparent security, privacy, data protection, and other operational policies. See the Governance section below.

    Utility governance frameworks that conform to the ToIP Governance Stack model will support standard roles for all types of utility governance authorities. For example, the roles currently supported by public-permissioned utilities such as those based on Hyperledger Indy include:

    "},{"location":"concepts/0289-toip-stack/#layer-one-support-for-higher-layers","title":"Layer One Support for Higher Layers","text":"

    DIDs and DID documents are not the only cryptographic data structures needed to support the higher layers. Others include:

    In summary, the interoperability of Layer One is currently defined by the W3C DID specification and by Aries RFCs for the other cryptographic data structures listed above. Any DID registry that supports all of these data structures can work with any agent, wallet, and secure data store that operates at Layer Two.

    "},{"location":"concepts/0289-toip-stack/#layer-two-the-didcomm-protocol","title":"Layer Two: The DIDComm Protocol","text":"

    The second layer of the Trust over IP stack is defined by the DIDComm secure messaging standards [10]. This family of specifications, which is now being defined in the DIDComm Working Group at the Decentralized Identity Foundation, establishes a cryptographic means by which any two software agents (peers) can securely communicate either directly edge-to-edge or via intermediate cloud agents, as shown in Figure 3.

    Figure 3: At Layer Two, agents communicate peer-to-peer using DIDComm standards

    "},{"location":"concepts/0289-toip-stack/#peer-dids-and-did-to-did-connections","title":"Peer DIDs and DID-to-DID Connections","text":"

    A fundamental feature of DIDComm is that by default all DID-to-DID connections are established and secured using pairwise pseudonymous peer DIDs as defined in the Peer DID Method Specification [11]. These DIDs are based on key pairs generated and stored by the local cryptographic key management system (KMS, aka \"wallet\") maintained by each agent. Agents then use the DID Exchange protocol to exchange peer DIDs and DID documents in order to establish and maintain secure private connections between each other\u2014including key rotation or revocation as needed during the lifetime of a trusted relationship.

    Because all of the components of peer DIDs and DID-to-DID connections are created, stored, and managed at Layer Two, there is no need for them to be registered in a Layer One public utility. In fact there are good privacy and security reasons not to\u2014these components can stay entirely private to the peers. As a general rule, the only ToIP actors who should need public DIDs at Layer One are:

    1. Credential issuers as explained in Layer Three below.
    2. Governance authorities at any layer as explained in the section on Scaling Digital Trust.

    This also means that, once formed, DID-to-DID connections can be used for any type of secure communications between the peers. Furthermore, these connections are capable of lasting literally forever. There are no intermediary service providers of any kind involved. The only reason a DID-to-DID connection needs to be broken is if one or both of the peers no longer wants it.

    "},{"location":"concepts/0289-toip-stack/#agents-and-wallets","title":"Agents and Wallets","text":"

    At Layer Two, every agent is paired with a digital wallet\u2014or more accurately a KMS (key management system). This KMS can be anything from a very simple static file on an embedded device to a highly sophisticated enterprise-grade key server. Regardless of the complexity, the job of the KMS is to safeguard sensitive data: key pairs, zero-knowledge proof blinded secrets, verifiable credentials, and any other cryptographic material needed to establish and maintain technical trust.

    This job includes the difficult challenge of recovery after a device is lost or stolen or a KMS is hacked or corrupted. This is the province of decentralized key management. For more details, see the Decentralized Key Management System (DKMS) Design and Architecture document [12], and Dr. Sam Smith's paper on KERI (Key Event Receipt Infrastructure) [13].

    "},{"location":"concepts/0289-toip-stack/#secure-data-stores","title":"Secure Data Stores","text":"

    Agents may also be paired with a secure data store\u2014a database with three special properties:

    1. It is controlled exclusively by the DID controller (person, organization, or thing) and not by any intermediary or third party.
    2. All the data is encrypted with private keys in the subject\u2019s KMS.
    3. If a DID controller has more than one secure data store, the set of stores can be automatically synchronized according to the owner\u2019s preferences.

    Work on standardizing secure data stores has been proceeding in several projects in addition to Hyperledger Aries\u2014primarily at the Decentralized Identity Foundation (DIF) and the W3C Credentials Community Group. This has culminated in the formation of the Secure Data Store (SDS) Working Group at DIF.

    "},{"location":"concepts/0289-toip-stack/#guardianship-and-guardian-agentswallets","title":"Guardianship and Guardian Agents/Wallets","text":"

    The ToIP stack cannot become a universal layer for digital trust if it ignores the one-third of the world's population that do not have smartphones or Internet access\u2014or the physical, mental, or economic capacity to use ToIP-enabled infrastructure. This underscores the need for the ToIP stack to robustly support the concept of digital guardianship\u2014the combination of a hosted cloud agent/wallet service and an individual or organization willing to take legal responsibility for managing that cloud agent/wallet on behalf of the person under guardianship, called the dependent.

    For more about all aspects of digital guardianship, see the Sovrin Foundation white paper On Guardianship in Self-Sovereign Identity [14].

    "},{"location":"concepts/0289-toip-stack/#provider-governance-frameworks","title":"Provider Governance Frameworks","text":"

    At Layer Two, governance is needed primarily to establish interoperability testing and certification requirements, including security, privacy, and data protection, for the following roles:

    "},{"location":"concepts/0289-toip-stack/#layer-two-support-for-higher-layers","title":"Layer Two Support for Higher Layers","text":"

    The purpose of Layer Two is to enable peers to form secure DID-to-DID connections so they can:

    1. Issue, exchange, and verify credentials over these connections using the data exchange protocols at Layer Three.
    2. Access the Layer One cryptographic data structures needed to issue and verify Layer Three credentials regardless of the public utility used by the issuer.
    3. Migrate and port ToIP data between agents, wallets, and secure data stores without restriction. This data portability is critical to the broad adoption and interoperability of ToIP.
    "},{"location":"concepts/0289-toip-stack/#layer-three-data-exchange-protocols","title":"Layer Three: Data Exchange Protocols","text":"

    Layer One and Layer Two together enable the establishment of cryptographic trust (also called technical trust) between peers. By contrast, the purpose of Layers Three and Four is to establish human trust between peers\u2014trust between real-world individuals and organizations and the things with which they interact (devices, sensors, appliances, vehicles, buildings, etc.)

    Part of the power of the DIDComm protocol at Layer Two is that it lays the foundation for secure, private agent-to-agent connections that can now \"speak\" any number of data exchange protocols. From the standpoint of the ToIP stack, the most important of these are protocols that support the exchange of verifiable credentials.

    "},{"location":"concepts/0289-toip-stack/#the-verifiable-credentials-data-model","title":"The Verifiable Credentials Data Model","text":"

    After several years of incubation led by Manu Sporny, David Longley, and other members of the W3C Credentials Community Group, the W3C Verifiable Claims Working Group (VCWG) was formed in 2017 and produced the Verifiable Credentials Data Model 1.0 which became a W3C Recommendation in September 2019 [15].

    Figure 4 is a diagram of the three core roles in verifiable credential exchange\u2014often called the \"trust triangle\". For more information see the Verifiable Credentials Primer [16].

    Figure 4: The three primary roles in the W3C Verifiable Credentials Data Model

    The core goal of the Verifiable Credentials standard is to enable us to finally have the digital equivalent of the physical credentials we store in our physical wallets to provide proof of our identity and attributes every day. This is why the presentation of a verifiable credential to a verifier is called a proof\u2014it is both a cryptographic proof and a proof of some set of attributes or relationships a verifier needs to make a trust decision.

    "},{"location":"concepts/0289-toip-stack/#credential-proof-types","title":"Credential Proof Types","text":"

    The Verifiable Credentials Data Model 1.0 supports several different cryptographic proof types:

    1. JSON Web Tokens (JWTs) secured using JSON Web Signatures.
    2. Linked Data Signatures using JSON-LD.
    3. Zero Knowledge Proofs (ZKPs) using Camenisch-Lysyanskaya Signatures.

    All three proof types address specific needs in the market:

    To support all three of these credential proof types in the ToIP stack means:

    "},{"location":"concepts/0289-toip-stack/#credential-exchange-protocols","title":"Credential Exchange Protocols","text":"

    At Layer Three, the exchange of verifiable credentials is performed by agents using data exchange protocols layered over the DIDComm protocol. These data exchange protocol specifications are being published as part of the DIDComm suite [10]. Credential exchange protocols are unique to each credential proof type because the request and response formats are different. The goal of the ToIP technology stack is to standardize all supported credential exchange protocols so that any ToIP-compatible agent, wallet, and secure data store can work with any other agent, wallet, and secure data store.

    With fully interoperable verifiable credentials, any issuer may issue any set of claims to any holder who can then prove them to any verifier. Every verifier can decide which issuers and which claims it will trust. This is a fully decentralized system that uses the same trust triangle as the physical credentials we carry in our physical wallets today. This simple, universal trust model can be adapted to any set of requirements from any trust community. Even better, in most cases it does not require new policies or business relationships. Instead the same policies that apply to existing physical credentials can just be applied to a new, more flexible and useful digital format.

    "},{"location":"concepts/0289-toip-stack/#credential-governance-frameworks","title":"Credential Governance Frameworks","text":"

    Since Layer Three is where the ToIP stack crosses over from technical trust to human trust, this is the layer where governance frameworks become a critical component for interoperability and scalability of digital trust ecosystems. Credential governance frameworks can be used to specify:

    Standard roles that credential governance frameworks can define under the ToIP Governance Stack model include:

    "},{"location":"concepts/0289-toip-stack/#layer-three-support-for-higher-layers","title":"Layer Three Support for Higher Layers","text":"

    Layer Three enables human trust\u2014in the form of verifiable assertions about entities, attributes and relationships\u2014to be layered over the cryptographic trust provided by Layers One and Two. Layer Four is the application ecosystems that request and consume these verifiable credentials in order to support the specific trust models and policies of their own digital trust ecosystem.

    "},{"location":"concepts/0289-toip-stack/#layer-four-application-ecosystems","title":"Layer Four: Application Ecosystems","text":"

    Layer Four is the layer where humans interact with applications in order to engage in trusted interactions that serve a specific business, legal, or social purpose. Just as applications call the TCP/IP stack to communicate over the Internet, applications call the ToIP stack to register DIDs, form connections, obtain and exchange verifiable credentials, and engage in trusted data exchange using the protocols in Layers One, Two, and Three.

    The ToIP stack no more limits the applications that can be built on it than the TCP/IP stack limits the applications that can be built on the Internet. The ToIP stack simply defines the \"tools and rules\"\u2014technology and governance\u2014for those applications to interoperate within digital trust ecosystems that provide the security, privacy, and data protection that their members expect. The ToIP stack also enables the consistent user experience of trust decisions across applications and ecosystems that is critical to achieving widespread trust online\u2014just as a consistent user experience of the controls for driving a car (steering wheel, gas pedal, brakes, turn signals) are critical to the safety of drivers throughout the world.

    "},{"location":"concepts/0289-toip-stack/#ecosystem-governance-frameworks","title":"Ecosystem Governance Frameworks","text":"

    Layer Four is where humans will directly experience the ToIP Governance Stack\u2014specifically the trust marks and policy promises of ecosystem governance frameworks. These specify the purpose, principles, and policies that apply to all governance authorities and governance frameworks operating within that ecosystem\u2014at all four levels of the ToIP stack.

    The ToIP Governance Stack will define standard roles that can be included in an ecosystem governance framework (EGF) including:

    To fully understand the scope and power of ecosystem governance frameworks, let us dive deeper into the special role of the ToIP Governance Stack.

    "},{"location":"concepts/0289-toip-stack/#scaling-digital-trust","title":"Scaling Digital Trust","text":"

    The top half of Figure 5 below shows the basic trust triangle architecture used by verifiable credentials. The bottom half shows a second trust triangle\u2014the governance trust triangle\u2014that can solve a number of problems related to the real-world adoption and scalability of verifiable credentials and the ToIP stack.

    Figure 5: The special role of governance frameworks

    "},{"location":"concepts/0289-toip-stack/#governance-authorities","title":"Governance Authorities","text":"

    The governance trust triangle in Figure 5 represents the same governance model that exists for many of the most successful physical credentials we use every day: passports, driving licenses, credit cards, health insurance cards, etc.

    These credentials are \"backed\" by rules and policies that in many cases have taken decades to evolve. These rules and policies have been developed, published, and enforced by many different types of existing governance authorities\u2014private companies, industry consortia, financial networks, and of course governments.

    The same model can be applied to verifiable credentials simply by having these same governance authorities\u2014or new ones formed explicitly for ToIP governance\u2014publish digital governance frameworks. Any group of issuers who want to standardize, strengthen, and scale the credentials they offer can join together under the auspices of a sponsoring authority to craft a governance framework. No matter the form of the organization\u2014government, consortia, association, cooperative\u2014the purpose is the same: define the business, legal, and technical rules under which the members agree to operate in order to achieve trust.

    This of course is exactly how Mastercard and Visa\u2014two of the world\u2019s largest trust networks\u2014have scaled. Any bank or merchant can verify in seconds that another bank or merchant is a member of the network and thus bound by its rules.

    With the ToIP stack, this governance architecture can be applied to any set of roles and/or credentials, for any trust community, of any size, in any jurisdiction.

    As an historical note, some facets of the ToIP governance stack are inspired by the Sovrin Governance Framework (SGF) [17] developed starting in 2017 by the Sovrin Foundation, the governance authority for the Sovrin public ledger for self-sovereign identity (SSI).

    "},{"location":"concepts/0289-toip-stack/#defining-a-governance-framework","title":"Defining a Governance Framework","text":"

    In addition to the overall metamodel, the ToIP governance stack will provide an architectural model for individual governance frameworks at any level. This enables the components of the governance framework to be expressed in a standard, modular format so they can be easily indexed and referenced both internally and externally from other governance frameworks.

    Figure 6 shows this basic architectural model:

    Figure 6: Anatomy of a governance framework

    "},{"location":"concepts/0289-toip-stack/#discovery-and-verification-of-authoritative-issuers","title":"Discovery and Verification of Authoritative Issuers","text":"

    Verifiers often need to verify that a credential was issued by an authoritative issuer. The ToIP stack will give governance authorities multiple mechanisms for designating their set of authoritative issuers (these options are non-exclusive\u2014they can each be used independently or in any combination):

    1. DID Documents. The governance authority can publish the list of their DIDs in a DID document on one or more public utilities of its choice.
    2. Member Directories. A governance authority can publish a \"whitelist\" of DIDs via a whitelisting service available at a standard service endpoint published in the governance authority\u2019s own DID document.
    3. Credential registries. If search and discovery of authoritative issuers is desired, a governance authority can publish verifiable credentials containing both the DID and additional attributes for each authoritative issuer in a credential registry. Note that in this case the credential registry serves as a separate, cryptographically-verifiable holder of the credential\u2014a holder that is not the subject of the credential, but which can independently prove the validity of the credential.
    4. Verifiable credentials. As shown in Figure 5, the governance authority (or its designated auditors) can issue verifiable credentials to the authoritative issuers, which they in turn can provide directly to verifiers or indirectly via credential holders.
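
    At the verifier, the whitelist and registry mechanisms above reduce to a membership check on the issuer's DID. The following is a minimal sketch, assuming a hypothetical in-memory whitelist (`TRUSTED_ISSUER_DIDS`) standing in for a list resolved from the governance authority's DID document or member directory:

```python
# Hypothetical whitelist; in practice this would be resolved from the
# governance authority's DID document or member-directory service.
TRUSTED_ISSUER_DIDS = {
    "did:example:issuer-university",
    "did:example:issuer-govt-registry",
}

def is_authoritative_issuer(credential: dict) -> bool:
    """Return True if the credential's issuer DID is on the whitelist."""
    issuer = credential.get("issuer")
    # The VC data model allows `issuer` to be either a URI string or an
    # object with an `id` property; normalize both forms.
    issuer_did = issuer.get("id") if isinstance(issuer, dict) else issuer
    return issuer_did in TRUSTED_ISSUER_DIDS

vc = {"issuer": {"id": "did:example:issuer-university"}}
print(is_authoritative_issuer(vc))  # True
```

    Note that this checks only who issued the credential; the cryptographic proof itself must still be verified separately.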
    "},{"location":"concepts/0289-toip-stack/#discovery-and-verification-of-authoritative-verifiers","title":"Discovery and Verification of Authoritative Verifiers","text":"

    Holders often need to verify that a credential was requested by an authoritative verifier, e.g. as part of a \u2018machine readable governance framework\u2019. The ToIP stack will give governance authorities multiple mechanisms for designating their set of authoritative verifiers (these options are non-exclusive\u2014they can be used independently or in any combination):

    1. DID Documents. The governance authority can publish the list of their DIDs in a DID document on one or more verifiable data registries of its choice.
    2. Member Directories. A governance authority can publish a \"whitelist\" of DIDs via a whitelisting service available at a standard service endpoint published in the governance authority\u2019s own DID document.
    3. Credential registries. If search and discovery of authoritative verifiers is desired, a governance authority can publish verifiable credentials containing both the DID and additional attributes for each authoritative verifier in a credential registry. Note that in this case the credential registry serves as a separate, cryptographically-verifiable holder of the credential\u2014a holder that is not the subject of the credential, but which can independently prove the validity of the credential.
    4. Verifiable credentials. Similar to Figure 5, the governance authority (or its designated auditors) can issue verifiable credentials to the authoritative verifiers in the governance framework. Those verifiers can in turn provide proofs directly to holders.
    "},{"location":"concepts/0289-toip-stack/#countermeasures-against-coercion","title":"Countermeasures against coercion","text":"

    The concept of \"self-sovereign\" identity presumes that parties are free to enter a transaction, to share personal and confidential information, and to walk away when requests by the other party are deemed unreasonable or even unlawful. In practice, this is often not the case: \"What do you give an 800-pound gorilla?\" Answer: \"Anything that it asks for\". Examples of such 800-pound gorillas are some big-tech websites, immigration offices, and uniformed individuals alleging to represent law enforcement [20][21]. The typical client-server nature of web transactions also reinforces this power imbalance: the human party behind the client agent feels coerced into surrendering personal data, as otherwise they are denied access to a product, service, or location. A case in point is the infamous cookie wall, where a visitor to a website gets the choice between \"accept all cookies\" and \"go into the maze-without-exit\".

    Governance frameworks may be certified to implement one or more potential countermeasures against different types of coercion. In the case of a machine-readable governance framework, some such countermeasures may be automatically enforced, safeguarding the user from being coerced into action against their own interest. Different governance frameworks may choose different balances between full self-sovereignty and tight control, depending on the interests at play as well as applicable legislation.

    The following are examples of potential countermeasures against coercion. The governance framework can stimulate or enforce that some verifiable credentials are only presented when the holder agent determines that certain requirements are satisfied. When a requirement is not fulfilled, the user is warned about the violation and the holder agent may refuse presentation of the requested verifiable credential.

    1. Require authoritative verifier. Verifiers would need to be authorized within the applicable governance framework; see also the section \u201cDiscovery and Verification of Authoritative Verifiers\u201d.
    2. Require evidence collection. Requests for presentation of verifiable credentials may hold up as evidence in court if the electronic signature on the requests is linked to the verifier in a non-repudiable way.
    3. Require enabling anonymous complaints. The above evidence collection may be compromised if the holder can be uniquely identified from the collected evidence. A governance framework may therefore require the blinding of holder information, as well as of instance-identifiable information about the evidence itself.
    4. Require remote/proxy verification. Verification only has value to a holder if it results in a positive decision by the verifier. Hence a holder should preferably only surrender personal data if doing so warrants a positive decision. If the requested decision is access to a physical facility, this would also save travel; in any case it would prevent unnecessary disclosure of personal data. Some verifiers may consider their decision criteria confidential, so different governance frameworks may choose different balances between holder privacy and verifier confidentiality.
    5. Require complying holder agent. Some rogue holder agents may surrender personal data against the policies of the governance framework associated with that data. Issuers of such data may require verification of the compliance of the holder\u2019s agent before issuing.
    6. Require what-you-know authentication. Holders may be forced to surrender biometric authentication by rogue verifiers as well as some state jurisdictions. This is the reason that many bank apps require \u201cwhat-you-know\u201d authentication next to biometric \u201cwhat-you-are\u201d or device-based \u201cwhat-you-have\u201d authentication. This may be needed even when the user views their own personal data in the app without electronic presentation, as some 800-pound gorillas require watching over the shoulder.
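
    The first countermeasure above can be sketched as a holder-agent policy check. This is an illustrative sketch only; `AUTHORIZED_VERIFIERS` is a hypothetical stand-in for the set of verifier DIDs authorized under the applicable governance framework:

```python
# Hypothetical set of verifier DIDs authorized under the governance
# framework; in practice this would be resolved via one of the verifier
# discovery mechanisms described earlier.
AUTHORIZED_VERIFIERS = {"did:example:border-agency", "did:example:bank"}

def may_present(verifier_did: str, warn=print) -> bool:
    """Decide whether the holder agent should present a credential.

    Warns the user and refuses when the requesting verifier is not
    authorized under the governance framework.
    """
    if verifier_did not in AUTHORIZED_VERIFIERS:
        warn(f"Refusing presentation: {verifier_did} is not an authorized verifier")
        return False
    return True

print(may_present("did:example:bank"))  # True
```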

    "},{"location":"concepts/0289-toip-stack/#interoperability-with-other-governance-frameworks","title":"Interoperability with Other Governance Frameworks","text":"

    The ToIP governance stack is designed to be compatible with\u2014and an implementation vehicle for\u2014national governance frameworks such as the Pan-Canadian Trust Framework (PCTF) [18] being developed through a public/private sector collaboration with the Digital Identity and Authentication Council of Canada (DIACC). It should also interoperate with regional and local governance frameworks of all kinds. For example, the Province of British Columbia (BC) has implemented a ToIP-compatible verifiable credential registry service called OrgBook BC. OrgBook is a holder service for legally registered entities in BC that was built using Indy Catalyst and Hyperledger Aries Cloud Agent - Python. Other provinces such as Ontario and Alberta as well as the Canadian federal government have begun to experiment with these services for business credentials, giving rise to a new kind of network where trust is at the edge. For more information see the VON (Verifiable Organization Network) [19].

    "},{"location":"concepts/0289-toip-stack/#building-a-world-of-interoperable-digital-trust-ecosystems","title":"Building a World of Interoperable Digital Trust Ecosystems","text":"

    The Internet is a network of networks, where the interconnections between each network are facilitated through the TCP/IP stack. The ToIP-enabled Internet is a digital trust ecosystem of digital trust ecosystems, where the interconnections between each digital trust ecosystem are facilitated through the ToIP stack. The boundaries of each digital trust ecosystem are determined by the governance framework(s) under which its members are operating.

    This allows the ToIP-enabled Internet to reflect the same diversity and richness the Internet has today, but with a new ability to form and maintain trust relationships of any kind\u2014personal, business, social, academic, political\u2014at any distance. These trust relationships can cross trust boundaries as easily as IP packets can cross network boundaries today.

    "},{"location":"concepts/0289-toip-stack/#conclusion-a-trust-layer-for-the-internet","title":"Conclusion: A Trust Layer for the Internet","text":"

    The purpose of the ToIP stack is to define a strong, decentralized, privacy-respecting trust layer for the Internet. It leverages blockchain technology and other new developments in cryptography, decentralized systems, cloud computing, mobile computing, and digital governance to solve longstanding problems in establishing and maintaining digital trust.

    This RFC will be updated to track the evolution of the ToIP stack as it is further developed, both through Hyperledger Aries and via other projects at the Linux Foundation. We welcome comments and contributions.

    "},{"location":"concepts/0289-toip-stack/#references","title":"References","text":"
    1. Petros Kavassalis, Richard Jay Solomon, Pierre-Jean Benghozi, The Internet: a Paradigmatic Rupture in Cumulative Telecom Evolution, Industrial and Corporate Change, 1996; accessed September 5 2019.
    2. FreeBSD, What, a real UNIX\u00ae?, accessed September 5, 2019.
    3. Kim Cameron, The Laws of Identity, May 2005; accessed November 2, 2019.
    4. Drummond Reed, Manu Sporny, Markus Sabadello, David Longley, Christopher Allen, Ryan Grant, Decentralized Identifiers (DIDs) v1.0, December 2019; accessed January 24, 2020.
    5. W3C Credentials Community Group, DID Primer, January 2019; accessed July 6, 2019.
    6. W3C DID Working Group, Home Page, September 2019; accessed November 2, 2019.
    7. Uniform Resource Names (URNs), RFC 8141, April 2017; accessed November 2, 2019.
    8. Greg Slepak, Christopher Allen, et al, Decentralized Public Key Infrastructure, December 2015, accessed January 24, 2020.
    9. W3C Credentials Community Group, DID Method Registry, June 2019; accessed July 6, 2019.
    10. Daniel Hardman, DID Communication, January 2019; accessed July 6, 2019.
    11. Daniel Hardman et al, Peer DID Method 1.0 Specification, July 2019; accessed July 6, 2019.
    12. Drummond Reed, Jason Law, Daniel Hardman, Mike Lodder, DKMS Design and Architecture V4, March 2019; accessed November 2, 2019.
    13. Samuel M. Smith, Key Event Receipt Infrastructure (KERI) , July 2019, accessed February 4, 2020.
    14. Sovrin Governance Framework Working Group, On Guardianship in Self-Sovereign Identity, December 2019, accessed April 10, 2020.
    15. Manu Sporny, Grant Noble, Dave Longley, Daniel C. Burnett, Brent Zundel, Verifiable Credentials Data Model 1.0, September 2019; accessed November 2, 2019.
    16. Manu Sporny, Verifiable Credentials Primer, February 2019; accessed July 6, 2019.
    17. Sovrin Foundation, Sovrin Governance Framework V2, March 2019; accessed December 21, 2019.
    18. DIACC, Pan-Canadian Trust Framework, May 2019; accessed July 6, 2019.
    19. Governments of British Columbia, Ontario, and Canada, Verifiable Organizations Network (VON), June 2019; accessed July 6, 2019.
    20. Oskar van Deventer et al, TNO, Netherlands, Self-Sovereign Identity - The Good, The Bad And The Ugly, May 2019.
    21. Oskar van Deventer (TNO), Alexander Blom (Bloqzone), Line Kofoed (Bloqzone), Verify the Verifier - anti-coercion by design, October 2020.
    "},{"location":"concepts/0289-toip-stack/#implementations","title":"Implementations","text":"

    The following lists the implementations (if any) of this RFC. Please do a pull request to add your implementation. If the implementation is open source, include a link to the repo or to the implementation within the repo. Please be consistent in the \"Name\" field so that a mechanical processing of the RFCs can generate a list of all RFCs supported by an Aries implementation.

    Implementation Notes may need to include a link to test results.

    Name / Link Implementation Notes"},{"location":"concepts/0302-aries-interop-profile/","title":"0302: Aries Interop Profile","text":""},{"location":"concepts/0302-aries-interop-profile/#summary","title":"Summary","text":"

    This RFC defines the process for the community of Aries agent builders to:

    \"Agent builders\" are organizations or teams that are developing open source code upon which agents can be built (e.g. aries-framework-dotnet), or deployable agents (e.g. Aries Mobile Agent Xamarin), or commercially available agents.

    An Aries Interop Profile (AIP) version provides a clearly defined set of versions of RFCs for Aries agent builders to target their agent implementation when they wish it to be interoperable with other agents supporting the same Aries Interop Profile version. The Aries Interop Profile versioning process is intended to provide clarity and predictability for Aries agent builders and others in the broader Aries community. The process is not concerned with proposing new, or evolving existing, RFCs, nor with the development of Aries code bases.

    At all times, the Reference section of this RFC defines one or more current Aries Interop Profile versions -- a number and set of links to specific commits of concept and feature RFCs, along with a list of all previous Aries Interop Profile versions. Several current Aries Interop Profile versions can coexist during periods when multiple major Aries Interop Profile versions are in active use (e.g. 1.x and 2.x). Each entry in the previous versions list includes a link to the commit of this RFC associated with that Aries Interop Profile version. The Reference section MAY include one <major>.next version for each existing current major Aries Interop Profile version. Such \"next\" versions are proposals for what is to be included in the next minor AIP version.

    Once a suitably populated Aries test suite is available, each Aries Interop Profile version will include a link to the relevant subset of test cases. The test cases will include only those targeting the specific versions of the concept and feature RFCs in that version of Aries Interop Profile. A process for maintaining the link between the Aries Interop Profile version and the test cases will be defined in this RFC once the Aries test suite is further evolved.

    This RFC includes a section maintained by Aries agent builders listing their Aries agents or agent deployments (whether open or closed source). This list SHOULD include the following information for each listed agent:

    An Aries agent builder SHOULD include an entry in the table per major version supported. Until there is a sufficiently rich test suite that produces linkable results, builders SHOULD link to and maintain a page that summarizes any exceptions and extensions to the agent's AIP support.

    The type of the agent MUST be selected from an enumerated list above the table of builder agents.

    "},{"location":"concepts/0302-aries-interop-profile/#motivation","title":"Motivation","text":"

    The establishment of Aries Interop Profile versions defined by the Aries agent builder community allows the independent creation of interoperable Aries agents by different Aries agent builders. Whether building open or closed source implementations, an agent that aligns with the set of RFC versions listed as part of an Aries Interop Profile version should be interoperable with any other agent built to align with that same version.

    "},{"location":"concepts/0302-aries-interop-profile/#tutorial","title":"Tutorial","text":"

    This RFC MUST contain the current Aries Interop Profile versions as defined by a version number and a set of links to concept and feature RFCs which have been agreed to by a community of Aries agent builders. \"Agreement\" is defined as when the community agrees to merge a Pull Request (PR) to this RFC that affects an Aries Interop Profile version number and/or any of the links to concept and feature RFCs. PRs that do not impact the Aries Interop Profile version number or links can (in general) be merged with less community scrutiny.

    Each link to a concept or feature RFC MUST be to a specific commit of that RFC. RFCs in the list MAY be flagged as deprecated. Linked RFCs that reference external specs or standards MUST refer to as specific a version of the external resource as possible.

    Aries Interop Profile versions SHOULD have a link (or links) to a version (specific commit) of a test suite (or test cases) which SHOULD be used to verify compliance with the corresponding version of Aries Interop Profile. Aries agent builders MAY self-report their test results as part of their entries in the list of agents.

    Aries Interop Profile versions MUST evolve at a pace determined by the Aries agent builder community. This pace SHOULD be at a regular time interval so as to facilitate the independent but interoperable release of Aries Agents. Aries agent builders are encouraged to propose either updates to the list of RFCs supported by Aries Interop Profile through GitHub Issues or via a Pull Request. Such updates MAY trigger a change in the Aries Interop Profile version number.

    All previous versions of Aries Interop Profile MUST be listed in the Previous Versions section of this RFC and MUST include a link to the latest commit of this RFC at the time that version was active.

    A script in the /code folder of this repo can be run to list RFCs within an AIP version that have changed since the AIP version was set. For script usage information run the following from the root of the repo:

    python code/aipUpdates.py --help

    "},{"location":"concepts/0302-aries-interop-profile/#sub-targets","title":"Sub-targets","text":"

    AIP 2.0 is organized into a set of base requirements and additional optional targets. These requirements are listed below. When indicating levels of support for AIP 2.0, subtargets are indicated in this format: AIP 2.0/INDYCRED/MEDIATE, with the subtargets listed in any order.
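
    The notation above can be parsed mechanically. The following is a minimal sketch (the function name is illustrative, not part of any Aries codebase):

```python
def parse_aip_target(target: str) -> tuple[str, set[str]]:
    """Split an AIP support string such as "AIP 2.0/INDYCRED/MEDIATE"
    into its base profile and its (order-insensitive) subtargets."""
    parts = target.split("/")
    base = parts[0].strip()                                   # e.g. "AIP 2.0"
    subtargets = {p.strip() for p in parts[1:] if p.strip()}  # order-insensitive
    return base, subtargets

base, subs = parse_aip_target("AIP 2.0/INDYCRED/MEDIATE")
print(base)          # AIP 2.0
print(sorted(subs))  # ['INDYCRED', 'MEDIATE']
```

    Because subtargets may be listed in any order, they are returned as a set, so `AIP 2.0/MEDIATE/INDYCRED` parses to the same result.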

    Any RFCs within a single AIP version and its subtargets MUST refer to the exact same version of the RFC.

    "},{"location":"concepts/0302-aries-interop-profile/#discover-features-usage","title":"Discover Features Usage","text":"

    AIP targets can be disclosed in the discover-features protocol, using the feature-type of aip. The feature's id is AIP<major>.<minor> for base compatibility, and AIP<major>.<minor>/<subtarget> for subtargets, with each subtarget included individually.

    Example:

    {\n  \"@type\": \"https://didcomm.org/discover-features/2.0/disclosures\",\n  \"disclosures\": [\n    {\n      \"feature-type\": \"aip\",\n      \"id\": \"AIP2.0\",\n    },\n    {\n      \"feature-type\": \"aip\",\n      \"id\": \"AIP2.0/INDYCRED\"\n    }\n  ]\n}\n
    "},{"location":"concepts/0302-aries-interop-profile/#reference","title":"Reference","text":"

    The Aries Interop Profile version number and links to other RFCs in this section SHOULD only be updated with the agreement of the Aries agent builder community. There MAY be multiple active major Aries Interop Profile versions. A list of previous versions of Aries Interop Profile are listed after the current version(s).

    "},{"location":"concepts/0302-aries-interop-profile/#aries-interop-profile-version-10","title":"Aries Interop Profile Version: 1.0","text":"

    The initial version of Aries Interop Profile, based on existing implementations such as aries-cloudagent-python, aries-framework-dotnet, Open Source Mobile Agent, and Streetcred.id's iOS agent. Agents adhering to AIP 1.0 should be able to establish connections, exchange credentials, and complete a connection-less proof-request/proof transaction.

    RFC Type RFC/Link to RFC Version Concept 0003-protocols Concept 0004-agents Concept 0005-didcomm Concept 0008-message-id-and-threading Concept 0011-decorators Concept 0017-attachments Concept 0020-message-types Concept 0046-mediators-and-relays Concept 0047-json-LD-compatibility Concept 0050-wallets Concept 0094-cross-domain messaging Feature 0015-acks Feature 0019-encryption-envelope Feature 0160-connection-protocol Feature 0025-didcomm-transports Feature 0035-report-problem Feature 0036-issue-credential Feature 0037-present-proof Feature 0056-service-decorator"},{"location":"concepts/0302-aries-interop-profile/#changelog-aip-10","title":"Changelog - AIP 1.0","text":"

    The original commit used in the definition of AIP 1.0 was: 64e5e55

    The following clarifications have been made to RFCs that make up AIP 1.0:

    "},{"location":"concepts/0302-aries-interop-profile/#aip-v10-test-suite","title":"AIP v1.0 Test Suite","text":"

    To Do: Link(s) to version(s) of the test suite/test cases applicable to this Aries Interop Profile version.

    "},{"location":"concepts/0302-aries-interop-profile/#aries-interop-profile-version-20","title":"Aries Interop Profile Version: 2.0","text":"

    The following are the goals used in selecting RFC versions for inclusion in AIP 2.0, and the RFCs added as a result of each goal:

    "},{"location":"concepts/0302-aries-interop-profile/#aip-20-changelog-by-pull-requests","title":"AIP 2.0 Changelog by Pull Requests","text":"

    Since approval of the AIP 2.0 profile, the following RFCs have been clarified by updating the commit in the link to the RFC:

    "},{"location":"concepts/0302-aries-interop-profile/#aip-20-changelog-by-clarifications","title":"AIP 2.0 Changelog by Clarifications","text":"

    The original commit used in the definition of AIP 2.0 was: b3a3942ef052039e73cd23d847f42947f8287da2

    The following clarifications have been made to RFCs that make up AIP 2.0. This list excludes commits changed solely because of status changes:

    "},{"location":"concepts/0302-aries-interop-profile/#base-requirements","title":"Base Requirements","text":"RFC Type RFC/Link to RFC Version Note Concept 0003-protocols AIP V1.0, Reformatted Concept 0004-agents AIP V1.0, Unchanged Concept 0005-didcomm AIP V1.0, Minimally Updated Concept 0008-message-id-and-threading AIP V1.0, Updated Concept 0011-decorators AIP V1.0, Updated Concept 0017-attachments AIP V1.0, Updated Concept 0020-message-types AIP V1.0, UpdatedMandates message prefix https://didcomm.org for Aries Protocol messages. Concept 0046-mediators-and-relays AIP V1.0, Minimally Updated Concept 0047-json-LD-compatibility AIP V1.0, Minimally Updated Concept 0050-wallets AIP V1.0, Unchanged Concept 0094-cross-domain messaging AIP V1.0, Updated Concept 0519-goal-codes Feature 0015-acks AIP V1.0, Updated Feature 0019-encryption-envelope AIP V1.0, UpdatedSee envelope note below Feature 0023-did-exchange Feature 0025-didcomm-transports AIP V1.0, Minimally Updated Feature 0035-report-problem AIP V1.0, Updated Feature 0044-didcomm-file-and-mime-types Feature 0048-trust-ping Feature 0183-revocation-notification Feature 0360-use-did-key Feature 0434-outofband Feature 0453-issue-credential-v2 Update to V2 Protocol Feature 0454-present-proof-v2 Update to V2 Protocol Feature 0557-discover-features-v2"},{"location":"concepts/0302-aries-interop-profile/#mediate-mediator-coordination","title":"MEDIATE: Mediator Coordination","text":"RFC Type RFC/Link to RFC Version Note Feature 0211-route-coordination Feature 0092-transport-return-route"},{"location":"concepts/0302-aries-interop-profile/#indycred-indy-based-credentials","title":"INDYCRED: Indy Based Credentials","text":"RFC Type RFC/Link to RFC Version Note Feature 0592-indy-attachments Evolved from AIP V1.0 Concept 0441-present-proof-best-practices"},{"location":"concepts/0302-aries-interop-profile/#ldcred-json-ld-based-credentials","title":"LDCRED: JSON-LD Based Credentials","text":"RFC Type RFC/Link to RFC Version 
    Note Feature 0593-json-ld-cred-attach Feature 0510-dif-pres-exch-attach"},{"location":"concepts/0302-aries-interop-profile/#bbscred-bbs-based-credentials","title":"BBSCRED: BBS+ Based Credentials","text":"RFC Type RFC/Link to RFC Version Note Feature 0593-json-ld-cred-attach Feature 0646-bbs-credentials Feature 0510-dif-pres-exch-attach"},{"location":"concepts/0302-aries-interop-profile/#chat-chat-related-features","title":"CHAT: Chat related features","text":"RFC Type RFC/Link to RFC Version Note Feature 0095-basic-message"},{"location":"concepts/0302-aries-interop-profile/#aip-20-rfcs-removed","title":"AIP 2.0 RFCs Removed","text":"

    [!WARNING] After discussion amongst the Aries implementers, the following RFCs initially in AIP 2.0 have been removed as both never implemented (as far as we know) and/or impractical to implement. Since the RFCs have never been implemented, their removal does not have a practical impact on implementations. Commentary below the table listing the removed RFCs provides the reasoning for the removal of each RFC.

    RFC Type RFC/Link to RFC Version Note Feature 0317-please-ack Removed from AIP 2.0 Feature 0587-encryption-envelope-v2 Removed from AIP 2.0 Feature 0627-static-peer-dids The use of static peer DIDs in Aries has evolved and all AIP 2.0 implementations should be using DID Peer types 4 (preferred), 1 or 2. "},{"location":"concepts/0302-aries-interop-profile/#aip-v20-test-suite","title":"AIP v2.0 Test Suite","text":"

    The Aries Agent Test Harness has a set of tests tagged to exercise AIP 1.0 and AIP 2.0, including the extended targets.

    "},{"location":"concepts/0302-aries-interop-profile/#implementers-note-about-didcomm-envelopes-and-the-accept-element","title":"Implementers Note about DIDComm Envelopes and the ACCEPT element","text":"

    [!WARNING] The following paragraph is struck out as no longer relevant, since the 0587-encryption-envelope-v2 RFC has been removed from AIP 2.0. The upcoming (to be defined) AIP 3.0 will include the transition from DIDComm v1 to the next DIDComm generation, and at that time, the 0587-encryption-envelope-v2 will again be relevant.

    AIP 2.0 contains two RFCs that reference envelopes 0019-encryption-envelope and 0587-encryption-envelope-v2 (links above). The important feature that Aries implementers should understand to differentiate which envelope format can or is being used by an agent is the accept element of the DIDComm service endpoint and the out-of-band invitation message. If the accept element is not present, the agent can only use the RFC 0019-encryption-envelope format. If it is present, the values indicate the envelope format(s) the agent does support. See the RFCs for additional details.

    "},{"location":"concepts/0302-aries-interop-profile/#previous-versions","title":"Previous Versions","text":"

    Will be the version number as a link to the latest commit of this RFC while the version was current.

    "},{"location":"concepts/0302-aries-interop-profile/#aries-agent-builders-and-agents","title":"Aries Agent Builders and Agents","text":"

    A list of agents that claim compatibility with versions of Aries Interop Profile. An entry can be included per agent and per major Aries Interop Profile version.

    The agent type MUST be one of the following:

    Name / Version / Link Agent Type Builder / Link Aries Interop Profile Version Test Results Notes"},{"location":"concepts/0302-aries-interop-profile/#drawbacks","title":"Drawbacks","text":"

    It may be difficult to agree on the exact list of RFCs to support in a given version.

    "},{"location":"concepts/0302-aries-interop-profile/#rationale-and-alternatives","title":"Rationale and alternatives","text":"

    Continuing with the current informal discussions of what agents/frameworks should support and when is an ineffective way of enabling independent building of interoperable agents.

    "},{"location":"concepts/0302-aries-interop-profile/#prior-art","title":"Prior art","text":"

    This is a typical approach to creating an early protocol certification program.

    "},{"location":"concepts/0302-aries-interop-profile/#unresolved-questions","title":"Unresolved questions","text":""},{"location":"concepts/0302-aries-interop-profile/#implementations","title":"Implementations","text":"

    The following lists the implementations (if any) of this RFC. Please do a pull request to add your implementation. If the implementation is open source, include a link to the repo or to the implementation within the repo. Please be consistent in the \"Name\" field so that a mechanical processing of the RFCs can generate a list of all RFCs supported by an Aries implementation.

    Implementation Notes may need to include a link to test results.

    Name / Link Implementation Notes"},{"location":"concepts/0345-community-coordinated-update/","title":"0345: Community Coordinated Update","text":""},{"location":"concepts/0345-community-coordinated-update/#summary","title":"Summary","text":"

    This RFC describes the recommended process for coordinating a community update. This is not a mandate; this process should be adapted as useful to the circumstances of the update being performed.

    "},{"location":"concepts/0345-community-coordinated-update/#motivation","title":"Motivation","text":"

    Occasionally, an update will be needed that requires a coordinated change to be made across the community. These should be rare, but are inevitable. The steps in this process help avoid a coordinated software deployment, where multiple teams must fit a tight timeline of software deployment to avoid compatibility problems. Tightly coordinated software deployments are difficult and problematic, and should be avoided whenever possible.

    "},{"location":"concepts/0345-community-coordinated-update/#tutorial","title":"Tutorial","text":"

    This process describes how to move from OLD to NEW. OLD and NEW represent the required change, where OLD represents the item being replaced, and NEW represents the item OLD will be replaced with. Often, these will be strings.

    In brief, we first accept OLD and NEW while still defaulting to OLD. Then we default to NEW (while still accepting OLD), and then we remove support for OLD. These steps are coordinated with the community with a generous timeline to allow for development cycles and deployment ease.
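The three steps can be sketched as a small state machine at the message boundary. This is a minimal illustration, not part of the RFC: the OLD/NEW strings below (an old-style message type moving to the https://didcomm.org prefix, one real use of this process) and all function names are assumptions of the sketch.

```python
from enum import Enum

# Illustrative values only; the message-type-prefix migration is one example
# application of this process named later in the RFC.
OLD = "did:sov:BzCbsNYhMrjHiqZDTUASHg;spec/trust_ping/1.0/ping"
NEW = "https://didcomm.org/trust_ping/1.0/ping"

class Phase(Enum):
    STEP_1 = 1  # accept both, send OLD by default
    STEP_2 = 2  # accept both, send NEW by default (OLD deprecated)
    STEP_3 = 3  # support for OLD removed; continued use is an error

def normalize_inbound(value: str, phase: Phase) -> str:
    """Convert inbound values to one internal form at the receiving edge,
    so the change logic lives in as few places in the software as possible."""
    if value == OLD:
        if phase is Phase.STEP_3:
            raise ValueError("OLD value is no longer supported")
        return NEW
    return value

def outbound_default(phase: Phase) -> str:
    """Value to send when the other agent's support is unknown."""
    return OLD if phase is Phase.STEP_1 else NEW
```

The key design point the RFC makes is visible here: only `normalize_inbound` and `outbound_default` change between steps, so the rest of the agent always works with the internal (NEW) form.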

    "},{"location":"concepts/0345-community-coordinated-update/#prerequisite-community-agreement-on-change","title":"Prerequisite: Community agreement on change.","text":"

    Before these steps are taken, the community MUST agree on the change to be made.

    "},{"location":"concepts/0345-community-coordinated-update/#step-1-accept-old-and-new","title":"Step 1: Accept OLD and NEW","text":"

    The first step of the process is to accept both OLD and NEW from other agents. Typically, this is done by detecting and converting one string to the other in as few places in the software as possible. This allows the software to use a common value internally, and constrains the change logic to where the values are received.

    OLD should still be sent on outbound communication to other agents.

    During step 1, it is acceptable (but optional) to begin sending NEW when receiving NEW from the other agent. OLD should still be sent by default when the other Agent's support is unknown.

    This step is formalized by writing an RFC detailing which changes are expected in this update. This step is scheduled in the community by including the update RFC in a new version of the Interop Profile and setting a community target date. The schedule should allow a generous time for development, generally between 1 and 3 months.

    Step 1 Coordination: This is the most critical coordination step. The community should have completed step 1 before moving to step 2.

    "},{"location":"concepts/0345-community-coordinated-update/#step-2-default-to-new","title":"Step 2: Default to NEW","text":"

    The second step changes the outbound value in use from OLD to NEW. Communication will not break with agents who have completed Step 1.

    OLD must still be accepted during step 2. OLD becomes deprecated.

    During step 2, it is acceptable (but optional) to keep sending OLD when receiving OLD from the other agent. NEW should still be sent by default when the other Agent's support is unknown.

    This step is formalized by writing an RFC detailing which changes are expected in this update. This step is scheduled by including the update RFC in a new version of the Interop Profile and setting a community target date. The schedule should allow a generous time for development, generally between 1 and 3 months.

    Step 2 Coordination: The community should complete step 2 before moving to step 3 to assure that OLD is no longer being sent prior to removing support.

    "},{"location":"concepts/0345-community-coordinated-update/#step-3-remove-support-for-old","title":"Step 3: Remove support for OLD.","text":"

    Software will be updated to remove support for OLD. Continued use is expected to result in a failure or error, as appropriate.

    This step is formalized by writing an RFC detailing which changes are expected in this update. Upon acceptance of the RFC, OLD is considered invalid. At this point, nobody should be sending OLD.

    Step 3 Coordination: The deadline for step 3 is less important than the previous steps, and may be scheduled at the convenience of each development team.

    "},{"location":"concepts/0345-community-coordinated-update/#reference","title":"Reference","text":"

    This process should only be used for changes that are not detectable via the Discover Features protocol, either because the Discover Features Protocol cannot yet be run or the Discover Features Protocol does not reveal the change.

    "},{"location":"concepts/0345-community-coordinated-update/#changes-not-applicable-to-this-process","title":"Changes NOT applicable to this process","text":"

    Any changes that can be handled by increasing the version of a protocol should do so. The new version can be scheduled via Interop Profile directly without this process.

    Example proper applications of this process include switching the base common Message Type URI, and DID Doc Service Types.

    "},{"location":"concepts/0345-community-coordinated-update/#pace","title":"Pace","text":"

    The pace for Steps 1 and 2 should be appropriate for the change in question, but should allow generous time for developer scheduling, testing, and production deployment schedules. App store approval processes sometimes take a bit of time. A generous time allowance eases the burden of implementing the change.

    "},{"location":"concepts/0345-community-coordinated-update/#drawbacks","title":"Drawbacks","text":"

    This approach invites the drawbacks of sanity, unpanicked deployments, and steady forward community progress.

    "},{"location":"concepts/0345-community-coordinated-update/#rationale-and-alternatives","title":"Rationale and alternatives","text":""},{"location":"concepts/0345-community-coordinated-update/#prior-art","title":"Prior art","text":"

    This process was discussed in Issue 318 and in person at the 2019 December Aries Connectathon.

    "},{"location":"concepts/0345-community-coordinated-update/#unresolved-questions","title":"Unresolved questions","text":""},{"location":"concepts/0345-community-coordinated-update/#implementations","title":"Implementations","text":"

    The following lists the implementations (if any) of this RFC. Please do a pull request to add your implementation. If the implementation is open source, include a link to the repo or to the implementation within the repo. Please be consistent in the \"Name\" field so that a mechanical processing of the RFCs can generate a list of all RFCs supported by an Aries implementation.

    Implementation Notes may need to include a link to test results.

    Name / Link Implementation Notes"},{"location":"concepts/0346-didcomm-between-two-mobile-agents/","title":"0346: DIDComm Between Two Mobile Agents Using Cloud Agent Mediator","text":""},{"location":"concepts/0346-didcomm-between-two-mobile-agents/#summary","title":"Summary","text":"

    Explains how one mobile edge agent can send messages to another mobile edge agent through cloud agents. The sender edge agent also determines the route of the message. The recipient, on the other hand, can consume messages at its own pace and time.

    "},{"location":"concepts/0346-didcomm-between-two-mobile-agents/#motivation","title":"Motivation","text":"

    DIDComm between two mobile edge agents should be easy and intuitive for a beginner to visualize and to implement.

    "},{"location":"concepts/0346-didcomm-between-two-mobile-agents/#scenario","title":"Scenario","text":"

    Alice sends a connection request message to Bob and Bob sends back an acceptance response. For simplicity's sake, we will only consider the cloud agents in play while sending and receiving a message for Alice.

    "},{"location":"concepts/0346-didcomm-between-two-mobile-agents/#cloud-agent-registration-process","title":"Cloud Agent Registration Process","text":"

    A registration process is necessary for an edge agent to discover cloud agents that it can use to send messages through. Cloud agents in the simplest form are routers hosted as a web application that solves the problem of availability by providing a persistent IP address. The web server has a wallet of its own storing its private key as a provisioning record, along with any information needed to forward messages to other agents. Alice wants to accept a connection invitation from Bob, but before doing so she needs to register herself with one or more cloud agents. The more cloud agents she registers with, the more cloud agents she can use to transport her message to Bob. To register herself with a cloud agent, she visits the cloud agent's website and simply scans a QR code.

    The cloud agent registration invite looks like the following:

    {\u200b\n    \"@type\": \"https://didcomm.org/didexchange/1.0/cloudagentregistrationinvitation\",\u200b\n    \"@id\": \"12345678900987654321\",\u200b\n    \"label\": \"CloudAgentA\",\u200b\n    \"recipientKeys\": [\"8HH5gYEeNc3z7PYXmd54d4x6qAfCNrqQqEB3nS7Zfu7K\"],\u200b\n    \"serviceEndpoint\": \"https://cloudagenta.com/endpoint\",\n    \"responseEndpoint\": \"https://cloudagenta.com/response\", \n    \"consumer\": \"b1004443feff4f3cba25c45ef35b492c\",\n    \"consumerEndpoint\" : \"https://cloudagenta.com/consume\"\u200b\n}\u200b\n

    The registration data is base64url-encoded and is added to a link as part of the c_a_r query param. The recipient key is the public key of \"Cloud Agent A\". The service endpoint is where the edge agent should send messages to. The response endpoint is where a response being sent to Alice should be delivered; for example, if Bob wants to send a message to Alice, he should send it to the response endpoint. The consumer endpoint is where Alice's edge agent should consume the messages that are sent to her. The \"consumer\" value is an identifier used by Cloud Agent \"A\" to identify Alice's edge agent. This identifier is different for each cloud agent and hence carries low correlation risk. Each time an invitation QR code is generated, a new consumer id is generated. No acknowledgment is required between the edge agent and the cloud agent, as a generated consumer id is never repeated.
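Since the invitation is just base64url-encoded JSON carried in the c_a_r query parameter, encoding and decoding can be sketched as below. The helper names and the base URL are assumptions for illustration; only the c_a_r parameter name comes from the RFC.

```python
import base64
import json
from urllib.parse import parse_qs, urlencode, urlparse

def encode_invitation(invitation: dict, base_url: str) -> str:
    """Serialize the registration invitation into a c_a_r query parameter."""
    raw = json.dumps(invitation).encode("utf-8")
    # base64url without padding, as is conventional for URL-embedded payloads
    encoded = base64.urlsafe_b64encode(raw).decode().rstrip("=")
    return base_url + "?" + urlencode({"c_a_r": encoded})

def decode_invitation(url: str) -> dict:
    """Recover the invitation dict from a registration link."""
    param = parse_qs(urlparse(url).query)["c_a_r"][0]
    padded = param + "=" * (-len(param) % 4)  # restore stripped padding
    return json.loads(base64.urlsafe_b64decode(padded))
```

The edge agent would run `decode_invitation` on the URL obtained from the scanned QR code and store the resulting endpoints and key as wallet records.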

    All the endpoint data and the public keys of the cloud agents are then stored as non-secret records in Alice's wallet with a tag \"cloud-agent\".

    "},{"location":"concepts/0346-didcomm-between-two-mobile-agents/#how-connection-request-from-alice-flows-to-bob","title":"How connection request from Alice flows to Bob","text":"

    When Alice scans Bob's QR code invitation, her edge agent starts preparing the connection request message. It first queries the wallet record service for records tagged with \"cloud-agent\" and puts them in a list. The edge agent then randomly chooses one from the list (say Cloud Agent \"A\") and creates a new list without the chosen cloud agent. Alice's edge agent creates the connection request message JSON and sets the service endpoint to the chosen cloud agent's response endpoint together with its consumer id.

    \"serviceEndpoint\": \"https://cloudagenta.com/response/b1004443feff4f3cba25c45ef35b492c\"\n

    It then packs this message with Bob's recipient key and creates another JSON message structure like the one below, using the forward message type

    {\u200b\n    \"@type\": \"https://didcomm.org/routing/1.0/forward\",\u200b\n    \"@id\": \"12345678900987654321\",\u200b\n    \"msg\": \"<Encrypted message for Bob here>\",\n    \"to\": \"<Service endpoint of Bob>\"\u200b\n}\u200b\n

    It then packs it with the public key of cloud agent \"A\".

    Now it randomly chooses a cloud agent from the new list and repeats the process of wrapping the message in a forward request.

    For example, say the next random cloud agent that it chooses is Cloud Agent \"C\". So now it creates another message forward json structure as below

    {\u200b\n    \"@type\": \"https://didcomm.org/routing/1.0/forward\",\u200b\n    \"@id\": \"12345678900987654321\",\u200b\n    \"msg\": \"<Encrypted message for Cloud Agent A>\",\n    \"to\": \"<Service endpoint of Cloud Agent A>\"\u200b\n}\u200b\n
    It then packs this with Cloud Agent \"C\"'s public key.

    This process repeats until the list of cloud agents is exhausted, and the message is then sent to the service endpoint of the last cloud agent chosen (say Cloud Agent \"B\"). For example, the message could have randomly been packed for the path B->C->A, where A is one of Bob's cloud agents that stores the message on the distributed log.
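The layered wrapping described above can be sketched as follows. This is an illustrative sketch only: `pack()` is a stand-in for real DIDComm pack/encryption, and the record field names (`recipientKey`, `serviceEndpoint`) are assumptions; the forward message shape (`@type`, `msg`, `to`) comes from the examples above.

```python
import json
import random

def pack(message: dict, recipient_key: str) -> str:
    """Stand-in for DIDComm pack(); a real agent would authcrypt to recipient_key."""
    return json.dumps({"recipient": recipient_key, "payload": message})

def build_route(packed_for_bob: str, bob_endpoint: str, cloud_agents: list) -> tuple:
    """Wrap an already-packed message in one forward layer per cloud agent,
    in random order, so each hop learns only the next hop."""
    payload, to = packed_for_bob, bob_endpoint
    for agent in random.sample(cloud_agents, k=len(cloud_agents)):
        forward = {
            "@type": "https://didcomm.org/routing/1.0/forward",
            "msg": payload,
            "to": to,
        }
        payload = pack(forward, agent["recipientKey"])
        to = agent["serviceEndpoint"]
    # Send `payload` to `to`, the service endpoint of the last cloud agent chosen.
    return to, payload
```

Because the layers are built innermost-first, the outermost envelope is always addressed to the last randomly chosen agent, matching the B->C->A example in the text.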

    "},{"location":"concepts/0346-didcomm-between-two-mobile-agents/#message-forwarding-process-by-cloud-agents","title":"Message Forwarding process by cloud agents","text":"

    When the message reaches Cloud Agent \"B\", it is first unpacked with Cloud Agent \"B\"'s private key. The agent then sees that the message type is \"forward\" and processes the message by taking the value of the \"msg\" attribute in the decrypted JSON and sending it to the URI in the \"to\" attribute.

    Thus Cloud Agent \"B\" unpacks the message and forwards it to Cloud Agent \"C\", who then again unpacks and forwards it to Cloud Agent \"A\". Cloud Agent \"A\" ultimately unpacks and forwards it to Bob's edge agent. (For simplicity's sake, we are not describing how the message reaches Bob through Bob's registered cloud agents.)

    "},{"location":"concepts/0346-didcomm-between-two-mobile-agents/#bob-returns-a-response-back","title":"Bob returns a response back","text":"

    When Bob receives the connection request message from Alice, he creates a connection acceptance response and sends it back to Alice at her service endpoint, which is

    \"serviceEndpoint\": \"https://cloudagenta.com/response/b1004443feff4f3cba25c45ef35b492c\"\n

    For simplicity's sake, we are not describing how the message ends up at the above endpoint from Bob after multiple routing hops through Bob's cloud agents. When the message actually arrives at the service endpoint specified by Alice, which is the response endpoint of Cloud Agent \"A\", the cloud agent simply stores it in a distributed log (NEEDS A LINK TO KAFKA INBOX RFC) using the consumer id as a key.

    "},{"location":"concepts/0346-didcomm-between-two-mobile-agents/#alice-consumes-connection-accepted-response-from-bob","title":"Alice consumes connection accepted response from Bob","text":"

    Alice's edge agent periodically checks the consumer endpoint of all the cloud agents it has registered with. For each cloud agent, Alice passes the unique consumer id that was used in registration so that the cloud agent can return the correct messages. When it does the same for Cloud Agent \"A\", it simply consumes the message from the distributed log.
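The polling loop above can be sketched as a small routine. All names here are assumptions for illustration; `fetch` is injected (e.g. an HTTP GET returning a list of stored messages) so the sketch stays transport-agnostic.

```python
def poll_inbox(registrations: list, fetch) -> list:
    """Check each registered cloud agent's consumer endpoint for stored messages.

    `registrations` are the non-secret wallet records tagged "cloud-agent";
    each holds that agent's consumer endpoint and per-agent consumer id.
    """
    messages = []
    for reg in registrations:
        # The consumer id keys the distributed log at this cloud agent.
        url = f"{reg['consumerEndpoint']}/{reg['consumer']}"
        messages.extend(fetch(url))
    return messages
```

Because each cloud agent sees a different consumer id, an observer of any single endpoint cannot correlate Alice's registrations across cloud agents.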

    "},{"location":"concepts/0346-didcomm-between-two-mobile-agents/#drawbacks-and-alternatives","title":"Drawbacks and Alternatives","text":"

    In other suggested message routing protocols, Alice would provide a list of routing keys and the endpoint of the first hop in the chain of cloud agents. That gives Alice confidence that Bob is forced to use the path she has provided. The proposed routing in this RFC lacks that confidence. In contrast, routing with a list of routing keys requires a lot of overhead to set up before establishing a connection. This proposed routing simplifies that overhead and provides more flexibility.

    "},{"location":"concepts/0346-didcomm-between-two-mobile-agents/#related-art","title":"Related art","text":"

    Aries RFC 0046: Mediators and Relays

    "},{"location":"concepts/0346-didcomm-between-two-mobile-agents/#prior-art","title":"Prior art","text":""},{"location":"concepts/0346-didcomm-between-two-mobile-agents/#unresolved-questions","title":"Unresolved questions","text":"

    Does separation of a \"service endpoint\" and \"Consumer endpoint\" provide a point of correlation that can be avoided by handling all messages through a single service endpoint?

    Could a cloud agent have its own pool of servers that looks into a registry of servers, randomly chooses an entry node, an exit node, and a number of hops, and passes the message along, with the exit node then passing the message to the next cloud agent?

    "},{"location":"concepts/0346-didcomm-between-two-mobile-agents/#implementations","title":"Implementations","text":"

    The following lists the implementations (if any) of this RFC. Please do a pull request to add your implementation. If the implementation is open source, include a link to the repo or to the implementation within the repo. Please be consistent in the \"Name\" field so that a mechanical processing of the RFCs can generate a list of all RFCs supported by an Aries implementation.

    Implementation Notes may need to include a link to test results.

    Name / Link Implementation Notes"},{"location":"concepts/0420-rich-schemas-common/","title":"0420: Rich Schema Objects Common","text":""},{"location":"concepts/0420-rich-schemas-common/#summary","title":"Summary","text":"

    A low-level description of the components of an anonymous credential ecosystem that supports rich schemas, W3C Verifiable Credentials and Presentations, and correspondingly rich presentation requests.

    Please see 0250: Rich Schema Objects for high-level description.

    This RFC provides more low-level description of Rich Schema objects defining how they are identified and referenced. It also defines a general template and common part for all Rich Schema objects.

    "},{"location":"concepts/0420-rich-schemas-common/#motivation","title":"Motivation","text":"

    Please see 0250: Rich Schema Objects for use cases and high-level description of why Rich Schemas are needed.

    This RFC serves as a low-level design of common parts between all Rich Schema objects, and can help developers to properly implement Rich Schema transactions on the Ledger and the corresponding client API.

    "},{"location":"concepts/0420-rich-schemas-common/#tutorial-general-principles","title":"Tutorial: General Principles","text":"

    By Rich Schema objects we mean all objects related to the Rich Schema concept (Context, Rich Schema, Encoding, Mapping, Credential Definition, Presentation Definition).

    Let's discuss a number of items common to all Rich Schema objects.

    "},{"location":"concepts/0420-rich-schemas-common/#components-and-repositories","title":"Components and Repositories","text":"

    The complete architecture for every Rich Schema object involves the following separate components:
    
    - aries-vdri: This is the location of the aries-verifiable-data-registry-interface. Changes to this code will enable users of any data registry with an aries-vdri-compatible data manager to handle Rich Schema objects.
    - Specific Verifiable Data Registry implementation (for example, indy-vdr). It needs to comply with the interface described by the aries-verifiable-data-registry-interface and is built to plug in to the aries ecosystem. It contains the code to communicate with a specific data registry (ledger).

    "},{"location":"concepts/0420-rich-schemas-common/#immutability-of-rich-schema-objects","title":"Immutability of Rich Schema Objects","text":"

    The following Rich Schema objects are immutable: - Context - Rich Schema - Encoding - Mapping

    The following Rich Schema objects can be mutable: - Credential Definition - Presentation Definition

    Credential Definition and Presentation Definition should be immutable in most of the cases, but some applications may consider them as mutable objects.

    Credential Definition can be considered as a mutable object since the Issuer may rotate keys present there. However, rotation of Issuer's keys should be done carefully as it will invalidate all credentials issued for this key.

    Presentation Definition can be considered as a mutable object since restrictions to Issuers, Schemas and Credential Definitions to be used in proof may evolve. For example, Issuer's key for a given Credential Definition may be compromised, so Presentation Definition can be updated to exclude this Credential Definition from the list of recommended ones.

    Please note that some ledgers (Indy Ledger, for example) have configurable auth rules which allow restrictions on the mutability of particular objects, so it can be up to applications and network administrators to decide whether Credential Definition and Presentation Definition are mutable.

    "},{"location":"concepts/0420-rich-schemas-common/#identification-of-rich-schema-objects","title":"Identification of Rich Schema Objects","text":"

    The suggested identification scheme allows any Rich Schema object to have a unique identifier. The DID's method name (for example, did:sov) makes it possible to identify Rich Schema objects with equal content within different data registries (ledgers).

    "},{"location":"concepts/0420-rich-schemas-common/#referencing-rich-schema-objects","title":"Referencing Rich Schema Objects","text":""},{"location":"concepts/0420-rich-schemas-common/#relationship","title":"Relationship","text":"

    A presentation definition may use only a subset of the attributes of a schema.

    "},{"location":"concepts/0420-rich-schemas-common/#usage-of-json-ld","title":"Usage of JSON-LD","text":"

    The following Rich Schema objects must be in JSON-LD format: - Schema - Mapping - Presentation Definition

    Context object can also be in JSON-LD format.

    If a Rich Schema object is a JSON-LD object, the content's @id field must be equal to the id.

    More details about JSON-LD usage may be found in the RFCs for specific rich schema objects.

    "},{"location":"concepts/0420-rich-schemas-common/#how-rich-schema-objects-are-stored-in-the-data-registry","title":"How Rich Schema objects are stored in the Data Registry","text":"

    Any write request for Rich Schema object has the same fields:

    'id': <Rich Schema object's ID>                # DID string \n'content': <Rich Schema object as JSON>        # JSON-serialized string\n'rs_name': <rich schema object name>           # string\n'rs_version': <rich schema object version>     # string\n'rs_type': <rich schema object type>           # string enum (currently one of `ctx`, `sch`, `map`, `enc`, `cdf`, `pdf`)\n'ver': <format version>                        # string                              \n
    - id is a unique ID (for example, a DID with an id-string being the base58 representation of the SHA2-256 hash of the content field)
    - The content field here contains a Rich Schema object in JSON-LD format (see 0250: Rich Schema Objects). It's passed and stored as-is. The content field must be serialized in the canonical form. The canonicalization scheme we recommend is the IETF draft JSON Canonicalization Scheme (JCS).
    - metadata contains additional fields which can be used for human-readable identification
    - ver defines the version of the format. It defines what fields and metadata are there, how id is generated, what hash function is used there, etc.
    - Author's and Endorser's DIDs are also passed as common metadata fields for any Request.
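The id derivation described above (base58 of the SHA2-256 hash of the canonical content) can be sketched as follows. This is an illustrative sketch: the `did:example` method and the helper names are assumptions, and compact key-sorted JSON is used here only as an approximation of full JCS canonicalization.

```python
import hashlib
import json

B58_ALPHABET = "123456789ABCDEFGHJKLMNPQRSTUVWXYZabcdefghijkmnopqrstuvwxyz"

def b58encode(data: bytes) -> str:
    """Minimal base58 encoder (Bitcoin alphabet)."""
    n = int.from_bytes(data, "big")
    out = ""
    while n:
        n, r = divmod(n, 58)
        out = B58_ALPHABET[r] + out
    pad = len(data) - len(data.lstrip(b"\x00"))
    return "1" * pad + out

def rich_schema_id(content: dict, method: str = "example") -> str:
    """Derive a DID whose id-string is base58(SHA2-256(canonical content))."""
    # Approximation of JCS: compact separators, sorted keys.
    canonical = json.dumps(content, separators=(",", ":"), sort_keys=True)
    digest = hashlib.sha256(canonical.encode("utf-8")).digest()
    return f"did:{method}:{b58encode(digest)}"
```

Because the hash is taken over canonicalized content, two ledgers storing the same object derive the same id-string, which is the cross-registry property discussed later in this RFC.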

    If a Rich Schema object is a JSON-LD object, the content's @id field must be equal to the id.

    "},{"location":"concepts/0420-rich-schemas-common/#querying-rich-schema-objects-from-the-data-registry","title":"Querying Rich Schema objects from the Data Registry","text":"

    The following information is returned from the Ledger in a reply for any get request of a Rich Schema object:

    'id': <Rich Schema object's ID>              # DID string \n'content': <Rich Schema object as JSON>      # JSON-serialized string\n'rs_name': <rich schema object name>         # string\n'rs_version': <rich schema object version>   # string\n'rs_type': <rich schema object type>         # string enum (currently one of `ctx`, `sch`, `map`, `enc`, `cdf`, `pdf`)\n'ver': <format version>                      # string\n'from': <author DID>,                        # DID string\n'endorser': <endorser DID>,                  # DID string\n

    Common fields specific to a Ledger are also returned.

    "},{"location":"concepts/0420-rich-schemas-common/#aries-data-registry-interface","title":"Aries Data Registry Interface","text":"

    We can have a unified API to write and read Rich Schema objects from a Data Registry. Three methods are sufficient to handle all Rich Schema types:
    
    - write_rich_schema_object
    - read_rich_schema_object_by_id
    - read_rich_schema_object_by_metadata

    "},{"location":"concepts/0420-rich-schemas-common/#write_rich_schema_object","title":"write_rich_schema_object","text":"

    Writes a Rich Schema object to the ledger.\n\n#Params\nsubmitter: information about submitter\ndata: {\n    id: Rich Schema object's unique ID for example a DID with an id-string being\n        base58 representation of the SHA2-256 hash of the `content` field),\n    content: Rich Schema object as a JSON or JSON-LD string,\n    rs_name: Rich Schema object name,\n    rs_version: Rich Schema object version,\n    rs_type: Rich schema object type's enum string (currently one of `ctx`, `sch`, `map`, `enc`, `cdf`, `pdf`),\n    ver: the version of the generic object template\n},\nregistry: identifier for the registry\n\n#Returns\nregistry_response: result as json,\nerror: {\n    code: aries common error code,\n    description:  aries common error description\n}\n
    The combination of rs_type, rs_name, and rs_version must be unique among all rich schema objects on the ledger.

    "},{"location":"concepts/0420-rich-schemas-common/#read_rich_schema_object_by_id","title":"read_rich_schema_object_by_id","text":"
    Reads a Rich Schema object from the ledger by its unique ID.\n\n#Params\nsubmitter (optional): information about submitter\ndata: {\n    id: Rich Schema object's ID (as a DID for example),\n    ver: the version of the generic object template\n},\nregistry: identifier for the registry\n\n#Returns\nregistry_response: result as json,\nerror: {\n    code: aries common error code,\n    description:  aries common error description\n}\n
    "},{"location":"concepts/0420-rich-schemas-common/#read_rich_schema_object_by_metadata","title":"read_rich_schema_object_by_metadata","text":"
    Reads a Rich Schema object from the ledger by its unique combination of (name, version, type)\n\n#Params\nsubmitter (optional): information about submitter\ndata: {\n    rs_name: Rich Schema object name,\n    rs_version: Rich Schema object version,\n    rs_type: Rich schema object type's enum string (currently one of `ctx`, `sch`, `map`, `enc`, `cdf`, `pdf`),\n    ver: the version of the generic object template\n},\nregistry: identifier for the registry\n\n#Returns\nregistry_response: result as json,\nerror: {\n    code: aries common error code,\n    description:  aries common error description\n}\n
    "},{"location":"concepts/0420-rich-schemas-common/#reference","title":"Reference","text":""},{"location":"concepts/0420-rich-schemas-common/#drawbacks","title":"Drawbacks","text":"

    Rich schema objects introduce more complexity.

    "},{"location":"concepts/0420-rich-schemas-common/#rationale-and-alternatives","title":"Rationale and alternatives","text":""},{"location":"concepts/0420-rich-schemas-common/#rich-schema-object-id","title":"Rich Schema object ID","text":"

    There are several options for how a Rich Schema object can be identified: a DID unique for each Rich Schema; a DID URL with the origin (issuer's) DID as a base; a DID URL with a unique (not issuer-related) DID as a base; or a UUID or other unique ID.

    A UUID doesn't provide global resolvability: we cannot tell which ledger a Rich Schema object belongs to by looking at its UUID.

    A DID or DID URL gives persistence, global resolvability, and decentralization: we can resolve the DID and determine which ledger the Rich Schema object belongs to. We can also see that objects with the same id-string on different ledgers are the same object (if the id-string is calculated against a canonicalized hash of the content).
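The content-derived id-string described above can be sketched as follows. The DID method name `example` and the helper names are assumptions; the base58 alphabet is the standard Bitcoin-style one, implemented inline since it is not in the standard library.

```python
import hashlib

BASE58 = "123456789ABCDEFGHJKLMNPQRSTUVWXYZabcdefghijkmnopqrstuvwxyz"

def b58encode(data: bytes) -> str:
    # Minimal base58 encoding; leading zero bytes become leading '1's.
    n = int.from_bytes(data, "big")
    out = ""
    while n:
        n, r = divmod(n, 58)
        out = BASE58[r] + out
    return "1" * (len(data) - len(data.lstrip(b"\x00"))) + out

def rich_schema_id(content: str, method: str = "example") -> str:
    # id-string = base58(SHA2-256(content)); canonicalization of `content`
    # is assumed to have happened upstream.
    digest = hashlib.sha256(content.encode()).digest()
    return f"did:{method}:{b58encode(digest)}"
```

Because the id-string is a pure function of the (canonicalized) content, the same object stored on two different ledgers yields the same id-string, which is exactly the sameness property argued for above.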

    However, Rich Schema DIDs lack the cryptographic verifiability property of common DIDs; in the general case they are DIDs not associated with keys. Such a DID belongs to no person, organization, or thing.

    Using the Issuer's DID (origin DID) as a base for a DID URL may be too Indy-specific, as other ledgers may not have an Issuer DID. It also ties a Rich Schema object to an Issuer belonging to a particular ledger.

    We therefore propose using a unique DID for each Rich Schema object, as it gives a more natural way to identify an entity in the distributed-ledger world.

    "},{"location":"concepts/0420-rich-schemas-common/#rich-schema-object-as-did-doc","title":"Rich Schema object as DID DOC","text":"

    If Rich Schema objects are identified by a unique DID, then a natural question is whether each Rich Schema object needs to be presented as a DID DOC and resolved by a DID in a generic way.

    We do not require Rich Schema objects to be defined as DID DOCs for now. We may reconsider this in the future once the DID DOC format is finalized.

    "},{"location":"concepts/0420-rich-schemas-common/#unresolved-questions","title":"Unresolved questions","text":""},{"location":"concepts/0420-rich-schemas-common/#implementations","title":"Implementations","text":"

    The following lists the implementations (if any) of this RFC. Please do a pull request to add your implementation. If the implementation is open source, include a link to the repo or to the implementation within the repo. Please be consistent in the \"Name\" field so that a mechanical processing of the RFCs can generate a list of all RFCs supported by an Aries implementation.

    Name / Link Implementation Notes"},{"location":"concepts/0430-machine-readable-governance-frameworks/","title":"Aries RFC 0430: Machine-Readable Governance Frameworks","text":""},{"location":"concepts/0430-machine-readable-governance-frameworks/#summary","title":"Summary","text":"

    Explains how governance frameworks are embodied in formal data structures, so it's possible to react to them with software, not just with human intelligence.

    "},{"location":"concepts/0430-machine-readable-governance-frameworks/#motivation","title":"Motivation","text":"

    We need to be able to write software that reacts to arbitrary governance frameworks in standard ways. This will allow various desirable features.

    "},{"location":"concepts/0430-machine-readable-governance-frameworks/#tutorial","title":"Tutorial","text":"

    A governance framework (also called a trust framework in some contexts) is a set of rules that establish trust about process (and indirectly, about outcomes) in a given context. For example, the rules that bind buyers, merchants, vendors, and a global credit card company like Mastercard or Visa constitute a governance framework in a financial services context \u2014 and they have a corresponding trust mark to make the governance framework's relevance explicit. The rules by which certificate authorities are vetted and accepted by browser manufacturers, and by which CAs issue derivative certificates, constitute a governance framework in a web context. Trust frameworks are like guy wires: they balance opposing forces to produce careful alignment and optimal behavior.

    Decentralized identity doesn't eliminate all forms of centralized authority, but its opt-in collaboration, openness, and peer orientation make the need for trust rules particularly compelling. Somehow, a community needs to agree on answers to questions like these:

    "},{"location":"concepts/0430-machine-readable-governance-frameworks/#sample-questions-answered-in-a-trust-framework","title":"Sample Questions Answered in a Trust Framework","text":"

    Many industry groups are exploring these questions, and are building careful documentation of the answers they produce. It is not the purpose of this RFC to duplicate or guide such work. Rather, it's our goal to answer a secondary question:

    "},{"location":"concepts/0430-machine-readable-governance-frameworks/#the-question-tackled-by-this-rfc","title":"The Question Tackled By This RFC","text":"

    How can answers to these questions be represented so they are consumable as artifacts of software policy?

    When we have good answers to this question, we can address feature requests like the following:

    "},{"location":"concepts/0430-machine-readable-governance-frameworks/#desirable-features","title":"Desirable Features","text":""},{"location":"concepts/0430-machine-readable-governance-frameworks/#sample-data-structure","title":"Sample Data Structure","text":"

    Trust frameworks generally begin as human-friendly content. They have to be created, reviewed, and agreed upon by experts from various disciplines: legal, business, humanitarian, government, trade groups, advocacy groups, etc. Developers can help by surfacing how rules are (or are not) susceptible to modeling in formal data structures. This can lead to an iterative process, where data structures and human conversations create refinement pressure on each other until the framework is ready for release.

    [TODO: The following blurb of JSON is one way to embody what we're after. I can imagine other approaches but haven't thought them through in detail. I'm less interested in the details of the JSON, for now, than in the concepts we're trying to communicate and automate. So have a conversation about whether this format works for us, or should be tweaked/replaced.]

    Each problem domain will probably have unique requirements. Therefore, we start with a general governance framework recipe, but plan for extension. We use JSON-LD for this purpose. Here we present a simple example for the problem domain of university credentials in Germany. It manifests just the components of a governance framework that are common across all contexts; additional JSON-LD @context values can be added to introduce more structure as needed. (See Field Details for explanatory comments.)

    {\n    \"@context\": [\n        // The first context must be this RFC's context. It defines core properties.\n        \"https://github.com/hyperledger/aries-rfcs/blob/main../../concepts/0430-machine-readable-governance-frameworks/context.jsonld\", \n        // Additional contexts can be added to extend.\n        \"https://kmk.org/uni-accred-trust-fw\"\n    ],\n    \"name\": \"Universit\u00e4tsakkreditierung\"\n    \"version\": \"1.0\",\n    \"logo\": \"http://kmk.org/uni-accred-trust-fw/logo.png\",\n    \"description\": \"Governs accredited colleges and universities in Germany.\",\n    \"docs_uri\": \"http://https://kmk.org/uni-accred-trust-fw/v1\",\n    \"data_uri\": \"http://https://kmk.org/uni-accred-trust-fw/v1/tf.json\",\n    \"topics\": [\"education\"],\n    \"jurisdictions\": [\"de\", \"eu\"],\n    \"geos\": [\"Deutschland\"],\n    \"roles\": [\"accreditor\", \"school\", \"graduate\", \"safe-verifier\"],\n    \"privileges\": [\n        {\"name\": \"accredit\", \"uri\": \"http://kmk.org/tf/accredit\"},\n        {\"name\": \"issue-edu\", \"uri\": \"http://kmk.org/tf/issue-edu\"},\n        {\"name\": \"hold-edu\", \"uri\": \"http://kmk.org/tf/hold-edu\"},\n        {\"name\": \"request-proof\", \"uri\", \"http://kmk.org/tf/request-proof\"\n    ],\n    \"duties\": [\n        {\"name\": \"safe-accredit\", \"uri\": \"http://kmk.org/tf/responsible-accredit\"},\n        {\"name\": \"GDPR-dat-control\", \"uri\": \"http://europa.eu/gdpr/trust-fw/gdpr-data-controller\"}\n        {\"name\": \"GDPR-edu-verif\", \"uri\": \"http://kmk.org/tf/gdpr-verif\"}\n        {\"name\": \"accept-kmk-tos\", \"uri\": \"http://kmk.org/tf/tos\"}\n    ],\n    \"define\": [\n        {\"name\": \"KMK\": \"id\": \"did:example:abc123\"},\n        {\"name\": \"KMK\": \"id\": \"did:anotherexample:def456\"},\n    ], \n    \"rules\": [\n        {\"grant\": [\"accredit\"], \"when\": {\"name\": \"KMK\"},\n            \"duties\": [\"safe-accredit\"]},\n        {\"grant\": [\"issue-edu\"], 
\"when\": {\n                // Proof request (see RFC 0037) specifying that\n                // institution is accredited by KMK.\n            },\n            // Any party who fulfills these criteria is considered\n            // to have the \"school\" role.\n            \"thus\": [\"school\"],\n            // And is considered to have the \"GDPR-dat-control\" duty.\n            \"duties\": [\"GDPR-dat-control\", \"accept-kmk-tos\"]\n        },\n        {\"grant\": \"hold-edu\", \"when\": {\n                // Proof request specifying that holder is a human.\n                // The presence of this item in the GF means that\n                // conforming issuers are supposed to verify\n                // humanness before issuing. Issuers can impose\n                // additional criteria; this is just the base\n                // requirement.\n            },\n            // Any party who fulfills these criteria is considered\n            // to qualify for the \"graduate\" role.\n            \"thus\": \"graduate\",\n            \"duties\": [\"accept-kmk-tos\"]\n        },\n        // In this governance framework, anyone can request proof based\n        // on credentials, as long as they demonstrate that they possess\n        // an \"approved verifier\" credential.\n        {\n            \"grant\": \"request-proof\", \"when\": {\n                // Proof request specifying that the party must possess\n                // a credential that makes them an approved verifier.\n                // The presence of this item in the GF means that\n                // provers should, at a minimum, verify the verifiers\n                // in this way before sharing proof. 
Provers can impose\n                // additional criteria of their own; this is just the\n                // base requirement.\n            }, \"thus\": \"safe-verifier\",\n            \"duties\": [\"GDPR-edu-verif\", \"accept-kmk-tos\"]\n        },\n        // Is there an authority that audits interactions?\n        \"audit\": {\n            // Where should reports be submitted via http POST?\n            \"uri\": \"http://kmk.org/audit\",\n            // How likely is it that a given interaction needs to\n            // be audited? Each party in the interaction picks a\n            // random number between 0 and 1, inclusive; if the number\n            // is <= this number, then that party submits a report about it.\n            \"probability\": \"0.01\"\n        },\n        // Is there an authority to whom requests for redress can\n        // be made, if one party feels like another violates\n        // the governance framework? \n        \"redress\": {\n            \"uri\": \"http://kmk.org/redress\"\n        }\n    }    \n}\n
    "},{"location":"concepts/0430-machine-readable-governance-frameworks/#using-the-sample","title":"Using the Sample","text":"

    Let's look at how the above structure can be used to influence behavior of verifiable credential management software, and the parties that use it.

    We begin by noticing that KMK (KultusMinisterKonferenz), the accrediting body for universities in Germany, has a privileged role in this governance framework. It is given the right to operate as \"KMK\" as long as it proves control of one of the two DIDs named in the define array.

    We posit an issuer, Faber College, that wants to issue credentials compliant with this governance framework. This means that Faber College wants the issue-edu privilege defined at http://kmk.org/tf/issue-edu (see the second item in the privileges array). It wants to create credentials that contain the following field: \"trust_framework\": \"https://kmk.org/uni-accred-trust-fw/v1/tf.json\" (see the data_uri field). It wants to have a credential from KMK proving its accreditation (see the second item in the rules array).
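Once an agent has fetched the framework from its data_uri and parsed it, the privilege lookup Faber needs is a simple scan. The `find_privilege` helper below is hypothetical, not part of the RFC; the dict reproduces just the fields of the sample needed here.

```python
# Hypothetical helper: resolve a named privilege to its defining URI
# within a governance framework already parsed from its data_uri.
def find_privilege(framework: dict, name: str):
    for priv in framework.get("privileges", []):
        if priv["name"] == name:
            return priv["uri"]
    return None

# Abbreviated version of the sample framework above.
framework = {
    "name": "Universitätsakkreditierung",
    "version": "1.0",
    "privileges": [
        {"name": "accredit", "uri": "http://kmk.org/tf/accredit"},
        {"name": "issue-edu", "uri": "http://kmk.org/tf/issue-edu"},
    ],
}
```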

    Faber is required by this governance framework to accept the terms of service published at http://kmk.org/tf/tos, because it can't get the issue-edu privilege without incurring that duty (see the accept-kmk-tos duty in the second item in the rules array). KMK by implication incurs the obligation to enforce these terms of service when it issues a credential attesting Faber's accreditation and compliance with the governance framework.

    Assuming that Faber proceeds and satisfies KMK, Faber is now considered a school as far as this governance framework is concerned.

    Now, let us suppose that Alice, a student at Faber, wants to get a diploma as a verifiable credential. In addition to whatever else Faber does before it gives Alice a diploma, Faber is obligated by the governance framework to challenge Alice to prove she's a human being (see when in the third item of the rules array). Hopefully this is easy, and was done long before graduation. :-) It is also obligated to introduce Alice to the terms of service for KMK, since Alice will be acquiring the graduate role and this rule has the accept-kmk-tos duty. How Faber does this is something that might be clarified in the terms of service that Faber already accepted; we'll narrate one possible approach.

    Alice is holding a mobile app that manages credentials for her. She clicks an invitation to receive a credential in some way. What she sees next on her screen might look something like this:

    Her app knew to display this message because the issuer, Faber College, communicated its reliance on this governance framework (by referencing its data_uri) as part of an early step in the issuance process (e.g., in the invitation or in the offer-credential message). Notice how metadata from the governance framework \u2014 its title, version, topics, and descriptions \u2014 show up in the prompt. Notice as well that governance frameworks have reputations. This helps users determine whether the rules are legitimate and worth using. The \"More Info\" tab would link to the governance framework's docs_uri page.

    Alice doesn't have to re-accept the governance framework if she's already using it (e.g., if she already activated it in her mobile app because it's relevant to other credentials she holds). As a person works regularly within a particular credential domain, decisions like these will become cached and seamless. However, we're showing the step here, for completeness.

    Suppose that Alice accepts the proposed rules. The governance framework requires that she also accept the KMK terms of service. These might require her to report any errors in her credential promptly, and clarify that she has the right to appeal under certain conditions (see the redress section of the governance framework data structure). They might also discuss the KMK governance framework's requirement for random auditing (see the audit section).

    A natural way to introduce Alice to these topics might be to combine them with a normal \"Accept terms of service\" screen for Faber itself. Many issuers are likely to ask holders to agree to how they want to manage revocation, privacy, and GDPR compliance; including information about terms that Faber inherited from the governance framework would be an easy addition.

    Suppose, therefore, that Alice is next shown a \"Terms of Service\" screen like the following.

    Note the hyperlink back to the governance framework; if Alice already accepted the governance framework in another context, this helps her know what governance framework is in effect for a given credential.

    After Alice accepts the terms, she now proceeds with the issuance workflow. For the most part, she can forget about the governance framework attached to her credential \u2014 but the software doesn't. Some of the screens it might show her, because of information that it reads in the governance framework, include things like:

    Or, alternatively:

    In either case, proof of the issuer's qualifications was requested automatically, using canned criteria (see the second item in the governance framework's rules array).

    A similar kind of check can be performed on verifiers:

    Or, alternatively:

    Trust framework knowledge can also be woven into other parts of a UI, as for example:

    And:

    And:

    The point here is not the specifics in the UI we're positing. Different UX designers may make different choices. Rather, it's that by publishing a carefully versioned, machine-readable governance framework, such UIs become possible. The user's experience becomes less about individual circumstances, and more about general patterns that have known reputations, dependable safeguards, and so forth.

    "},{"location":"concepts/0430-machine-readable-governance-frameworks/#versioning","title":"Versioning","text":"

    Trust framework data structures follow semver rules:
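One plausible reading of semver compatibility for consumers of these data structures can be sketched as follows; the `compatible` helper is an assumption for illustration, not a rule the RFC states.

```python
def compatible(known: str, offered: str) -> bool:
    # Under semver, software that understands version `known` of a framework
    # can safely process version `offered` when the major versions match and
    # the offered minor version is not newer than what the software knows.
    k_major, k_minor = (int(x) for x in known.split(".")[:2])
    o_major, o_minor = (int(x) for x in offered.split(".")[:2])
    return o_major == k_major and o_minor <= k_minor
```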

    "},{"location":"concepts/0430-machine-readable-governance-frameworks/#localization","title":"Localization","text":"

    Trust frameworks can offer localized alternatives of text using the same mechanism described in RFC 0043: l10n; treat the governance framework JSON as a DIDComm message and use decorators as it describes.

    "},{"location":"concepts/0430-machine-readable-governance-frameworks/#reference","title":"Reference","text":"

    We've tried to make the sample JSON above self-describing. All fields are optional except the governance framework's name, version, data_uri, and at least one define or rules item to confer some trust.

    "},{"location":"concepts/0430-machine-readable-governance-frameworks/#field-details","title":"Field Details","text":""},{"location":"concepts/0430-machine-readable-governance-frameworks/#name","title":"name","text":"

    A short descriptive string that explains the governance framework's purpose and focus. Extends http://schema.org/name.

    "},{"location":"concepts/0430-machine-readable-governance-frameworks/#version","title":"version","text":"

    A semver-formatted value. Typically only major and minor segments are used, but patch should also be supported if present. Extends http://schema.org/version with the major/minor semantics discussed under Versioning above.

    "},{"location":"concepts/0430-machine-readable-governance-frameworks/#logo","title":"logo","text":"

    A URI that references something visually identifying for this framework, suitable for display to a user. Extends http://schema.org/logo.

    "},{"location":"concepts/0430-machine-readable-governance-frameworks/#description","title":"description","text":"

    Longer explanatory comment about the purpose and scope of the framework. Extends http://schema.org/description.

    "},{"location":"concepts/0430-machine-readable-governance-frameworks/#docs_uri","title":"docs_uri","text":"

    Where is this governance framework officially published in human-readable form? A human should be able to browse here to learn more. Extends http://schema.org/url.

    "},{"location":"concepts/0430-machine-readable-governance-frameworks/#data_uri","title":"data_uri","text":"

    Where is this governance framework officially published as a machine-readable data structure? A computer should be able to GET this JSON (MIME type = application/json) at the specified URI.

    "},{"location":"concepts/0430-machine-readable-governance-frameworks/#topics","title":"topics","text":"

    In which problem domains is this governance framework relevant? Think of these like hash tags; they constitute a loose, overlapping topic cloud rather than a normative taxonomy; the purpose is to facilitate search.

    "},{"location":"concepts/0430-machine-readable-governance-frameworks/#geos","title":"geos","text":"

    In which geographies is this governance framework relevant? May be redundant with jurisdictions in many cases.

    "},{"location":"concepts/0430-machine-readable-governance-frameworks/#jurisdictions","title":"jurisdictions","text":"

    In which legal jurisdictions is this governance framework relevant? Values here should use an ISO 3166 country code, possibly narrowed to a standard province and even county/city using > as the narrowing character, plus standard abbreviations where useful: us>tx>houston for \"Houston, Texas, USA\" or ca>qc for the province of Quebec in Canada.
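A parser for these narrowed codes is a one-liner; `parse_jurisdiction` and the field names are illustrative assumptions, since the RFC only defines the `>` separator.

```python
def parse_jurisdiction(code: str) -> dict:
    # '>' narrows: country, then province/state, then county/city.
    parts = code.split(">")
    return dict(zip(("country", "region", "locality"), parts))
```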

    "},{"location":"concepts/0430-machine-readable-governance-frameworks/#roles","title":"roles","text":"

    Names all the roles that are significant to understanding interactions in this governance framework. These map to X in rules like \"X can do Y if Z.\"

    "},{"location":"concepts/0430-machine-readable-governance-frameworks/#privileges","title":"privileges","text":"

    Names all the privileges that are significant to understanding interactions in this governance framework. These map to Y in rules like \"X can do Y if Z.\" Each privilege is defined for humans at the specified URI, so a person can understand what it entails.

    "},{"location":"concepts/0430-machine-readable-governance-frameworks/#duties","title":"duties","text":"

    Names all the duties that are significant to understanding interactions in this governance framework. Each duty is defined for humans at the specified URI, so a person can understand what it entails.

    "},{"location":"concepts/0430-machine-readable-governance-frameworks/#define","title":"define","text":"

    Uses an array of {\"name\":x, \"id\": did value} objects to define key participants in the ecosystem.

    "},{"location":"concepts/0430-machine-readable-governance-frameworks/#rules","title":"rules","text":"

    Uses SGL syntax to describe role-based rules of behavior like \"X can do Y if Z,\" where Z is a criterion following \"when\".

    Another sample governance framework (including the human documentation that would accompany the data structure) is presented as part of the discussion of guardianship in RFC 0103.
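The "X can do Y if Z" shape of these rules can be sketched with a toy evaluator. This is in the spirit of SGL but is not SGL itself: the condition check is simplified to literal field matching, whereas real frameworks would use proof requests, and `privileges_for` is a hypothetical name.

```python
def privileges_for(party: dict, rules: list) -> set:
    # Each rule grants its privileges when every `when` criterion matches.
    granted = set()
    for rule in rules:
        when = rule.get("when", {})
        if all(party.get(k) == v for k, v in when.items()):
            grant = rule["grant"]
            granted.update([grant] if isinstance(grant, str) else grant)
    return granted
```

An empty `when` matches any party, which models rules like "anyone can request proof."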

    "},{"location":"concepts/0430-machine-readable-governance-frameworks/#drawbacks","title":"Drawbacks","text":""},{"location":"concepts/0430-machine-readable-governance-frameworks/#timing","title":"Timing?","text":"

    It may be early in the evolution of the ecosystem to attempt to standardize governance framework structure. (On the other hand, if we don't standardize now, we may be running the risk of unwise divergence.)

    "},{"location":"concepts/0430-machine-readable-governance-frameworks/#overkill","title":"Overkill?","text":"

    Joe Andrieu has pointed out on W3C CCG mailing list discussions that some important use cases for delegation involve returning to the issuer of a directed capability to receive the intended privilege. This contrasts with the way verifiable credentials are commonly used (across trust domain boundaries).

    Joe notes that governance frameworks are unnecessary (and perhaps counterproductive) for the simpler, within-boundary case; if the issuer of a directed capability is also the arbiter of trust in the end, credentials may be overkill. To the extent that Joe's insight applies, it may suggest that formalizing governance framework data structures is also overkill in some use cases.

    "},{"location":"concepts/0430-machine-readable-governance-frameworks/#rationale-and-alternatives","title":"Rationale and alternatives","text":""},{"location":"concepts/0430-machine-readable-governance-frameworks/#prior-art","title":"Prior art","text":"

    Some of the work on consent receipts, both in the Kantara Initiative and here in RFC 0167, overlaps to a small degree. However, this effort and that one are mainly complementary rather than conflicting.

    "},{"location":"concepts/0430-machine-readable-governance-frameworks/#unresolved-questions","title":"Unresolved questions","text":""},{"location":"concepts/0430-machine-readable-governance-frameworks/#implementations","title":"Implementations","text":"

    The following lists the implementations (if any) of this RFC. Please do a pull request to add your implementation. If the implementation is open source, include a link to the repo or to the implementation within the repo. Please be consistent in the \"Name\" field so that a mechanical processing of the RFCs can generate a list of all RFCs supported by an Aries implementation.

    Implementation Notes may need to include a link to test results.

    Name / Link Implementation Notes"},{"location":"concepts/0430-machine-readable-governance-frameworks/gov-fw-covid-19/","title":"Gov fw covid 19","text":"
    {\n    \"@context\": [\n        \"https://github.com/hyperledger/aries-rfcs/blob/main/concepts/0430-machine-readable-governance-frameworks\", \n        \"https://fightthevirus.org/covid19-fw\"\n    ],\n    \"name\": \"COVID-19 Creds\"\n    \"1.0\",\n    \"description\": \"Which health-related credentials can be trusted for which levels of assurance, given which assumptions.\",\n    \"docs_uri\": \"http://fightthevirus.org/covid19-fw/v1\",\n    \"data_uri\": \"http://fightthevirus.org/covid19-fw/v1/tf.json\",\n    \"topics\": [\"health\", \"public safety\"],\n    \"jurisdictions\": [\"us\", \"uk\", \"eu\"],\n    \"roles\": [\"healthcare-provider\", \"healthcare-worker\", \"patient\"],\n    \"privileges\": [\n        {\"name\": \"travel\", \"uri\": \"http://ftv.org/tf/travel\"},\n        {\"name\": \"receive-healthcare\", \"uri\": \"http://ftv.org/tf/be-patient\"},\n        {\"name\": \"tlc-fragile\", \"uri\": \"http://ftv.org/tf/tlc\"},\n        {\"name\": \"visit-hot-zone\", \"uri\": \"http://ftv.org/tf/visit\"}\n    ],\n    // Name all the duties that are significant to understanding\n    // interactions in this governance framework. Each duty is defined for humans\n    // at the specified URI, so a person can understand what it\n    // entails.\n    \"duties\": [\n        {\"name\": \"safe-accredit\", \"uri\": \"http://kmk.org/tf/responsible-accredit\"},\n        {\"name\": \"GDPR-dat-control\", \"uri\": \"http://europa.eu/gdpr/trust-fw/gdpr-data-controller\"}\n        {\"name\": \"GDPR-edu-verif\", \"uri\": \"http://kmk.org/tf/gdpr-verif\"}\n        {\"name\": \"accept-kmk-tos\", \"uri\": \"http://kmk.org/tf/tos\"}\n    ],\n    // Use DIDs to define key participants in the ecosystem. 
KMK is\n    // the accreditation authority for higher education in Germany.\n    // Here we show it using two different DIDs.\n    \"define\": [\n        {\"name\": \"KMK\": \"id\": \"did:example:abc123\"},\n        {\"name\": \"KMK\": \"id\": \"did:anotherexample:def456\"},\n    ], \n    // Describe role-based rules of behavior like \"X can do Y if Z,\"\n    // where Z is a criterion following \"when\".\n    \"rules\": [\n        {\"grant\": [\"accredit\"], \"when\": {\"name\": \"KMK\"},\n            \"duties\": [\"safe-accredit\"]},\n        {\"grant\": [\"issue-edu\"], \"when\": {\n                // Proof request (see RFC 0037) specifying that\n                // institution is accredited by KMK.\n            },\n            // Any party who fulfills these criteria is considered\n            // to have the \"school\" role.\n            \"thus\": [\"school\"],\n            // And is considered to have the \"GDPR-dat-control\" duty.\n            \"duties\": [\"GDPR-dat-control\", \"accept-kmk-tos\"]\n        },\n        {\"grant\": \"hold-edu\", \"when\": {\n                // Proof request specifying that holder is a human.\n                // The presence of this item in the TF means that\n                // conforming issuers are supposed to verify\n                // humanness before issuing. Issuers can impose\n                // additional criteria; this is just the base\n                // requirement.\n            },\n            // Any party who fulfills these criteria is considered\n            // to qualify for the \"graduate\" role.\n            \"thus\": \"graduate\",\n            \"duties\": [\"accept-kmk-tos\"]\n        },\n        // In this governance framework, anyone can request proof based\n        // on credentials. 
No criteria are tested to map an entity\n        // to the \"anyone\" role.\n        {\n            \"grant\": \"request-proof\", \"thus\": \"anyone\",\n            \"duties\": [\"GDPR-edu-verif\", \"accept-kmk-tos\"]\n        },\n    ],\n    // Is there an authority that audits interactions?\n    \"audit\": {\n        // Where should reports be submitted via http POST?\n        \"uri\": \"http://kmk.org/audit\",\n        // How likely is it that a given interaction needs to\n        // be audited? Each party in the interaction picks a\n        // random number between 0 and 1, inclusive; if the number\n        // is <= this number, then that party submits a report about it.\n        \"probability\": \"0.01\"\n    },\n    // Is there an authority to whom requests for redress can\n    // be made, if one party feels like another violates\n    // the governance framework? \n    \"redress\": {\n        \"uri\": \"http://kmk.org/redress\"\n    }\n}   \n
    "},{"location":"concepts/0440-kms-architectures/","title":"0440: KMS Architectures","text":""},{"location":"concepts/0440-kms-architectures/#summary","title":"Summary","text":"

    A Key Management Service (KMS) is designed to protect sensitive agent information like keys, credentials, protocol state, and other data. User authentication, access control policies, and cryptography are used in various combinations to mitigate threat models and minimize risk. However, doing this correctly in practice is not intuitive, and doing it incorrectly results in flawed or weak designs. This RFC proposes best practices for designing a KMS that offers implementers reasonable flexibility along with strong data security and privacy guarantees.

    "},{"location":"concepts/0440-kms-architectures/#motivation","title":"Motivation","text":"

    A KMS needs to be flexible to support various needs that arise when implementing agents. Mobile device needs are very different from an enterprise server environment, but ultimately the secrets still need to be protected in all environments. Some KMSs have already been implemented but fail to consider all the various threat models that exist within their designs. Some overlook good authentication schemes. Some misuse cryptography making the implementation insecure. A good KMS should provide the ability to configure alternative algorithms that are validated against specific standards like the Federal Information Processing Standards (FIPS). This RFC is meant to reduce the chances that an insecure implementation could be deployed while raising awareness of principles used in more secure designs.

    "},{"location":"concepts/0440-kms-architectures/#tutorial","title":"Tutorial","text":"

    A KMS can be broken into three main components with each component having potential subcategories. These components are designed to handle specific use cases and should be plug-and-play. The components are listed below and described in detail in the following sections:

    1. Enclave -
      • Safeguards cryptographic keys
      • Key Generation
        • Encryption
        • Digital Signatures
        • Key exchange
        • Proof generation and verification
    2. Persistence -
      • Stores non-key data
        • Verifiable credentials
        • Protocol states
        • DID documents
        • Other metadata
    3. LOX -
      • Handle user authentication
      • Access control enforcement
      • Session/context establishment and management to the previous layers as described here.
    "},{"location":"concepts/0440-kms-architectures/#architecture","title":"Architecture","text":"

    LOX sits between clients and the other subsystems. LOX asks the Enclave to do specific cryptographic operations and may pass the results to clients or Persistence, or LOX may consume the results itself. The persistence layer and the enclave layer never interact with each other directly.

    "},{"location":"concepts/0440-kms-architectures/#lox","title":"LOX","text":"

    LOX is the first layer KMS consumers will encounter and where the bulk of KMS work for implementers happens. LOX is divided into the following subcomponents that are not mutually exclusive:

    1. Authentication - Credentials for accessing the KMS and how to communicate with the KMS. Username/Passwords, PINs, Cryptographic keys, Verifiable credentials, Key fobs, key cards, and OpenID Connect are common methods for this layer. Any sensitive data handled in this layer should be cleared from memory promptly and have its footprint minimized.
    2. Access control - Policies that indicate who can access the data and how data are handled.
    3. Audit - Logging of who does what and when, and how verbose the details are

    Connecting to a KMS is usually done using functional system or library API calls, physical means like USB, or networks like Bluetooth, SSH, or HTTPS. These connections should be secured using encryption techniques like TLS, SSH, Signal, or other methods to prevent eavesdropping on end users' authentication credentials. This is often the most vulnerable part of the system because it's easy to design something with weak security, like 6-character passwords sent in plaintext. It is preferable to use keys and multi-factor authentication techniques for connecting to LOX. Since password-based sign-ins are the most common, the following is a list of good methods for handling them.

    "},{"location":"concepts/0440-kms-architectures/#use-hashing-specially-designed-for-passwords","title":"Use hashing specially designed for passwords","text":"

    Password-based hashes are designed to be slow so the outputs are not easily subjected to a brute-force dictionary attack. Simply hashing the password with cryptographic algorithms like SHA-2/SHA-3/BLAKE2 is not good enough. Below is a list of approved algorithms:

    1. PBKDF2
    2. Bcrypt
    3. Scrypt
    4. Argon2

    The settings for each of these should be tuned so that even the strongest hardware takes about 1-2 seconds to complete a hash. The recommended settings in each section also apply to mobile devices.

    PBKDF2

    Many applications use PBKDF2, which is NIST approved. However, PBKDF2 can use any SHA-family hash algorithm, so it can be made weak if paired with SHA-1, for example. When using PBKDF2, choose SHA2-512, which is significantly slower on current GPUs. The PBKDF2 parameters are the hash function, salt, iteration count, and derived key length.
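    As a concrete illustration, PBKDF2-HMAC-SHA512 is available in the Python standard library. The sketch below is hedged: the 210,000 iteration count is an illustrative starting point, not a normative value, and should be tuned until derivation takes the 1-2 seconds recommended above.

    ```python
    import hashlib
    import os

    def hash_password(password, salt=None, iterations=210_000):
        """Derive a PBKDF2-HMAC-SHA512 password hash.

        The iteration count is illustrative; tune it so derivation takes
        about 1-2 seconds on your own hardware, per the guidance above.
        """
        if salt is None:
            salt = os.urandom(16)  # unique random salt per password
        digest = hashlib.pbkdf2_hmac("sha512", password.encode(), salt, iterations)
        return salt, digest

    # Verification recomputes the digest with the stored salt and iterations.
    salt, digest = hash_password("correct horse battery staple")
    _, again = hash_password("correct horse battery staple", salt=salt)
    assert digest == again
    ```

    Store the salt and iteration count beside the digest so the parameters can be raised later as hardware improves.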

    Bcrypt

    Bcrypt is a functional variant of the Blowfish cipher designed specifically for password hashing. Unfortunately, password sizes are limited to the first 72 bytes; any extra bytes are ignored. The recommended number of rounds is \u226514; Bcrypt supports up to 31. Bcrypt is less resistant to ASIC and GPU attacks because it uses constant memory, making it easier to build hardware-accelerated password crackers. When properly configured, Bcrypt is considered secure and is widely used in practice.

    Scrypt

    Scrypt is designed to make large-scale custom hardware attacks costly by requiring large amounts of memory. It is memory-intensive on purpose to prevent GPU, ASIC and FPGA attacks. Scrypt offers multiple parameters: the CPU/memory cost N, the block size r, and the parallelization factor p.

    The memory in Scrypt is accessed in strongly dependent order at each step, so the memory access speed is the algorithm's bottleneck. The memory required is calculated as 128 * N * r * p bytes. Example: 128*16384*8*1 = 16MB
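    The memory formula above can be checked directly, and Python's standard library exposes scrypt via `hashlib.scrypt`. In this sketch the `maxmem` value is an assumption required because the stdlib's default memory cap is below the 16MB the interactive parameters need.

    ```python
    import hashlib

    def scrypt_memory_bytes(n, r, p):
        """RAM required by scrypt, per the 128 * N * r * p formula above."""
        return 128 * n * r * p

    assert scrypt_memory_bytes(16384, 8, 1) == 16 * 1024 * 1024   # 16 MB
    assert scrypt_memory_bytes(1048576, 8, 1) == 1024 ** 3        # 1 GB

    # Deriving a key with the interactive parameters; maxmem must be raised
    # above the computed requirement or OpenSSL rejects the call.
    key = hashlib.scrypt(b"password", salt=b"0123456789abcdef",
                         n=16384, r=8, p=1, maxmem=2**26, dklen=32)
    assert len(key) == 32
    ```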

    Choosing parameters depends on how much waiting is desired and what level of security (cracking resistance) should be achieved.

    MyEtherWallet uses N=8192, r=8, p=1. This is not considered strong enough for crypto wallets. Parameters of N=16384,r=8,p=1 (RAM = 16MB) typically take around 0.5 seconds and are used for interactive sign-ins. This doesn't hammer server-side performance too much, so many users can log in at the same time. N=1048576,r=8,p=1 (RAM = 1GB) takes around 2-3 seconds. Scrypt is considered highly secure when properly configured.

    Argon2

    Argon2 is optimized for the x86 architecture and exploits the cache and memory layouts of modern Intel and AMD processors. It won the Password Hashing Competition and is recommended over PBKDF2, Bcrypt and Scrypt. It is not recommended for ARM ABIs, as performance tends to be much slower there. This performance hit may seem desirable, but what tends to happen is that the parameters are tuned down to be reasonable in ARM environments, and the resulting hash can then be brute-forced 2-3 times faster on x86. Argon2 comes in three flavors, Argon2d, Argon2i and Argon2id, and uses the following parameters: parallelism p, memory size m, and iteration count n.

    Parameters of p=2,m=65536,n=128 typically take around 0.5 seconds and are used for interactive sign-ins. Moderate parameters of p=3,m=262144,n=192 typically take around 2-3 seconds, and sensitive parameters are p=4,m=1048576,n=256. Always time it in your environments.
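    Since parameters must be timed per environment, a measure-and-scale loop helps. Argon2 is not in the Python standard library, so this hedged sketch uses stdlib PBKDF2 as a stand-in; the same approach applies to any KDF.

    ```python
    import hashlib
    import os
    import time

    def calibrate_pbkdf2(target_seconds=0.5, probe_iterations=50_000):
        """Estimate an iteration count that takes ~target_seconds locally.

        PBKDF2-HMAC-SHA512 stands in for Argon2 here (stdlib only); time
        your real KDF with its own parameters the same way.
        """
        salt = os.urandom(16)
        start = time.perf_counter()
        hashlib.pbkdf2_hmac("sha512", b"probe", salt, probe_iterations)
        elapsed = time.perf_counter() - start
        # Scale linearly from the probe; never go below the probe itself.
        return max(probe_iterations, int(probe_iterations * target_seconds / elapsed))

    print(calibrate_pbkdf2())
    ```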

    "},{"location":"concepts/0440-kms-architectures/#session-establishment","title":"Session establishment","text":"

    Upon authentication to LOX, LOX should establish connections to the enclave component and persistence component, which should appear opaque to the client. LOX may need to authenticate to the enclave or persistence component depending on where client access credentials are stored by implementers. It's preferable to store these in keychains or keystores where possible, where access is determined by the operating system and can include stronger mechanisms like TouchID or FaceID and hardware tokens in addition to passwords or PINs. As described in LOX, the credentials for accessing the enclave and persistence layers can then be retrieved, or generated if the client is new, and stored in a secure manner.

    "},{"location":"concepts/0440-kms-architectures/#using-os-keychain","title":"Using OS Keychain","text":""},{"location":"concepts/0440-kms-architectures/#using-password-based-key-derivation","title":"Using Password based key derivation","text":""},{"location":"concepts/0440-kms-architectures/#session-management","title":"Session management","text":"

    Active connections to these other layers may be pooled for efficiency reasons, but care must be taken to avoid accidental permission grants. For example, Alice must not be able to use Bob's connection, nor Bob Alice's. However, the same database connection credentials might be used transparently for Alice and Bob, in which case the database connection can be reused. This should be an exception, as credential sharing is strongly discouraged: auditing in the database might not be able to determine whether Alice or Bob performed a specific query. Connections to enclaves and persistence usually require session or context objects to be handled. These must not be returned to clients, but rather maintained by LOX. When a client connection is closed, they must be securely deleted and/or closed.
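    A minimal sketch of this per-client bookkeeping (the `SessionRegistry` class and its methods are hypothetical, not part of any LOX API): sessions are keyed by authenticated client, a client can never retrieve another's handles, and closing a client connection drops its handles.

    ```python
    class SessionRegistry:
        """Hypothetical per-client session bookkeeping for LOX (a sketch,
        not a real API): handles stay server-side and per-identity."""

        def __init__(self):
            self._sessions = {}  # authenticated client id -> opaque handles

        def open(self, client_id, enclave_handle, persistence_handle):
            self._sessions[client_id] = {"enclave": enclave_handle,
                                         "persistence": persistence_handle}

        def get(self, client_id, requester_id):
            if client_id != requester_id:  # Alice may not use Bob's session
                raise PermissionError("cannot use another client's session")
            return self._sessions[client_id]

        def close(self, client_id):
            # Drop (and, in a real system, securely wipe) handles on disconnect.
            self._sessions.pop(client_id, None)

    registry = SessionRegistry()
    registry.open("alice", enclave_handle=object(), persistence_handle=object())
    assert "enclave" in registry.get("alice", requester_id="alice")
    try:
        registry.get("alice", requester_id="bob")
        raise AssertionError("Bob must not borrow Alice's session")
    except PermissionError:
        pass
    registry.close("alice")
    ```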

    "},{"location":"concepts/0440-kms-architectures/#enclave","title":"Enclave","text":"

    The enclave handles as many operations related to cryptography and key management as possible to ensure keys are adequately protected. Keys can potentially be stolen by means of side channel attacks from memory, disk, operational timing, voltage, heat, extraction, and others. Each enclave has been designed to consider certain threat models and mitigate these risks. For example, a Thales HSM is very different from a Yubico HSM or an Intel SGX Enclave. The correct mental model is to think about the formal guarantees that are needed and pick, choose, or design the enclave layer to suit those needs. Build the system that meets the definition(s) of security, then prove it meets the requirements. An enclave functions as a specialized cryptography component. The enclave provides APIs for passing in data to be operated on, executes its various cryptographic operations internally, and only returns results to callers. The following is a list of operations that enclaves can support. The list will vary depending on the vendor.

    1. Generate asymmetric key
    2. Generate symmetric key
    3. Generate random
    4. Put key
    5. Delete key
    6. Wrap key
    7. Unwrap key
    8. Export wrapped key
    9. Import wrapped key
    10. List keys
    11. Update key metadata/capability
    12. Get key info/metadata
    13. Derive key agreement
    14. Sign
    15. Verify
    16. Encrypt
    17. Decrypt
    18. Get enclave info - metadata about the enclave like device info, version, manufacturer
    19. Query enclave capability
    20. Audit - e.g. enable/disable auditing
    21. Log - e.g. read audit logs
    22. Attestation - e.g. generate proofs about the enclave

    Most hardware implementations do not allow key material to be passed through the API, even in encrypted form, so a system of external references is required that allows keys to be referenced in a way that supports:

    1. Consistency - The same ID refers to the same key every time.
    2. Naming schemes - such as RSA_PSS_2048_8192_SHA256

    Some enclaves do allow key material to be passed through the API. Key blocks are how a key is formatted when passed into or out of the enclave. See here.

    In keeping with the drive for enclaves to be simple and hard to mess up, the proposal is to make key IDs in the enclave storage simple UTF-8 string names, and leave the underlying provider implementation to deal with the complexities of translation, key rollover, duplication and so on. Each of these operations uses different parameters, and the enclave should specify what it allows and what it does not. If code needs to discover capabilities on the fly, it is much more efficient to query the enclave for a specific capability than to return a list of capabilities that are searched externally.
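    To make the string-key-ID model concrete, here is a toy software "enclave" sketch (illustrative only; `SoftEnclave` is a hypothetical name, not a real product): callers pass UTF-8 key IDs and data, and raw key material never crosses the API.

    ```python
    import hashlib
    import hmac
    import os

    class SoftEnclave:
        """Toy software enclave: keys are referenced by UTF-8 string IDs
        and key material never leaves this object (illustrative sketch)."""

        def __init__(self):
            self._keys = {}  # key id -> secret bytes, internal only

        def generate_symmetric_key(self, key_id):
            self._keys[key_id] = os.urandom(32)

        def sign(self, key_id, data):
            # HMAC-SHA256 serves as the symmetric "sign" operation.
            return hmac.new(self._keys[key_id], data, hashlib.sha256).digest()

        def verify(self, key_id, data, signature):
            return hmac.compare_digest(self.sign(key_id, data), signature)

    enclave = SoftEnclave()
    enclave.generate_symmetric_key("alice-to-bob")
    tag = enclave.sign("alice-to-bob", b"hello")
    assert enclave.verify("alice-to-bob", b"hello", tag)
    ```

    A hardware-backed provider would implement the same interface while delegating the operations to the device.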

    Enclaves also store metadata with each key. This metadata consists of

    1. Attributes - e.g. key id, label, tag
    2. Access Constraints - e.g. require passcode or biometric auth to use
    3. Access Controls - e.g. can decrypt, can verify, can sign, exportable when wrapped
    "},{"location":"concepts/0440-kms-architectures/#attributes","title":"Attributes","text":"

    Attributes describe a key and do not enforce any particular permission about it. Such attributes typically include

    1. Identifier - e.g. f12f149d-e8d9-427c-85c7-f116f87f2e70 or 5a71028a74f1ad9f3f39 or 9s5m7EEJq1zZyc or 158
    2. Alias - e.g. fcb9ec81-d613-4c0f-a023-470155f38f92 or 1BC6HUi1soNML. Useful for sharing a key under a different id; audits would show which one was used.
    3. Label or Description - e.g. Alice to Bob's DID key
    4. Class - e.g. public, private, symmetric
    5. Tag - e.g. sign, verify, did
    6. Type - e.g. aes, rsa, ecdsa, ed25519
    7. Size in bits - e.g. 256, 2048
    8. Creator - i.e. the original creator/owner
    9. Creation date
    10. Last modification date
    11. Always sensitive - i.e. was created with sensitive flag and never removed this control
    12. Never exported - i.e. never left the enclave
    13. Derived from - e.g. use PBKDF2 on this value to generate the key, or reference another key as a seed derived using HKDF

    Most of these attributes, like Size in bits, cannot change and are read-only. Aliases, Labels, and Tags are the only attributes that can change. Enclaves allow one to many aliases and tags but only one label. The enclave should specify how many tags and aliases may be used; a common number is 5 for both.

    "},{"location":"concepts/0440-kms-architectures/#access-constraints","title":"Access Constraints","text":"

    Constraints restrict key access to certain conditions: who may access the key (e.g., the owner(s) or group(s)), whether password or biometric authentication is required, whether the host must be unlocked (as on mobile devices), or whether the key is always accessible. Constraints must be honored by the enclave for consumers to have confidence and trust in it. Possible constraints are:

    1. Owner(s) - e.g. who is allowed to access and use the key.
    2. User presence - e.g. require additional authentication like passcode or biometric auth (TouchID/FaceID). Authentication is on a per key basis vs just owner. Must be owner and meet additional authentication requirements.
    3. Biometric - e.g. require biometric authentication
    4. Passcode - e.g. require passcode authentication
    5. Fresh Interval - e.g. allowed time between additional authentications, 10 minutes is acceptable for extra sensitive keys

    Each of these can be mixed in combinations of AND and OR conjunctions. For example, key ABC might have the following constraints

    OR ( AND (Owner, User presence, Passcode), AND(Owner, User presence, Biometric) ) This requires the owner to enter an additional passcode OR biometric authentication to access the key.
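    Such AND/OR combinations form a small expression tree. A hedged sketch of evaluating one follows; the tuple representation is an assumption made for illustration, not an enclave format.

    ```python
    def satisfied(constraint, granted):
        """Evaluate a nested AND/OR constraint tree (hypothetical format):
        a string is a single condition, a tuple is ("AND"|"OR", *subtrees)."""
        if isinstance(constraint, str):
            return constraint in granted
        op, *terms = constraint
        results = [satisfied(t, granted) for t in terms]
        return all(results) if op == "AND" else any(results)

    # The key ABC example above:
    # OR( AND(Owner, User presence, Passcode), AND(Owner, User presence, Biometric) )
    abc_policy = ("OR",
                  ("AND", "owner", "user-presence", "passcode"),
                  ("AND", "owner", "user-presence", "biometric"))

    assert satisfied(abc_policy, {"owner", "user-presence", "passcode"})
    assert satisfied(abc_policy, {"owner", "user-presence", "biometric"})
    assert not satisfied(abc_policy, {"owner", "passcode"})  # no user presence
    ```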

    "},{"location":"concepts/0440-kms-architectures/#access-controls","title":"Access Controls","text":"

    Enclaves use access controls to restrict what operations keys are allowed to perform and who is allowed to use them. Controls are set during key generation and may or may not be permitted to change depending on the vendor or settings. Controls are stored with the key and enforced by the enclave. Possible access controls are:

    1. Can Key Agreement - e.g. Diffie-Hellman
    2. Can Derive - e.g. serve as seed to other keys
    3. Can Decrypt - e.g. can decrypt data for private keys and symmetric keys
    4. Can Encrypt - e.g. can encrypt data for public keys and symmetric keys
    5. Can Wrap - e.g. can be used to wrap another key to be exported
    6. Can Unwrap - e.g. can be used to unwrap a key that was exported
    7. Can Sign - e.g. can create digital signatures for private keys and MACs for symmetric keys
    8. Can Verify - e.g. can verify digital signatures for public keys and MACs for symmetric keys
    9. Can Attest - e.g. can be used to prove information about the enclave
    10. Is Exportable - e.g. can be exported from the enclave.
    11. Is Sensitive - e.g. can only be exported in an encrypted format.
    12. Is Synchronizable - e.g. can only be exported directly to another enclave
    13. Is Modifiable - e.g. can any controls be changed
    14. Is Visible - e.g. is the key available to clients outside the enclave or used internally.
    15. Valid Until Date - e.g. can use until this date, afterwards the key cannot be used

    To mitigate certain attacks like key material leaking in derive and encrypt functions, keys should be limited as much as possible to one task (see here and here). For example, allowing concatenation of a base key to another key should be discouraged, as it has the potential to enable key extraction attacks (see Clulow, who shows that a key with decrypt and wrap capabilities can export a key and then be used to decrypt it; this applies both to symmetric keys with decrypt and wrap and to the variant where the wrapping key is a public key and the decryption key is the corresponding private key). The correct mental model for enclave implementers is an intruder that can call any API command in any order using any values that he knows. Sensitive and unexportable keys should never be directly readable and should not be changeable into nonsensitive and exportable. If a key can wrap, it should not be allowed to decrypt. Some of these controls, like sensitive, should be considered sticky (they cannot be changed) to mitigate these attacks. This is useful when combined with conflicting controls like wrap and decrypt.
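    A simple policy check at key-creation time can reject the wrap-plus-decrypt combination Clulow describes. This is a minimal sketch; the control names are illustrative assumptions, and a real enclave would enforce this internally.

    ```python
    # Pairs of key controls that together enable known extraction attacks.
    CONFLICTING_CONTROL_PAIRS = [
        {"can_wrap", "can_decrypt"},   # wrap a key out, then decrypt the blob
    ]

    def check_controls(controls):
        """Reject control sets with known-bad combinations (illustrative
        policy check; the control names are assumptions, not a standard)."""
        for pair in CONFLICTING_CONTROL_PAIRS:
            if pair <= set(controls):
                raise ValueError("conflicting controls: " + ", ".join(sorted(pair)))

    check_controls({"can_sign", "can_verify"})       # single-purpose key: fine
    try:
        check_controls({"can_wrap", "can_decrypt"})  # Clulow's attack: rejected
        raise AssertionError("should have been rejected")
    except ValueError:
        pass
    ```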

    Exporting key requirements

    The most secure model is to not allow keys to leave an enclave. However, in practice this is not always reasonable: backups and replications must be made, and keys must be shared between two enclaves for data to be shared by two parties. When a key is lifted from the enclave, its attributes, constraints and controls must be correctly bound to it. When a key lands in another enclave, that enclave must honor the attributes, constraints and controls with which it came. The wrapping format should be used to correctly bind key attributes, constraints and controls to the key. This prevents attacks where the key is wrapped and unwrapped twice with conflicting metadata, as described by Delaune et al., Cachin et al., Cortier and Steel, and Bortolozzo.

    "},{"location":"concepts/0440-kms-architectures/#templates","title":"Templates","text":"

    Some enclaves support creating templates such that keys can be generated and wrapped following secure guidelines in a reproducible way. Define separate templates for key generation and key wrapping.

    "},{"location":"concepts/0440-kms-architectures/#final-enclave-notes","title":"Final enclave notes","text":"

    Hardware enclaves typically have limited storage space, like a few megabytes. A hardware enclave could be used to protect a software enclave that has a much higher storage capacity, and a KMS is not limited to just one enclave. Cloud access security brokers (cloud enclaves) like Hashicorp Vault, Amazon\u2019s Cloud HSM, Iron Core Labs, Azure Key Vault, and Box Keysafe require trusting that a SaaS vendor will store keys in a place that is not vulnerable to data breaches. Even then, there is no assurance that the vendor, or one of their partners, won\u2019t access the secret material. This doesn't belittle their value; it's just another point to consider when using SaaS enclaves. Keys should be shared as little as possible, if at all, should be as short-lived as possible, and should have a single purpose. This limits the need to replicate keys to other agents, whether for dual functionality or recovery purposes, as well as the damage in the event of a compromise.

    "},{"location":"concepts/0440-kms-architectures/#persistence","title":"Persistence","text":"

    This layer is meant to store other data in a KMS like credentials and protocol state. It could be optional for static agents, which store very little if anything. Credentials that access the persistence layer should be stored in the enclave layer or with LOX in keychains. For example, if the persistence layer is a Postgres database, the username/password or keypair to authenticate to the database could be stored in the enclave rather than a config file. Upon a successful authentication to LOX, these credentials are retrieved to connect to Postgres and put into a connection pool. This is more secure than storing credentials in config files or environment variables, or prompting end users for them.

    The most common storage mechanism is a SQL database. Designers should consider their system requirements in terms of performance, environments, user access control, and system administrator access, then read Kamara\u2019s blog on how to develop encrypted databases (see 1, 2, 3, 4). Consider, for example, mobile vs enterprise environments: mobile environments probably won\u2019t face a network adversary when using secure storage, or an honest-but-curious adversary, whereas enterprise environments will. Should the query engine be able to decrypt data prior to filtering, or should it run on encrypted data while the querier performs decryption? Neither of these is wrong per se, but each comes with a set of trade-offs. In the case of query engine decryption, there is no need to write a separate query mechanism, and databases can execute as they normally do with a slight decrease in performance from decryption and encryption; however, the data reader must trust the query engine not to leak data or encryption keys. If the querier performs data encryption/decryption, no trust is given to the query engine, but additional work must be performed before handing data to the engine, and this searchable encryption is still vulnerable to access pattern leakage. What is practical engineering vs design vs theory? Theory is about what can and can\u2019t be done and why. Design is about using efficient primitives and protocols. Engineering is about effective and secure implementation.
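    When the querier performs encryption, equality search is often recovered with a deterministic HMAC "blind index" stored beside the ciphertext. This hedged sketch shows the idea only; note that identical plaintexts produce identical tags, so search and frequency patterns still leak, which is exactly the kind of trade-off discussed above.

    ```python
    import hashlib
    import hmac

    def blind_index(index_key, value):
        """Deterministic HMAC tag stored alongside the ciphertext so the
        database can match equality queries without seeing plaintext.
        Identical values produce identical tags, so search and frequency
        patterns still leak (an accepted trade-off of this technique)."""
        return hmac.new(index_key, value.lower().encode(), hashlib.sha256).hexdigest()

    key = b"\x00" * 32  # illustrative only; use a random key kept in the enclave
    tag = blind_index(key, "alice@example.com")
    assert tag == blind_index(key, "ALICE@example.com")  # normalized match
    assert tag != blind_index(key, "bob@example.com")
    ```

    The database indexes and filters on the tag column, while the actual column value stays encrypted end to end.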

    Implementers should consider the following threats and adversary sections.

    Threats

    1. Memory access patterns leakage - Attackers can infer the contents and/or importance of data based on how it is accessed and the frequency of access. The attacker learns the set of matching records.
    2. Volume leakage - The attacker learns the number of records/responses.
    3. Search pattern leakage - Attackers can infer the contents and/or importance of data based on search patterns: the attacker can easily judge whether any two queries are generated from the same keywords or not.
    4. Rank leakage - The attacker can infer or learn which data was queried.
    5. Side channel leakage - Attackers can access or learn the value of the secret material used to protect data confidentiality and integrity through side channels like memory inspection, RAM scraping attacks with swap access, and timing attacks [8].
    6. Microarchitectural attacks - The attacker is able to learn secrets through covert channels that target processors (Spectre/Meltdown).

    Adversary

    1. Network adversary - Observes traffic on the network. In addition to snooping and lurking, they can also perform fingerprinting attacks to learn who the endpoints are and correlate them to identical entities.
    2. Snapshot adversary - Breaks into the server and snapshots system memory or storage. Sees a copy of the encrypted data but does not see transcripts related to any queries. There is a trade-off between functionality and security.
    3. Persistent adversary - Corrupts the server for a period of time and sees all communication transcripts.
    4. Honest-but-curious adversary - A system administrator who can watch accesses to the data.

    Disk Encryption

    This feature can help with encryption-at-rest requirements but only protects the data while the disk is off. Once booted, an attacker can read it as easily as a system administrator, so it provides very little protection. Data can be, and usually is, stolen via other methods like software vulnerabilities or viruses. It is useful when the storage hardware is not virtualized (as it is in the cloud) and is mobile, like a laptop, phone, or USB drive. If storage is in the cloud or on a network, it's worth more to invest in host-based intrusion prevention, intrusion detection systems, cloud, and file encryption (see Sastry, Yegulalp, and Why Full Disk Encryption Isn't Enough).

    Application vs Database Encryption

    Databases provide varying levels of built-in encryption. SQL Server and SQLCipher are examples of databases that provide Transparent Data Encryption: users don\u2019t even know the data is encrypted; it\u2019s transparent to them. This works similarly to Disk Encryption in that it mostly protects the database data at rest, but as soon as the user is connected, the data can be read in plaintext. A further abstraction, in Postgres and SQL Server, allows database keys to be partially managed by the database to encrypt columns or cells, with the user supplying all or some of the keys and managing them separately. The last approach is for applications to manage all encryption, which has the advantage of being storage agnostic. Postgres permits the keys to be stored separately from the encrypted columns; when the data are queried, the key is passed to the query engine and the data are decrypted temporarily in memory to check the query.

    If the application handles encryption, the query engine only operates on encrypted data rather than being allowed to decrypt and read the data directly. The trade-off is that a different query method/language will have to be used than the one provided by the persistence layer.

    Many databases are networked which requires another layer of protection like TLS or SSH.

    "},{"location":"concepts/0440-kms-architectures/#data-storage","title":"Data storage","text":"

    Metadata for data includes access controls and constraints. Controls dictate what can be done with the data; constraints dictate who can access it. These could be managed by the underlying persistence application, or enforced by LOX before returning the data to the client. The constraints and controls indicate permissions to end clients and not necessarily to anything outside of the persistence layer; this is not like designing an enclave. Persistence is meant for more general-purpose data that may or may not be sensitive. Metadata about the data includes attributes, constraints, and controls in a similar manner to the enclave.

    "},{"location":"concepts/0440-kms-architectures/#access-constraints_1","title":"Access constraints","text":"

    Constraints restrict data access to certain conditions, like who is accessing it (e.g., the owner(s) or group(s)). Constraints must be honored by the persistence layer or LOX for consumers to have confidence and trust in it. Possible constraints are:

    1. Identity or roles
    2. Context - i.e. what contexts can this data be used. For example, a credential may be restricted to be used only in a work environment if desired.
    "},{"location":"concepts/0440-kms-architectures/#access-controls_1","title":"Access controls","text":"
    1. Crypto Protection - e.g. which key id(s) and algorithm are used to protect this data. This allows data to be re-encrypted or transformed via functional encryption in the future when keys are rotated or ciphers are deemed weak or insecure.
    2. Is Exportable - e.g. can the data leave the KMS
    3. Is Modifiable
    4. Can Delete
    5. Valid Until Date
    "},{"location":"concepts/0440-kms-architectures/#reference","title":"Reference","text":"

    Indy Wallet implements this in part and is one of the first attempts at this architecture. Indy Wallet doesn't use LOX yet, but functions similarly to an enclave in that it does not give direct access to private keys and uses key ids to execute operations. It supports a flexible persistence layer that can be either SQLite or Postgres. The top layer encrypts data before it is queried or sent to the persistence layer and decrypts it when returned. Aries Mayaguez is another implementation.

    "},{"location":"concepts/0440-kms-architectures/#drawbacks","title":"Drawbacks","text":"

    There are additional complexities related to handling keys and other data as two distinct entities and it might be faster to combine them with a potential security tradeoff.

    "},{"location":"concepts/0440-kms-architectures/#prior-art","title":"Prior art","text":"

    PKCS#11 and KMIP were developed for key management strictly for enclaves. These design patterns are not limited to just key management but apply to any sensitive data.

    "},{"location":"concepts/0440-kms-architectures/#unresolved-questions","title":"Unresolved questions","text":"

    Is providing access constraints to the persistence layer necessary? Could this be removed? What are the consequences? Are there any missing constraints and controls for the enclave or persistence layer?

    "},{"location":"concepts/0440-kms-architectures/#implementations","title":"Implementations","text":"

    The following lists the implementations (if any) of this RFC. Please do a pull request to add your implementation. If the implementation is open source, include a link to the repo or to the implementation within the repo. Please be consistent in the \"Name\" field so that a mechanical processing of the RFCs can generate a list of all RFCs supported by an Aries implementation.

    Implementation Notes

    Name / Link Implementation Notes"},{"location":"concepts/0441-present-proof-best-practices/","title":"0441: Prover and Verifier Best Practices for Proof Presentation","text":""},{"location":"concepts/0441-present-proof-best-practices/#summary","title":"Summary","text":"

    This work prescribes best practices for provers in credential selection (toward proof presentation), for verifiers in proof acceptance, and for both regarding non-revocation interval semantics, in fulfilment of the Present Proof protocol RFC0037. Of particular interest is behaviour against presentation requests and presentations in their various non-revocation interval profiles.

    "},{"location":"concepts/0441-present-proof-best-practices/#motivation","title":"Motivation","text":"

    Agents should behave consistently in automatically selecting credentials and presenting proofs.

    "},{"location":"concepts/0441-present-proof-best-practices/#tutorial","title":"Tutorial","text":"

    The subsections below introduce constructs and outline best practices for provers and verifiers.

    "},{"location":"concepts/0441-present-proof-best-practices/#presentation-requests-and-non-revocation-intervals","title":"Presentation Requests and Non-Revocation Intervals","text":"

    This section prescribes norms and best practices in formulating and interpreting non-revocation intervals on proof requests.

    "},{"location":"concepts/0441-present-proof-best-practices/#semantics-of-non-revocation-interval-presence-and-absence","title":"Semantics of Non-Revocation Interval Presence and Absence","text":"

    The presence of a non-revocation interval applicable to a requested item (see below) in a presentation request signifies that the verifier requires proof of non-revocation status of the credential providing that item.

    The absence of any non-revocation interval applicable to a requested item signifies that the verifier has no interest in its credential's non-revocation status.

    A revocable or non-revocable credential may satisfy a presentation request with or without a non-revocation interval. The presence of a non-revocation interval conveys that if the prover presents a revocable credential, the presentation must include proof of non-revocation. Its presence does not convey any restriction on the revocability of the credential to present: in many cases the verifier cannot know whether a prover's credential is revocable or not.

    "},{"location":"concepts/0441-present-proof-best-practices/#non-revocation-interval-applicability-to-requested-items","title":"Non-Revocation Interval Applicability to Requested Items","text":"

    A requested item in a presentation request is an attribute or a predicate, proof of which the verifier requests presentation. A non-revocation interval within a presentation request is specifically applicable, generally applicable, or inapplicable to a requested item.

    Within a presentation request, a top-level non-revocation interval is generally applicable to all requested items. A non-revocation interval defined particularly for a requested item is specifically applicable to that requested attribute or predicate but inapplicable to all others.

    A non-revocation interval specifically applicable to a requested item overrides any generally applicable non-revocation interval: no requested item may have both.

    For example, in the following (indy) proof request

    {\n    \"name\": \"proof-request\",\n    \"version\": \"1.0\",\n    \"nonce\": \"1234567890\",\n    \"requested_attributes\": {\n        \"legalname\": {\n            \"name\": \"legalName\",\n            \"restrictions\": [\n                {\n                    \"issuer_did\": \"WgWxqztrNooG92RXvxSTWv\"\n                }\n            ]\n        },\n        \"regdate\": {\n            \"name\": \"regDate\",\n            \"restrictions\": [\n                {\n                    \"issuer_did\": \"WgWxqztrNooG92RXvxSTWv\"\n                }\n            ],\n            \"non_revoked\": {\n                \"from\": 1600001000,\n                \"to\": 1600001000\n            }\n        }\n    },\n    \"requested_predicates\": {\n    },\n    \"non_revoked\": {\n        \"from\": 1600000000,\n        \"to\": 1600000000\n    }\n}\n

the non-revocation interval on 1600000000 is generally applicable to the referent \"legalname\" while the non-revocation interval on 1600001000 is specifically applicable to the referent \"regdate\".

    "},{"location":"concepts/0441-present-proof-best-practices/#semantics-of-non-revocation-interval-endpoints","title":"Semantics of Non-Revocation Interval Endpoints","text":"

    A non-revocation interval contains \"from\" and \"to\" (integer) EPOCH times. For historical reasons, any timestamp within this interval is technically acceptable in a non-revocation subproof. However, these semantics allow for ambiguity in cases where revocation occurs within the interval, and in cases where the ledger supports reinstatement. These best practices require the \"from\" value, should the prover specify it, to equal the \"to\" value: this approach fosters deterministic outcomes.

    A missing \"from\" specification defaults to the same value as the interval's \"to\" value. In other words, the non-revocation intervals

    {\n    \"to\": 1234567890\n}\n

    and

    {\n    \"from\": 1234567890,\n    \"to\": 1234567890\n}\n

    are semantically equivalent.
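The defaulting rule can be applied before any other interval processing. A minimal TypeScript sketch (the type and function names are illustrative assumptions, not part of any Aries API):

```typescript
// Sketch: normalize a non-revocation interval per the best practice that a
// missing "from" defaults to the interval's "to" value (a single instant).
interface NonRevocationInterval {
  from?: number; // EPOCH seconds; optional
  to: number;    // EPOCH seconds; required
}

// Returns an interval with "from" filled in.
function normalizeInterval(
  interval: NonRevocationInterval
): Required<NonRevocationInterval> {
  return { from: interval.from ?? interval.to, to: interval.to };
}
```

With this helper, `{ "to": 1234567890 }` and `{ "from": 1234567890, "to": 1234567890 }` normalize to the same value, matching the semantic equivalence described above.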

    "},{"location":"concepts/0441-present-proof-best-practices/#verifier-non-revocation-interval-formulation","title":"Verifier Non-Revocation Interval Formulation","text":"

    The verifier MUST specify, as current INDY-HIPE 11 notes, the same integer EPOCH time for both ends of the interval, or else omit the \"from\" key and value. In effect, where the presentation request specifies a non-revocation interval, the verifier MUST request a non-revocation instant.

    "},{"location":"concepts/0441-present-proof-best-practices/#prover-non-revocation-interval-processing","title":"Prover Non-Revocation Interval Processing","text":"

    In querying the nodes for revocation status, given a revocation interval on a single instant (i.e., on \"from\" and \"to\" the same, or \"from\" absent), the prover MUST query the ledger for all germane revocation updates from registry creation through that instant (i.e., from zero through \"to\" value): if the credential has been revoked prior to the instant, the revocation necessarily will appear in the aggregate delta.

    "},{"location":"concepts/0441-present-proof-best-practices/#provers-presentation-proposals-and-presentation-requests","title":"Provers, Presentation Proposals, and Presentation Requests","text":"

    In fulfilment of the RFC0037 Present Proof protocol, provers may initiate with a presentation proposal or verifiers may initiate with a presentation request. In the former case, the prover has both a presentation proposal and a presentation request; in the latter case, the prover has only a presentation request.

    "},{"location":"concepts/0441-present-proof-best-practices/#credential-selection-best-practices","title":"Credential Selection Best Practices","text":"

    This section specifies a prover's best practices in matching a credential to a requested item. The specification pertains to automated credential selection: obviously, a human user may select any credential in response to a presentation request; it is up to the verifier to verify the resulting presentation as satisfactory or not.

    Note that where a prover selects a revocable credential for inclusion in response to a requested item with a non-revocation interval in the presentation request, the prover MUST create a corresponding sub-proof of non-revocation at a timestamp within that non-revocation interval (insofar as possible; see below).

    "},{"location":"concepts/0441-present-proof-best-practices/#with-presentation-proposal","title":"With Presentation Proposal","text":"

If the prover initiated the protocol with a presentation proposal specifying a value (or predicate threshold) for an attribute, and the presentation request does not require a different value for it, then the prover MUST select a credential matching the presentation proposal, in addition to following the best practices below regarding the presentation request.

    "},{"location":"concepts/0441-present-proof-best-practices/#preference-for-irrevocable-credentials","title":"Preference for Irrevocable Credentials","text":"

In keeping with the specification above, presentation of an irrevocable credential ipso facto constitutes proof of non-revocation. Provers MUST always prefer irrevocable credentials to revocable credentials when the wallet has both kinds satisfying a requested item, whether or not the requested item has an applicable non-revocation interval. Note that if a non-revocation interval is applicable to a credential's requested item in the presentation request, selecting an irrevocable credential for presentation may lead to a missing timestamp at the verifier (see below).

    If only revocable credentials are available to satisfy a requested item with no applicable non-revocation interval, the prover MUST present such for proof. As per above, the absence of a non-revocation interval signifies that the verifier has no interest in its revocation status.
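The preference rule above can be sketched as a small selection helper (illustrative types and names, not a normative algorithm):

```typescript
// Sketch: automated credential selection preferring irrevocable credentials.
interface CandidateCredential {
  id: string;
  revocable: boolean;
}

// Given candidates that all satisfy the requested item, pick an irrevocable
// one if available; otherwise fall back to a revocable one. Returns undefined
// when nothing satisfies the item.
function selectCredential(
  candidates: CandidateCredential[]
): CandidateCredential | undefined {
  const irrevocable = candidates.find((c) => !c.revocable);
  return irrevocable ?? candidates[0];
}
```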

    "},{"location":"concepts/0441-present-proof-best-practices/#verifiers-presentations-and-timestamps","title":"Verifiers, Presentations, and Timestamps","text":"

    This section prescribes verifier best practices concerning a received presentation by its timestamps against the corresponding presentation request's non-revocation intervals.

    "},{"location":"concepts/0441-present-proof-best-practices/#timestamp-for-irrevocable-credential","title":"Timestamp for Irrevocable Credential","text":"

    A presentation's inclusion of a timestamp pertaining to an irrevocable credential evinces tampering: the verifier MUST reject such a presentation.

    "},{"location":"concepts/0441-present-proof-best-practices/#missing-timestamp","title":"Missing Timestamp","text":"

    A presentation with no timestamp for a revocable credential purporting to satisfy a requested item in the corresponding presentation request, where the requested item has an applicable non-revocation interval, evinces tampering: the verifier MUST reject such a presentation.

    It is licit for a presentation to have no timestamp for an irrevocable credential: the applicable non-revocation interval is superfluous in the presentation request.

    "},{"location":"concepts/0441-present-proof-best-practices/#timestamp-outside-non-revocation-interval","title":"Timestamp Outside Non-Revocation Interval","text":"

A presentation may include a timestamp outside of the non-revocation interval applicable to the requested item that a presented credential purports to satisfy. If the latest timestamp from the ledger for a presented credential's revocation registry predates the non-revocation interval, but the timestamp is not in the future (relative to the instant of presentation proof, with a reasonable allowance for clock skew), the verifier MUST log and continue the proof verification process.

    Any timestamp in the future (relative to the instant of presentation proof, with a reasonable allowance for clock skew) evinces tampering: the verifier MUST reject a presentation with a future timestamp. Similarly, any timestamp predating the creation of its corresponding credential's revocation registry on the ledger evinces tampering: the verifier MUST reject a presentation with such a timestamp.
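The checks in the last two paragraphs can be sketched as a single validation function. All names and the clock-skew allowance are assumptions for illustration; the caller is assumed to have already confirmed that a stale timestamp is the latest one available from the ledger for the registry:

```typescript
// Sketch of verifier timestamp checks for a revocable credential's sub-proof.
type TimestampVerdict = "reject" | "log-and-continue" | "ok";

function checkTimestamp(
  timestamp: number,        // EPOCH seconds from the presentation
  registryCreation: number, // EPOCH seconds the revocation registry was created
  interval: { from: number; to: number }, // applicable non-revocation interval
  now: number,              // EPOCH seconds at verification
  clockSkew = 300           // assumed skew allowance in seconds, not normative
): TimestampVerdict {
  if (timestamp > now + clockSkew) return "reject";         // future: tampering
  if (timestamp < registryCreation) return "reject";        // predates registry: tampering
  if (timestamp < interval.from) return "log-and-continue"; // stale but latest on ledger
  return "ok";
}
```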

    "},{"location":"concepts/0441-present-proof-best-practices/#dates-and-predicates","title":"Dates and Predicates","text":"

This section prescribes issuer and verifier best practices concerning representing dates for use in predicate proofs (e.g., proving Alice is over 21 without revealing her birth date).

    "},{"location":"concepts/0441-present-proof-best-practices/#dates-in-credentials","title":"Dates in Credentials","text":"

In order for dates to be used in a predicate proof, they MUST be expressed as an Int32. While Unix timestamps could work for this, they have several drawbacks: they cannot represent dates outside the years 1901-2038, they are not human readable, and they are overly precise, since birth time down to the second is generally not needed for an age check. To address these issues, date attributes SHOULD be represented as integers in the form YYYYMMDD (e.g., 19991231). This addresses the issues with Unix timestamps (or any seconds-since-epoch system) while still allowing date values to be compared with the < and > operators. Note that this system won't work for general date math (e.g., adding or subtracting days), but it will work for predicate proofs, which only require comparisons. To make it clear that this format is being used, the attribute name SHOULD have the suffix _dateint. Since most datetime libraries don't include this format, here are some examples of helper functions written in TypeScript.
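A minimal sketch of such helpers (illustrative, not the RFC's exact code):

```typescript
// Encodes a calendar date as the integer YYYYMMDD for use in predicate proofs.
function dateToDateInt(d: Date): number {
  return (
    d.getUTCFullYear() * 10000 + (d.getUTCMonth() + 1) * 100 + d.getUTCDate()
  );
}

// Decodes a YYYYMMDD integer back into year/month/day parts.
function dateIntToParts(dateInt: number): {
  year: number;
  month: number;
  day: number;
} {
  return {
    year: Math.floor(dateInt / 10000),
    month: Math.floor((dateInt % 10000) / 100),
    day: dateInt % 100,
  };
}
```

Because the year occupies the most significant digits, ordinary integer comparison of two dateints agrees with chronological order, which is all a predicate proof needs.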

    "},{"location":"concepts/0441-present-proof-best-practices/#dates-in-presentations","title":"Dates in Presentations","text":"

When constructing a proof request, the verifier SHOULD express the minimum/maximum date as an integer in the form YYYYMMDD. For example, if today is Jan 1, 2021, then to check that the holder is over 21 the verifier would request that birthdate_dateint is before or equal to Jan 1, 2000, i.e., <= 20000101. The holder MUST construct a predicate proof with a YYYYMMDD-represented birth date less than or equal to that value to satisfy the proof request.
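Computing the verifier's threshold can be sketched as follows (an illustrative helper; the simple year subtraction ignores rare calendar edge cases such as a Feb 29 reference date, which a production implementation might handle explicitly):

```typescript
// Sketch: the dateint threshold for "at least `years` old as of `today`".
// A holder satisfies the check when birthdate_dateint <= this threshold.
function ageThresholdDateInt(today: Date, years: number): number {
  const y = today.getUTCFullYear() - years;
  return y * 10000 + (today.getUTCMonth() + 1) * 100 + today.getUTCDate();
}
```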

    "},{"location":"concepts/0441-present-proof-best-practices/#reference","title":"Reference","text":""},{"location":"concepts/0441-present-proof-best-practices/#implementations","title":"Implementations","text":"

    The following lists the implementations (if any) of this RFC. Please do a pull request to add your implementation. If the implementation is open source, include a link to the repo or to the implementation within the repo. Please be consistent in the \"Name\" field so that a mechanical processing of the RFCs can generate a list of all RFCs supported by an Aries implementation.

    Implementation Notes may need to include a link to test results.

    Name / Link Implementation Notes"},{"location":"concepts/0478-coprotocols/","title":"Aries RFC 0478: Coprotocols","text":""},{"location":"concepts/0478-coprotocols/#summary","title":"Summary","text":"

    Explains how one protocol can invoke and interact with others, giving inputs and receiving outputs and errors.

    "},{"location":"concepts/0478-coprotocols/#motivation","title":"Motivation","text":"

It's common for complex business workflows to be composed from smaller, configurable units of logic. It's also common for multiple processes to unfold in interrelated ways, such that a complex goal is choreographed from semi-independent tasks. Enabling flexible constructions like this is one of the major goals of protocols built atop DIDComm. We need a standard methodology for doing so.

    "},{"location":"concepts/0478-coprotocols/#tutorial","title":"Tutorial","text":"

    A protocol is any recipe for a stateful interaction. DIDComm itself is a protocol, as are many primitives atop which it is built, such as HTTP, Diffie-Hellman key exchange, and so forth. However, when we talk about protocols in decentralized identity, without any qualifiers, we usually mean application-level interactions like credential issuance, feature discovery, third-party introductions, and so forth. These protocols are message-based interactions that use DIDComm.

    We want these protocols to be composable. In the middle of issuing credentials, we may want to challenge the potential holder for proof -- and in the middle of challenging for proof, maybe we want to negotiate payment. We could build proving into issuing, and payment into proving, but this runs counter to the DRY principle and to general best practice in encapsulation. A good developer writing a script to issue credentials would probably isolate payment and proving logic in separate functions or libraries, and would strive for loose coupling so each could evolve independently.

    Agents that run protocols have goals like those of the script developer. How we achieve them is the subject of this RFC.

    "},{"location":"concepts/0478-coprotocols/#subroutines","title":"Subroutines","text":"

    In the world of computer science, a subroutine is a vital abstraction for complex flows. It breaks logic into small, reusable chunks that are easy for a human to understand and document, and it formalizes their interfaces. Code calls a subroutine by referencing it via name or address, providing specified arguments as input. The subroutine computes on this input, eventually producing an output; the details don't interest the caller. While the subroutine is busy, the caller typically waits. Callers can often avoid recompilation when details inside subroutines change. Subroutines can come from pluggable libraries. These can be written by different programmers in different programming languages, as long as a calling convention is shared.

    Thinking of protocols as analogs to subroutines suggests some interesting questions:

    "},{"location":"concepts/0478-coprotocols/#coroutines","title":"Coroutines","text":"

    Before we answer these questions, let's think about a generalization of subroutines that's slightly less familiar to some programmers: coroutines. Coroutines achieve the same encapsulation and reusability as subroutines, but as a category they are more flexible and powerful. Coroutines may be, but aren't required to be, call-stack \"children\" of their callers; they may have complex lifecycles that begin or end outside the caller's lifespan. Coroutines may receive inputs at multiple points, not just at launch. They may yield outputs at multiple points, too. Subroutines are just the simplest variant of coroutines.

The flexibility of coroutines gives options to programmers, and it explains why most programming languages evolve to offer them as first-class constructs when they encounter demanding requirements for asynchronicity, performance, or scale. For example, early versions of Python lacked the concept of coroutines; if you wrote a loop over range(1, 1000000), Python allocated and filled a container holding 1 million numbers, and then iterated over the container. When generators (a type of coroutine) were added to the language, the underlying logic changed. Now range(1, 1000000) is a coroutine invocation that trades execution state back and forth with its sibling caller routine. The first time it is invoked, it receives and stores its input values, then produces one output (the lower bound of the range). Each subsequent time through the loop it is invoked again; it increments its internal state and yields a new output back to the caller. No allocations occur, and an early break from the loop wastes nothing.
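The lazy-evaluation behavior described for Python's generators can be sketched with a TypeScript generator function:

```typescript
// A lazy range as a generator: values are produced one at a time as the
// caller iterates, trading control back and forth instead of allocating a
// container of a million numbers up front.
function* range(start: number, stop: number): Generator<number> {
  for (let i = start; i < stop; i++) {
    yield i; // suspend here; resume on the caller's next iteration
  }
}

// An early break wastes nothing: only the consumed values are ever computed.
const firstThree: number[] = [];
for (const n of range(1, 1_000_000)) {
  firstThree.push(n);
  if (firstThree.length === 3) break;
}
```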

    If we want to choose one conceptual parallel for how protocols relate to one another, we should think of them as coroutines, not subroutines; doing so constrains us less. Although payment as a subroutine inside credential issuance sounds plausible at first glance, it turns out to be clumsy under deeper analysis. A payment protocol yields more than one output -- typically a preauthorization at an intermediate stage, then a final outcome when it completes. At the preauthorization stage, it should accept graceful cancellation (a second input, after launch). And high-speed, bulk issuance of credentials is likely to benefit from payment and issuance being partly parallelized instead of purely sequential.

    Similarly, a handshake protocol like DID Exchange or Connection is best framed as a coprotocol of Introduce; this makes it easy for Introduce to complete as soon as the handshake begins, instead of waiting for the handshake to finish as if it were a subroutine.

    By thinking of cross-protocol interactions like coroutine interactions, we get the best of both worlds: where the interaction is just subroutine-like, the model lets us simplify; where we need more flexibility and power, the model still fits.

    Protocols don't have to support the types of coprotocol interactions we're describing here; protocols developed by Aries developers have already proven their value even without it. But to unlock their full potential, adding coprotocol support to new and existing protocol definitions may be worthwhile. This requires only a modest update to a protocol RFC, and creates little extra work for implementers.

    "},{"location":"concepts/0478-coprotocols/#the-simple-approach-that-falls-apart","title":"The simple approach that falls apart","text":"

When the DIDComm community first began thinking about one protocol invoking another, we imagined that the interface to the called coprotocol would simply be its first message. For example, if verifiable credential issuer Acme Corp wanted to demand payment for a credential during an issuance protocol with Bob, Acme would send Bob a request_payment message that constituted the first message in a make_payment protocol. This would create an instance of the payment protocol running alongside issuance; issuance could then wait until it completed before proceeding. And Bob wouldn't need to lift a finger to make it work, if he already supported the payment protocol.

    Unfortunately, this approach looks less attractive after study:

    "},{"location":"concepts/0478-coprotocols/#general-interface-needs","title":"General Interface Needs","text":"

    What we want, instead, is a formal declaration of something a bit like a coprotocol's \"function signature.\" It needs to describe the inputs that launch the protocol, and the outputs and/or errors emitted as it finishes. It should hide implementation details and remain stable across irrelevant internal changes.

    We need to bind compatible coprotocols to one another using the metadata in these declarations. And since coprotocol discovery may have to satisfy a remote party, not just a local one, our binding needs to work well dynamically, and late, and with optional, possibly overlapping plugins providing implementations. This suggests that our declarations must be rich and flexible about binding criteria \u2014 it must be possible to match on something more than just a coprotocol name and/or arg count+type.

    An interesting divergence from the function signature parallel is that we may have to describe inputs and outputs (and errors) at multiple interaction points, not just the coprotocol's initial invocation.

    Another subtlety is that protocol interfaces need to be partitioned by role; the experience of a payer and a payee with respect to a payment protocol may be quite different. The interface offered by a coprotocol must vary by which role the invoked coprotocol instance embodies.

    Given all these considerations, we choose to describe coprotocol interfaces using a set of function-like signatures, not just one. We use a function-like notation to make them as terse and intuitive as possible for developers.

    "},{"location":"concepts/0478-coprotocols/#example","title":"Example","text":"

    Suppose we are writing a credential issuance protocol, and we want to use coprotocols to add support for situations where the issuer expects payment partway through the overall flow. We'd like it to be possible for our payment step to use Venmo/Zelle, or cryptocurrency, or traditional credit cards, or anything else that issuers and holders agree upon. So we want to encapsulate the payment problem as a pluggable, discoverable, negotiable coprotocol.

    We do a little research and discover that many DIDComm-based payment protocols exist. Three of them advertise support for the same coprotocol interface:

goal: aries.buy.make-payment\npayee:\n  get:\n      - invoke(amount: float, currency: str, bill_of_sale: str) @ null\n      - proceed(continue: bool) @ requested, waiting-for-commit\n  give:\n      - preauth(code: str) @ waiting-for-commit\n      - return(confirmation_code: str) @ finalizing\n

    In plain English, the declared coprotocol semantics are:

    This is a coprotocol interface for protocols that facilitate the aries.buy.make-payment goal code. The payee role in this coprotocol gets input at two interaction points, \"invoke\" and \"proceed\". Invoke happens when state is null (at launch); \"proceed\" happens when state is \"requested\" or \"waiting-for-commit.\" At invoke, the caller of the co-protocol provides 3 inputs: an amount, a currency, and a bill of sale. At proceed, the caller decides whether to continue. Implementations of this coprotocol interface also give output at two interaction points, \"preauth\" and \"return.\" At preauth, the output is a string that's a preauth code; at return, the output is a confirmation code.

    "},{"location":"concepts/0478-coprotocols/#simplified-description-only","title":"Simplified description only","text":"

It's important to understand that this interface is NOT the same as the protocol's direct interface (the message family and state machine that a protocol impl must provide to implement the protocol as documented). It is, instead, a simplified encapsulation -- just like a function signature is a simplified encapsulation of a coroutine. A function impl can rename its args for internal use. It can have steps that the caller doesn't know about. The same is true for protocols: their role names, state names, message types and versions, and field names in messages don't need to be exposed directly in a coprotocol interface; they just need a mapping that the protocol understands internally. The specific payment protocol implementation might look like this (don't worry about details; the point is just that some might exist):

    When we describe this as a coprotocol, we omit most of its details, and we change some verbiage. The existence of the payee, gateway and blockchain roles is suppressed (though we now have an implicit new role -- the caller of the coprotocol that gives what the protocol gets, and gets what the protocol gives). Smart contracts disappear. The concept of handle to pending txn is mapped to the coprotocol's preauth construct, and txn hash is mapped to the coprotocol's confirmation_code. As a coprotocol, the payee can interact according to a far simpler understanding, where the caller asks the payee to engage in a payment protocol, expose some simple hooks, and notify on completion:

    "},{"location":"concepts/0478-coprotocols/#calling-convention","title":"Calling Convention","text":"

    More details are needed to understand exactly how the caller and the coprotocol communicate. There are two sources of such details:

    1. Proprietary methods
    2. Standard Aries-style DIDComm protocol

    Proprietary methods allow aggressive optimization. They may be appropriate when it's known that the caller and the coprotocol will share the same process space on a single device, and the code for both will come from a single codebase. In such cases, there is no need to use DIDComm to communicate.

The second approach, a standard DIDComm protocol, may be more chatty, but it is better when the coprotocol might be invoked remotely (e.g., Acme's server A is in the middle of issuance and wants to invoke payment to run on server B), or where the codebases for each party to the interaction need some independence.

The expectation is that coprotocols share a compatible trust domain; that is, coprotocol interactions occur within the scope of one identity rather than across identity boundaries. Thus, interoperability is not a strong requirement. Nonetheless, approaching this question as a standard protocol problem leads to a clean, loosely coupled architecture with little incremental cost in an agent. Therefore, a protocol for coprotocol coordination has been developed. This is the subject of the sister document, Aries RFC 0482: Coprotocol Protocol.

    "},{"location":"concepts/0478-coprotocols/#reference","title":"Reference","text":"

    More about optional fields and syntax in a coprotocol declaration.

    How to add a coprotocol decl to a protocol.

    "},{"location":"concepts/0478-coprotocols/#drawbacks","title":"Drawbacks","text":"

    Why should we not do this?

    "},{"location":"concepts/0478-coprotocols/#rationale-and-alternatives","title":"Rationale and alternatives","text":""},{"location":"concepts/0478-coprotocols/#prior-art","title":"Prior art","text":"

    Coroutines \u2014 the computer science scaffolding against which coprotocols are modeled \u2014 are extensively discussed in the literature of various compiler developer communities. The discussion about adding support for this feature in Rust is particularly good background reading: https://users.rust-lang.org/t/coroutines-and-rust/9058

    "},{"location":"concepts/0478-coprotocols/#unresolved-questions","title":"Unresolved questions","text":""},{"location":"concepts/0478-coprotocols/#implementations","title":"Implementations","text":"

    The following lists the implementations (if any) of this RFC. Please do a pull request to add your implementation. If the implementation is open source, include a link to the repo or to the implementation within the repo. Please be consistent in the \"Name\" field so that a mechanical processing of the RFCs can generate a list of all RFCs supported by an Aries implementation.

    Implementation Notes may need to include a link to test results.

    Name / Link Implementation Notes"},{"location":"concepts/0519-goal-codes/","title":"0519: Goal Codes","text":""},{"location":"concepts/0519-goal-codes/#summary","title":"Summary","text":"

    Explain how different parties in an SSI ecosystem can communicate about their intentions in a way that is understandable by humans and by automated software.

    "},{"location":"concepts/0519-goal-codes/#motivation","title":"Motivation","text":"

    Agents exist to achieve the intents of their owners. Those intents largely unfold through protocols. Sometimes intelligent action in these protocols depends on a party declaring their intent. We need a standard way to do that.

    "},{"location":"concepts/0519-goal-codes/#tutorial","title":"Tutorial","text":"

    Our early learnings in SSI focused on VC-based proving with a very loose, casual approach to context. We did demos where Alice connects with a potential employer, Acme Corp -- and we assumed that each of the interacting parties had a shared understanding of one another's needs and purposes.

    But in a mature SSI ecosystem, where unknown agents can contact one another for arbitrary reasons, this context is not always easy to deduce. Acme Corp's agent may support many different protocols, and Alice may interact with Acme in the capacity of customer or potential employee or vendor. Although we have feature discovery to learn what's possible, and we have machine-readable governance frameworks to tell us what rules might apply in a given context, we haven't had a way to establish the context in the first place. When Alice contacts Acme, a context is needed before a governance framework is selectable, and before we know which ../../features are desirable.

The key ingredient in context is intent. If Alice says to Acme, \"I'd like to connect,\" Acme wants to be able to trigger different behavior depending on whether Alice's intent is to be a customer, apply for a job, or audit Acme's taxes. This is the purpose of a goal code.

    "},{"location":"concepts/0519-goal-codes/#the-goal-code-datatype","title":"The goal code datatype","text":"

    To express intent, this RFC formally introduces the goal code datatype. When a field in a DIDComm message contains a goal code, its semantics and format match the description given here. (Goal codes are often declared via the ~thread decorator, but may also appear in ordinary message fields. See the Scope section below. Convention is to name this field \"goal_code\" where possible; however, this is only a convention, and individual protocols may adapt to it however they wish.)

    TODO: should we make a decorator out of this, so protocols don't have to declare it, and so any message can have a goal code? Or should we just let protocols declare a field in whatever message makes sense?

    Protocols use fields of this type as a way to express the intent of the message sender, thus coloring the larger context. In a sense, goal codes are to DIDComm what the subject: field is to email -- except that goal codes have formalized meanings to make them recognizable to automation.

Goal codes use a standard format. They are lower-cased, kebab-punctuated strings. ASCII and English are recommended, as goal codes are intended to be read by the software developer community, not by end users; however, full UTF-8 is allowed. They support hierarchical dotted notation, where more general categories are to the left of a dot, and more specific categories are to the right. Some example goal codes might be:

    Goals are inherently self-attested. Thus, goal codes don't represent objective fact that a recipient can rely upon in a strong sense; subsequent interactions can always yield surprises. Even so, goal codes let agents triage interactions and find misalignments early; there's no point in engaging if their goals are incompatible. This has significant benefits for spam prevention, among other things.

    "},{"location":"concepts/0519-goal-codes/#verbs","title":"Verbs","text":"

    Notice the verbs in the examples: sell, date, hire, and arrange. Goals typically involve action; a complete goal code should have one or more verbs in it somewhere. Turning verbs into nouns (e.g., employment.references instead of employment.check-references) is considered bad form. (Some namespaces may put the verbs at the end; some may put them in the middle. That's a purely stylistic choice.)

    "},{"location":"concepts/0519-goal-codes/#directionality","title":"Directionality","text":"

    Notice, too, that the verbs may imply directionality. A goal with the sell verb implies that the person announcing the goal is a would-be seller, not a buyer. We could imagine a more general verb like engage-in-commerce that would allow either behavior. However, that would often be a mistake. The value of goal codes is that they let agents align around intent; announcing that you want to engage in general commerce without clarifying whether you intend to sell or buy may be too vague to help the other party make decisions.

It is conceivable that this would lead to parallel branches of a goal ontology that differ only in the direction of their verb. Thus, we could imagine sell.A and sell.B being shadowed by buy.A and buy.B. This might be necessary if a family of protocols allows either party to initiate an interaction and declare the goal, and if both parties view the goals as perfect mirror images. However, practical considerations may make this kind of parallelism unlikely. A random party contacting an individual to sell something may need to be quite clear about the type of selling they intend, to make it past a spam filter. In contrast, a random individual arriving at the digital storefront of a mega retailer may be quite vague about the type of buying they intend. Thus, the buy.* side of the namespace may need much less detail than the sell.* side.

    "},{"location":"concepts/0519-goal-codes/#goals-for-others","title":"Goals for others","text":"

Related to directionality, it may occasionally be desirable to propose goals to others, rather than advocating your own: \"Let <parties = us = Alice, Bob, and Carol> <goal = hold an auction> -- I nominate Carol to be the <role = auctioneer> and get us started.\" The difference between a normal message and an unusual one like this is not visible in the goal code; it should be exposed in additional fields that associate the goal with a particular identifier+role pair. Essentially, you are proposing a goal to another party, and these extra fields clarify who should receive the proposal, and what role/perspective they might take with respect to the goal.

    Making proposals like this may be a feature in some protocols. Where it is, the protocols determine the message field names for the goal code, the role, and the DID associated with the role and goal.

    "},{"location":"concepts/0519-goal-codes/#matching","title":"Matching","text":"

    The goal code cci.healthcare is considered a more general form of the code cci.healthcare.procedure, which is more general than cci.healthcare.procedure.schedule. Because these codes are hierarchical, wildcards and fuzzy matching are possible for either a sender or a recipient of a message. Filename-style globbing semantics are used.

A sender agent can specify that their owner's goal is just meetupcorp.personal without clarifying more; this is like specifying that a file is located under a folder named \"meetupcorp/personal\" without specifying where; any file \"under\" that folder -- or the folder itself -- would match the pattern. A recipient agent can have a policy that says, \"Reject any attempts to connect if the goal code of the other party is aries.sell.*.\" Notice how this differs from aries.sell*; the first looks for things \"inside\" aries.sell; the latter looks for things \"inside\" aries that have names beginning with sell.
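The globbing semantics above can be sketched with a tiny matcher. This is a non-normative illustration; the function name and exact edge-case behavior are assumptions, not defined by this RFC:

```python
def goal_matches(pattern: str, code: str) -> bool:
    """Match a goal code against a filename-style glob pattern (sketch)."""
    if pattern.endswith(".*"):
        # "aries.sell.*" matches codes strictly inside aries.sell
        return code.startswith(pattern[:-1])   # keeps the trailing dot
    if pattern.endswith("*"):
        # "aries.sell*" matches codes under aries whose next segment
        # begins with "sell" (e.g. aries.sellout)
        return code.startswith(pattern[:-1])
    # A bare pattern matches the code itself or anything under it.
    return code == pattern or code.startswith(pattern + ".")
```

Note how `aries.sell.*` rejects `aries.sellout.services` while `aries.sell*` accepts it, mirroring the distinction drawn above.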

    "},{"location":"concepts/0519-goal-codes/#scope","title":"Scope","text":"

    When is a declared goal known to color interactions, and when is it undefined?

    We previously noted that goal codes are a bit like the subject: header on an email; they contextualize everything that follows in that thread. We don't generally want to declare a goal outside of a thread context, because that would prevent an agent from engaging in two goals at the same time.

    Given these two observations, we can say that a goal applies as soon as it is declared, and it continues to apply to all messages in the same thread. It is also inherited by implication through a thread's pthid field; that is, a parent thread's goal colors the child thread unless/until overridden.
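The scoping rule above can be sketched as a small resolver. The shape of the `threads` map and its field names (beyond DIDComm's `thid`/`pthid`) are assumptions for illustration; real agents track thread state in their own way:

```python
def effective_goal(msg: dict, threads: dict):
    """Resolve the goal that colors a message: the declared goal of its
    own thread, else the goal inherited through the pthid chain.
    'threads' maps thid -> {"goal": ..., "pthid": ...} (assumed shape)."""
    thid = msg.get("thid")
    while thid is not None:
        info = threads.get(thid, {})
        if info.get("goal"):
            return info["goal"]      # goal declared on this thread wins
        thid = info.get("pthid")     # otherwise inherit from the parent
    return None                      # no goal in scope
```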

    "},{"location":"concepts/0519-goal-codes/#namespacing","title":"Namespacing","text":"

    To avoid collision and ambiguity in code values, we need to support namespacing in our goal codes. Since goals are only a coarse-grained alignment mechanism, however, we don't need perfect decentralized precision. Confusion isn't much more than an annoyance; the worst that could happen is that two agents discover one or two steps into a protocol that they're not as aligned as they supposed. They need to be prepared to tolerate that outcome in any case.

Thus, we follow the same general approach that's used in Java's packaging system, where organizations and communities use a self-declared prefix for their ecosystem as the leftmost segment or segments of a family of identifiers (goal codes) they manage. Unlike Java, though, these need not be tied to DNS in any way. We recommend a single-segment namespace that is a unique string, and that is an alias for a URI identifying the origin ecosystem. (In other words, you don't need to start with \"com.yourcorp.yourproduct\" -- \"yourcorp\" is probably fine.)

    The aries namespace alias is reserved for goal codes defined in Aries RFCs. The URI aliased by this name is TBD. See the Reference section for more details.

    "},{"location":"concepts/0519-goal-codes/#versioning","title":"Versioning","text":"

Semver-style semantics don't map to goals in a simple way; it is not obvious what constitutes a \"major\" versus a \"minor\" difference in a goal, or a difference that's not worth tracking at all. The content of a goal \u2014 the only thing that might vary across versions \u2014 is simply its free-form description, and that varies according to human judgment. Many different versions of a protocol are likely to share the goal to make a payment or to introduce two strangers. A goal is likely to be far more stable than the details of how it is accomplished.

    Because of these considerations, goal codes do not impose an explicit versioning mechanism. However, one is reserved for use, in the unusual cases where it may be helpful. It is to append -v plus a numeric suffix: my-goal-code-v1, my-goal-code-v2, etc. Goal codes that vary only by this suffix should be understood as ordered-by-numeric-suffix evolutions of one another, and goal codes that do not intend to express versioning should not use this convention for something else. A variant of the goal code without any version suffix is equivalent to a variant with the -v1 suffix. This allows human intuition about the relatedness of different codes, and it allows useful wildcard matching across versions. It also treats all version-like changes to a goal as breaking (semver \"major\") changes, which is probably a safe default.
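Parsing the reserved suffix might look like this (a sketch; the function name is illustrative, not part of the convention):

```python
import re

def parse_goal_version(code: str):
    """Split an optional '-v<N>' suffix off a goal code. A code with no
    suffix is equivalent to version 1, per the reserved convention."""
    m = re.fullmatch(r"(.*)-v(\d+)", code)
    if m:
        return m.group(1), int(m.group(2))
    return code, 1
```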

    Families of goal codes are free to use this convention if they need it, or to invent a non-conflicting one of their own. However, we repeat our observation that versioning in goal codes is often inappropriate and unnecessary.

    "},{"location":"concepts/0519-goal-codes/#declaring-goal-codes","title":"Declaring goal codes","text":""},{"location":"concepts/0519-goal-codes/#standalone-rfcs-or-similar-sources","title":"Standalone RFCs or Similar Sources","text":"

Any URI-referencable document can declare families or ontologies of goal codes. In the context of Aries, we encourage standalone RFCs for this purpose if the goals seem likely to be relevant in many contexts. Other communities may of course document goal codes in their own specs -- either dedicated to goal codes, or as part of larger topics. The following block is a sample of how we recommend that such goal codes be declared. Note that each code is individually hyperlink-able, and each is associated with a brief human-friendly description in one or more languages. This description may be used in menuing mechanisms such as the one described in Action Menu Protocol.

    "},{"location":"concepts/0519-goal-codes/#goal-codes","title":"goal codes","text":""},{"location":"concepts/0519-goal-codes/#ariessell","title":"aries.sell","text":"

    en: Sell something. Assumes two parties (buyer/seller). es: Vender algo. Asume que dos partes participan (comprador/vendedor).

    "},{"location":"concepts/0519-goal-codes/#ariessellgoodsconsumer","title":"aries.sell.goods.consumer","text":"

    en: Sell tangible goods of interest to general consumers.

    "},{"location":"concepts/0519-goal-codes/#ariessellservicesconsumer","title":"aries.sell.services.consumer","text":"

    en: Sell services of interest to general consumers.

    "},{"location":"concepts/0519-goal-codes/#ariessellservicesenterprise","title":"aries.sell.services.enterprise","text":"

    en: Sell services of interest to enterprises.

    "},{"location":"concepts/0519-goal-codes/#in-didcomm-based-protocol-specs","title":"In DIDComm-based Protocol Specs","text":"

    Occasionally, goal codes may have meaning only within the context of a specific protocol. In such cases, it may be appropriate to declare the goal codes directly in a protocol spec. This can be done using a section of the RFC as described above.

    More commonly, however, a protocol will accomplish one or more goals (e.g., when the protocol is fulfilling a co-protocol interface), or will require a participant to identify a goal at one or more points in a protocol flow. In such cases, the goal codes are probably declared external to the protocol. If they can be enumerated, they should still be referenced (hyperlinked to their respective definitions) in the protocol RFC.

    "},{"location":"concepts/0519-goal-codes/#in-governance-frameworks","title":"In Governance Frameworks","text":"

    Goal codes can also be (re-)declared in a machine-readable governance framework.

    "},{"location":"concepts/0519-goal-codes/#reference","title":"Reference","text":""},{"location":"concepts/0519-goal-codes/#known-namespace-aliases","title":"Known Namespace Aliases","text":"

    No central registry of namespace aliases is maintained; you need not register with an authority to create a new one. Just pick an alias with good enough uniqueness, and socialize it within your community. For convenience of collision avoidance, however, we maintain a table of aliases that are typically used in global contexts, and welcome PRs from anyone who wants to update it.

alias | used by | URI
aries | Hyperledger Aries Community | TBD"},{"location":"concepts/0519-goal-codes/#well-known-goal-codes","title":"Well-known goal codes","text":"

    The following goal codes are defined here because they already have demonstrated utility, based on early SSI work in Aries and elsewhere.

    "},{"location":"concepts/0519-goal-codes/#ariesvc","title":"aries.vc","text":"

    Participate in some form of VC-based interaction.

    "},{"location":"concepts/0519-goal-codes/#ariesvcissue","title":"aries.vc.issue","text":"

    Issue a verifiable credential.

    "},{"location":"concepts/0519-goal-codes/#ariesvcverify","title":"aries.vc.verify","text":"

    Verify or validate VC-based assertions.

    "},{"location":"concepts/0519-goal-codes/#ariesvcrevoke","title":"aries.vc.revoke","text":"

    Revoke a VC.

    "},{"location":"concepts/0519-goal-codes/#ariesrel","title":"aries.rel","text":"

    Create, maintain, or end something that humans would consider a relationship. This may be accomplished by establishing, updating or deleting a DIDComm messaging connection that provides a secure communication channel for the relationship. The DIDComm connection itself is not the relationship, but would be used to carry out interactions between the parties to facilitate the relationship.

    "},{"location":"concepts/0519-goal-codes/#ariesrelbuild","title":"aries.rel.build","text":"

    Create a relationship. Carries the meaning implied today by a LinkedIn invitation to connect or a Facebook \"Friend\" request. Could be as limited as creating a DIDComm Connection.

    "},{"location":"concepts/0519-goal-codes/#ariesvcverifieronce","title":"aries.vc.verifier.once","text":"

    Create a DIDComm connection for the sole purpose of doing the one-time execution of a Present Proof protocol. Once the protocol execution is complete, both sides SHOULD delete the connection, as it will not be used again by either side.

The purpose of this goal code flow is to accomplish the equivalent of a \"connection-less\" present proof by having the agents establish a DIDComm connection, execute the present proof protocol, and delete the connection. This goal code is needed when an actual connection-less present proof cannot be used because the out-of-band (OOB) message (including the presentation request) is too large for the transport being used--most often a QR code (although it may be useful for Bluetooth scenarios as well)--and a URL shortener option is not available. By using a one-time connection, the OOB message is small enough to fit easily into a QR code, the present proof protocol can be executed using the established connection, and at the end of the interaction, no connection remains for either side to use or manage.

    "},{"location":"concepts/0519-goal-codes/#implementations","title":"Implementations","text":"

    The following lists the implementations (if any) of this RFC. Please do a pull request to add your implementation. If the implementation is open source, include a link to the repo or to the implementation within the repo. Please be consistent in the \"Name\" field so that a mechanical processing of the RFCs can generate a list of all RFCs supported by an Aries implementation.

    Implementation Notes may need to include a link to test results.

    Name / Link Implementation Notes"},{"location":"concepts/0559-pppu/","title":"Aries RFC 0559: Privacy-Preserving Proof of Uniqueness","text":""},{"location":"concepts/0559-pppu/#summary","title":"Summary","text":"

    Documents two techniques that, while preserving holder privacy, can guarantee a single use of a verifiable credential by any given unique holder -- the so-called \"one person one vote\" outcome that's often desirable in VC use cases.

    "},{"location":"concepts/0559-pppu/#motivation","title":"Motivation","text":"

    Many actions need to be constrained such that a given actor (usually, a human being) can only perform the action once. In government and stockholder elections, we want each voter to cast a single vote. At national borders, we want a visa to allow entrance only a single time before a visitor leaves. In refugee camps, homeless shelters, and halfway houses, we want each guest to access food or medication a single time per distribution event.

    Solving this problem without privacy is relatively straightforward. We require credentials that disclose a person\u2019s identity, and we track the identities to make sure each is authorized once. This pattern can be used with physical credentials, or with their digital equivalent.

    The problem is that each actor\u2019s behavior is tracked with this method, because it requires the recording of identity. Instead of just enforcing one-person-one-vote, we create a history of every instance when person X voted, which voting station they attended, what time they cast their vote, and so forth. We create similar records about personal travel or personal medication usage. Such information can be abused to surveil, to harass, to intrude, or to spam.

    What we need is a way to prove that an action is associated with a unique actor, and thus enforce the one-actor-one-action constraint, without disclosing that actor\u2019s identity in a way that erodes privacy. Although we began with examples of privacy for humans, we also want a solution for groups or institutions wishing to remain anonymous, or for devices, software entities, or other internet-of-things actors that have a similar need.

    "},{"location":"concepts/0559-pppu/#tutorial","title":"Tutorial","text":""},{"location":"concepts/0559-pppu/#solution-1","title":"Solution 1","text":"

    This solution allows uniqueness to be imposed on provers during an arbitrary context chosen by the verifier, with no unusual setup at issuance time. For example, a verifier could decide to constrain a particular credential holder to proving something only once per hour, or once during a given contest or election. The price of this flexibility is that the credential holder must have a digital credential that already has important uniqueness guarantees (e.g., a driver's license, a passport, etc).

    In contrast, solution 2 imposes uniqueness at issuance time, but requires no other credential with special guarantees.

    "},{"location":"concepts/0559-pppu/#components","title":"Components","text":"

    The following components are required to solve this problem:

    "},{"location":"concepts/0559-pppu/#a","title":"A","text":"

    one issuance to identified holder \u2014 A trustworthy process that issues verifiable credentials exactly once to an identified holder. (This is not new. Governments have such processes today to prevent issuing two driver\u2019s licenses or two passports to the same person.)

    "},{"location":"concepts/0559-pppu/#b","title":"B","text":"

    one issuance to anonymous holder \u2014 A method of issuing a credential only once to an anonymous holder. (This is not new. Scanning a biometric from an anonymous party, and then checking it against a list of known patterns with no additional metadata, is one way to do this. There are other, more cryptographic methods, as discussed below.)

    "},{"location":"concepts/0559-pppu/#c","title":"C","text":"

    strong binding \u2014 A mechanism for strongly associating credentials with a specific credential holder, such that they are not usable by anyone other than the proper holder. (This is not new. Embedding a biometric such as a fingerprint or a photo in a credential is a simple example of such a mechanism.)

    "},{"location":"concepts/0559-pppu/#d","title":"D","text":"

    linking mechanism \u2014 A mechanism for proving that it is valid to combine information from multiple credentials because they describe the same credential holder, without revealing the common link between those credentials. (An easy and familiar way to prove combinability is to embed a common characteristic in each credential. For example, two credentials that are both about a person with the same social security number can be assumed to describe the same person. What is required here goes one step further--we need a way to prove that two credentials contain the same data, or were built from the same data, without revealing that data at all. This is also not new. Cryptographic commitments provide one possible answer.)
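As a toy illustration of the idea (not the actual mechanism -- a production system would use Pedersen commitments so the link can be proven in zero knowledge), a hash commitment derived from a common link secret looks like:

```python
import hashlib
import secrets

def commit(secret: bytes, blinding: bytes) -> str:
    """Toy hash commitment: binding to 'secret', hiding behind 'blinding'."""
    return hashlib.sha256(secret + blinding).hexdigest()

# Alice derives both credentials' embedded values from one secret she knows;
# different blinding factors keep the two commitments unlinkable to others,
# while knowledge of the secret ties them to the same holder.
link_secret = secrets.token_bytes(32)
c1_commitment = commit(link_secret, secrets.token_bytes(32))
c2_commitment = commit(link_secret, secrets.token_bytes(32))
```

A plain hash commitment only lets Alice prove the link by opening it (revealing the secret); the zero-knowledge-capable schemes mentioned above avoid even that disclosure.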

    "},{"location":"concepts/0559-pppu/#e","title":"E","text":"

    proving without revealing \u2014 A method for proving the correctness of information derived from a credential, without sharing the credential information itself. (This is not new. In cryptographic circles, one such technique is known as a zero-knowledge proof. It allows Alice to hold a credential that contains her birthdate, but to prove she is over 65 years old instead of revealing the birthdate itself.)

    "},{"location":"concepts/0559-pppu/#walkthru","title":"Walkthru","text":"

    We will describe how this solution uses components A-E to help a fictional voter, Alice, in an interaction with a fictional government, G. Alice wishes to remain anonymous but still cast her vote in an election; G wishes to guarantee one-citizen-one-vote, but to retain no additional information that would endanger Alice\u2019s privacy. Extrapolating from a voting scenario to other situations that require uniqueness is left as an exercise for the reader.

    The solution works like this:

    1. Alice receives a voter credential, C1, from G. C1 strongly identifies Alice, perhaps containing her name, address, birthdate, and so forth. It is possession of this credential that proves a right to vote. G issues only one such credential to each actor. (component A)

    2. C1 is bound to Alice so it can\u2019t be used by anyone else. (component C)

    3. C1 also contains data provided by Alice, and derived from a secret that only Alice knows, such that Alice can link C1 to other credentials with similarly derived data because she knows the secret. (component D)

      Steps 1-3: Alice receives a voter credential from G.

    4. Alice arrives to vote and asserts her privilege to a different government agency, G\u2019, that administers the election.

    5. G\u2019 chooses a random identifier, X, for the anonymous person (Alice) that wants to vote.

    6. G\u2019 asks this anonymous voter (Alice) to provide data suitable for embedding in a new credential, such that the new credential and her old credential can be proved combinable. (component D).

    7. G\u2019 verifies that it has not issued a credential to this anonymous person previously. (component B)

    8. G\u2019 issues a new credential, C2, to the anonymous voter. C2 contains the random identifier X, plus the data that Alice provided in step 6. (This means the party playing the role of Verifier temporarily becomes a JIT Issuer.)

      Steps 4-8: Anonymous (Alice) receives a unique credential from G\u2019.

    9. G\u2019 asks the anonymous voter to prove, without revealing any identifying information from C1 (component E) the following assertions:

      • They possess a C1 and C2 that are combinable (component D)
      • The C2 possessed by this anonymous voter contains the randomly-generated value X that was just chosen and embedded in the C2 issued by G\u2019. At this point X is revealed.

      Step 9: Alice proves C1 and C2 are combinable and C2 contains X.

    This solves the problem because:

    Both credentials are required. If a person only has C1, then there is no way to enforce single usage while remaining anonymous. If a person only has C2, then there is no reason to believe the unique person who shows up to vote actually deserves the voting privilege. It is the combination that proves uniqueness of the person in the voting event, plus privilege to cast a vote. Zero-knowledge proving is also required, or else the strongly identifying information in C1 may leak.

As mentioned earlier, this same mechanism can be applied to scenarios besides voting, and can be used by actors other than individual human beings. G (the issuer of C1) and G\u2019 (the verifier of C1 and issuer of C2) do not need to be related entities, as long as G\u2019 trusts G. What is common to all applications of the technique is that uniqueness is proved in a context chosen by the verifier, privilege is based on previously issued and strongly identifying credentials, and yet the anonymity of the credential holder is preserved.

    "},{"location":"concepts/0559-pppu/#building-in-aries","title":"Building in Aries","text":"

Ingredients to build this solution are available in Aries or other Hyperledger projects (Ursa, Indy) today.

    "},{"location":"concepts/0559-pppu/#solution-2","title":"Solution 2","text":"

This is another solution that accomplishes approximately the same goal as solution 1. It is particularly helpful in voting. It has much in common with the earlier approach, but differs in that uniqueness must be planned for at time of issuance (instead of being imposed just in time at verification). The issuer blindly signs a serial number for each unique holder, and the holder then makes a Pedersen Commitment to their unique serial number while the voting is open. The holder cannot vote twice or change their vote. The voter\u2019s privacy is preserved.

    "},{"location":"concepts/0559-pppu/#walkthru_1","title":"Walkthru","text":"

Suppose a poll is being conducted with p options m1, m2, m3, ..., mp, and each poll has a unique id I. Acme Corp is conducting the poll, and Alice is considered an eligible voter by Acme Corp because Alice has a credential C from Acme Corp.

    "},{"location":"concepts/0559-pppu/#goals","title":"Goals","text":"

Additional condition: In some cases the entity conducting the poll, Acme Corp in this case, may be accused of creating Sybil identities to vote and influence the poll. This can be mitigated by enforcing an additional constraint: only those who can prove that their credential C was issued before the poll started (or at least some time t before the poll started) are eligible to vote, i.e. Alice should be able to prove to anyone that her credential C was issued before the poll started.

    "},{"location":"concepts/0559-pppu/#setup","title":"Setup","text":"

Acme Corp has hosted an application AS that maintains a merkle tree, and the application follows some rules to update the tree. This application should be auditable, meaning that anyone should be able to check whether the application is updating the tree according to the rules and the incoming data. Thus this application could be hosted on a blockchain, or within a trusted execution environment like SGX. The application server also maintains a dynamic set in which set membership checks are efficient. The merkle tree is readable by all poll participants.

Two non-invertible functions, F1 and F2, are defined. Both take two inputs and return one output; even knowing one input and the output should not reveal the other input. The outputs of the two functions on the same input must differ. They can thus be modeled as different hash functions, like SHA2 and SHA3, or as SHA2 with domain separation. However, we want these functions to be R1CS-friendly, so we choose a hash function like MiMC with domain separation.

    "},{"location":"concepts/0559-pppu/#basic-idea","title":"Basic idea","text":"

Alice generates a serial number and gets a blind signature from Acme Corp over the serial number. Then Alice creates her vote and sends the \"encrypted\" vote with the serial number and signature to the application server. The application server accepts the vote if the signature is valid and it has not seen that serial number before. It then updates the merkle tree with the \"encrypted\" vote and returns a signed proof of the update to Alice. When the poll terminates, Alice submits the decryption key to the application server, which can then decrypt the vote and do the tally.
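The acceptance rule the application server enforces can be sketched as follows. This is a toy model: signature checking is stubbed out, and the real design uses blind signatures and a merkle tree of encrypted votes rather than a plain list:

```python
class PollServer:
    """Toy model of the application server's acceptance rule: a vote is
    recorded only if its signature verifies and its serial number is new."""

    def __init__(self, verify_sig):
        self.verify_sig = verify_sig      # stub for blind-signature check
        self.seen_serials = set()
        self.encrypted_votes = []         # stands in for the merkle tree

    def submit(self, serial, signature, encrypted_vote) -> bool:
        if not self.verify_sig(serial, signature):
            return False                  # signature invalid
        if serial in self.seen_serials:
            return False                  # double-vote attempt rejected
        self.seen_serials.add(serial)
        self.encrypted_votes.append(encrypted_vote)
        return True                       # real server returns signed proof
```

Because each serial number is accepted at most once, a holder cannot vote twice; because the vote stays encrypted until the poll closes, the tally cannot be computed early.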

    "},{"location":"concepts/0559-pppu/#detailed-description","title":"Detailed description","text":""},{"location":"concepts/0559-pppu/#rationale-and-alternatives","title":"Rationale and alternatives","text":""},{"location":"concepts/0559-pppu/#prior-art","title":"Prior art","text":""},{"location":"concepts/0559-pppu/#unresolved-questions","title":"Unresolved questions","text":""},{"location":"concepts/0559-pppu/#implementations","title":"Implementations","text":"

    The following lists the implementations (if any) of this RFC. Please do a pull request to add your implementation. If the implementation is open source, include a link to the repo or to the implementation within the repo. Please be consistent in the \"Name\" field so that a mechanical processing of the RFCs can generate a list of all RFCs supported by an Aries implementation.

    Implementation Notes may need to include a link to test results.

    Name / Link Implementation Notes"},{"location":"concepts/0566-issuer-hosted-custodidal-agents/","title":"0566: Issuer-Hosted Custodial Agents","text":"

In the fully realized world of Self-Sovereign Identity, credential holders are equipped with capable agents to help them manage credentials and other SSI interactions. Before we arrive in that world, systems that facilitate the transition from the old model of centralized systems to the new decentralized models will be necessary and useful.

    One of the common points for a transition system is the issuance of credentials. Today's centralized systems contain information within an information silo. Issuing credentials requires the recipient to have an agent capable of receiving and managing the credential. Until the SSI transition is complete, some users will not have an agent of their own.

    Some users don't have the technology or the skills to use an agent, and there may be users who don't want to participate.

In spite of the difficulties, there are huge advantages to transitioning to a decentralized system. Even when users don't understand the technology, they do care about the benefits it provides.

    This situation leaves the issuer with a choice: Maintain both a centralized system AND a decentralized SSI one, or enable their users to participate in the decentralized world.

    This paper addresses the second option: How to facilitate a transition to a decentralized world by providing issuer-hosted custodial agents.

    "},{"location":"concepts/0566-issuer-hosted-custodidal-agents/#issuer-hosted-custodial-agents","title":"Issuer-Hosted Custodial Agents","text":"

    A custodial agent is an agent hosted on behalf of someone else. This model is common in the cryptocurrency space. An Issuer-Hosted Custodial Agent is exactly what it sounds like: an agent hosted for the holder of a credential by the issuer of the credential.

    This custodial arrangement involves managing the credentials for the user, but also managing the keys for the user. Key management on behalf of another is often called guardianship.

An alternative to hosting the agent directly is to pay a third-party provider for the hosting. This arrangement addresses some, but not all, of the issues in this paper.

This custodial arrangement is only necessary for users without their own agents. Users running their own agents (often a mobile app) will manage their own keys and their own credentials.

    For the users with their own agents, the decentralized world has taken full effect: they have their own data, and can participate fully in the SSI ecosystem.

    For the users with hosted custodial agents, they have only made a partial transition. The data is still hosted by the issuer. With appropriate limits, this storage model is no worse than a centralized system. Despite the data storage being the same, a hosted agent provides the ability to migrate to another agent if the user desires.

Hosting agents for users might sound like a costly endeavor, but hosted agents have an advantage: most hosted agents will only be used by their owners for a small amount of time, most likely similar to their interaction with the centralized system being replaced. This means that the costs are substantially lower than hosting a full agent.

    "},{"location":"concepts/0566-issuer-hosted-custodidal-agents/#hosted-agent-interaction","title":"Hosted Agent Interaction","text":"

    Hosted agents have some particular challenges in providing effective user interaction. Detailed below are several options that can be used alone or in combination.

    "},{"location":"concepts/0566-issuer-hosted-custodidal-agents/#browser-based","title":"Browser Based","text":"

Providing a browser-based user interface is a common solution when the user will have access to a computer. Authentication will likely use something familiar, like a username and password.

    "},{"location":"concepts/0566-issuer-hosted-custodidal-agents/#authorizing-actions","title":"Authorizing Actions","text":"

The user will often need a way to authorize actions that their agent will perform. A good option for this is a basic cell phone, using SMS text messages or voice prompts. Less urgent actions can use an email sent to the user, prompting the user to log in and authorize the actions.

    "},{"location":"concepts/0566-issuer-hosted-custodidal-agents/#offline-paper-based","title":"Offline / Paper based","text":"

At times the user will have no available technology for their use. In this case, providing QR codes printed on paper with accompanying instructions will allow the user to facilitate verifier (and perhaps another issuer) access to their cloud agent. QR codes, such as those detailed in the Out Of Band Protocol, can contain both information for connecting to an agent AND an interaction to perform. Presenting the QR code for scanning can serve as a form of consent for the prescribed action within the QR code. Printed QR codes can be provided by the issuer at the time of custodial agent creation, or from within a web interface available to the user.

    "},{"location":"concepts/0566-issuer-hosted-custodidal-agents/#kiosk-based","title":"Kiosk based","text":"

    Kiosks can be useful to provide onsite interaction with a hosted agent. Kiosk authentication might take place via username and password, smartcard, or USB crypto key, with the possible inclusion of a biometric. Kiosks must be careful to fully remove any cached data when a session closes. Any biometric data used must be carefully managed between the kiosk and the hosted agent.

    "},{"location":"concepts/0566-issuer-hosted-custodidal-agents/#smartphone-app","title":"Smartphone App","text":"

While it is common for a smartphone app to be an agent by itself, there are cases where a smartphone app can act as a remote for the hosted agent. In this interaction, keys, credentials, and other wallet-related data are held in the custodial agent. The mobile app acts as a remote viewer and a way for the user to authorize actions taken by the custodial agent.

    "},{"location":"concepts/0566-issuer-hosted-custodidal-agents/#best-practices","title":"Best Practices","text":"

    The following best practices should be followed to ensure proper operation and continued transition to a fully realized SSI architecture. Most of these practices depend upon and support one another.

    "},{"location":"concepts/0566-issuer-hosted-custodidal-agents/#defend-the-ssi-architecture","title":"Defend the SSI architecture","text":"

    When issuers host custodial agents, care must be taken to avoid shortcuts that would violate SSI architecture. Deviations will frequently lead to incompatibilities.

    "},{"location":"concepts/0566-issuer-hosted-custodidal-agents/#didcomm-protocol-based-integration","title":"DIDComm Protocol based Integration","text":"

    Communication between hosted agents and the credential-issuing agent must be based on published DIDComm protocols. Any communication which eliminates the use of a DID must be avoided. Whenever possible, these should be well-adopted community protocols. In the case that a new protocol is needed for a particular interaction, it must be fully documented and published, to allow other agents to become compatible by adopting the new protocol.

    "},{"location":"concepts/0566-issuer-hosted-custodidal-agents/#allow-bring-your-own-agents","title":"Allow bring-your-own agents","text":"

    The onboarding process must allow users to bring their own compatible agents. This will be possible as long as any communication is protocol based. No features available to hosted agents should be blocked from user provided agents.

    "},{"location":"concepts/0566-issuer-hosted-custodidal-agents/#limit-wallet-scope-to-data-originating-from-the-issuer","title":"Limit wallet scope to data originating from the issuer","text":"

    Issuer hosted agents should have limits placed on them to prevent general use. This will prevent the agent from accepting additional credentials and data outside the scope of the issuer, which would introduce responsibility for data that was never intended. This limitation must not limit the user in how they use the credentials issued, only in the acceptance of credentials and data from other issuers or parties. Policies and filters should be used to limit the types of credentials that can be held, which issuers should be allowed, and which protocols are enabled. None of these restrictions are necessary for bring-your-own agents provided by users.

    "},{"location":"concepts/0566-issuer-hosted-custodidal-agents/#allow-migrate-from-hosted-to-bring-your-own","title":"Allow migrate from hosted to bring-your-own","text":"

    Users must be allowed to transition from an issuer-hosted agent to an agent of their choosing. This can happen either via a backup in a standard format, or via re-issuing relevant credentials.

    "},{"location":"concepts/0566-issuer-hosted-custodidal-agents/#transparent-to-the-verifier","title":"Transparent to the verifier","text":"

    A verifier should not be able to tell the difference between a custodial hosted agent and a bring-your-own agent.

    "},{"location":"concepts/0566-issuer-hosted-custodidal-agents/#action-log","title":"Action Log","text":"

    All actions taken by the wallet should be preserved in a log viewable to the user. This includes how actions were authorized, such as a named policy or confirmation via text message.

    "},{"location":"concepts/0566-issuer-hosted-custodidal-agents/#encrypted-wallets","title":"Encrypted Wallets","text":"

    Hosted wallet data should be encrypted at rest.

    "},{"location":"concepts/0566-issuer-hosted-custodidal-agents/#independant-key-management","title":"Independent key management","text":"

    Keys used for hosted agents should have key management isolated from the issuer's keys. Access to the keys for hosted agents should be carefully limited to the minimum required personnel. All key access should be logged.

    "},{"location":"concepts/0566-issuer-hosted-custodidal-agents/#hosted-agent-isolation","title":"Hosted Agent Isolation","text":"

    Agents must be sufficiently isolated from each other to prevent a malicious user from accessing another user's agent or data or causing interruptions to the operation of another agent.

    "},{"location":"concepts/0566-issuer-hosted-custodidal-agents/#implementations","title":"Implementations","text":"

    The following lists the implementations (if any) of this RFC. Please do a pull request to add your implementation. If the implementation is open source, include a link to the repo or to the implementation within the repo. Please be consistent in the \"Name\" field so that a mechanical processing of the RFCs can generate a list of all RFCs supported by an Aries implementation.

    Implementation Notes may need to include a link to test results.

    Name / Link Implementation Notes"},{"location":"concepts/0700-oob-through-redirect/","title":"Aries RFC 0700: Out-of-Band through redirect","text":""},{"location":"concepts/0700-oob-through-redirect/#summary","title":"Summary","text":"

    Describes how one party can redirect to another party by passing an out-of-band message as a query string, and recommends how to redirect back once the protocol is over.

    "},{"location":"concepts/0700-oob-through-redirect/#motivation","title":"Motivation","text":"

    In present-day e-commerce applications, users performing checkout are usually presented with various payment options, such as direct payment or payment through a gateway. The user then chooses an option, is redirected to a payment application, and is redirected back once the transaction is over.

    Similarly, sending an out-of-band invitation through a redirect plays an important role in web-based applications, where an inviter who is aware of the invitee application (or of a selection service) should be able to send an invitation through a redirect. Once the invitee accepts the invitation and the protocol is over, the invitee should also be able to redirect back to a URL shared through a DIDComm message during protocol execution. The redirect can happen within the same device (ex: clicking a link) or between devices (ex: scanning a QR code).

    "},{"location":"concepts/0700-oob-through-redirect/#scenario","title":"Scenario","text":"

    The best example scenario is an issuer or verifier application trying to connect to holder applications to perform the present proof or issue credential protocol. A user who visits an issuer application can click a link or scan a QR code to redirect to a holder application with an out-of-band message in the query string (or to a selection service showing available holder applications to choose from). The user's holder application decodes the invitation from the query string, performs the issue credential protocol, and redirects the user back to the URL it received through a DIDComm message from the issuer during execution of the protocol.

    "},{"location":"concepts/0700-oob-through-redirect/#tutorial","title":"Tutorial","text":"

    There are two roles in this flow:

    "},{"location":"concepts/0700-oob-through-redirect/#redirect-invitation-url","title":"Redirect Invitation URL","text":"

    A redirect URL from the inviter can consist of the following elements:

    "},{"location":"concepts/0700-oob-through-redirect/#sample-1-redirect-invitation","title":"Sample 1: redirect invitation","text":"

    Invitation:

    {\n  \"@type\": \"https://didcomm.org/out-of-band/1.0/invitation\",\n  \"@id\": \"69212a3a-d068-4f9d-a2dd-4741bca89af3\",\n  \"label\": \"Faber College\",\n  \"goal_code\": \"issue-vc\",\n  \"goal\": \"To issue a Faber College Graduate credential\",\n  \"handshake_protocols\": [\"https://didcomm.org/didexchange/1.0\", \"https://didcomm.org/connections/1.0\"],\n  \"services\": [\"did:sov:LjgpST2rjsoxYegQDRm7EL\"]\n}\n

    Whitespace removed:

    {\"@type\":\"https://didcomm.org/out-of-band/1.0/invitation\",\"@id\":\"69212a3a-d068-4f9d-a2dd-4741bca89af3\",\"label\":\"Faber College\",\"goal_code\":\"issue-vc\",\"goal\":\"To issue a Faber College Graduate credential\",\"handshake_protocols\":[\"https://didcomm.org/didexchange/1.0\",\"https://didcomm.org/connections/1.0\"],\"services\":[\"did:sov:LjgpST2rjsoxYegQDRm7EL\"]}\n

    Base 64 URL Encoded:

    eyJAdHlwZSI6Imh0dHBzOi8vZGlkY29tbS5vcmcvb3V0LW9mLWJhbmQvMS4wL2ludml0YXRpb24iLCJAaWQiOiI2OTIxMmEzYS1kMDY4LTRmOWQtYTJkZC00NzQxYmNhODlhZjMiLCJsYWJlbCI6IkZhYmVyIENvbGxlZ2UiLCAiZ29hbF9jb2RlIjoiaXNzdWUtdmMiLCJnb2FsIjoiVG8gaXNzdWUgYSBGYWJlciBDb2xsZWdlIEdyYWR1YXRlIGNyZWRlbnRpYWwiLCJoYW5kc2hha2VfcHJvdG9jb2xzIjpbImh0dHBzOi8vZGlkY29tbS5vcmcvZGlkZXhjaGFuZ2UvMS4wIiwiaHR0cHM6Ly9kaWRjb21tLm9yZy9jb25uZWN0aW9ucy8xLjAiXSwic2VydmljZSI6WyJkaWQ6c292OkxqZ3BTVDJyanNveFllZ1FEUm03RUwiXX0\n

    Example URL: targeting recipient 'recipient.example.com'

    http://recipient.example.com/handle?oob=eyJAdHlwZSI6Imh0dHBzOi8vZGlkY29tbS5vcmcvb3V0LW9mLWJhbmQvMS4wL2ludml0YXRpb24iLCJAaWQiOiI2OTIxMmEzYS1kMDY4LTRmOWQtYTJkZC00NzQxYmNhODlhZjMiLCJsYWJlbCI6IkZhYmVyIENvbGxlZ2UiLCAiZ29hbF9jb2RlIjoiaXNzdWUtdmMiLCJnb2FsIjoiVG8gaXNzdWUgYSBGYWJlciBDb2xsZWdlIEdyYWR1YXRlIGNyZWRlbnRpYWwiLCJoYW5kc2hha2VfcHJvdG9jb2xzIjpbImh0dHBzOi8vZGlkY29tbS5vcmcvZGlkZXhjaGFuZ2UvMS4wIiwiaHR0cHM6Ly9kaWRjb21tLm9yZy9jb25uZWN0aW9ucy8xLjAiXSwic2VydmljZSI6WyJkaWQ6c292OkxqZ3BTVDJyanNveFllZ1FEUm03RUwiXX0\n
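    A minimal sketch of the encoding steps above (compact JSON, base64url encoding, query parameter), assuming a hypothetical `oob_redirect_url` helper; the recipient endpoint follows the example URL:

```python
import base64
import json

# Hypothetical helper (not part of any Aries API): encode an out-of-band
# invitation into a redirect URL using the 'oob' query parameter shown above.
def oob_redirect_url(invitation: dict, recipient_base: str) -> str:
    compact = json.dumps(invitation, separators=(",", ":"))  # whitespace removed
    encoded = base64.urlsafe_b64encode(compact.encode()).decode().rstrip("=")
    return f"{recipient_base}/handle?oob={encoded}"

invitation = {
    "@type": "https://didcomm.org/out-of-band/1.0/invitation",
    "@id": "69212a3a-d068-4f9d-a2dd-4741bca89af3",
    "label": "Faber College",
    "services": ["did:sov:LjgpST2rjsoxYegQDRm7EL"],
}
print(oob_redirect_url(invitation, "http://recipient.example.com"))
```

The same steps apply to the `oobid` variant in Sample 2, except that the base64url payload is an invitation URL rather than the invitation itself.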

    Out-of-band invitation redirect URLs can be transferred via text message, email, SMS, posting on a website, or QR Code.

    Example URL encoded as a QR Code:

    "},{"location":"concepts/0700-oob-through-redirect/#sample-2-redirect-invitation-url","title":"Sample 2: redirect invitation URL","text":"

    Invitation URL from requestor which resolves to an out-of-band invitation:

    https://requestor.example.com/ssi?id=5f0e3ffb-3f92-4648-9868-0d6f8889e6f3\n

    Base 64 URL Encoded:

    aHR0cHM6Ly9yZXF1ZXN0b3IuZXhhbXBsZS5jb20vc3NpP2lkPTVmMGUzZmZiLTNmOTItNDY0OC05ODY4LTBkNmY4ODg5ZTZmMw==\n

    Example URL: targeting recipient 'recipient.example.com'

    http://recipient.example.com/handle?oobid=aHR0cHM6Ly9yZXF1ZXN0b3IuZXhhbXBsZS5jb20vc3NpP2lkPTVmMGUzZmZiLTNmOTItNDY0OC05ODY4LTBkNmY4ODg5ZTZmMw==\n

    Out-of-band invitation redirect URLs can be transferred via text message, email, SMS, posting on a website, or QR Code.

    Example URL encoded as a QR Code:

    "},{"location":"concepts/0700-oob-through-redirect/#web-redirect-decorator","title":"~web-redirect Decorator","text":"

    In some scenarios, the requestor requires the recipient to redirect back after protocol execution completes in order to proceed with further processing. For example, a verifier may request that a holder application redirect back once the present-proof protocol is over, so that it can show credential verification results to the user and navigate the user to further steps.

    The optional ~web-redirect decorator SHOULD be used in a DIDComm message sent by the requestor during protocol execution to send redirect information to the recipient if required.

    This decorator may not be needed in many cases where the requestor controls the application flow based on protocol status. But it is helpful where an application has little or no control over the user's navigation: for example, in a web browser where the user is redirected from a verifier web application to a wallet application in the same window through a wallet selection wizard and third-party logins. In this case, once protocol execution is over, the verifier can send a URL to the wallet application requesting a redirect. This decorator is also useful for switching from a wallet mobile app back to a verifier mobile app on a mobile device.

    \"~web-redirect\": {\n  \"status\": \"OK\",\n  \"url\": \"https://example.com/handle-success/51e63a5f-93e1-46ac-b269-66bb22591bfa\"\n}\n

    where,

    The following DIDComm messages can carry ~web-redirect details to request a redirect.
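    As a hypothetical sketch, a requestor might attach the decorator to a final protocol message like this; the `with_web_redirect` helper and the ack message type shown are illustrative, not a defined API:

```python
import json

# Illustrative helper: attach the ~web-redirect decorator to a DIDComm
# message (e.g., a present-proof ack) so the recipient application knows
# where to send the user once the protocol completes.
def with_web_redirect(message: dict, status: str, url: str) -> dict:
    decorated = dict(message)  # copy; leave the original message untouched
    decorated["~web-redirect"] = {"status": status, "url": url}
    return decorated

ack = {"@type": "https://didcomm.org/present-proof/1.0/ack", "status": "OK"}
print(json.dumps(with_web_redirect(
    ack, "OK",
    "https://example.com/handle-success/51e63a5f-93e1-46ac-b269-66bb22591bfa",
), indent=2))
```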

    "},{"location":"concepts/0700-oob-through-redirect/#putting-all-together","title":"Putting all together","text":""},{"location":"concepts/0700-oob-through-redirect/#sending-invitation-to-recipient-through-redirect","title":"Sending Invitation to Recipient through redirect","text":""},{"location":"concepts/0700-oob-through-redirect/#sending-invitation-to-selection-service-through-redirect","title":"Sending Invitation to Selection Service through redirect","text":"

    This flow is similar to the previous flow, but the target domain and path of the invitation redirect URL will be a selection service, which presents the user with various options for choosing a recipient application. So in Step 3, the user is redirected to a selection service that guides the user to select the right recipient: for example, a scenario where the user is presented with various holder application providers to choose from while sharing or saving his or her verifiable credentials.

    "},{"location":"concepts/0700-oob-through-redirect/#reference","title":"Reference","text":""},{"location":"concepts/0700-oob-through-redirect/#rationale-and-alternatives","title":"Rationale and alternatives","text":"

    How different recipient applications register with a selection service, and how trust is established between the requestor, recipient, and selection service, are out of scope for this RFC.

    "},{"location":"concepts/0700-oob-through-redirect/#implementations","title":"Implementations","text":"

    The following lists the implementations (if any) of this RFC. Please do a pull request to add your implementation. If the implementation is open source, include a link to the repo or to the implementation within the repo. Please be consistent in the \"Name\" field so that a mechanical processing of the RFCs can generate a list of all RFCs supported by an Aries implementation.

    Name / Link Implementation Notes"},{"location":"concepts/0757-push-notification/","title":"0757: Push Notification","text":""},{"location":"concepts/0757-push-notification/#summary","title":"Summary","text":"

    This RFC describes the general concept of push notification as it applies to Aries agents. There are a variety of push notification systems and methods, each of which is described in its own feature RFC.

    Note: These protocols operate only between a mobile app and its mediator(s). There is no requirement to use these protocols when mobile apps and mediator services are provided as a bundle. These protocols exist to facilitate cooperation between open source mediators and mobile apps not necessarily developed by the same parties.

    "},{"location":"concepts/0757-push-notification/#motivation","title":"Motivation","text":"

    Mobile agents typically require the use of Mediators to receive DIDComm Messages. When messages arrive at a mediator, it is optimal to send a push notification to the mobile device to signal that a message is waiting. This provides a good user experience and allows mobile agents to be responsive without sacrificing battery life by routinely checking for new messages.

    "},{"location":"concepts/0757-push-notification/#tutorial","title":"Tutorial","text":"

    Though push notification is common on mobile platforms, there are a variety of different systems with various requirements and mechanisms. Most of them follow a familiar pattern:

    "},{"location":"concepts/0757-push-notification/#setup-phase","title":"Setup Phase","text":"
    1. Notification Sender (mediator) registers with a push notification service. This typically involves some signup procedure.
    2. Notification Recipient (mobile app) registers with the push notification service. This typically involves some signup procedure. For some platforms, or for a mediator and mobile app by the same vendor, this will be accomplished in step 1.
    3. Notification Recipient (mobile app) adds code (with config values obtained in step 2) to connect to the push notification service.
    4. Notification Recipient (mobile app) communicates necessary information to the Notification Sender (mediator) for use in sending notifications.
    "},{"location":"concepts/0757-push-notification/#notification-phase","title":"Notification Phase","text":"
    1. A message arrives at the Notification Sender (mediator) destined for the Notification Recipient (mobile app).
    2. Notification Sender (mediator) calls an API associated with the push notification service with notification details, typically using the information obtained in step 4.
    3. Notification Recipient (mobile app) is notified (typically via a callback function) of the notification details.
    4. Notification Recipient (mobile app) then connects to the Notification Sender (mediator) and receives waiting messages.
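    The notification phase above can be sketched as follows; `PushService`, `Mediator`, and the payload shape are illustrative placeholders for a real platform API and mediator implementation, not a defined protocol:

```python
# Illustrative sketch of the notification phase. A real deployment would call
# a platform push API (e.g., APNS) instead of this in-memory stand-in.
class PushService:
    """Stand-in for a platform push notification service."""
    def __init__(self):
        self.sent = []

    def notify(self, device_token: str, payload: dict):
        # Step 3 happens on the device: the app's callback receives this payload.
        self.sent.append((device_token, payload))


class Mediator:
    def __init__(self, push: PushService, registrations: dict):
        self.push = push
        # recipient -> device token, communicated during setup step 4
        self.registrations = registrations
        self.queues = {}

    def receive(self, recipient: str, message: bytes):
        # Step 1: a message arrives and is queued for the recipient.
        self.queues.setdefault(recipient, []).append(message)
        # Step 2: call the push service API with notification details.
        token = self.registrations.get(recipient)
        if token:
            self.push.notify(token, {"message_count": len(self.queues[recipient])})
```

In step 4 the notified app would then open a DIDComm connection to the mediator and pick up the queued messages.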

    In spite of the flow similarities between the push notification platforms, the implementations, libraries used, and general code paths vary substantially. Each push notification method is described in its own protocol. This allows the protocol to fit the specific needs and terminology of the notification method it enables. Feature Discovery can be used between the Notification Sender and the Notification Recipient to discover push notification compatibility.

    "},{"location":"concepts/0757-push-notification/#public-mediators","title":"Public Mediators","text":"

    Some push notification methods require matching keys or secrets to be used in both sending and receiving notifications. This requirement makes these push notification methods unusable by public mediators.

    Public mediators SHOULD only implement push notification methods that do not require sharing secrets or keys with application implementations.

    "},{"location":"concepts/0757-push-notification/#push-notification-protcols","title":"Push Notification Protocols","text":"

    0699 - Push Notification APNS 1.0 (Apple Push Notification Service)

    "},{"location":"concepts/0757-push-notification/#implementations","title":"Implementations","text":"

    The following lists the implementations (if any) of this RFC. Please do a pull request to add your implementation. If the implementation is open source, include a link to the repo or to the implementation within the repo. Please be consistent in the \"Name\" field so that a mechanical processing of the RFCs can generate a list of all RFCs supported by an Aries implementation.

    Implementation Notes may need to include a link to test results.

    Name / Link Implementation Notes"},{"location":"concepts/0799-long-term-support/","title":"0799: Aries Long Term Support Releases","text":"

    Long Term Support (LTS) releases of Aries projects will assist those using the software in integrating it within their development processes.

    "},{"location":"concepts/0799-long-term-support/#motivation","title":"Motivation","text":"

    Long Term Support releases allow stable use of projects without frequent code updates. Designating LTS releases frees projects to develop features without worry of disrupting those seeking feature-stable deployments.

    "},{"location":"concepts/0799-long-term-support/#project-lts-releases","title":"Project LTS Releases","text":""},{"location":"concepts/0799-long-term-support/#lts-release-tagging","title":"LTS Release Tagging","text":""},{"location":"concepts/0799-long-term-support/#lts-support-timeline","title":"LTS Support Timeline","text":""},{"location":"concepts/0799-long-term-support/#lts-release-updates","title":"LTS Release Updates","text":""},{"location":"concepts/0799-long-term-support/#references","title":"References","text":"

    This policy is inspired by the Fabric LTS Policy https://hyperledger.github.io/fabric-rfcs/text/0005-lts-release-strategy.html

    "},{"location":"features/0015-acks/","title":"Aries RFC 0015: ACKs","text":""},{"location":"features/0015-acks/#summary","title":"Summary","text":"

    Explains how one party can send acknowledgment messages (ACKs) to confirm receipt and clarify the status of complex processes.

    "},{"location":"features/0015-acks/#change-log","title":"Change Log","text":""},{"location":"features/0015-acks/#motivation","title":"Motivation","text":"

    An acknowledgment or ACK is one of the most common procedures in protocols of all types. We need a flexible, powerful, and easy way to send such messages in agent-to-agent interactions.

    "},{"location":"features/0015-acks/#tutorial","title":"Tutorial","text":"

    Confirming a shared understanding matters whenever independent parties interact. We buy something on Amazon; moments later, our email client chimes to tell us of a new message with subject \"Thank you for your recent order.\" We verbally accept a new job, but don't rest easy until we've also emailed the signed offer letter back to our new boss. We change a password on an online account, and get a text at our recovery phone number so both parties know the change truly originated with the account's owner.

    When formal acknowledgments are missing, we get nervous. And rightfully so; most of us have a story of a package that was lost in the mail, or a web form that didn't submit the way we expected.

    Agents interact in very complex ways. They may use multiple transport mechanisms, across varied protocols, through long stretches of time. While we usually expect messages to arrive as sent, and to be processed as expected, a vital tool in the agent communication repertoire is the receipt of acknowledgments to confirm a shared understanding.

    "},{"location":"features/0015-acks/#implicit-acks","title":"Implicit ACKs","text":"

    Message threading includes a lightweight, automatic sort of ACK in the form of the ~thread.received_orders field. This allows Alice to report that she has received Bob's recent message that had ~thread.sender_order = N. We expect threading to be best practice in many use cases, and we expect interactions to often happen reliably enough and quickly enough that implicit ACKs provide high value. If you are considering ACKs but are not familiar with that mechanism, make sure you understand it, first. This RFC offers a supplement, not a replacement.

    "},{"location":"features/0015-acks/#explicit-acks","title":"Explicit ACKs","text":"

    Despite the goodness of implicit ACKs, there are many circumstances where a reply will not happen immediately. Explicit ACKs can be vital here.

    Explicit ACKS may also be vital at the end of an interaction, when work is finished: a credential has been issued, a proof has been received, a payment has been made. In such a flow, an implicit ACK meets the needs of the party who received the final message, but the other party may want explicit closure. Otherwise they can't know with confidence about the final outcome of the flow.

    Rather than inventing a new \"interaction has been completed successfully\" message for each protocol, an all-purpose ack message type is recommended. It looks like this:

    {\n  \"@type\": \"https://didcomm.org/notification/1.0/ack\",\n  \"@id\": \"06d474e0-20d3-4cbf-bea6-6ba7e1891240\",\n  \"status\": \"OK\",\n  \"~thread\": {\n    \"thid\": \"b271c889-a306-4737-81e6-6b2f2f8062ae\",\n    \"sender_order\": 4,\n    \"received_orders\": {\"did:sov:abcxyz\": 3}\n  }\n}\n

    It may also be appropriate to send an ack at other key points in an interaction (e.g., when a key rotation notice is received).

    "},{"location":"features/0015-acks/#adopting-acks","title":"Adopting acks","text":"

    As discussed in 0003: Protocols, a protocol can adopt the ack message into its own namespace. This allows the type of an ack to change from: https://didcomm.org/notification/1.0/ack to something like: https://didcomm.org/otherProtocol/2.0/ack. Thus, message routing logic can see the ack as part of the other protocol, and send it to the relevant handler--but still have all the standardization of generic acks.
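    A minimal sketch of adoption, assuming a hypothetical `adopt_ack` helper: only the message family in `@type` changes, while the body and threading fields stay the same:

```python
# Illustrative helper (not a defined API): re-type a generic ack into
# another protocol's namespace, leaving all other fields untouched.
def adopt_ack(ack: dict, protocol: str, version: str) -> dict:
    adopted = dict(ack)
    adopted["@type"] = f"https://didcomm.org/{protocol}/{version}/ack"
    return adopted

generic = {"@type": "https://didcomm.org/notification/1.0/ack", "status": "OK"}
print(adopt_ack(generic, "otherProtocol", "2.0")["@type"])
# → https://didcomm.org/otherProtocol/2.0/ack
```

Routing logic keying off the message family then dispatches the ack to the adopting protocol's handler.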

    "},{"location":"features/0015-acks/#ack-status","title":"ack status","text":"

    The status field in an ack tells whether the ack is final or not with respect to the message being acknowledged. It has 2 predefined values: OK (which means an outcome has occurred, and it was positive); and PENDING, which acknowledges that no outcome is yet known.

    There is not an ack status of FAIL. In the case of a protocol failure a Report Problem message must be used to inform the other party(ies). For more details, see the next section.

    In addition, more advanced ack usage is possible. See the details in the Reference section.

    "},{"location":"features/0015-acks/#relationship-to-problem-report","title":"Relationship to problem-report","text":"

    Negative outcomes do not necessarily mean that something bad happened; perhaps Alice comes to hope that Bob rejects her offer to buy his house because she's found something better--and Bob does that, without any error occurring. This is not a FAIL in a problem sense; it's a FAIL in the sense that the offer to buy did not lead to the outcome Alice intended when she sent it.

    This raises the question of errors. Any time an unexpected problem arises, best practice is to report it to the sender of the message that triggered the problem. This is the subject of the problem reporting mechanism.

    A problem_report is inherently a sort of ACK. In fact, the ack message type and the problem_report message type are both members of the same notification message family. Both help a sender learn about status. Therefore, an acknowledgment that would carry a status of FAIL is expressed with a problem_report message instead.

    However, there is some subtlety in the use of the two types of messages. Some acks may be sent before a final outcome, so a final problem_report may not be enough. As well, an ack request may be sent after a previous ack or problem_report was lost in transit. Because of these caveats, developers whose code creates or consumes acks should be thoughtful about where the two message types overlap, and where they do not. Carelessness here is likely to cause subtle, hard-to-duplicate surprises from time to time.

    "},{"location":"features/0015-acks/#custom-acks","title":"Custom ACKs","text":"

    This mechanism cannot address all possible ACK use cases. Some ACKs may require custom data to be sent, and some acknowledgment schemes may be more sophisticated or fine-grained than the simple settings offered here. In such cases, developers should write their own ACK message type(s) and maybe their own decorators. However, reusing the field names and conventions in this RFC may still be desirable, if there is significant overlap in the concepts.

    "},{"location":"features/0015-acks/#reference","title":"Reference","text":""},{"location":"features/0015-acks/#ack-message","title":"ack message","text":""},{"location":"features/0015-acks/#status","title":"status","text":"

    Required, values OK or PENDING. As discussed above, this tells whether the ack is final or not with respect to the message being acknowledged.

    "},{"location":"features/0015-acks/#threadthid","title":"~thread.thid","text":"

    Required. This links the ack back to the message that requested it.

    All other fields in an ack are present or absent per requirements of ordinary messages.

    "},{"location":"features/0015-acks/#drawbacks-and-alternatives","title":"Drawbacks and Alternatives","text":"

    None identified.

    "},{"location":"features/0015-acks/#prior-art","title":"Prior art","text":"

    See notes above about the implicit ACK mechanism in ~thread.received_orders.

    "},{"location":"features/0015-acks/#unresolved-questions","title":"Unresolved questions","text":""},{"location":"features/0015-acks/#implementations","title":"Implementations","text":"

    The following lists the implementations (if any) of this RFC. Please do a pull request to add your implementation. If the implementation is open source, include a link to the repo or to the implementation within the repo. Please be consistent in the \"Name\" field so that a mechanical processing of the RFCs can generate a list of all RFCs supported by an Aries implementation.

    Name / Link Implementation Notes RFC 0036: Issue Credential Protocol ACKs are adopted by this protocol. RFC 0037: Present Proof Protocol ACKs are adopted by this protocol. RFC 0193: Coin Flip Protocol ACKs are adopted as a subprotocol. Aries Cloud Agent - Python Contributed by the Government of British Columbia."},{"location":"features/0019-encryption-envelope/","title":"Aries RFC 0019: Encryption Envelope","text":""},{"location":"features/0019-encryption-envelope/#summary","title":"Summary","text":"

    There are two layers of messages that combine to enable interoperable self-sovereign agent-to-agent communication. At the highest level are DIDComm Plaintext Messages - messages sent between identities to accomplish some shared goal (e.g., establishing a connection, issuing a verifiable credential, sharing a chat). DIDComm Plaintext Messages are delivered via the second, lower layer of messaging - DIDComm Encrypted Envelopes. A DIDComm Encrypted Envelope is a wrapper (envelope) around a plaintext message to permit secure sending and routing. A plaintext message going from its sender to its receiver passes through many agents, and an encryption envelope is used for each hop of the journey.

    This RFC describes the DIDComm Encrypted Envelope format and the pack() and unpack() functions that implement this format.

    "},{"location":"features/0019-encryption-envelope/#motivation","title":"Motivation","text":"

    Encryption envelopes use a standard format built on JSON Web Encryption - RFC 7516. This format is not captive to Aries; it requires no special Aries worldview or Aries dependencies to implement. Rather, it is a general-purpose solution to the question of how to encrypt, decrypt, and route messages as they pass over any transport(s). By documenting the format here, we hope to provide a point of interoperability for developers of agents inside and outside the Aries ecosystem.

    We also document how Aries implements its support for the DIDComm Encrypted Envelope format through the pack() and unpack() functions. For developers of Aries, this is a sort of design doc; for those who want to implement the format in other tech stacks, it may be a useful reference.

    "},{"location":"features/0019-encryption-envelope/#tutorial","title":"Tutorial","text":""},{"location":"features/0019-encryption-envelope/#assumptions","title":"Assumptions","text":"

    We assume that each sending agent knows:

    The assumptions can be made because either the message is being sent to an agent within the sending agent's domain and so the sender knows the internal configuration of agents, or the message is being sent outside the sending agent's domain and interoperability requirements are in force to define the sending agent's behaviour.

    "},{"location":"features/0019-encryption-envelope/#example-scenario","title":"Example Scenario","text":"

    The example of Alice and Bob's sovereign domains is used for illustrative purposes in defining this RFC.

    In the diagram above:

    For the purposes of this discussion we are defining the Encryption Envelope agent message flow to be:

    1 \u2192 2 \u2192 8 \u2192 9 \u2192 3 \u2192 4

    However, that flow is just one of several that could match this configuration. What we know for sure is that:

    "},{"location":"features/0019-encryption-envelope/#encrypted-envelopes","title":"Encrypted Envelopes","text":"

    An encrypted envelope is used to transport any plaintext message from one agent directly to another. In our example message flow above, there are five encrypted envelopes sent, one for each hop in the flow. The process to send an encrypted envelope consists of the following steps:

    This is repeated with each hop, but the encrypted envelopes are nested, such that the plaintext is never visible until it reaches its final recipient.
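    The nesting can be sketched conceptually as follows; this toy wrapper is not a real envelope (real envelopes are JWEs produced by `pack_message()`), and the hop names are taken from the example flow 1 → 2 → 8 → 9 → 3 → 4:

```python
import json

# Conceptual sketch only: illustrate how envelopes nest, one layer per hop.
# Real envelopes use pack_message() and JWE, not this plaintext wrapper.
def wrap(payload: str, next_hop: str) -> str:
    return json.dumps({"to": next_hop, "ciphertext": payload})

plaintext = json.dumps({"content": "hello"})
envelope = plaintext
# Wrap innermost-first: the final recipient (agent 4) is the deepest layer,
# and the first hop after the sender (agent 2) is the outermost envelope.
for hop in ["4", "3", "9", "8", "2"]:
    envelope = wrap(envelope, hop)

print(json.loads(envelope)["to"])  # → 2
```

Each agent along the route can open only its own layer, so the plaintext stays hidden until the innermost envelope reaches agent 4.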

    "},{"location":"features/0019-encryption-envelope/#implementation","title":"Implementation","text":"

    We will describe the pack and unpack algorithms, and their output, in terms of Aries' initial implementation, which may evolve over time. Other implementations could be built, but they would need to emit and consume similar inputs and outputs.

    The data structures emitted and consumed by these algorithms are described in a formal schema.

    "},{"location":"features/0019-encryption-envelope/#authcrypt-mode-vs-anoncrypt-mode","title":"Authcrypt mode vs. Anoncrypt mode","text":"

    When packing and unpacking are done in a way that the sender is anonymous, we say that we are in anoncrypt mode. When the sender is revealed, we are in authcrypt mode. Authcrypt mode reveals the sender to the recipient only; it is not the same as a non-repudiable signature. See the RFC about non-repudiable signatures, and this discussion about the theory of non-repudiation.

    "},{"location":"features/0019-encryption-envelope/#pack-message","title":"Pack Message","text":""},{"location":"features/0019-encryption-envelope/#pack_message-interface","title":"pack_message() interface","text":"

    packed_message = pack_message(wallet_handle, message, receiver_verkeys, sender_verkey)

    "},{"location":"features/0019-encryption-envelope/#pack_message-params","title":"pack_message() Params:","text":""},{"location":"features/0019-encryption-envelope/#pack_message-return-value-authcrypt-mode","title":"pack_message() return value (Authcrypt mode)","text":"

    This is an example of an output message encrypted for two verkeys using Authcrypt.

    {\n    \"protected\": \"eyJlbmMiOiJ4Y2hhY2hhMjBwb2x5MTMwNV9pZXRmIiwidHlwIjoiSldNLzEuMCIsImFsZyI6IkF1dGhjcnlwdCIsInJlY2lwaWVudHMiOlt7ImVuY3J5cHRlZF9rZXkiOiJMNVhEaEgxNVBtX3ZIeFNlcmFZOGVPVEc2UmZjRTJOUTNFVGVWQy03RWlEWnl6cFJKZDhGVzBhNnFlNEpmdUF6IiwiaGVhZGVyIjp7ImtpZCI6IkdKMVN6b1d6YXZRWWZOTDlYa2FKZHJRZWpmenRONFhxZHNpVjRjdDNMWEtMIiwiaXYiOiJhOEltaW5zdFhIaTU0X0otSmU1SVdsT2NOZ1N3RDlUQiIsInNlbmRlciI6ImZ0aW13aWlZUkc3clJRYlhnSjEzQzVhVEVRSXJzV0RJX2JzeERxaVdiVGxWU0tQbXc2NDE4dnozSG1NbGVsTThBdVNpS2xhTENtUkRJNHNERlNnWkljQVZYbzEzNFY4bzhsRm9WMUJkREk3ZmRLT1p6ckticUNpeEtKaz0ifX0seyJlbmNyeXB0ZWRfa2V5IjoiZUFNaUQ2R0RtT3R6UkVoSS1UVjA1X1JoaXBweThqd09BdTVELTJJZFZPSmdJOC1ON1FOU3VsWXlDb1dpRTE2WSIsImhlYWRlciI6eyJraWQiOiJIS1RBaVlNOGNFMmtLQzlLYU5NWkxZajRHUzh1V0NZTUJ4UDJpMVk5Mnp1bSIsIml2IjoiRDR0TnRIZDJyczY1RUdfQTRHQi1vMC05QmdMeERNZkgiLCJzZW5kZXIiOiJzSjdwaXU0VUR1TF9vMnBYYi1KX0pBcHhzYUZyeGlUbWdwWmpsdFdqWUZUVWlyNGI4TVdtRGR0enAwT25UZUhMSzltRnJoSDRHVkExd1Z0bm9rVUtvZ0NkTldIc2NhclFzY1FDUlBaREtyVzZib2Z0d0g4X0VZR1RMMFE9In19XX0=\",\n    \"iv\": \"ZqOrBZiA-RdFMhy2\",\n    \"ciphertext\": \"K7KxkeYGtQpbi-gNuLObS8w724mIDP7IyGV_aN5AscnGumFd-SvBhW2WRIcOyHQmYa-wJX0MSGOJgc8FYw5UOQgtPAIMbSwVgq-8rF2hIniZMgdQBKxT_jGZS06kSHDy9UEYcDOswtoLgLp8YPU7HmScKHSpwYY3vPZQzgSS_n7Oa3o_jYiRKZF0Gemamue0e2iJ9xQIOPodsxLXxkPrvvdEIM0fJFrpbeuiKpMk\",\n    \"tag\": \"kAuPl8mwb0FFVyip1omEhQ==\"\n}\n

    The base64URL-encoded protected value decodes to this:

    {\n    \"enc\": \"xchacha20poly1305_ietf\",\n    \"typ\": \"JWM/1.0\",\n    \"alg\": \"Authcrypt\",\n    \"recipients\": [\n        {\n            \"encrypted_key\": \"L5XDhH15Pm_vHxSeraY8eOTG6RfcE2NQ3ETeVC-7EiDZyzpRJd8FW0a6qe4JfuAz\",\n            \"header\": {\n                \"kid\": \"GJ1SzoWzavQYfNL9XkaJdrQejfztN4XqdsiV4ct3LXKL\",\n                \"iv\": \"a8IminstXHi54_J-Je5IWlOcNgSwD9TB\",\n                \"sender\": \"ftimwiiYRG7rRQbXgJ13C5aTEQIrsWDI_bsxDqiWbTlVSKPmw6418vz3HmMlelM8AuSiKlaLCmRDI4sDFSgZIcAVXo134V8o8lFoV1BdDI7fdKOZzrKbqCixKJk=\"\n            }\n        },\n        {\n            \"encrypted_key\": \"eAMiD6GDmOtzREhI-TV05_Rhippy8jwOAu5D-2IdVOJgI8-N7QNSulYyCoWiE16Y\",\n            \"header\": {\n                \"kid\": \"HKTAiYM8cE2kKC9KaNMZLYj4GS8uWCYMBxP2i1Y92zum\",\n                \"iv\": \"D4tNtHd2rs65EG_A4GB-o0-9BgLxDMfH\",\n                \"sender\": \"sJ7piu4UDuL_o2pXb-J_JApxsaFrxiTmgpZjltWjYFTUir4b8MWmDdtzp0OnTeHLK9mFrhH4GVA1wVtnokUKogCdNWHscarQscQCRPZDKrW6boftwH8_EYGTL0Q=\"\n            }\n        }\n    ]\n}\n

    "},{"location":"features/0019-encryption-envelope/#pack-output-format-authcrypt-mode","title":"pack output format (Authcrypt mode)","text":"
    {\n        \"protected\": \"b64URLencoded({\n            \"enc\": \"xchacha20poly1305_ietf\",\n            \"typ\": \"JWM/1.0\",\n            \"alg\": \"Authcrypt\",\n            \"recipients\": [\n                {\n                    \"encrypted_key\": base64URLencode(libsodium.crypto_box(my_key, their_vk, cek, cek_iv)),\n                    \"header\": {\n                        \"kid\": \"base58encode(recipient_verkey)\",\n                        \"sender\": base64URLencode(libsodium.crypto_box_seal(their_vk, base58encode(sender_vk))),\n                        \"iv\": base64URLencode(cek_iv)\n                    }\n                },\n            ],\n        })\",\n        \"iv\": b64URLencode(iv),\n        \"ciphertext\": b64URLencode(encrypt_detached({'@type'...}, protected_value_encoded, iv, cek)),\n        \"tag\": b64URLencode(tag)\n    }\n
    "},{"location":"features/0019-encryption-envelope/#authcrypt-pack-algorithm","title":"Authcrypt pack algorithm","text":"
    1. generate a content encryption key (symmetrical encryption key)
    2. encrypt the CEK for each recipient's public key using Authcrypt (steps below)
      1. set encrypted_key value to base64URLencode(libsodium.crypto_box(my_key, their_vk, cek, cek_iv))
        • Note in this step we're encrypting the cek so that it can be decrypted by the recipient
      2. set sender value to base64URLencode(libsodium.crypto_box_seal(their_vk, sender_vk_string))
        • Note in this step we're encrypting the sender_verkey to protect sender anonymity
      3. set the iv value in the header to base64URLencode(cek_iv)
        • Note the cek_iv in the header is used for the encrypted_key, whereas the iv is for the ciphertext
    3. base64URLencode the protected value
    4. encrypt the message using libsodium.crypto_aead_chacha20poly1305_ietf_encrypt_detached(message, protected_value_encoded, iv, cek); this produces the ciphertext.
    5. base64URLencode the iv, ciphertext, and tag then serialize the format into the output format listed above.
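    The assembly in steps 3–5 above can be sketched with only standard-library JSON and base64 handling. This is a structural illustration, not a working implementation: the libsodium calls are assumed to have already produced the encrypted values passed in, and the function names (`b64url`, `assemble_authcrypt_envelope`) are hypothetical.

```python
import base64
import json


def b64url(data: bytes) -> str:
    """Unpadded base64URL encoding, as used throughout the envelope."""
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode("ascii")


def assemble_authcrypt_envelope(recipients, iv: bytes, ciphertext: bytes, tag: bytes) -> str:
    """Assemble the outer envelope from already-encrypted values.

    Each entry in `recipients` holds the outputs of the libsodium calls
    described above (all assumed to have been made already):
      encrypted_key: bytes from crypto_box(my_key, their_vk, cek, cek_iv)
      sender:        bytes from crypto_box_seal(their_vk, sender_vk)
      cek_iv:        nonce used when encrypting the cek
      kid:           base58-encoded recipient verkey (already a string)
    """
    protected = {
        "enc": "xchacha20poly1305_ietf",
        "typ": "JWM/1.0",
        "alg": "Authcrypt",
        "recipients": [
            {
                "encrypted_key": b64url(r["encrypted_key"]),
                "header": {
                    "kid": r["kid"],
                    "iv": b64url(r["cek_iv"]),
                    "sender": b64url(r["sender"]),
                },
            }
            for r in recipients
        ],
    }
    return json.dumps({
        "protected": b64url(json.dumps(protected).encode("utf-8")),
        "iv": b64url(iv),
        "ciphertext": b64url(ciphertext),
        "tag": b64url(tag),
    })
```

    Note that the `protected` value doubles as the additional authenticated data for the AEAD encryption, so it must be serialized before the ciphertext is produced.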

    For a reference implementation, see https://github.com/hyperledger/indy-sdk/blob/master/libindy/src/commands/crypto.rs

    "},{"location":"features/0019-encryption-envelope/#pack_message-return-value-anoncrypt-mode","title":"pack_message() return value (Anoncrypt mode)","text":"

    This is an example of an output message encrypted for two verkeys using Anoncrypt.

    {\n    \"protected\": \"eyJlbmMiOiJ4Y2hhY2hhMjBwb2x5MTMwNV9pZXRmIiwidHlwIjoiSldNLzEuMCIsImFsZyI6IkFub25jcnlwdCIsInJlY2lwaWVudHMiOlt7ImVuY3J5cHRlZF9rZXkiOiJYQ044VjU3UTF0Z2F1TFcxemdqMVdRWlEwV0RWMFF3eUVaRk5Od0Y2RG1pSTQ5Q0s1czU4ZHNWMGRfTlpLLVNNTnFlMGlGWGdYRnZIcG9jOGt1VmlTTV9LNWxycGJNU3RqN0NSUHNrdmJTOD0iLCJoZWFkZXIiOnsia2lkIjoiR0oxU3pvV3phdlFZZk5MOVhrYUpkclFlamZ6dE40WHFkc2lWNGN0M0xYS0wifX0seyJlbmNyeXB0ZWRfa2V5IjoiaG5PZUwwWTl4T3ZjeTVvRmd0ZDFSVm05ZDczLTB1R1dOSkN0RzRsS3N3dlljV3pTbkRsaGJidmppSFVDWDVtTU5ZdWxpbGdDTUZRdmt2clJEbkpJM0U2WmpPMXFSWnVDUXY0eVQtdzZvaUE9IiwiaGVhZGVyIjp7ImtpZCI6IjJHWG11Q04ySkN4U3FNUlZmdEJITHhWSktTTDViWHl6TThEc1B6R3FRb05qIn19XX0=\",\n    \"iv\": \"M1GneQLepxfDbios\",\n    \"ciphertext\": \"iOLSKIxqn_kCZ7Xo7iKQ9rjM4DYqWIM16_vUeb1XDsmFTKjmvjR0u2mWFA48ovX5yVtUd9YKx86rDVDLs1xgz91Q4VLt9dHMOfzqv5DwmAFbbc9Q5wHhFwBvutUx5-lDZJFzoMQHlSAGFSBrvuApDXXt8fs96IJv3PsL145Qt27WLu05nxhkzUZz8lXfERHwAC8FYAjfvN8Fy2UwXTVdHqAOyI5fdKqfvykGs6fV\",\n    \"tag\": \"gL-lfmD-MnNj9Pr6TfzgLA==\"\n}\n

    The protected data decodes to this:

    {\n    \"enc\": \"xchacha20poly1305_ietf\",\n    \"typ\": \"JWM/1.0\",\n    \"alg\": \"Anoncrypt\",\n    \"recipients\": [\n        {\n            \"encrypted_key\": \"XCN8V57Q1tgauLW1zgj1WQZQ0WDV0QwyEZFNNwF6DmiI49CK5s58dsV0d_NZK-SMNqe0iFXgXFvHpoc8kuViSM_K5lrpbMStj7CRPskvbS8=\",\n            \"header\": {\n                \"kid\": \"GJ1SzoWzavQYfNL9XkaJdrQejfztN4XqdsiV4ct3LXKL\"\n            }\n        },\n        {\n            \"encrypted_key\": \"hnOeL0Y9xOvcy5oFgtd1RVm9d73-0uGWNJCtG4lKswvYcWzSnDlhbbvjiHUCX5mMNYulilgCMFQvkvrRDnJI3E6ZjO1qRZuCQv4yT-w6oiA=\",\n            \"header\": {\n                \"kid\": \"2GXmuCN2JCxSqMRVftBHLxVJKSL5bXyzM8DsPzGqQoNj\"\n            }\n        }\n    ]\n}\n
    "},{"location":"features/0019-encryption-envelope/#pack-output-format-anoncrypt-mode","title":"pack output format (Anoncrypt mode)","text":"
    {\n        \"protected\": \"b64URLencoded({\n            \"enc\": \"xchacha20poly1305_ietf\",\n            \"typ\": \"JWM/1.0\",\n            \"alg\": \"Anoncrypt\",\n            \"recipients\": [\n                {\n                    \"encrypted_key\": base64URLencode(libsodium.crypto_box_seal(their_vk, cek)),\n                    \"header\": {\n                        \"kid\": base58encode(recipient_verkey)\n                    }\n                },\n            ],\n        })\",\n        \"iv\": b64URLencode(iv),\n        \"ciphertext\": b64URLencode(encrypt_detached({'@type'...}, protected_value_encoded, iv, cek)),\n        \"tag\": b64URLencode(tag)\n    }\n
    "},{"location":"features/0019-encryption-envelope/#anoncrypt-pack-algorithm","title":"Anoncrypt pack algorithm","text":"
    1. generate a content encryption key (symmetrical encryption key)
    2. encrypt the CEK for each recipient's public key using Anoncrypt (steps below)
      1. set encrypted_key value to base64URLencode(libsodium.crypto_box_seal(their_vk, cek))
        • Note in this step we're encrypting the cek so that it can be decrypted by the recipient
    3. base64URLencode the protected value
    4. encrypt the message using libsodium.crypto_aead_chacha20poly1305_ietf_encrypt_detached(message, protected_value_encoded, iv, cek); this produces the ciphertext.
    5. base64URLencode the iv, ciphertext, and tag then serialize the format into the output format listed above.

    For a reference implementation, see https://github.com/hyperledger/indy-sdk/blob/master/libindy/src/commands/crypto.rs

    "},{"location":"features/0019-encryption-envelope/#unpack-message","title":"Unpack Message","text":""},{"location":"features/0019-encryption-envelope/#unpack_message-interface","title":"unpack_message() interface","text":"

    unpacked_message = unpack_message(wallet_handle, jwe)

    "},{"location":"features/0019-encryption-envelope/#unpack_message-params","title":"unpack_message() Params","text":""},{"location":"features/0019-encryption-envelope/#unpack-algorithm","title":"Unpack Algorithm","text":"
    1. deserialize the incoming data so it can be used
      • For example, in Rust this has to be deserialized into a struct.
    2. Lookup the kid for each recipient in the wallet to see if the wallet possesses a private key associated with the public key listed
    3. Check if a sender field is used.
      • If a sender is included use auth_decrypt to decrypt the encrypted_key by doing the following:
        1. decrypt sender verkey using libsodium.crypto_box_seal_open(my_private_key, base64URLdecode(sender))
        2. decrypt cek using libsodium.crypto_box_open(my_private_key, sender_verkey, encrypted_key, cek_iv)
        3. decrypt ciphertext using libsodium.crypto_aead_chacha20poly1305_ietf_open_detached(base64URLdecode(ciphertext_bytes), base64URLdecode(protected_data_as_bytes), base64URLdecode(nonce), cek)
        4. return message, recipient_verkey and sender_verkey following the authcrypt format listed below
      • If a sender is NOT included use anon_decrypt to decrypt the encrypted_key by doing the following:
        1. decrypt encrypted_key using libsodium.crypto_box_seal_open(my_private_key, encrypted_key)
        2. decrypt ciphertext using libsodium.crypto_aead_chacha20poly1305_ietf_open_detached(base64URLdecode(ciphertext_bytes), base64URLdecode(protected_data_as_bytes), base64URLdecode(nonce), cek)
        3. return message and recipient_verkey following the anoncrypt format listed below

    NOTE: In the unpack algorithm, the base64url decode implementation used MUST correctly decode padded and unpadded base64URL encoded data.
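    The header-handling portion of unpack, including a base64URL decoder that accepts both padded and unpadded input as the note above requires, can be sketched as follows. The decryption calls themselves (crypto_box_seal_open, crypto_box_open, the AEAD open call) are deliberately omitted, and the helper names are illustrative:

```python
import base64
import json


def b64url_decode(s: str) -> bytes:
    """Decode base64URL input whether padded or unpadded, per the note above."""
    return base64.urlsafe_b64decode(s + "=" * (-len(s) % 4))


def classify_envelope(jwe: str):
    """Parse the outer envelope and decide authcrypt vs. anoncrypt mode.

    Returns (mode, protected). Only the header handling is shown here;
    key lookup by kid and the actual decryption steps are out of scope.
    """
    envelope = json.loads(jwe)
    protected = json.loads(b64url_decode(envelope["protected"]))
    # A "sender" field in a recipient header marks authcrypt mode.
    has_sender = any("sender" in r["header"] for r in protected["recipients"])
    return ("authcrypt" if has_sender else "anoncrypt"), protected
```

    The padding expression `"=" * (-len(s) % 4)` appends zero to three `=` characters, which is why the same decoder handles both padded and unpadded input.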

    For a reference unpack implementation, see https://github.com/hyperledger/indy-sdk/blob/master/libindy/src/commands/crypto.rs

    "},{"location":"features/0019-encryption-envelope/#unpack_message-return-values-authcrypt-mode","title":"unpack_message() return values (authcrypt mode)","text":"
    {\n    \"message\": \"{ \\\"@id\\\": \\\"123456780\\\",\\\"@type\\\":\\\"https://didcomm.org/basicmessage/1.0/message\\\",\\\"sent_time\\\": \\\"2019-01-15 18:42:01Z\\\",\\\"content\\\": \\\"Your hovercraft is full of eels.\\\"}\",\n    \"recipient_verkey\": \"HKTAiYM8cE2kKC9KaNMZLYj4GS8uWCYMBxP2i1Y92zum\",\n    \"sender_verkey\": \"DWwLsbKCRAbYtfYnQNmzfKV7ofVhMBi6T4o3d2SCxVuX\"\n}\n
    "},{"location":"features/0019-encryption-envelope/#unpack_message-return-values-anoncrypt-mode","title":"unpack_message() return values (anoncrypt mode)","text":"
    {\n    \"message\": \"{ \\\"@id\\\": \\\"123456780\\\",\\\"@type\\\":\\\"https://didcomm.org/basicmessage/1.0/message\\\",\\\"sent_time\\\": \\\"2019-01-15 18:42:01Z\\\",\\\"content\\\": \\\"Your hovercraft is full of eels.\\\"}\",\n    \"recipient_verkey\": \"2GXmuCN2JCxSqMRVftBHLxVJKSL5bXyzM8DsPzGqQoNj\"\n}\n
    "},{"location":"features/0019-encryption-envelope/#additional-notes","title":"Additional Notes","text":""},{"location":"features/0019-encryption-envelope/#drawbacks","title":"Drawbacks","text":"

    The current implementation of the pack() message is Hyperledger Aries specific. It is based on common crypto libraries (NaCl), but the wrappers are not commonly used outside of Aries. Work is underway to find alignment on a cross-ecosystem interoperable protocol, but this hasn't been achieved yet. That work will hopefully bridge this gap.

    "},{"location":"features/0019-encryption-envelope/#rationale-and-alternatives","title":"Rationale and alternatives","text":"

    As the JWE standard currently stands, it does not follow this format. We're actively working with the lead writer of the JWE spec to find alignment and are hopeful the changes needed can be added.

    We've also looked at the Message Layer Security (MLS) specification. This specification shows promise for adoption later on as it matures. Additionally, because MLS does not hide metadata related to the sender (sender anonymity), we would need to see some changes made to the specification before we could adopt it.

    "},{"location":"features/0019-encryption-envelope/#prior-art","title":"Prior art","text":"

    The JWE family of encryption methods.

    "},{"location":"features/0019-encryption-envelope/#unresolved-questions","title":"Unresolved questions","text":""},{"location":"features/0019-encryption-envelope/#implementations","title":"Implementations","text":"

    The following lists the implementations (if any) of this RFC. Please do a pull request to add your implementation. If the implementation is open source, include a link to the repo or to the implementation within the repo. Please be consistent in the \"Name\" field so that a mechanical processing of the RFCs can generate a list of all RFCs supported by an Aries implementation.

    Name / Link Implementation Notes Indy Cloud Agent - Python Reference agent implementation contributed by Sovrin Foundation and Community Aries Framework - .NET .NET framework for building agents of all types Streetcred.id Commercial mobile and web app built using Aries Framework - .NET Aries Cloud Agent - Python Contributed by the government of British Columbia. Aries Static Agent - Python Useful for cron jobs and other simple, automated use cases. Aries Framework - Go For building agents, hubs and other DIDComm features in GoLang. Aries Protocol Test Suite"},{"location":"features/0019-encryption-envelope/schema/","title":"Schema","text":"

    This schema conforms to JSON Schema draft-07.

    {\n    \"id\": \"https://github.com/hyperledger/indy-agent/wiremessage.json\",\n    \"$schema\": \"http://json-schema.org/draft-07/schema#\",\n    \"title\": \"Json Web Message format\",\n    \"type\": \"object\",\n    \"required\": [\"ciphertext\", \"iv\", \"protected\", \"tag\"],\n    \"properties\": {\n        \"protected\": {\n            \"type\": \"object\",\n            \"description\": \"Additional authenticated message data base64URL encoded, so it can be verified by the recipient using the tag\",\n            \"required\": [\"enc\", \"typ\", \"alg\", \"recipients\"],\n            \"properties\": {\n                \"enc\": {\n                    \"type\": \"string\",\n                    \"enum\": [\"xchacha20poly1305_ietf\"],\n                    \"description\": \"The authenticated encryption algorithm used to encrypt the ciphertext\"\n                },\n                \"typ\": { \n                    \"type\": \"string\",\n                    \"description\": \"The message type. Ex: JWM/1.0\"\n                },\n                \"alg\": {\n                    \"type\": \"string\",\n                    \"enum\": [ \"authcrypt\", \"anoncrypt\"]\n                },\n                \"recipients\": {\n                    \"type\": \"array\",\n                    \"description\": \"A list of the recipients who the message is encrypted for\",\n                    \"items\": {\n                        \"type\": \"object\",\n                        \"required\": [\"encrypted_key\", \"header\"],\n                        \"properties\": {\n                            \"encrypted_key\": {\n                                \"type\": \"string\",\n                                \"description\": \"The key used for encrypting the ciphertext. 
This is also referred to as a cek\"\n                            },\n                            \"header\": {\n                                \"type\": \"object\",\n                                \"required\": [\"kid\"],\n                                \"description\": \"The recipient to whom this message will be sent\",\n                                \"properties\": {\n                                    \"kid\": {\n                                        \"type\": \"string\",\n                                        \"description\": \"base58 encoded verkey of the recipient.\"\n                                    }\n                                }\n                            }\n                        }\n                    }\n                }\n            }\n        },\n        \"iv\": {\n            \"type\": \"string\",\n            \"description\": \"base64 URL encoded nonce used to encrypt ciphertext\"\n        },\n        \"ciphertext\": {\n            \"type\": \"string\",\n            \"description\": \"base64 URL encoded authenticated encrypted message\"\n        },\n        \"tag\": {\n            \"type\": \"string\",\n            \"description\": \"Integrity checksum/tag base64URL encoded to check ciphertext, protected, and iv\"\n        }\n    }\n}\n

    "},{"location":"features/0023-did-exchange/","title":"Aries RFC 0023: DID Exchange v1","text":""},{"location":"features/0023-did-exchange/#summary","title":"Summary","text":"

    This RFC describes the protocol to exchange DIDs between agents when establishing a DID based relationship.

    "},{"location":"features/0023-did-exchange/#motivation","title":"Motivation","text":"

    Aries agent developers want to create agents that are able to establish relationships with each other and exchange secure information using keys and endpoints in DID Documents. For this to happen there must be a clear protocol to exchange DIDs.

    "},{"location":"features/0023-did-exchange/#version-change-log","title":"Version Change Log","text":""},{"location":"features/0023-did-exchange/#version-11-signed-rotations-without-did-documents","title":"Version 1.1 - Signed Rotations without DID Documents","text":"

    Added the optional did_rotate~attach attachment for provenance of rotation without an attached DID Document.

    "},{"location":"features/0023-did-exchange/#tutorial","title":"Tutorial","text":"

    We will explain how DIDs are exchanged, with the roles, states, and messages required.

    "},{"location":"features/0023-did-exchange/#roles","title":"Roles","text":"

    The DID Exchange Protocol uses two roles: requester and responder.

    The requester is the party that initiates this protocol after receiving an invitation message (using RFC 0434 Out of Band) or by using an implied invitation from a public DID. For example, a verifier might get the DID of the issuer of a credential they are verifying, and use information in the DIDDoc for that DID as the basis for initiating an instance of this protocol.

    Since the requester receiving an explicit invitation may not have an Aries agent, it is desirable, but not strictly required, that the sender of the invitation (who has the responder role in this protocol) have the ability to help the requester with the process and/or costs associated with acquiring an agent capable of participating in the ecosystem. For example, the sender of an invitation may often be a sponsoring institution.

    The responder, who is the sender of an explicit invitation or the publisher of a DID with an implicit invitation, must have an agent capable of interacting with other agents via DIDComm.

    In cases where both parties already possess SSI capabilities, deciding who plays the role of requester and responder might be a casual matter of whose phone is handier.

    "},{"location":"features/0023-did-exchange/#states","title":"States","text":""},{"location":"features/0023-did-exchange/#requester","title":"Requester","text":"

    The requester goes through the following states per the State Machine Tables below

    "},{"location":"features/0023-did-exchange/#responder","title":"Responder","text":"

    The responder goes through the following states per the State Machine Tables below

    "},{"location":"features/0023-did-exchange/#state-machine-tables","title":"State Machine Tables","text":"

    The following are the requester and responder state machines.

    The invitation-sent and invitation-received states are technically outside this protocol, but are useful to show in the state machine, as the invitation is the trigger to start the protocol and is referenced from the protocol as the parent thread (pthid). This is discussed in more detail below.

    The abandoned and completed states are terminal states and there is no expectation that the protocol can be continued (or even referenced) after reaching those states.

    "},{"location":"features/0023-did-exchange/#errors","title":"Errors","text":"

    After receiving an explicit invitation, the requester may send a problem-report to the responder using the information in the invitation to either restart the invitation process (returning to the start state) or to abandon the protocol. The problem-report may be an adopted Out of Band protocol message or an adopted DID Exchange protocol message, depending on where in the processing of the invitation the error was detected.

    During the request / response part of the protocol, there are two protocol-specific error messages possible: one for an active rejection and one for an unknown error. These errors are sent using a problem_report message type specific to the DID Exchange Protocol. These errors do not transition the protocol to the abandoned state. The following list details problem-codes that may be sent in these cases:

    request_not_accepted - The error indicates that the request message has been rejected for a reason listed in the error_report. Typical reasons include not accepting the method of the provided DID, unknown endpoint protocols, etc. The request can be resent after the appropriate corrections have been made.

    request_processing_error - This error is sent when the responder was processing the request with the intent to accept the request, but some processing error occurred. This error indicates that the request should be resent as-is.

    response_not_accepted - The error indicates that the response has been rejected for a reason listed in the error_report. Typical reasons include not accepting the method of the provided DID, unknown endpoint protocols, invalid signature, etc. The response can be resent after the appropriate corrections have been made.

    response_processing_error - This error is sent when the requester was processing the response with the intent to accept the response, but some processing error occurred. This error indicates that the response should be resent as-is.

    If other errors occur, the corresponding party may send a problem-report to inform the other party they are abandoning the protocol.

    No errors are sent in timeout situations. If the requester or responder wishes to retract the messages they sent, they record this locally and return a request_not_accepted or response_not_accepted error when the other party sends a request or response.

    "},{"location":"features/0023-did-exchange/#error-message-example","title":"Error Message Example","text":"
    {\n  \"@type\": \"https://didcomm.org/didexchange/1.1/problem_report\",\n  \"@id\": \"5678876542345\",\n  \"~thread\": { \"thid\": \"<@id of message related to problem>\" },\n  \"~l10n\": { \"locale\": \"en\"},\n  \"problem-code\": \"request_not_accepted\", // matches codes listed above\n  \"explain\": \"Unsupported DID method for provided DID.\"\n}\n
    "},{"location":"features/0023-did-exchange/#error-message-attributes","title":"Error Message Attributes","text":""},{"location":"features/0023-did-exchange/#flow-overview","title":"Flow Overview","text":""},{"location":"features/0023-did-exchange/#implicit-and-explicit-invitations","title":"Implicit and Explicit Invitations","text":"

    The DID Exchange Protocol is preceded by - either knowledge of a resolvable DID (an implicit invitation) - or by an out-of-band/%VER/invitation message from the Out Of Band Protocols RFC.

    The information needed to construct the request message to start the protocol is taken - either from the resolved DID Document - or from the service element of the handshake_protocols attribute of the invitation.

    "},{"location":"features/0023-did-exchange/#1-exchange-request","title":"1. Exchange Request","text":"

    The request message is used to communicate the DID document of the requester to the responder using the provisional service information present in the (implicit or explicit) invitation.

    The requester may provision a new DID according to the DID method spec. For a Peer DID, this involves creating a matching peer DID and key. The newly provisioned DID and DID Doc are presented in the request message as follows:

    "},{"location":"features/0023-did-exchange/#request-message-example","title":"Request Message Example","text":"
    {\n  \"@id\": \"5678876542345\",\n  \"@type\": \"https://didcomm.org/didexchange/1.1/request\",\n  \"~thread\": { \n      \"thid\": \"5678876542345\",\n      \"pthid\": \"<id of invitation>\"\n  },\n  \"label\": \"Bob\",\n  \"goal_code\": \"aries.rel.build\",\n  \"goal\": \"To create a relationship\",\n  \"did\": \"B.did@B:A\",\n  \"did_doc~attach\": {\n      \"@id\": \"d2ab6f2b-5646-4de3-8c02-762f553ab804\",\n      \"mime-type\": \"application/json\",\n      \"data\": {\n         \"base64\": \"eyJ0eXAiOiJKV1Qi... (bytes omitted)\",\n         \"jws\": {\n            \"header\": {\n               \"kid\": \"did:key:z6MkmjY8GnV5i9YTDtPETC2uUAW6ejw3nk5mXF5yci5ab7th\"\n            },\n            \"protected\": \"eyJhbGciOiJFZERTQSIsImlhdCI6MTU4Mzg4... (bytes omitted)\",\n            \"signature\": \"3dZWsuru7QAVFUCtTd0s7uc1peYEijx4eyt5... (bytes omitted)\"\n            }\n      }\n   }\n}\n
    "},{"location":"features/0023-did-exchange/#request-message-attributes","title":"Request Message Attributes","text":"

    The label property was intended to be declared as an optional property, but was added to the RFC as a required property. If an agent wishes not to use a label in the request, an empty string (\"\") or the set value Unspecified may be used to indicate a non-value. This approach ensures existing AIP 2.0 implementations do not break.

    "},{"location":"features/0023-did-exchange/#correlating-requests-to-invitations","title":"Correlating requests to invitations","text":"

    An invitation is presented in one of two forms:

    When a request responds to an explicit invitation, its ~thread.pthid MUST be equal to the @id property of the invitation as described in the out-of-band RFC.

    When a request responds to an implicit invitation, its ~thread.pthid MUST contain a DID URL that resolves to the specific service on a DID document that contains the invitation.

    "},{"location":"features/0023-did-exchange/#example-referencing-an-explicit-invitation","title":"Example Referencing an Explicit Invitation","text":"
    {\n  \"@id\": \"a46cdd0f-a2ca-4d12-afbf-2e78a6f1f3ef\",\n  \"@type\": \"https://didcomm.org/didexchange/1.1/request\",\n  \"~thread\": { \n      \"thid\": \"a46cdd0f-a2ca-4d12-afbf-2e78a6f1f3ef\",\n      \"pthid\": \"032fbd19-f6fd-48c5-9197-ba9a47040470\" \n  },\n  \"label\": \"Bob\",\n  \"goal_code\": \"aries.rel.build\",\n  \"goal\": \"To create a relationship\",\n  \"did\": \"B.did@B:A\",\n  \"did_doc~attach\": {\n      \"@id\": \"d2ab6f2b-5646-4de3-8c02-762f553ab804\",\n      \"mime-type\": \"application/json\",\n      \"data\": {\n         \"base64\": \"eyJ0eXAiOiJKV1Qi... (bytes omitted)\",\n         \"jws\": {\n            \"header\": {\n               \"kid\": \"did:key:z6MkmjY8GnV5i9YTDtPETC2uUAW6ejw3nk5mXF5yci5ab7th\"\n            },\n            \"protected\": \"eyJhbGciOiJFZERTQSIsImlhdCI6MTU4Mzg4... (bytes omitted)\",\n            \"signature\": \"3dZWsuru7QAVFUCtTd0s7uc1peYEijx4eyt5... (bytes omitted)\"\n            }\n      }\n   }\n}\n
    "},{"location":"features/0023-did-exchange/#example-referencing-an-implicit-invitation","title":"Example Referencing an Implicit Invitation","text":"
    {\n  \"@id\": \"a46cdd0f-a2ca-4d12-afbf-2e78a6f1f3ef\",\n  \"@type\": \"https://didcomm.org/didexchange/1.1/request\",\n  \"~thread\": { \n      \"thid\": \"a46cdd0f-a2ca-4d12-afbf-2e78a6f1f3ef\",\n      \"pthid\": \"did:example:21tDAKCERh95uGgKbJNHYp#didcomm\" \n  },\n  \"label\": \"Bob\",\n  \"goal_code\": \"aries.rel.build\",\n  \"goal\": \"To create a relationship\",\n  \"did\": \"B.did@B:A\",\n  \"did_doc~attach\": {\n      \"@id\": \"d2ab6f2b-5646-4de3-8c02-762f553ab804\",\n      \"mime-type\": \"application/json\",\n      \"data\": {\n         \"base64\": \"eyJ0eXAiOiJKV1Qi... (bytes omitted)\",\n         \"jws\": {\n            \"header\": {\n               \"kid\": \"did:key:z6MkmjY8GnV5i9YTDtPETC2uUAW6ejw3nk5mXF5yci5ab7th\"\n            },\n            \"protected\": \"eyJhbGciOiJFZERTQSIsImlhdCI6MTU4Mzg4... (bytes omitted)\",\n            \"signature\": \"3dZWsuru7QAVFUCtTd0s7uc1peYEijx4eyt5... (bytes omitted)\"\n            }\n      }\n   }\n}\n
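    The two correlation rules above can be illustrated with a small, hypothetical helper that classifies a request's ~thread.pthid as referencing an explicit or implicit invitation:

```python
def classify_pthid(pthid: str, known_invitation_ids: set) -> str:
    """Classify a request's ~thread.pthid per the correlation rules above.

    An implicit invitation is referenced by a DID URL resolving to a
    service on a DID document; an explicit invitation is referenced by
    the invitation message's @id.
    """
    if pthid.startswith("did:"):
        return "implicit"
    if pthid in known_invitation_ids:
        return "explicit"
    raise ValueError("pthid does not correlate to a known invitation")
```

    A real responder would additionally resolve the DID URL in the implicit case to confirm it points at a service supporting this protocol; that lookup is elided here.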
    "},{"location":"features/0023-did-exchange/#request-transmission","title":"Request Transmission","text":"

    The request message is encoded according to the standards of the Encryption Envelope, using the recipientKeys present in the invitation.

    If the routingKeys attribute was present and non-empty in the invitation, each key must be used to wrap the message in a forward request, then encoded in an Encryption Envelope. This processing is in order of the keys in the list, with the last key in the list being the one for which the serviceEndpoint possesses the private key.
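    The routingKeys ordering can be shown structurally. In this sketch (a hypothetical helper; encryption of each layer is elided), each forward message tells a mediator where to send the inner payload, and the outermost result would finally be packed in an Encryption Envelope for the last key in the list:

```python
def wrap_in_forwards(message: dict, routing_keys: list, recipient_key: str) -> dict:
    """Nest `message` in forward messages, innermost layer first.

    In practice every forward shown here would itself be packed into an
    Encryption Envelope for the key it is addressed to, and the
    outermost result would be encrypted for the last key in
    `routing_keys` (the one the serviceEndpoint holds the private key for).
    """
    wrapped = message
    to = recipient_key
    for key in routing_keys:
        wrapped = {
            "@type": "https://didcomm.org/routing/1.0/forward",
            "to": to,        # verkey the mediator should forward `msg` to
            "msg": wrapped,  # would be an Encryption Envelope in practice
        }
        to = key
    return wrapped
```

    With routing keys [k1, k2], the outer layer (decryptable by k2) carries a forward addressed to k1, whose payload is a forward addressed to the recipient.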

    The message is then transmitted to the serviceEndpoint.

    The requester is in the request-sent state. When received, the responder is in the request-received state.

    "},{"location":"features/0023-did-exchange/#request-processing","title":"Request processing","text":"

    After receiving the exchange request, the responder evaluates the provided DID and DID Doc according to the DID Method Spec.

    The responder should check that the information presented matches the keys used in the wire-level message transmission.

    The responder MAY look up the corresponding invitation identified in the request's ~thread.pthid to determine whether it should accept this exchange request.

    If the responder wishes to continue the exchange, they will persist the received information in their wallet. They will then either update the provisional service information to rotate the key, or provision a new DID entirely. The choice here will depend on the nature of the DID used in the invitation.

    The responder will then craft an exchange response using the newly updated or provisioned information.

    "},{"location":"features/0023-did-exchange/#request-errors","title":"Request Errors","text":"

    See Error Section above for message format details.

    "},{"location":"features/0023-did-exchange/#request-rejected","title":"Request Rejected","text":"

    Possible reasons:

    "},{"location":"features/0023-did-exchange/#request-processing-error","title":"Request Processing Error","text":""},{"location":"features/0023-did-exchange/#2-exchange-response","title":"2. Exchange Response","text":"

    The exchange response message is used to complete the exchange. This message is required in the flow, as it updates the provisional information presented in the invitation.

    "},{"location":"features/0023-did-exchange/#response-message-example","title":"Response Message Example","text":"
    {\n  \"@type\": \"https://didcomm.org/didexchange/1.1/response\",\n  \"@id\": \"12345678900987654321\",\n  \"~thread\": {\n    \"thid\": \"<The Thread ID is the Message ID (@id) of the first message in the thread>\"\n  },\n  \"did\": \"B.did@B:A\",\n  \"did_doc~attach\": {\n      \"@id\": \"d2ab6f2b-5646-4de3-8c02-762f553ab804\",\n      \"mime-type\": \"application/json\",\n      \"data\": {\n         \"base64\": \"eyJ0eXAiOiJKV1Qi... (bytes omitted)\",\n         \"jws\": {\n            \"header\": {\n               \"kid\": \"did:key:z6MkmjY8GnV5i9YTDtPETC2uUAW6ejw3nk5mXF5yci5ab7th\"\n            },\n            \"protected\": \"eyJhbGciOiJFZERTQSIsImlhdCI6MTU4Mzg4... (bytes omitted)\",\n            \"signature\": \"3dZWsuru7QAVFUCtTd0s7uc1peYEijx4eyt5... (bytes omitted)\"\n            }\n      }\n   },\n   \"did_rotate~attach\": {\n      \"mime-type\": \"text/string\",\n      \"data\": {\n         \"base64\": \"Qi5kaWRAQjpB\",\n         \"jws\": {\n         \"header\": {\n            \"kid\": \"did:key:z6MkmjY8GnV5i9YTDtPETC2uUAW6ejw3nk5mXF5yci5ab7th\"\n         },\n         \"protected\": \"eyJhbGciOiJFZERTQSIsImlhdCI6MTU4Mzg4... (bytes omitted)\",\n         \"signature\": \"3dZWsuru7QAVFUCtTd0s7uc1peYEijx4eyt5... (bytes omitted)\"\n         }\n      }\n   }\n}\n

The invitation's recipientKeys should be dedicated to envelope authenticated encryption throughout the exchange. These keys are usually defined in the KeyAgreement DID verification relationship.

    "},{"location":"features/0023-did-exchange/#response-message-attributes","title":"Response Message Attributes","text":"

    In addition to a new DID, the associated DID Doc might contain a new endpoint. This new DID and endpoint are to be used going forward in the relationship.

    "},{"location":"features/0023-did-exchange/#response-transmission","title":"Response Transmission","text":"

The message should be packaged in the encrypted envelope format, using the keys from the request and the new keys presented in the internal DID Doc.

When the message is sent, the responder is in the response-sent state. On receipt, the requester is in the response-received state.

    "},{"location":"features/0023-did-exchange/#response-processing","title":"Response Processing","text":"

    When the requester receives the response message, they will decrypt the authenticated envelope which confirms the source's authenticity. After decryption validation, the signature on the did_doc~attach or did_rotate~attach MUST be validated, if present. The key used in the signature MUST match the key used in the invitation. After attachment signature validation, they will update their wallet with the new information, and use that information in sending the complete message.
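The attachment checks above can be sketched as follows. The attachment shapes mirror the response example earlier in this section; `verify_jws` is a hypothetical placeholder for a real JWS verification library, and the function name is illustrative.

```python
def verify_jws(payload_b64: str, jws: dict) -> bool:
    """Placeholder: a real implementation verifies the Ed25519 signature."""
    return bool(jws.get("signature"))

def validate_response(response: dict, invitation_key: str) -> bool:
    """Validate did_doc~attach / did_rotate~attach signatures, if present."""
    for name in ("did_doc~attach", "did_rotate~attach"):
        attach = response.get(name)
        if attach is None:
            continue  # signature validation applies only when present
        jws = attach["data"]["jws"]
        # The key used in the signature MUST match the invitation's key.
        if jws["header"]["kid"] != invitation_key:
            return False
        if not verify_jws(attach["data"]["base64"], jws):
            return False
    return True
```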

    "},{"location":"features/0023-did-exchange/#response-errors","title":"Response Errors","text":"

    See Error Section above for message format details.

    "},{"location":"features/0023-did-exchange/#response-rejected","title":"Response Rejected","text":"

    Possible reasons:

    "},{"location":"features/0023-did-exchange/#response-processing-error","title":"Response Processing Error","text":""},{"location":"features/0023-did-exchange/#3-exchange-complete","title":"3. Exchange Complete","text":"

    The exchange complete message is used to confirm the exchange to the responder. This message is required in the flow, as it marks the exchange complete. The responder may then invoke any protocols desired based on the context expressed via the pthid in the DID Exchange protocol.

    "},{"location":"features/0023-did-exchange/#complete-message-example","title":"Complete Message Example","text":"
    {\n  \"@type\": \"https://didcomm.org/didexchange/1.1/complete\",\n  \"@id\": \"12345678900987654321\",\n  \"~thread\": {\n    \"thid\": \"<The Thread ID is the Message ID (@id) of the first message in the thread>\",\n    \"pthid\": \"<pthid used in request message>\"\n  }\n}\n

    The pthid is required in this message, and must be identical to the pthid used in the request message.
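A minimal sketch of that check, assuming the message shapes shown in the examples above (the function name is illustrative):

```python
def accept_complete(complete_msg: dict, request_msg: dict) -> bool:
    """The complete message's pthid is required and must equal the
    pthid used in the request message."""
    pthid = complete_msg.get("~thread", {}).get("pthid")
    return pthid is not None and pthid == request_msg["~thread"]["pthid"]
```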

    After a complete message is sent, the requester is in the completed terminal state. Receipt of the message puts the responder into the completed state.

    "},{"location":"features/0023-did-exchange/#complete-errors","title":"Complete Errors","text":"

    See Error Section above for message format details.

    "},{"location":"features/0023-did-exchange/#complete-rejected","title":"Complete Rejected","text":"

This is unlikely to occur for any reason other than an unknown processing error (covered below), so no possible reasons are listed. As experience is gained with the protocol, possible reasons may be added.

    "},{"location":"features/0023-did-exchange/#complete-processing-error","title":"Complete Processing Error","text":""},{"location":"features/0023-did-exchange/#next-steps","title":"Next Steps","text":"

    The exchange between the requester and the responder has been completed. This relationship has no trust associated with it. The next step should be to increase the trust to a sufficient level for the purpose of the relationship, such as through an exchange of proofs.

    "},{"location":"features/0023-did-exchange/#peer-did-maintenance","title":"Peer DID Maintenance","text":"

When Peer DIDs are used in an exchange, it is likely that both the requester and responder will want to perform some relationship maintenance such as key rotations. Future RFC updates will add these maintenance features.

    "},{"location":"features/0023-did-exchange/#reference","title":"Reference","text":""},{"location":"features/0023-did-exchange/#drawbacks","title":"Drawbacks","text":"

    N/A at this time

    "},{"location":"features/0023-did-exchange/#prior-art","title":"Prior art","text":""},{"location":"features/0023-did-exchange/#unresolved-questions","title":"Unresolved questions","text":""},{"location":"features/0023-did-exchange/#implementations","title":"Implementations","text":"

    The following lists the implementations (if any) of this RFC. Please do a pull request to add your implementation. If the implementation is open source, include a link to the repo or to the implementation within the repo. Please be consistent in the \"Name\" field so that a mechanical processing of the RFCs can generate a list of all RFCs supported by an Aries implementation.

    Name / Link Implementation Notes Trinsic.id Commercial mobile and web app built using Aries Framework - .NET MISSING test results"},{"location":"features/0024-didcomm-over-xmpp/","title":"Aries RFC 0024: DIDComm over XMPP","text":""},{"location":"features/0024-didcomm-over-xmpp/#summary","title":"Summary","text":"

While DIDComm leaves its users free to choose any underlying communication protocol, for peer-to-peer DID relationships with one or both parties behind a firewall, actually getting the messages to the other party is not straightforward.

    Fortunately this is a classical problem, encountered by all realtime communication protocols, and it is therefore natural to use one of these protocols to deal with the obstacles posed by firewalls. The DIDComm-over-XMPP feature provides an architecture to exchange DIDComm connection protocol messages over XMPP, using XMPP to solve any firewall issues.

    DIDComm-over-XMPP enables:

    and all of this in spite of the presence of firewalls.

    Editor's note: A reference should be added to Propose HIPE: Transports #94

    "},{"location":"features/0024-didcomm-over-xmpp/#motivation","title":"Motivation","text":"

    Currently, all examples of service endpoint in the W3C DID specification use HTTP. This assumes that the endpoint is running an HTTP server and firewalls have been opened to allow this traffic to pass through. This assumption typically fails for DIDComm agents behind LAN firewalls or using cellular networks. As a consequence, such DIDComm agents can be expected to be unavailable for incoming DIDComm messages, whereas several use cases require this. The following is an example of this.

A consumer contacts a customer service agent of his health insurance company, and is subsequently asked for proof of identity before getting answers to his personal health-related questions. DIDComm could be of use here, replacing privacy-sensitive and time-consuming questions to establish the consumer's identity with an exchange of verifiable credentials using DIDComm. In that case, the agent would just send a DIDComm message to the caller to link the ongoing human-to-human communication session to a DIDComm agent-to-agent communication session. The DIDComm connection protocol would then enable the setting up and maintenance of a trusted electronic relationship, to be used to exchange verifiable credentials. Replace the insurance company with any sizeable business-to-consumer company and one realizes that this use case is far from insignificant.

Unfortunately, by themselves, the parties' DIDComm agents will be unable to bypass the firewalls involved and exchange DIDComm messages. Therefore XMPP is called to the rescue, serving as a transport protocol that can cope with firewalls. Once the firewall issue is solved, DIDComm can be put to use in all of these cases.

    The XMPP protocol is a popular protocol for chat and messaging. It has a client-server structure that bypasses any firewall issues.

    "},{"location":"features/0024-didcomm-over-xmpp/#tutorial","title":"Tutorial","text":"

    The DIDComm-over-XMPP feature provides an architecture for the transport of DIDComm messages over an XMPP network, using XMPP to bypass any firewalls at the receiving side.

    "},{"location":"features/0024-didcomm-over-xmpp/#didcomm","title":"DIDComm","text":"

The DIDComm wire message format is specified in HIPE 0028-wire-message-format. It can carry, among others, the DIDComm connection protocol, as specified in Hyperledger Indy HIPE 0031. The purpose of the latter protocol is to set up a trusted electronic relationship between two parties (natural person, legal person, ...). Technically, the trust relationship involves the following:

    W3C specifies Data Model and Syntaxes for Decentralized Identifiers (DIDs). This specification introduces Decentralized Identifiers, DIDs, for identification. A DID can be resolved into a DID Document that contains the associated keys and service endpoints, see also W3C's A Primer for Decentralized Identifiers. W3C provides a DID Method Registry for a complete list of all known DID Method specifications. Many of the DID methods use an unambiguous source of truth to resolve a DID Document, e.g. a well governed public blockchain. An exception is the Peer DID method that relies on the peers, i.e. parties in the trusted electronic relationship to maintain the DID Document.

    "},{"location":"features/0024-didcomm-over-xmpp/#xmpp","title":"XMPP","text":"

Extensible Messaging and Presence Protocol (XMPP) is a communication protocol for message-oriented middleware based on XML (Extensible Markup Language). It enables the near-real-time exchange of structured yet extensible data between any two or more network entities. Designed to be extensible, the protocol has also been used for publish-subscribe systems, signalling for VoIP, video, file transfer, gaming, Internet of Things applications such as the smart grid, and social networking services.

    Unlike most instant messaging protocols, XMPP is defined in an open standard and uses an open systems approach of development and application, by which anyone may implement an XMPP service and interoperate with other organizations' implementations. Because XMPP is an open protocol, implementations can be developed using any software license and many server, client, and library implementations are distributed as free and open-source software. Numerous freeware and commercial software implementations also exist.

    XMPP uses 3 types of messages:

Message Type Description PRESENCE Informs listeners that an agent is online MESSAGE Sends a message to another agent IQ MESSAGE Asks for a response from another agent

    "},{"location":"features/0024-didcomm-over-xmpp/#didcomm-over-xmpp","title":"DIDComm over XMPP","text":""},{"location":"features/0024-didcomm-over-xmpp/#use-of-message-normative","title":"Use of MESSAGE (normative)","text":"

A DIDComm wire message shall be sent as a plaintext XMPP MESSAGE, without any additional identifiers.

    "},{"location":"features/0024-didcomm-over-xmpp/#service-endpoint-normative","title":"Service endpoint (normative)","text":"

A DIDComm-over-XMPP service shall comply with the following.

1. The id shall have a DID fragment "#xmpp".
2. The type shall be "XmppService".
3. The serviceEndpoint
    - shall not have a resource part (i.e. "/...resource...")
    - shall comply with the following ABNF.
    xmpp-service-endpoint = \"xmpp:\" userpart \"@did.\" domainpart\n  userpart = 1\\*CHAR\n  domainpart = 1\\*CHAR 1\\*(\".\" 1\\*char)\n  CHAR = %x01-7F\n

    The reason for not allowing a resources part is that DIDComm messages are addressed to the person/entity associated with the DID, and not to any particular device.

A receiving XMPP client shall identify an incoming XMPP message as a DIDComm message if the serviceEndpoint complies with the above. It shall pass any DIDComm message to its DIDComm agent.

The following is an example of a compliant DIDComm-over-XMPP service endpoint.

    {\n  \"service\": [{\n    \"id\": \"did:example:123456789abcdefghi#xmpp\",\n    \"type\": \"XmppService\",\n    \"serviceEndpoint\": \"xmpp:bob@did.bar.com\"\n  }]\n}\n
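As a non-normative aid, the endpoint rules above (the fixed "did." label, a multi-label domainpart, and the ban on a resource part) can be approximated with a regular expression. The pattern is one interpretation of the ABNF, not a normative grammar.

```python
import re

# xmpp: scheme, userpart, mandatory "@did." label, then a domainpart of at
# least two dot-separated labels; "/" is excluded so no resource part can match.
_ENDPOINT = re.compile(
    r"xmpp:"                  # fixed scheme
    r"[^@/]+"                 # userpart (no '@', no resource '/')
    r"@did\."                 # mandatory "did." prefix on the domain
    r"[^.@/]+(?:\.[^.@/]+)+"  # domainpart: e.g. "bar.com"
)

def is_didcomm_xmpp_endpoint(uri: str) -> bool:
    """True if uri looks like a DIDComm-over-XMPP serviceEndpoint."""
    return _ENDPOINT.fullmatch(uri) is not None
```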
    "},{"location":"features/0024-didcomm-over-xmpp/#userpart-generation-informative","title":"Userpart generation (informative)","text":"

There are multiple methods by which the userpart of the DIDComm-over-XMPP serviceEndpoint may be generated.

    Editor's note: Should the description below be interpreted as informative, or should there be any signalling to indicate which userpart-generating method was used?

    Method 1: Same userpart as for human user

    In this method, the userpart is the same as used for human-to-human XMPP-based chat, and the resource part is removed. Here is an example.

    Human-to-human XMPP address: xmpp:alice@foo.com/phone\n-->\nDIDComm-over-XMPP serviceEndpoint: xmpp:alice@did.foo.com\n
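The transformation shown above can be sketched as follows; `to_didcomm_endpoint` is an illustrative name, not part of any specified API.

```python
def to_didcomm_endpoint(xmpp_address: str) -> str:
    """Derive a DIDComm-over-XMPP serviceEndpoint from a human chat address
    by dropping the resource part and inserting the "did." label."""
    bare = xmpp_address.split("/", 1)[0]   # strip e.g. "/phone"
    userpart, domainpart = bare.removeprefix("xmpp:").split("@", 1)
    return f"xmpp:{userpart}@did.{domainpart}"
```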

The advantage of this method is its simplicity. An XMPP server needs to be configured only once to support this convention. No further registration actions are needed by any of the users for their XMPP clients.

    The disadvantage of this method is that it creates a strong correlation point, which may conflict with privacy requirements.

    Editor's note: More advantages or disadvantages?

A typical application of Method 1 is when there is an ongoing human-to-human (or human-to-bot) chat session that uses XMPP and the two parties want to set up a pairwise DID relationship. One can skip Step 0 "Invitation to Connect" (HIPE 0031) and immediately perform Step 1 "Connection Request".

    Method 2: Random userpart

In this method, the userpart is randomly generated by either the XMPP client or the XMPP server, and it is rotated on a regular basis. Here is an example.

    DIDComm-over-XMPP serviceEndpoint: xmpp:RllH91rcFdE@did.foo.com\n

The advantage of this method is low correlation and hence high privacy. If the DIDComm-over-XMPP serviceEndpoint is rotated after each XMPP exchange ("session"), then it cannot be correlated with subsequent XMPP exchanges.

The disadvantage of this method is its high operational complexity. It requires a client to keep a reserve of random XMPP addresses with the XMPP server. It significantly increases the routing tables of the XMPP server. It also places a burden on both DIDComm agents, because of the rapid rotation of DID Documents.

    Editor's note: More advantages or disadvantages?

    "},{"location":"features/0024-didcomm-over-xmpp/#reference","title":"Reference","text":"

For use of XMPP, it is recommended to use the Openfire Server open source project, including 2 plugins that enable server caching and message carbon copies. This will enable sending DIDComm messages to multiple endpoints of the same person.

    Editor's note: Add references to the 2 plugins

    XMPP servers handle messages sent to a user@host (or \"bare\") XMPP address with no resource by delivering that message only to the resource with the highest priority for the target user. Some server implementations, however, have chosen to send these messages to all of the online resources for the target user. If the target user is online with multiple resources when the original message is sent, a conversation ensues on one of the user's devices; if the user subsequently switches devices, parts of the conversation may end up on the alternate device, causing the user to be confused, misled, or annoyed.

To solve this, it is recommended to use the plugin "Message Carbons". It will ensure that all of the target user's devices get both sides of all conversations, in order to avoid user confusion. As a pleasant side-effect, information about the current state of a conversation is shared between all of a user's clients that implement this protocol.

    Editor's note: Add reference to \"Message Carbons\"

    "},{"location":"features/0024-didcomm-over-xmpp/#drawbacks","title":"Drawbacks","text":"

    Editor's note: Add drawbacks

    "},{"location":"features/0024-didcomm-over-xmpp/#rationale-and-alternatives","title":"Rationale and alternatives","text":"

All service endpoint examples from W3C's Data Model and Syntaxes for Decentralized Identifiers (DIDs) are HTTP. So if a consumer wanted to be reachable for incoming DIDComm messages, it would have to run an HTTP service on its consumer device and take action to open firewalls (and handle network address translation) towards that device. Such a scenario is technically unrealistic, not to mention its security implications.

    XMPP was specifically designed for incoming messages to consumer devices. XMPP's client-server structure overcomes any firewall issues.

    "},{"location":"features/0024-didcomm-over-xmpp/#prior-art","title":"Prior art","text":"

    Editor's note: Add prior art

    "},{"location":"features/0024-didcomm-over-xmpp/#unresolved-questions","title":"Unresolved questions","text":"

    Editor's note: Any unresolved questions?

    "},{"location":"features/0024-didcomm-over-xmpp/#security-considerations","title":"Security considerations","text":"

    Editor's note: Add security considerations

    "},{"location":"features/0024-didcomm-over-xmpp/#implementations","title":"Implementations","text":"

    The following lists the implementations (if any) of this RFC. Please do a pull request to add your implementation. If the implementation is open source, include a link to the repo or to the implementation within the repo. Please be consistent in the \"Name\" field so that a mechanical processing of the RFCs can generate a list of all RFCs supported by an Aries implementation.

    Name / Link Implementation Notes"},{"location":"features/0025-didcomm-transports/","title":"Aries RFC 0025: DIDComm Transports","text":""},{"location":"features/0025-didcomm-transports/#summary","title":"Summary","text":"

This RFC details how different transports are to be used for Agent Messaging.

    "},{"location":"features/0025-didcomm-transports/#motivation","title":"Motivation","text":"

Agent Messaging is designed to be transport independent, including message encryption and agent message format. Each transport does have unique features, and we need to standardize how the transport features are (or are not) applied.

    "},{"location":"features/0025-didcomm-transports/#reference","title":"Reference","text":"

    Standardized transport methods are detailed here.

    "},{"location":"features/0025-didcomm-transports/#https","title":"HTTP(S)","text":"

HTTP(S) is the first and most used transport for DID Communication, and it has received heavy attention.

While it is recognized that all DIDComm messages are secured through strong encryption, making HTTPS somewhat redundant, plain HTTP will likely cause issues with mobile clients because vendors (Apple and Google) are limiting application access to the HTTP protocol. For example, on iOS 9 or above where [ATS](https://developer.apple.com/documentation/bundleresources/information_property_list/nsapptransportsecurity) is in effect, any URLs using HTTP must have an exception hard coded in the application prior to uploading to the iTunes Store. This makes DIDComm unreliable, as the agent initiating the request provides an endpoint for communication that the mobile client must use. If the agent provides a URL using the HTTP protocol, it will likely be unusable due to low-level operating system limitations.

    As a best practice, when HTTP is used in situations where a mobile client (iOS or Android) may be involved it is highly recommended to use the HTTPS protocol, specifically TLS 1.2 or above.

    Other important notes on the subject of using HTTP(S) include:

    "},{"location":"features/0025-didcomm-transports/#known-implementations","title":"Known Implementations","text":"

    Aries Cloud Agent - Python Aries Framework - .NET

    "},{"location":"features/0025-didcomm-transports/#websocket","title":"Websocket","text":"

    Websockets are an efficient way to transmit multiple messages without the overhead of individual requests.

    "},{"location":"features/0025-didcomm-transports/#known-implementations_1","title":"Known Implementations","text":"

    Aries Cloud Agent - Python Aries Framework - .NET

    "},{"location":"features/0025-didcomm-transports/#xmpp","title":"XMPP","text":"

    XMPP is an effective transport for incoming DID-Communication messages directly to mobile agents, like smartphones.

    "},{"location":"features/0025-didcomm-transports/#known-implementations_2","title":"Known Implementations","text":"

    XMPP is implemented in the Openfire Server open source project. Integration with DID Communication agents is work-in-progress.

    "},{"location":"features/0025-didcomm-transports/#other-transports","title":"Other Transports","text":"

    Other transports may be used for Agent messaging. As they are developed, this RFC should be updated with appropriate standards for the transport method. A PR should be raised against this doc to facilitate discussion of the proposed additions and/or updates. New transports should highlight the common elements of the transport (such as an HTTP response code for the HTTP transport) and how they should be applied.

    "},{"location":"features/0025-didcomm-transports/#message-routing","title":"Message Routing","text":"

    The transports described here are used between two agents. In the case of message routing, a message will travel across multiple agent connections. Each intermediate agent (see Mediators and Relays) may use a different transport. These transport details are not made known to the sender, who only knows the keys of Mediators and the first endpoint of the route.

    "},{"location":"features/0025-didcomm-transports/#message-context","title":"Message Context","text":"

    The transport used from a previous agent can be recorded in the message trust context. This is particularly true of controlled network environments, where the transport may have additional security considerations not applicable on the public internet. The transport recorded in the message context only records the last transport used, and not any previous routing steps as described in the Message Routing section of this document.

    "},{"location":"features/0025-didcomm-transports/#transport-testing","title":"Transport Testing","text":"

    Transports which operate on IP based networks can be tested by an Agent Test Suite through a transport adapter. Some transports may be more difficult to test in a general sense, and may need specialized testing frameworks. An agent with a transport not yet supported by any testing suites may have non-transport testing performed by use of a routing agent.

    "},{"location":"features/0025-didcomm-transports/#drawbacks","title":"Drawbacks","text":"

    Setting transport standards may prevent some uses of each transport method.

    "},{"location":"features/0025-didcomm-transports/#rationale-and-alternatives","title":"Rationale and alternatives","text":""},{"location":"features/0025-didcomm-transports/#prior-art","title":"Prior art","text":"

    Several agent implementations already exist that follow similar conventions.

    "},{"location":"features/0025-didcomm-transports/#unresolved-questions","title":"Unresolved questions","text":""},{"location":"features/0025-didcomm-transports/#implementations","title":"Implementations","text":"

    The following lists the implementations (if any) of this RFC. Please do a pull request to add your implementation. If the implementation is open source, include a link to the repo or to the implementation within the repo. Please be consistent in the \"Name\" field so that a mechanical processing of the RFCs can generate a list of all RFCs supported by an Aries implementation.

    Name / Link Implementation Notes"},{"location":"features/0028-introduce/","title":"Aries RFC 0028: Introduce Protocol 1.0","text":""},{"location":"features/0028-introduce/#summary","title":"Summary","text":"

    Describes how a go-between can introduce two parties that it already knows, but that do not know each other.

    "},{"location":"features/0028-introduce/#change-log","title":"Change Log","text":""},{"location":"features/0028-introduce/#motivation","title":"Motivation","text":"

    Introductions are a fundamental activity in human relationships. They allow us to bootstrap contact information and trust. They are also a source of virality. We need a standard way to do introductions in an SSI ecosystem, and it needs to be flexible, secure, privacy-respecting, and well documented.

    "},{"location":"features/0028-introduce/#tutorial","title":"Tutorial","text":""},{"location":"features/0028-introduce/#name-and-version","title":"Name and Version","text":"

    This is the Introduce 1.0 protocol. It is uniquely identified by the URI:

    \"https://didcomm.org/introduce/1.0\"\n
    "},{"location":"features/0028-introduce/#key-concepts","title":"Key Concepts","text":""},{"location":"features/0028-introduce/#basic-use-case","title":"Basic Use Case","text":"

    Introductions target scenarios like this:

    Alice knows Bob and Carol, and can talk to each of them. She wants to introduce them in a way that allows a relationship to form.

    This use case is worded carefully; it is far more adaptable than it may appear at first glance. The Advanced Use Cases section later in the doc explores many variations. But the early part of this document focuses on the simplest reading of the use case.

    "},{"location":"features/0028-introduce/#goal","title":"Goal","text":"

    When we introduce two friends, we may hope that a new friendship ensues. But technically, the introduction is complete when we provide the opportunity for a relationship--what the parties do with that opportunity is a separate question.

    Likewise, the goal of our formal introduction protocol should be crisply constrained. Alice wants to gather consent and contact information from Bob and Carol; then she wants to invite them to connect. What they do with her invitation after that is not under her control, and is outside the scope of the introduction.

    This suggests an important insight about the relationship between the introduce protocol and the Out-Of-Band protocols: they overlap. The invitation to form a relationship, which begins the Out-Of-Band protocols, is also the final step in an introduction.

    Said differently, the goal of the introduce protocol is to start the Out-Of-Band protocols.

    "},{"location":"features/0028-introduce/#transferring-trust","title":"Transferring Trust","text":"

[TODO: talk about how humans do introductions instead of just introducing themselves to strangers because it raises trust. Example of Delta Airlines introducing you to Heathrow Airport; you trust that you're really talking to Heathrow based on Delta's assertion.]

    "},{"location":"features/0028-introduce/#roles","title":"Roles","text":"

    There are three [TODO:do we want to support introducing more than 2 at a time?] participants in the protocol, but only two roles.

    The introducer begins the process and must know the other two parties. Alice is the introducer in the diagram above. The other two participants are both introducees.

    "},{"location":"features/0028-introduce/#states","title":"States","text":"

    In a successful introduction, the introducer state progresses from [start] -> arranging -> delivering -> confirming (optional) -> [done]. Confirming is accomplished with an ACK to an introducee to let them know that their out-of-band message was forwarded.

    Meanwhile, each introducee progresses from [start] -> deciding -> waiting -> [done].
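The happy-path progressions above can be encoded as simple transition tables. Only the state names come from this document; the event names are illustrative assumptions.

```python
# Introducer: [start] -> arranging -> delivering -> confirming -> [done]
INTRODUCER = {
    ("start", "send_proposals"): "arranging",
    ("arranging", "all_approved"): "delivering",
    ("delivering", "forwarded_oob"): "confirming",  # confirming is optional
    ("confirming", "sent_ack"): "done",
}

# Introducee: [start] -> deciding -> waiting -> [done]
INTRODUCEE = {
    ("start", "received_proposal"): "deciding",
    ("deciding", "sent_response"): "waiting",
    ("waiting", "received_oob_or_ack"): "done",
}

def advance(machine: dict, state: str, event: str) -> str:
    """Apply one event; raises KeyError on an illegal transition."""
    return machine[(state, event)]
```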

Of course, errors and optional choices complicate the possibilities. The full state machines for each party are:

    The subtleties are explored in the Advanced Use Cases section.

    "},{"location":"features/0028-introduce/#messages","title":"Messages","text":""},{"location":"features/0028-introduce/#proposal","title":"proposal","text":"

    This message informs an introducee that an introducer wants to perform an introduction, and requests approval to do so. It works the same way that proposals do in double-opt-in introductions in the non-agent world:

    The DIDComm message looks like this:

    {\n  \"@type\": \"https://didcomm.org/introduce/1.0/proposal\",\n  \"@id\": \"df3b699d-3aa9-4fd0-bb67-49594da545bd\",\n  \"to\": {\n    \"name\": \"Bob\"\n  }\n}\n

    The to field contains an introducee descriptor that provides context about the introduction, helping the party receiving the proposal to evaluate whether they wish to accept it. Depending on how much context is available between introducer and introducee independent of the formal proposal message, this can be as simple as a name, or something fancier (see Advanced Use Cases below).

    "},{"location":"features/0028-introduce/#response","title":"response","text":"

    A standard example of the message that an introducee sends in response to an introduction proposal would be:

    {\n  \"@type\": \"https://didcomm.org/introduce/1.0/response\",\n  \"@id\": \"283e15b5-a3f7-43e7-bac8-b75e4e7a0a25\",\n  \"~thread\": {\"thid\": \"df3b699d-3aa9-4fd0-bb67-49594da545bd\"},\n  \"approve\": true,\n  \"oob-message\": {\n    \"@type\": \"https://didcomm.org/out-of-band/1.0/invitation\",\n    \"@id\": \"12345678900987654321\",\n    \"label\": \"Robert\",\n    \"goal\": \"To issue a Faber College Graduate credential\",\n    \"goal_code\": \"issue-vc\",\n    \"handshake_protocols\": [\n      \"https://didcomm.org/didexchange/1.0\",\n      \"https://didcomm.org/connections/1.0\"\n    ],\n    \"service\": [\"did:sov:LjgpST2rjsoxYegQDRm7EL\"]\n  }\n}\n

    A simpler response, also valid, might look like this:

    {\n  \"@type\": \"https://didcomm.org/introduce/1.0/response\",\n  \"@id\": \"283e15b5-a3f7-43e7-bac8-b75e4e7a0a25\",\n  \"~thread\": {\"thid\": \"df3b699d-3aa9-4fd0-bb67-49594da545bd\"},\n  \"approve\": true\n}\n

    The difference between the two forms is whether the response contains a valid out-of-band message (see RFC 0434). Normally, it should--but sometimes, an introducee may not be able to (or may not want to) share a DIDComm endpoint to facilitate the introduction. In such cases, the stripped-down variant may be the right choice. See the Advanced Use Cases section for more details.

    At least one of the more complete variants must be received by an introducer to successfully complete the introduction, because the final step in the protocol is to begin one of the Out-Of-Band protocols by forwarding the message from one introducee to the other.

"},{"location":"features/0028-introduce/#note-on-the-ouf-of-band-messages","title":"Note on the out-of-band messages","text":"

    These messages are not a member of the introductions/1.0 protocol; they are not even adopted. They belong to the out-of-band protocols, and are no different from the message that two parties would generate when one invites the other with no intermediary, except that:

    "},{"location":"features/0028-introduce/#request","title":"request","text":"

    This message asks for an introduction to be made. This message also uses the introducee descriptor block, to tell the potential introducer which introducee is the object of the sender's interest:

    {\n  \"@type\": \"https://didcomm.org/introduce/1.0/request\",\n  \"@id\": \"df3b699d-3aa9-4fd0-bb67-49594da545bd\",\n  \"please_introduce_to\": {\n    \"name\": \"Carol\",\n    \"description\": \"The woman who spoke after you at the PTA meeting last night.\",\n    \"expected\": true\n  },\n  \"nwise\": false,\n  \"~timing\": { \"expires_time\": \"2019-04-23 18:00Z\" }\n}\n

    The recipient can choose whether or not to honor it in their own way, on their own schedule. However, a problem_report could be returned if the recipient chooses not to honor it.

    "},{"location":"features/0028-introduce/#advanced-use-cases","title":"Advanced Use Cases","text":"

    Any of the parties can be an organization or thing instead of a person.

    Bob and Carol may actually know each other already, without Alice realizing it. The introduction may be rejected. It may create a new pairwise relationship between Bob and Carol that is entirely invisible to Alice. Or it may create an n-wise relationship in which Alice, Bob, and Carol know one another by the same identifiers.

    Some specific examples follow.

    "},{"location":"features/0028-introduce/#one-introducee-cant-do-didcomm","title":"One introducee can't do DIDComm","text":"

    The Out-Of-Band Protocols allow the invited party to be onboarded (acquire software and an agent) as part of the workflow.

    Introductions support this use case, too. In such a case, the introducer sends a standard proposal to the introducee that DOES have DIDComm capabilities, but conveys the equivalent of a proposal over a non-DIDComm channel to the other introducee. The response from the DIDComm-capable introducee must include an out-of-band message with a deep link for onboarding, and this is sent to the introducee that needs onboarding.

    "},{"location":"features/0028-introduce/#neither-introducee-can-do-didcomm","title":"Neither introducee can do DIDComm","text":"

    In this case, the introducer first goes through onboarding via one of the Out-Of-Band protocols with one introducee. Once that introducee can do DIDComm, the previous workflow is used.

    "},{"location":"features/0028-introduce/#introducer-doesnt-have-didcomm-capabilities","title":"Introducer doesn't have DIDComm capabilities","text":"

    This might happen if AliceCorp wants to connect two of its customers. AliceCorp may not be able to talk to either of its customers over DIDComm channels, but it doesn't know whether they can talk to each other that way.

    In this case, the introducer conveys the same information that a proposal would contain, using non-DIDComm channels. As long as one of the introducees sends back some kind of response that includes approval and an out-of-band message, the message can be delivered. The entire interaction is DIDComm-less.

    "},{"location":"features/0028-introduce/#one-introducee-has-a-public-did-with-a-standing-invitation","title":"One introducee has a public DID with a standing invitation","text":"

    This might happen if Alice wants to introduce Bob to CarolCorp, and CarolCorp has published a connection-invitation for general use.

    As introducer, Alice simply has to forward CarolCorp's connection-invitation to Bob. No proposal message needs to be sent to CarolCorp; this is the skip proposal event shown in the introducer's state machine.

    "},{"location":"features/0028-introduce/#introducee-requests-introduction","title":"Introducee requests introduction","text":"

    Alice still acts as the introducer, but Bob now asks Alice to introduce him to a candidate introducee discovered a priori with the help-me-discover protocol:

    {\n  \"@type\": \"https://didcomm.org/introduce/1.0/request\",\n  \"@id\": \"df3b699d-3aa9-4fd0-bb67-49594da545bd\",\n  \"please_introduce_to\": {\n      \"discovered\": \"didcomm:///5f2396b5-d84e-689e-78a1-2fa2248f03e4/.candidates%7B.id+%3D%3D%3D+%22Carol%22%7D\"\n  },\n  \"~timing\": { \"expires_time\": \"2019-04-23 18:00Z\" }\n}\n

    This request message includes a discovered property with a linkable message path that uniquely identifies the candidate introducee.

    "},{"location":"features/0028-introduce/#requesting-confirmation","title":"Requesting confirmation","text":"

    [TODO: A field in the response where an introducee asks to be notified that the introduction has been made?]

    "},{"location":"features/0028-introduce/#other-stuff","title":"Other stuff","text":"

    [TODO: What if Alice is introducing Bob, a public entity with no connection to her, to Carol, a private person? Can she just relay Bob's invitation that he published on his website? Are there security or privacy implications? What if she is introducing 2 public entities and has a connection to neither?]

    "},{"location":"features/0028-introduce/#reference","title":"Reference","text":""},{"location":"features/0028-introduce/#proposal_1","title":"proposal","text":"

    In the tutorial narrative, only a simple proposal was presented. A fancier version might be:

    {\n  \"@type\": \"https://didcomm.org/introduce/1.0/proposal\",\n  \"@id\": \"df3b699d-3aa9-4fd0-bb67-49594da545bd\",\n  \"to\": {\n    \"name\": \"Kaiser Hospital\",\n    \"description\": \"Where I want to schedule your MRI. NOTE: NOT the one downtown!\",\n    \"description~l10n\": { \"locale\": \"en\", \"es\": \"Donde se toma el MRI; no en el centro\"},\n    \"where\": \"@34.0291739,-118.3589892,12z\",\n    \"img~attach\": {\n      \"description\": \"view from Marina Blvd\",\n      \"mime-type\": \"image/png\",\n      \"filename\": \"kaiser_culver_google.jpg\",\n      \"content\": {\n        \"link\": \"http://bit.ly/2FKkby3\",\n        \"byte_count\": 47738,\n        \"sha256\": \"cd5f24949f453385c89180207ddb1523640ac8565a214d1d37c4014910a4593e\"\n      }\n    },\n    \"proposed\": false\n  },\n  \"nwise\": true,\n  \"~timing\": { \"expires_time\": \"2019-04-23 18:00Z\" }\n}\n

    This adds a number of fields to the introducee descriptor. Each is optional and may be appropriate in certain circumstances. Most should be self-explanatory, but the proposed field deserves special comment. This tells whether the described introducee has received a proposal of their own, or will be introduced without that step.

    This example also adds the nwise field to the proposal. When nwise is present and its value is true, the proposal is to establish an nwise relationship in which the introducer participates, as opposed to a pairwise relationship in which only the introducees participate.

    [TODO: do we care about having a response signed? Security? MITM?]

    "},{"location":"features/0028-introduce/#errors","title":"Errors","text":"

    [TODO: What can go wrong.]

    "},{"location":"features/0028-introduce/#localization","title":"Localization","text":"

    [TODO: the description field in an introducee descriptor. Error codes/catalog.]

    "},{"location":"features/0028-introduce/#drawbacks","title":"Drawbacks","text":"

    Why should we not do this?

    "},{"location":"features/0028-introduce/#rationale-and-alternatives","title":"Rationale and alternatives","text":""},{"location":"features/0028-introduce/#prior-art","title":"Prior art","text":"

    Discuss prior art, both the good and the bad, in relation to this proposal. A few examples of what this can include are:

This section is intended to encourage you as an author to think about the lessons learned from other implementers, and to provide readers of your proposal with a fuller picture. If there is no prior art, that is fine - your ideas are interesting to us whether they are brand new or an adaptation from other communities.

Note that while precedent set by other communities is some motivation, it does not on its own motivate an enhancement proposal here. Please also take into consideration that Indy sometimes intentionally diverges from common identity features.

    "},{"location":"features/0028-introduce/#unresolved-questions","title":"Unresolved questions","text":""},{"location":"features/0028-introduce/#implementations","title":"Implementations","text":"

    The following lists the implementations (if any) of this RFC. Please do a pull request to add your implementation. If the implementation is open source, include a link to the repo or to the implementation within the repo. Please be consistent in the \"Name\" field so that a mechanical processing of the RFCs can generate a list of all RFCs supported by an Aries implementation.

    Name / Link Implementation Notes"},{"location":"features/0030-sync-connection/","title":"Aries RFC 0030: Sync Connection Protocol 1.0","text":""},{"location":"features/0030-sync-connection/#summary","title":"Summary","text":"

    Define a set of non-centralized protocols (that is, ones that do not involve a common store of state like a blockchain), whereby parties using peer DIDs can synchronize the state of their shared relationship by direct communication with one another.

    "},{"location":"features/0030-sync-connection/#change-log","title":"Change Log","text":""},{"location":"features/0030-sync-connection/#motivation","title":"Motivation","text":"

    For Alice and Bob to interact, they must establish and maintain state. This state includes all the information in a DID Document: endpoint, keys, and associated authorizations.

    The DID exchange protocol describes how these DID Docs are initially exchanged as a relationship is built. However, its mandate ends when a connection is established. This RFC focuses on how peers maintain their relationship thereafter, as DID docs evolve.

    "},{"location":"features/0030-sync-connection/#tutorial","title":"Tutorial","text":"

    Note 1: This RFC assumes you are thoroughly familiar with terminology and constructs from the peer DID method spec. Check there if you need background.

    Note 2: Most protocols between identity owners deal only with messages that cross a domain boundary--what Alice sends to Bob, or vice versa. What Alice does internally is generally none of Bob's business, since interoperability is a function of messages that are passed to external parties, not events that happen inside one's own domain. However, this protocol has some special requirements. Alice may have multiple agents, and Bob's behavior must account for the possibility that each of them has a different view of current relationship state. Alice has a responsibility to share and harmonize the view of state among her agents. Bob doesn't need to know exactly how she does it--but he does need to know that she's doing it, somehow--and he may need to cooperate with Alice to intelligently resolve divergences. For this reason, we describe the protocol as if it involved message passing within a domain in addition to message passing across domains. This is a simplification. The true, precise requirement for compliance is that implementers must pass messages across domains as described here, and they must appear to an outside observer as if they were passing messages within their domain as the protocol stipulates--but if they achieve the intra-domain results using some other mechanism besides DIDComm message passing, that is fine.

    "},{"location":"features/0030-sync-connection/#name-and-version","title":"Name and Version","text":"

    This RFC defines the sync_connection protocol, version 1.x, as identified by the following PIURI:

    https://didcomm.org/sync_connection/1.0\n

    Of course, subsequent evolutions of the protocol will replace 1.0 with an appropriate update per semver rules.

    A related, minor protocol is also defined in subdocs of this RFC:

    "},{"location":"features/0030-sync-connection/#roles","title":"Roles","text":"

    The only role defined in this protocol is peer. However, see this note in the peer DID method spec for some subtleties.

    "},{"location":"features/0030-sync-connection/#states","title":"States","text":"

    This is a steady-state protocol, meaning that the state of participants does not change. Instead, all participants are continuously in a syncing state.

    "},{"location":"features/0030-sync-connection/#messages","title":"Messages","text":""},{"location":"features/0030-sync-connection/#sync_state","title":"sync_state","text":"

    This message announces that the sender wants to synchronize state with the recipient. This could happen because the sender suspects they are out of sync, or because the sender wants to change the state by announcing new, never-before-seen information. The recipient can be another agent within the same sovereign domain, or it can be an agent on the other side of the relationship. A sample looks like this:

    {\n  \"@type\": \"https://didcomm.org/sync-connection/1.0/sync_state\",\n  \"@id\": \"e61586dd-f50e-4ed5-a389-716a49817207\",\n  \"for\": \"did:peer:11-479cbc07c3f991725836a3aa2a581ca2029198aa420b9d99bc0e131d9f3e2cbe\",\n  \"base_hash\": \"d48f058771956a305e12a3b062a3ac81bd8653d7b1a88dd07db8f663f37bf8e0\",\n  \"base_hash_time\": \"2019-07-23 18:05:06.123Z\",\n  \"deltas\": [\n    {\n      \"id\": \"040aaa5e-1a27-40d8-8d53-13a00b82d235\",\n      \"change\": \"ewogICJwdWJsaWNLZXkiOiBbCiAgICB...ozd1htcVBWcGZrY0pDd0R3biIKICAgIH0KICBdCn0=\",\n      \"by\": [ {\"key\": \"H3C2AVvL\", \"sig\": \"if8ooA+32YZc4SQBvIDDY9tgTa...i4VvND87PUqq5/0vsNFEGIIEDA==\"} ],\n      \"when\": \"2019-07-18T15:49:22.03Z\"\n    }\n  ]\n}\n

    Note that the values in the change and sig fields have been shortened for readability.

    The properties in this message include:

* for: Identifies which state is being synchronized. * base_hash: Identifies a shared state against which deltas should be applied. See State Hashes for more details. * base_hash_time: An ISO 8601-formatted UTC timestamp, identifying when the sender believes that the base hash became the current state. This value need not be highly accurate, and different agents in Alice and Bob's ecosystem may have different opinions about an appropriate timestamp for the selected base hash. Like timestamps in email headers, it merely provides a rough approximation of timeframe. * deltas: Gives a list of deltas that should be applied to the DID doc, beginning at the specified state.
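As a sketch only, these fields might be assembled like this in Python (the helper name `build_sync_state` and its parameters are illustrative, not part of any Aries codebase; a real agent would compute `base_hash` from its actual delta history):

```python
import uuid
from datetime import datetime, timezone

def build_sync_state(for_did, base_hash, deltas):
    """Assemble a sync_state message carrying the fields described above (sketch)."""
    # Timestamp format mirrors the sample message: "YYYY-MM-DD HH:MM:SS.mmmZ".
    # Per the text, this value only needs to be a rough approximation.
    now = datetime.now(timezone.utc).strftime("%Y-%m-%d %H:%M:%S.%f")[:-3] + "Z"
    return {
        "@type": "https://didcomm.org/sync-connection/1.0/sync_state",
        "@id": str(uuid.uuid4()),  # fresh message id
        "for": for_did,
        "base_hash": base_hash,
        "base_hash_time": now,
        "deltas": deltas,
    }
```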

    When this message is received, the following processing happens:

    "},{"location":"features/0030-sync-connection/#state-hashes","title":"State Hashes","text":"

    To reliably describe the state of a DID doc at any given moment, we need a quick way to characterize its content. We could do this with a merkle tree, but the strong ordering of that structure is problematic--different participants may receive different deltas in different orders, and this is okay. What matters is whether they have applied the same set of deltas.

    To achieve this goal, the id properties of all received deltas are sorted and concatenated, and then the string undergoes a SHA256 hash. This produces a state hash.
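The hashing rule above can be sketched in a few lines of Python (the function name is hypothetical):

```python
import hashlib

def state_hash(delta_ids):
    """Sort the ids of all received deltas, concatenate them, and SHA256 the result."""
    joined = "".join(sorted(delta_ids))
    return hashlib.sha256(joined.encode("utf-8")).hexdigest()
```

Because the ids are sorted before hashing, two agents that applied the same set of deltas in different orders produce the same state hash, which is exactly the property the protocol needs.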

    "},{"location":"features/0030-sync-connection/#best-practices","title":"Best Practices","text":"

    The following best practices will dramatically improve the robustness of state synchronization, both within and across domains. Software implementing this protocol is not required to do any of these things, but they are strongly recommended.

    "},{"location":"features/0030-sync-connection/#the-state-decorator","title":"The ~state decorator","text":"

    Agents using peer DIDs should attach the ~state decorator to messages to help each other discover when state synchronization is needed. This decorator has the following format:

    \"~state\": [\n  {\"did\": \"<my did>\", \"state_hash\": \"<my state hash>\"},\n  {\"did\": \"<your did>\", \"state_hash\": \"<your state hash>\"}\n]\n

    In n-wise relationships, there may be more than 2 entries in the list.

    The goal is to always describe the current known state hashes for each domain. It is also best practice for the recipient of the message to send a sync_state message back to the sender any time it detects a discrepancy.
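Detecting such a discrepancy might look like the following sketch (the helper name and the dict-based local view are assumptions for illustration):

```python
def find_discrepancies(state_decorator, local_hashes):
    """Return the DIDs whose reported state hash differs from our local view.

    state_decorator: the ~state list of {"did": ..., "state_hash": ...} entries
    local_hashes: dict mapping DID -> locally computed state hash
    """
    return [
        entry["did"]
        for entry in state_decorator
        if entry["did"] in local_hashes
        and entry["state_hash"] != local_hashes[entry["did"]]
    ]
```

Each DID returned would warrant a sync_state message back to the sender, per the best practice above.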

    "},{"location":"features/0030-sync-connection/#pending-commits","title":"Pending Commits","text":"

    Agents should never commit to a change of state until they know that at least one other agent (on either side of the relationship) agrees to the change. This will significantly decrease the likelihood of merge conflicts. For example, an agent that wants to rotate a key should report the key rotation to someone, and receive an ACK, before it commits to use the new key. This guarantees that there will be gravitas and confirmation of the change, and is a reasonable requirement, since a change that nobody knows about is useless, anyway.

    "},{"location":"features/0030-sync-connection/#routing-cloud-agent-rules","title":"Routing (Cloud) Agent Rules","text":"

    It is best practice for routing agents (typically in the cloud) to enforce the following rules:

    "},{"location":"features/0030-sync-connection/#proactive-sync","title":"Proactive Sync","text":"

    Any time that an agent has reason to suspect that it may be out of sync, it should attempt to reconcile. For example, if a mobile device has been turned off for an extended period of time, it should check with other agents to see if state has evolved, once it is able to communicate again.

    "},{"location":"features/0030-sync-connection/#test-cases","title":"Test Cases","text":"

    Because this protocol encapsulates a lot of potential complexity, and many corner cases, it is particularly important that implementations exercise the full range of scenarios in the Test Cases doc. Community members are encouraged to submit new test cases if they find situations that are not covered.

    "},{"location":"features/0030-sync-connection/#reference","title":"Reference","text":""},{"location":"features/0030-sync-connection/#state-and-sequence-rules","title":"State and Sequence Rules","text":"

    [TODO: create state machine matrices that show which messages can be sent in which states, causing which transitions]

    "},{"location":"features/0030-sync-connection/#message-type-detail","title":"Message Type Detail","text":"

    [TODO: explain every possible field of every possible message type]

    "},{"location":"features/0030-sync-connection/#localized-message-catalog","title":"Localized Message Catalog","text":"

    [TODO: define some localized strings that could be used with these messages, in errors or at other generally useful points?]

    "},{"location":"features/0030-sync-connection/#implementations","title":"Implementations","text":"

    The following lists the implementations (if any) of this RFC. Please do a pull request to add your implementation. If the implementation is open source, include a link to the repo or to the implementation within the repo. Please be consistent in the \"Name\" field so that a mechanical processing of the RFCs can generate a list of all RFCs supported by an Aries implementation.

    Name / Link Implementation Notes"},{"location":"features/0030-sync-connection/test_cases/","title":"Test Cases for Sync Connection Protocol","text":""},{"location":"features/0030-sync-connection/test_cases/#given","title":"Given","text":"

    Let us assume that Alice and Bob each have 4 agents (A.1-A.4 and B.1-B.4, respectively), and that each of these agents possesses one key pair that's authorized to authenticate and do certain things in the DID Doc.

A.1 and B.1 are routing (cloud) agents, while A.2-4 and B.2-4 run on edge devices that are imperfectly connected. A.1 and B.1 do not appear in the authentication section of their respective DID Docs, and thus cannot log in on Alice and Bob's behalf.

    Let us further assume that Alice and Bob each have two \"recovery keys\": A.5 and A.6; B.5 and B.6. These keys are not held by agents, but are printed on paper and held in a vault, or are sharded to friends. They are highly privileged but very difficult to use, since they would have to be digitized or unsharded and given to an agent before they would be useful.

    \"Admin\" operations like adding keys and granting privileges to them require either one of the privileged recovery keys, or 2 of the other agent keys to agree.

    Let us further assume that the initial state of Alice's domain, as described above, is known as A.state[0], and that Bob's state is B.state[0].

    These states may be represented by the following authorization section of each DID Doc:

    [TODO]

    "},{"location":"features/0030-sync-connection/test_cases/#scenarios-each-starts-over-at-the-initial-conditions","title":"Scenarios (each starts over at the initial conditions)","text":"
    1. A.1 attempts to rotate its key by sending a sync_state message to A.2. Expected outcome: Should receive ACK, and A.2's state should be updated. Once A.1 receives the ACK, it should commit the pending change in its own key. Until it receives the ACK, it should NOT commit the pending change.

    2. Like #1, except that message goes to B.1 and B.1's state is what should be updated.

    3. A.1 attempts to send a message to B.1, using the ~relstate decorator, claiming states with hash(A.state[0]) and hash(B.state[0]). Expected outcome: B.1 accepts the message.

    4. As #3, except that A.1 claims the current states are random hashes. Expected outcome: B.1 sends back a problem report, plus two sync_state messages (one with who = \"me\" and one with who = \"you\"). Each has an empty deltas array and base_state = the correct base state hash.

    5. A.1 attempts to rotate the key for A.2 by sending a sync_state message to any other agent. Expected outcome: change is rejected with a problem report that points out that A.1 is not authorized to rotate any key other than itself.

    "},{"location":"features/0030-sync-connection/abandon-connection-protocol/","title":"Abandon Connection Protocol 1.0","text":""},{"location":"features/0030-sync-connection/abandon-connection-protocol/#summary","title":"Summary","text":"

    Describes how parties using peer DIDs can notify one another that they are abandoning the connection.

    "},{"location":"features/0030-sync-connection/abandon-connection-protocol/#motivation","title":"Motivation","text":"

    We need a way to tell another party that we are abandoning the connection. This is not strictly required, but it is good hygiene.

    "},{"location":"features/0030-sync-connection/abandon-connection-protocol/#tutorial","title":"Tutorial","text":""},{"location":"features/0030-sync-connection/abandon-connection-protocol/#name-and-version","title":"Name and Version","text":"

    This RFC defines the abandon_connection protocol, version 1.x, as identified by the following PIURI:

    https://didcomm.org/abandon_connection/1.0\n

    Of course, subsequent evolutions of the protocol will replace 1.0 with an appropriate update per semver rules.

    "},{"location":"features/0030-sync-connection/abandon-connection-protocol/#roles","title":"Roles","text":"

    This is a classic one-step notification, so it uses the predefined roles of notifier and notified.

    "},{"location":"features/0030-sync-connection/abandon-connection-protocol/#state-machines","title":"State Machines","text":"

    No state changes during this protocol, although overarching state could change once it completes. Therefore no state machines are required.

    "},{"location":"features/0030-sync-connection/abandon-connection-protocol/#messages","title":"Messages","text":""},{"location":"features/0030-sync-connection/abandon-connection-protocol/#announce","title":"announce","text":"

    This message is used to announce that a party is abandoning the relationship. In a self-sovereign paradigm, abandoning a relationship can be done unilaterally, and does not require formal announcement. Indeed, sometimes a formal announcement is impossible, if one of the parties is offline. So while using this message is encouraged and best practice, it is not mandatory.

    An announce message from Alice to Bob looks like this:

    {\n  \"@type\": \"https://didcomm.org/abandon_connection/1.0/announce\",\n  \"@id\": \"c17147d2-ada6-4d3c-a489-dc1e1bf778ab\"\n}\n

    If Bob receives a message like this, he should assume that Alice no longer considers herself part of \"us\", and take appropriate action. This could include destroying data about Alice that he has accumulated over the course of their relationship, removing her peer DID and its public key(s) and endpoints from his wallet, and so forth. The nature of the relationship, the need for a historical audit trail, regulatory requirements, and many other factors may influence what's appropriate; the protocol simply requires that the message be understood to have permanent termination semantics.
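One way Bob's agent might react can be sketched as follows (`handle_announce`, the `sender_did` parameter, and the dict-based wallet are assumptions for illustration; as noted above, a real agent might archive records rather than delete them, depending on policy):

```python
ANNOUNCE_TYPE = "https://didcomm.org/abandon_connection/1.0/announce"

def handle_announce(message, sender_did, wallet):
    """React to an announce message by forgetting the abandoned relationship (sketch).

    wallet is assumed to map peer DIDs to stored relationship state
    (keys, endpoints, accumulated data).
    """
    if message.get("@type") != ANNOUNCE_TYPE:
        return False
    # Permanent termination semantics: the sender no longer considers
    # herself part of "us", so drop her peer DID and associated state.
    wallet.pop(sender_did, None)
    return True
```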

    "},{"location":"features/0031-discover-features/","title":"Aries RFC 0031: Discover Features Protocol 1.0","text":""},{"location":"features/0031-discover-features/#summary","title":"Summary","text":"

Describes how agents can query one another to discover which features they support, and to what extent.

    "},{"location":"features/0031-discover-features/#motivation","title":"Motivation","text":"

Though some agents will support just one protocol and will be statically configured to interact with just one other party, many exciting uses of agents are more dynamic and unpredictable. When Alice and Bob meet, they won't know in advance which features are supported by one another's agents. They need a way to find out.

    "},{"location":"features/0031-discover-features/#tutorial","title":"Tutorial","text":"

    This RFC introduces a protocol for discussing the protocols an agent can handle. The identifier for the message family used by this protocol is discover-features, and the fully qualified URI for its definition is:

    https://didcomm.org/discover-features/1.0\n

    This protocol is now superseded by v2.0 in RFC 0557. Prefer the new version where practical.

    "},{"location":"features/0031-discover-features/#roles","title":"Roles","text":"

    There are two roles in the discover-features protocol: requester and responder. The requester asks the responder about the protocols it supports, and the responder answers. Each role uses a single message type.

    "},{"location":"features/0031-discover-features/#states","title":"States","text":"

    This is a classic two-step request~response interaction, so it uses the predefined state machines for any requester and responder:

    "},{"location":"features/0031-discover-features/#messages","title":"Messages","text":""},{"location":"features/0031-discover-features/#query-message-type","title":"query Message Type","text":"

    A discover-features/query message looks like this:

    {\n  \"@type\": \"https://didcomm.org/discover-features/1.0/query\",\n  \"@id\": \"yWd8wfYzhmuXX3hmLNaV5bVbAjbWaU\",\n  \"query\": \"https://didcomm.org/tictactoe/1.*\",\n  \"comment\": \"I'm wondering if we can play tic-tac-toe...\"\n}\n

    Query messages say, \"Please tell me what your capabilities are with respect to the protocols that match this string.\" This particular example asks if another agent knows any 1.x versions of the tictactoe protocol.

    The query field may use the * wildcard. By itself, a query with just the wildcard says, \"I'm interested in anything you want to share with me.\" But usually, this wildcard will be to match a prefix that's a little more specific, as in the example that matches any 1.x version.

Any agent may send another agent this message type at any time. Implementers of agents that intend to support dynamic relationships and rich features are strongly encouraged to implement support for this message, as it is likely to be among the first messages exchanged with a stranger.
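The wildcard semantics described above could be checked like this (a sketch; it assumes * is the only special character in a query and every other character is literal, including the dots in version numbers):

```python
import re

def matches_query(query, pid):
    """Test whether a protocol id matches a discover-features query string."""
    # Escape everything, then turn the escaped wildcard back into ".*"
    pattern = re.escape(query).replace(r"\*", ".*")
    return re.fullmatch(pattern, pid) is not None
```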

    "},{"location":"features/0031-discover-features/#disclose-message-type","title":"disclose Message Type","text":"

    A discover-features/disclose message looks like this:

    {\n  \"@type\": \"https://didcomm.org/discover-features/1.0/disclose\",\n  \"~thread\": { \"thid\": \"yWd8wfYzhmuXX3hmLNaV5bVbAjbWaU\" },\n  \"protocols\": [\n    {\n      \"pid\": \"https://didcomm.org/tictactoe/1.0\",\n      \"roles\": [\"player\"]\n    }\n  ]\n}\n

    The protocols field is a JSON array of protocol support descriptor objects that match the query. Each descriptor has a pid that contains a protocol version (fully qualified message family identifier such as https://didcomm.org/tictactoe/1.0), plus a roles array that enumerates the roles the responding agent can play in the associated protocol.

    Response messages say, \"Here are some protocols I support that matched your query, and some things I can do with each one.\"

    "},{"location":"features/0031-discover-features/#sparse-responses","title":"Sparse Responses","text":"

    Responses do not have to contain exhaustive detail. For example, the following response is probably just as good:

    {\n  \"@type\": \"https://didcomm.org/discover-features/1.0/disclose\",\n  \"~thread\": { \"thid\": \"yWd8wfYzhmuXX3hmLNaV5bVbAjbWaU\" },\n  \"protocols\": [\n    {\"pid\": \"https://didcomm.org/tictactoe/1.0\"}\n  ]\n}\n

    The reason why less detail probably suffices is that agents do not need to know everything about one another's implementations in order to start an interaction--usually the flow will organically reveal what's needed. For example, the outcome message in the tictactoe protocol isn't needed until the end, and is optional anyway. Alice can start a tictactoe game with Bob and will eventually see whether he has the right idea about outcome messages.

The missing roles field in this response does not say, \"I support no roles in this protocol.\" It says, \"I support the protocol but I'm providing no detail about specific roles.\"

Even an empty protocols array does not say, \"I support no protocols that match your query.\" It says, \"I'm not telling you that I support any protocols that match your query.\" An agent might not tell another that it supports a protocol for various reasons, including: the trust that it imputes to the other party based on cumulative interactions so far, whether it's in the middle of upgrading a plugin, whether it's currently under high load, and so forth. And responses to a discover-features request are not guaranteed to be true forever; agents can be upgraded or downgraded, although they probably won't churn in their protocol support from moment to moment.
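A responder's sparse reply might be assembled like the following sketch (`build_disclose` and the `supported` table are hypothetical; shell-style matching from Python's fnmatch stands in for the query wildcard, and roles are omitted by default for a sparser response, consistent with selective disclosure):

```python
from fnmatch import fnmatchcase

def build_disclose(query_msg, supported, reveal_roles=False):
    """Build a disclose reply to a query message (sketch).

    supported: dict mapping protocol pid -> list of roles this agent can play.
    Callers can pre-filter `supported` per relationship before disclosing.
    """
    protocols = []
    for pid, roles in supported.items():
        if fnmatchcase(pid, query_msg["query"]):
            entry = {"pid": pid}
            if reveal_roles:
                entry["roles"] = roles
            protocols.append(entry)
    return {
        "@type": "https://didcomm.org/discover-features/1.0/disclose",
        "~thread": {"thid": query_msg["@id"]},  # answer on the query's thread
        "protocols": protocols,
    }
```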

    "},{"location":"features/0031-discover-features/#privacy-considerations","title":"Privacy Considerations","text":"

    Because the regex in a request message can be very inclusive, the discover-features protocol could be used to mine information suitable for agent fingerprinting, in much the same way that browser fingerprinting works. This is antithetical to the ethos of our ecosystem, and represents bad behavior. Agents should use discover-features to answer legitimate questions, and not to build detailed profiles of one another. However, fingerprinting may be attempted anyway.

    For agents that want to maintain privacy, several best practices are recommended:

    "},{"location":"features/0031-discover-features/#follow-selective-disclosure","title":"Follow selective disclosure.","text":"

    Only reveal supported features based on trust in the relationship. Even if you support a protocol, you may not wish to use it in every relationship. Don't tell others about protocols you do not plan to use with them.

    Patterns are easier to see in larger data samples. However, a pattern of ultra-minimal data is also a problem, so use good judgment about how forthcoming to be.

    "},{"location":"features/0031-discover-features/#vary-the-format-of-responses","title":"Vary the format of responses.","text":"

    Sometimes, you might prettify your agent plaintext message one way, sometimes another.

    "},{"location":"features/0031-discover-features/#vary-the-order-of-items-in-the-protocols-array","title":"Vary the order of items in the protocols array.","text":"

    If more than one key matches a query, do not always return them in alphabetical order or version order. If you do return them in order, do not always return them in ascending order.

    "},{"location":"features/0031-discover-features/#consider-adding-some-spurious-details","title":"Consider adding some spurious details.","text":"

    If a query could match multiple message families, then occasionally you might add some made-up message family names as matches. If a regex allows multiple versions of a protocol, then sometimes you might use some made-up versions. And sometimes not. (Doing this too aggressively might reveal your agent implementation, so use sparingly.)

    "},{"location":"features/0031-discover-features/#vary-how-you-query-too","title":"Vary how you query, too.","text":"

    How you ask questions may also be fingerprintable.

    "},{"location":"features/0031-discover-features/#reference","title":"Reference","text":""},{"location":"features/0031-discover-features/#localization","title":"Localization","text":"

    The query message contains a comment field that is localizable. This field is optional and may not be used often, but when present, it provides a human-friendly justification for the query. An agent that consults its master before answering a query could present the content of this field as an explanation of the request.

    All message types in this family thus have the following implicit decorator:

    {\n\n  \"~l10n\": {\n    \"locales\": { \"en\": [\"comment\"] },\n    \"catalogs\": [\"https://github.com/hyperledger/aries-rfcs/blob/a9ad499../../features/0031-discover-features/catalog.json\"]\n  }\n\n}\n
    "},{"location":"features/0031-discover-features/#message-catalog","title":"Message Catalog","text":"

    As shown in the above ~l10n decorator, all agents using this protocol have a simple message catalog in scope. This allows agents to send problem-reports about discover-features issues. The catalog looks like this (see catalog.json):

    {\n  \"query-too-intrusive\": {\n    \"en\": \"Protocol query asked me to reveal too much information.\"\n  }\n}\n

    For more information, see the localization RFC.

    "},{"location":"features/0031-discover-features/#unresolved-questions","title":"Unresolved questions","text":""},{"location":"features/0031-discover-features/#implementations","title":"Implementations","text":"

    The following lists the implementations (if any) of this RFC. Please do a pull request to add your implementation. If the implementation is open source, include a link to the repo or to the implementation within the repo. Please be consistent in the \"Name\" field so that a mechanical processing of the RFCs can generate a list of all RFCs supported by an Aries implementation.

    Name / Link Implementation Notes Streetcred.id Commercial mobile and web app built using Aries Framework - .NET MISSING test results Aries Protocol Test Suite"},{"location":"features/0032-message-timing/","title":"Aries RFC 0032: Message Timing","text":""},{"location":"features/0032-message-timing/#summary","title":"Summary","text":"

    Explain how timing of agent messages can be communicated and constrained.

    "},{"location":"features/0032-message-timing/#motivation","title":"Motivation","text":"

    Many timing considerations influence asynchronous messaging delivery. We need a standard way to talk about them.

    "},{"location":"features/0032-message-timing/#tutorial","title":"Tutorial","text":"

    This RFC introduces a decorator to communicate about timing of messages. It is compatible with, but independent from, conventions around date and time fields in messages.

    Timing attributes of messages can be described with the ~timing decorator. It offers a number of optional subfields:

    \"~timing\": {\n  \"in_time\":  \"2019-01-23 18:03:27.123Z\",\n  \"out_time\": \"2019-01-23 18:03:27.123Z\",\n  \"stale_time\": \"2019-01-24 18:25Z\",\n  \"expires_time\": \"2019-01-25 18:25Z\",\n  \"delay_milli\": 12345,\n  \"wait_until_time\": \"2019-01-24 00:00Z\"\n}\n

    The meaning of these fields is:

    All information in these fields should be considered best-effort. That is, the sender makes a best effort to communicate accurately, and the receiver makes a best effort to use the information intelligently. In this respect, these values are like timestamps in email headers--they are generally useful, but not expected to be perfect. Receivers are not required to honor them exactly.

    An agent may ignore the ~timing decorator entirely or implement the ~timing decorator and silently ignore any of the fields it chooses not to support.
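As a concrete illustration of this best-effort policy, here is a minimal sketch of a receiver that honors expires_time and wait_until_time while silently ignoring the other fields. The function name and return convention are illustrative assumptions, not part of the RFC.

```python
from datetime import datetime, timezone

def parse_ts(value):
    """Best-effort parse of the UTC timestamps used by ~timing."""
    return datetime.fromisoformat(value.replace("Z", "+00:00"))

def should_process(message, now=None):
    """Decide what to do with a message based on its ~timing decorator.

    Returns (process, delay_until): drop messages past expires_time,
    and hold messages carrying a future wait_until_time.
    """
    timing = message.get("~timing", {})
    now = now or datetime.now(timezone.utc)
    expires = timing.get("expires_time")
    if expires and now > parse_ts(expires):
        return (False, None)        # sender considers the message invalid now
    wait = timing.get("wait_until_time")
    if wait:
        wait_dt = parse_ts(wait)
        if now < wait_dt:
            return (True, wait_dt)  # process, but not before this time
    return (True, None)
```

A receiver that does not implement wait_until_time could simply ignore the second element of the tuple, in keeping with the "silently ignore" allowance above.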

    "},{"location":"features/0032-message-timing/#timing-in-routing","title":"Timing in Routing","text":"

    Most usage of the ~timing decorator is likely to focus on application-oriented messages processed at the edge. in_time and out_time, for example, are mainly useful so Bob can know how long Alice took to ponder her response to his love letter. In onion routing, where one edge agent prepares all layers of the forward wrapping, it makes no sense to apply them to forward messages. However, if a relay is composing new forward messages dynamically, these fields could be used to measure the delay imposed by that relay. All the other fields have meaning in routing.

    "},{"location":"features/0032-message-timing/#timing-and-threads","title":"Timing and Threads","text":"

    When a message is a reply, then in_time on an application-focused message is useful. However, out_time and all other fields are meaningful regardless of whether threading is active.

    "},{"location":"features/0032-message-timing/#reference","title":"Reference","text":""},{"location":"features/0032-message-timing/#implementations","title":"Implementations","text":"

    The following lists the implementations (if any) of this RFC. Please do a pull request to add your implementation. If the implementation is open source, include a link to the repo or to the implementation within the repo. Please be consistent in the \"Name\" field so that a mechanical processing of the RFCs can generate a list of all RFCs supported by an Aries implementation.

    Name / Link Implementation Notes RFC 0193: Coin Flip Protocol Uses ~timing.expires_time to time out each step of the coin flip."},{"location":"features/0034-message-tracing/","title":"Aries RFC 0034: Message Tracing","text":""},{"location":"features/0034-message-tracing/#summary","title":"Summary","text":"

    Define a mechanism to track what happens in complex DIDComm interactions, to make troubleshooting and auditing easier.

    "},{"location":"features/0034-message-tracing/#motivation","title":"Motivation","text":"

    Anyone who has searched trash and spam folders for a missing email knows that when messages don't elicit the expected reaction, troubleshooting can be tricky. Aries-style agent-to-agent communication is likely to manifest many of the same challenges as email, in that it may be routed to multiple places, by multiple parties, with incomplete visibility into the meaning or state associated with individual messages. Aries's communication is even more opaque than ordinary email, in that it is transport agnostic and encrypted...

    In a future world where DIDComm technology is ubiquitous, people may send messages from one agent to another, and wonder why nothing happened, or why a particular error is reported. They will need answers.

    Also, developers and testers who are working with DIDComm-based protocols need a way to debug.

    "},{"location":"features/0034-message-tracing/#tutorial","title":"Tutorial","text":""},{"location":"features/0034-message-tracing/#basics","title":"Basics","text":"

    Many systems that deliver physical packages offer a \"certified delivery\" or \"return receipt requested\" feature. To activate the feature, a sender affixes a special label to the package, announcing who should be notified, and how. Handlers of the package then cooperate to satisfy the request.

    DIDComm thread tracing works on a similar principle. When tracing is desired, a sender adds to the normal message metadata a special decorator that the message handler can see. If the handler notices the decorator and chooses to honor the request, it emits a notification to provide tracing.

    The main complication is that DIDComm message routing uses nested layers of encryption. What is visible to one message handler may not be visible to another. Therefore, the decorator must be repeated in every layer of nesting where tracing is required. Although this makes tracing somewhat verbose, it also provides precision; troubleshooting can focus only on one problematic section of an overall route, and can degrade privacy selectively.

    "},{"location":"features/0034-message-tracing/#decorator","title":"Decorator","text":"

    Tracing is requested by decorating the JSON plaintext of a DIDComm message (which will often be a forward message, but could also be the terminal message unpacked and handled at its final destination) with the ~trace attribute. Here is the simplest possible example:

    This example asks the handler of the message to perform an HTTP POST of a trace report about the message to the URI http://example.com/tracer.

    The service listening for trace reports--called the trace sink--doesn't need to have any special characteristics, other than support for HTTP 1.1 or SMTP (for mailto: URIs) and the ability to receive small plaintext payloads rapidly. It may use TLS, but it is not required to. If TLS is used, the parties that submit reports should accept the certificate without strong checking, even if it is expired or invalid. The rationale for this choice is:

    1. It is the sender's trust in the tracing service, not the handler's trust, that matters.
    2. Tracing is inherently unsafe and non-privacy-preserving, in that it introduces an eavesdropper and a channel with uncertain security guarantees. Trying to secure the eavesdropper is a waste of effort.
    3. Introducing a strong dependency on PKI-based trust into a protocol that exists to improve PKI feels wrong-headed.
    4. When tracing is needed, the last thing we should do is create another fragility to troubleshoot.
    "},{"location":"features/0034-message-tracing/#trace-reports","title":"Trace Reports","text":"

    The body of the HTTP request (the trace report) is a JSON document that looks like this:

    "},{"location":"features/0034-message-tracing/#subtleties","title":"Subtleties","text":""},{"location":"features/0034-message-tracing/#message-ids","title":"Message IDs","text":"

    If messages have a different @id attribute at each hop in a delivery chain, then a trace of the message at hop 1 and a trace of the message at hop 2 will not appear to have any connection when the reports are analyzed together.

    To solve this problem, traced messages use an ID convention that permits ordering. Assume that the inner application message has a base ID, X. Containing messages (e.g., forward messages) have IDs in the form X.1, X.2, X.3, and so forth -- where numbers represent the order in which the messages will be handled. Notice in the sample trace report above that the for_id of the trace report message is 98fd8d72-80f6-4419-abc2-c65ea39d0f38.1. This implies that it is tracing the first hop of inner, application message with id 98fd8d72-80f6-4419-abc2-c65ea39d0f38.
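The ID convention above can be used to reassemble a coherent hop sequence from reports that arrive out of order. The following sketch is an assumption-laden illustration (function names and report field names like for_id and time are taken from this RFC's examples; the sort strategy is the author's suggestion, not mandated):

```python
def hop_key(for_id):
    """Split a traced-message ID like 'X.2' into (base_id, hop_number).

    Hop 0 covers both the bare inner-message ID X and the sender's
    own 'X.0' report on it.
    """
    base, dot, hop = for_id.rpartition(".")
    if dot and hop.isdigit():
        return (base, int(hop))
    return (for_id, 0)

def order_reports(reports):
    """Order trace reports by hop number, then timestamp, since the
    reports themselves may be delivered out of sequence."""
    return sorted(reports, key=lambda r: (hop_key(r["for_id"]), r["time"]))
```

Given the sample above, `hop_key("98fd8d72-80f6-4419-abc2-c65ea39d0f38.1")` yields the base ID plus hop 1, so that report sorts immediately after the sender's hop-0 report for the same message.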

    "},{"location":"features/0034-message-tracing/#delegation","title":"Delegation","text":"

    Sometimes, a message is sent before it is fully wrapped for all hops in its route. This can happen, for example, if Alice's edge agent delegates to Alice's cloud agent the message preparation for later stages of routing.

    In such cases, tracing for the delegated portion of the route should default to inherit the tracing choice of the portion of the route already seen. To override this, the ~trace decorator placed on the initial message from Alice's edge to Alice's cloud can include the optional full-route attribute, with its value set to true or false.

    This tells handlers that are wrapping subsequent portions of a routed message to either propagate or truncate the tracing request in any new forward messages they compose.

    "},{"location":"features/0034-message-tracing/#timing-and-sequencing","title":"Timing and Sequencing","text":"

    Each trace report includes a UTC timestamp from the reporting handler. This timestamp should be computed at the instant a trace report is prepared--not when it is queued or delivered. Even so, it offers only a rough approximation of when something happened. Since system clocks from handlers may not be synchronized, there is no guarantee of precision or of agreement among timestamps.

    In addition, trace reports may be submitted asynchronously with respect to the message handling they document. Thus, a trace report could arrive out of sequence, even if the handling it describes occurred correctly. This makes it vital to order trace reports according to the ID sequencing convention described above.

    "},{"location":"features/0034-message-tracing/#tracing-the-original-sender","title":"Tracing the original sender","text":"

    The original sender may not run a message handling routine that triggers tracing. However, as a best practice, senders that enable tracing should send a trace report when they send, so the beginning of a routing sequence is documented. This report should reference X.0 in for_id, where X is the ID of the inner application message for the final recipient.

    "},{"location":"features/0034-message-tracing/#handling-a-message-more-than-once","title":"Handling a message more than once","text":"

    A particular handler may wish to document multiple phases of processing for a message. For example, it may choose to emit a trace report when the message is received, and again when the message is \"done.\" In such cases, the proper sequence of the two messages, both of which will have the same for_id attribute, is given by the relative sequence of the timestamps.

    Processing time for each handler--or for phases within a handler--is given by the elapsed_milli attribute.

    "},{"location":"features/0034-message-tracing/#privacy","title":"Privacy","text":"

    Tracing inherently compromises privacy. It is totally voluntary, and handlers should not honor trace requests if they have reason to believe they have been inserted for nefarious purposes. However, the fact that the trace reports can only be requested by the same entities that send the messages, and that they are encrypted in the same way as any other plaintext that a handler eventually sees, puts privacy controls in the hands of the ultimate sender and receiver.

    "},{"location":"features/0034-message-tracing/#tracing-entire-threads","title":"Tracing entire threads","text":"

    If a sender wishes to enable tracing for an entire multi-step interaction between multiple parties, the full_thread attribute can be included on an inner application message, with its value set to true. This signals to recipients that the sender wishes to have tracing turned on until the interaction is complete. Recipients may or may not honor such requests. If they don't, they may choose to send an error to the sender explaining why they are not honoring the request.

    "},{"location":"features/0034-message-tracing/#reference","title":"Reference","text":""},{"location":"features/0034-message-tracing/#trace-decorator-trace","title":"Trace decorator (~trace)","text":"

    Value is any URI. At least http, https, and mailto should be supported. If mail is sent, the message subject should be \"trace report for ?\", where ? is the value of the for_id attribute in the report, and the email body should contain the plaintext of the report, as utf8.

    "},{"location":"features/0034-message-tracing/#trace-report-attributes","title":"Trace Report Attributes","text":""},{"location":"features/0034-message-tracing/#drawbacks","title":"Drawbacks","text":"

    Tracing makes network communication quite noisy. It imposes a burden on message handlers. It may also incur performance penalties.

    "},{"location":"features/0034-message-tracing/#rationale-and-alternatives","title":"Rationale and alternatives","text":"

    Wireshark and similar network monitoring tools could give some visibility into agent-to-agent interactions. However, it would be hard to make sense of bytes on the wire, due to encryption and the way individual messages may be divorced from routing or thread context.

    Proprietary tracing could be added to the agents built by particular vendors. However, this would have limited utility if an interaction involved software not made by that vendor.

    "},{"location":"features/0034-message-tracing/#prior-art","title":"Prior art","text":"

    The message threading RFC and the error reporting RFC touch on similar subjects, but are distinct.

    "},{"location":"features/0034-message-tracing/#unresolved-questions","title":"Unresolved questions","text":"

    None.

    "},{"location":"features/0034-message-tracing/#implementations","title":"Implementations","text":"

    The following lists the implementations (if any) of this RFC. Please do a pull request to add your implementation. If the implementation is open source, include a link to the repo or to the implementation within the repo. Please be consistent in the \"Name\" field so that a mechanical processing of the RFCs can generate a list of all RFCs supported by an Aries implementation.

    Name / Link Implementation Notes"},{"location":"features/0035-report-problem/","title":"Aries RFC 0035: Report Problem Protocol 1.0","text":""},{"location":"features/0035-report-problem/#summary","title":"Summary","text":"

    Describes how to report errors and warnings in a powerful, interoperable way. All implementations of SSI agent or hub technology SHOULD implement this RFC.

    "},{"location":"features/0035-report-problem/#change-log","title":"Change Log","text":""},{"location":"features/0035-report-problem/#motivation","title":"Motivation","text":"

    Effective reporting of errors and warnings is difficult in any system, and particularly so in decentralized systems such as remotely collaborating agents. We need to surface problems, and their supporting context, to people who want to know about them (and perhaps separately, to people who can actually fix them). This is especially challenging when a problem is detected well after and well away from its cause, and when multiple parties may need to cooperate on a solution.

    Interoperability is perhaps more crucial with problem reporting than with any other aspect of DIDComm, since an agent written by one developer MUST be able to understand an error reported by an entirely different team. Notice how different this is from normal enterprise software development, where developers only need to worry about understanding their own errors.

    The goal of this RFC is to provide agents with the tools and techniques needed to address these challenges. It makes two key contributions:

    "},{"location":"features/0035-report-problem/#tutorial","title":"Tutorial","text":""},{"location":"features/0035-report-problem/#error-vs-warning-vs-problem","title":"\"Error\" vs. \"Warning\" vs. \"Problem\"","text":"

    The distinction between \"error\" and \"warning\" is often thought of as one of severity -- errors are really bad, and warnings are only somewhat bad. This is reinforced by the way logging platforms assign numeric constants to ERROR vs. WARN log events, and by the way compilers let warnings be suppressed but refuse to ignore errors.

    However, any cybersecurity professional will tell you that warnings sometimes signal deep and scary problems that should not be ignored, and most veteran programmers can tell war stories that reinforce this wisdom. A deeper analysis of warnings reveals that what truly differentiates them from errors is not their lesser severity, but rather their greater ambiguity. Warnings are problems that require human judgment to evaluate, whereas errors are unambiguously bad.

    The mechanism for reporting problems in DIDComm cannot make a simplistic assumption that all agents are configured to run with a particular verbosity or debug level. Each agent must let other agents decide for themselves, based on policy or user preference, what to do about various issues. For this reason, we use the generic term \"problem\" instead of the more specific and semantically opinionated term \"error\" (or \"warning\") to describe the general situation we're addressing. \"Problem\" includes any deviation from the so-called \"happy path\" of an interaction. This could include situations where the severity is unknown and must be evaluated by a human, as well as surprising events (e.g., a decision by a human to alter the basis for in-flight messaging by moving from one device to another).

    "},{"location":"features/0035-report-problem/#specific-challenges","title":"Specific Challenges","text":"

    All of the following challenges need to be addressed.

    1. Report problems to external parties interacting with us. For example, AliceCorp has to be able to tell Bob that it can\u2019t issue the credential he requested because his payment didn\u2019t go through.
    2. Report problems to other entities inside our own domain. For example, AliceCorp\u2019s agent #1 has to be able to report to AliceCorp agent #2 that it is out of disk space.
    3. Report in a way that provides human beings with useful context and guidance to troubleshoot. Most developers know of cases where error reporting was technically correct but completely useless. Bad communication about problems is one of the most common causes of UX debacles. Humans using agents will speak different languages, have differing degrees of technical competence, and have different software and hardware resources. They may lack context about what their agents are doing, such as when a DIDComm interaction occurs as a result of scheduled or policy-driven actions. This makes context and guidance crucial.
    4. Map a problem backward in time, space, and circumstances, so when it is studied, its original context is available. This is particularly difficult in DIDComm, which is transport-agnostic and inherently asynchronous, and which takes place on an inconsistently connected digital landscape.
    5. Support localization using techniques in the l10n RFC.
    6. Provide consistent, locale-independent problem codes, not just localized text, so problems can be researched in knowledge bases, on Stack Overflow, and in other internet forums, regardless of the natural language in which a message displays. This also helps meaning remain stable as wording is tweaked.
    7. Provide a registry of well-known problem codes that are carefully defined and localized, to maximize shared understanding. Maintaining an exhaustive list of all possible things that can go wrong with all possible agents in all possible interactions is completely unrealistic. However, it may be possible to maintain a curated subset. While we can't enumerate everything that can go wrong in a financial transaction, a code for \"insufficient funds\" might have near-universal usefulness. Compare the POSIX error inventory in errno.h.
    8. Facilitate automated problem handling by agents, not just manual handling by humans. Perfect automation may be impossible, but high levels of automation should be doable.
    9. Clarify how the problem affects an in-progress interaction. Does a failure to process payment reset the interaction to the very beginning of the protocol, or just back to the previous step, where payment was requested? This requires problems to be matched in a formal way to the state machine of a protocol underway.
    "},{"location":"features/0035-report-problem/#the-report-problem-protocol","title":"The report-problem protocol","text":"

    Reporting problems uses a simple one-step notification protocol. Its official PIURI is:

    https://didcomm.org/report-problem/1.0\n

    The protocol includes the standard notifier and notified roles. It defines a single message type problem-report, introduced here.

    A problem-report communicates about a problem when an agent-to-agent message is possible and a recipient for the problem report is known. This covers, for example, cases where a Sender's message gets to an intended Recipient, but the Recipient is unable to process the message for some reason and wants to notify the Sender. It may also be relevant in cases where the recipient of the problem-report is not a message Sender. Of course, a reporting technique that depends on message delivery doesn't apply when the error reporter can't identify or communicate with the proper recipient.

    "},{"location":"features/0035-report-problem/#the-problem-report-message-type","title":"The problem-report message type","text":"

    Only description.code is required, but a maximally verbose problem-report could contain all of the following:

    {\n  \"@type\"            : \"https://didcomm.org/report-problem/1.0/problem-report\",\n  \"@id\"              : \"an identifier that can be used to discuss this error message\",\n  \"~thread\"          : \"info about the threading context in which the error occurred (if any)\",\n  \"description\"      : { \"en\": \"localized message\", \"code\": \"symbolic-name-for-error\" },\n  \"problem_items\"    : [ {\"<item descrip>\": \"value\"} ],\n  \"who_retries\"      : \"enum: you | me | both | none\",\n  \"fix_hint\"         : { \"en\": \"localized error-instance-specific hint of how to fix issue\"},\n  \"impact\"           : \"enum: message | thread | connection\",\n  \"where\"            : \"enum: you | me | other - enum: cloud | edge | wire | agency | ..\",\n  \"noticed_time\"     : \"<time>\",\n  \"tracking_uri\"     : \"\",\n  \"escalation_uri\"   : \"\"\n}\n
    "},{"location":"features/0035-report-problem/#field-reference","title":"Field Reference","text":"

    Some fields will be relevant and useful in many use cases, but not all. Including empty or null fields is discouraged; best practice is to include as many fields as you can fill with useful data, and to omit the others.
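The omit-empty-fields practice can be sketched as a small builder. This is an illustrative helper, not part of the RFC; the function name and keyword-argument convention are assumptions.

```python
def problem_report(code, *, en=None, thread=None, **optional):
    """Assemble a problem-report, keeping only fields with useful data.

    Only description.code is required; None/empty optional fields
    are omitted rather than sent as nulls.
    """
    msg = {
        "@type": "https://didcomm.org/report-problem/1.0/problem-report",
        "description": {"code": code},
    }
    if en:
        msg["description"]["en"] = en
    if thread:
        msg["~thread"] = thread
    for field, value in optional.items():
        if value not in (None, "", [], {}):
            msg[field] = value
    return msg
```

For example, `problem_report("cant-find-route", who_retries="you", fix_hint=None)` emits who_retries but drops the empty fix_hint entirely.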

    @id: An identifier for this message, as described in the message threading RFC. This decorator is STRONGLY recommended, because it enables a dialog about the problem itself in a branched thread (e.g., suggest a retry, report a resolution, ask for more information).

    ~thread: A thread decorator that places the problem-report into a thread context. If the problem was triggered in the processing of a message, then the triggering message is the head of a new thread of which the problem report is the second member (~thread.sender_order = 0). In such cases, the ~thread.pthid (parent thread id) here would be the @id of the triggering message. If the problem-report is unrelated to a message, the thread decorator is mostly redundant, as ~thread.thid must equal @id.

    description: Contains human-readable, localized alternative string(s) that explain the problem. It is highly recommended that the message follow the guidance in the l10n RFC, allowing the error to be searched on the web and documented formally.

    description.code: Required. Contains the code that indicates the problem being communicated. Codes are described in protocol RFCs and other relevant places. New Codes SHOULD follow the Problem Code naming convention detailed in the DIDComm v2 spec.

    problem_items: A list of one or more key/value pairs that are parameters about the problem. Some examples might be:

    All items should have in common the fact that they exemplify the problem described by the code (e.g., each is an invalid param, or each is an unresponsive URL, or each is an unrecognized crypto algorithm, etc).

    Each item in the list must be a tagged pair (a JSON {key: value} pair), where the key names the parameter or item and the value is the actual problem text/number/value. For example, to report that two different endpoints listed in party B\u2019s DID Doc failed to respond when they were contacted, the code might contain \"endpoint-not-responding\", and the problem_items property might contain:

    [\n  {\"endpoint1\": \"http://agency.com/main/endpoint\"},\n  {\"endpoint2\": \"http://failover.agency.com/main/endpoint\"}\n]\n

    who_retries: value is the string \"you\", the string \"me\", the string \"both\", or the string \"none\". This property tells whether a problem is considered permanent and who the sender of the problem report believes should have the responsibility to resolve it by retrying. Rules about how many times to retry, and who does the retry, and under what circumstances, are not enforceable and not expressed in the message text. This property is thus not a strong commitment to retry--only a recommendation of who should retry, with the assumption that retries will often occur if they make sense.

    [TODO: figure out how to identify parties > 2 in n-wise interaction]

    fix_hint: Contains human-readable, localized suggestions about how to fix this instance of the problem. If present, this should be viewed as overriding general hints found in a message catalog.

    impact: A string describing the breadth of impact of the problem. An enumerated type:

    where: A string that describes where the error happened, from the perspective of the reporter, and that uses the \"you\" or \"me\" or \"other\" prefix, followed by a suffix like \"cloud\", \"edge\", \"wire\", \"agency\", etc.

    noticed_time: Standard time entry (ISO-8601 UTC with at least day precision and up to millisecond precision) of when the problem was detected.

    [TODO: should we refer to timestamps in a standard way (\"date\"? \"time\"? \"timestamp\"? \"when\"?)]

    tracking_uri: Provides a URI that allows the recipient to track the status of the error. For example, if the error is related to a service that is down, the URI could be used to monitor the status of the service, so its return to operational status could be automatically discovered.

    escalation_uri: Provides a URI where additional help on the issue can be received. For example, this might be a \"mailto\" and email address for the Help Desk associated with a currently down service.

    "},{"location":"features/0035-report-problem/#sample","title":"Sample","text":"
    {\n  \"@type\": \"https://didcomm.org/notification/1.0/problem-report\",\n  \"@id\": \"7c9de639-c51c-4d60-ab95-103fa613c805\",\n  \"~thread\": {\n    \"pthid\": \"1e513ad4-48c9-444e-9e7e-5b8b45c5e325\",\n    \"sender_order\": 1\n  },\n  \"~l10n\"            : {\"catalog\": \"https://didcomm.org/error-codes\"},\n  \"description\"      : \"Unable to find a route to the specified recipient.\",\n  \"description~l10n\" : {\"code\": \"cant-find-route\" },\n  \"problem_items\"    : [\n      { \"recipient\": \"did:sov:C805sNYhMrjHiqZDTUASHg\" }\n  ],\n  \"who_retries\"      : \"you\",\n  \"impact\"           : \"message\",\n  \"noticed_time\"     : \"2019-05-27T18:23:06Z\"\n}\n
    "},{"location":"features/0035-report-problem/#categorized-examples-of-errors-and-current-best-practice-handling","title":"Categorized Examples of Errors and (current) Best Practice Handling","text":"

    The following is a categorization of a number of examples of errors and (current) Best Practice handling for those types of errors. The new problem-report message type is used for some of these categories, but not all.

    "},{"location":"features/0035-report-problem/#unknown-error","title":"Unknown Error","text":"

    Errors of a known error code will be processed according to the understanding of what the code means. Support of a protocol includes support and proper processing of the error codes detailed within that protocol.

    Any unknown error code that starts with w. in the DIDComm v2 style may be considered a warning, and the flow of the active protocol SHOULD continue. All other unknown error codes SHOULD be considered to be an end to the active protocol.
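
The fallback rule above can be sketched as a small classifier. The `Disposition` enum and helper below are illustrative, not defined by the RFC; known codes would be dispatched to their own handlers before this fallback applies:

```python
from enum import Enum

class Disposition(Enum):
    CONTINUE = "continue"   # warning: the active protocol flow continues
    ABANDON = "abandon"     # treated as an end to the active protocol

def classify_unknown_code(code: str) -> Disposition:
    """Fallback for problem-report codes the agent does not recognize."""
    if code.startswith("w."):           # DIDComm v2 style warning prefix
        return Disposition.CONTINUE
    return Disposition.ABANDON
```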

    "},{"location":"features/0035-report-problem/#error-while-processing-a-received-message","title":"Error While Processing a Received Message","text":"

    An Agent Message sent by a Sender and received by its intended Recipient cannot be processed.

    "},{"location":"features/0035-report-problem/#examples","title":"Examples:","text":""},{"location":"features/0035-report-problem/#recommended-handling","title":"Recommended Handling","text":"

    The Recipient should send the Sender a problem-report Agent Message detailing the issue.

    The last example deserves an additional comment about whether there should be a response sent at all. Particularly in cases where trust in the message sender is low (e.g. when establishing the connection), an Agent may not want to send any response to a rejected message as even a negative response could reveal correlatable information. That said, if a response is provided, the problem-report message type should be used.

    "},{"location":"features/0035-report-problem/#error-while-routing-a-message","title":"Error While Routing A Message","text":"

    An Agent in the routing flow of getting a message from a Sender to the Agent Message Recipient cannot route the message.

    "},{"location":"features/0035-report-problem/#examples_1","title":"Examples:","text":""},{"location":"features/0035-report-problem/#recommended-handling_1","title":"Recommended Handling","text":"

    If the Sender is known to the Agent having the problem, send a problem-report Agent Message detailing at least that a blocking issue occurred, and if relevant (such as in the first example), some details about the issue. If the message is valid, and the problem is related to a lack of resources (e.g. the second issue), also send a problem-report message to an escalation point within the domain.

    Alternatively, the capabilities described in 0034: Message Tracing could be used to inform others of the fact that an issue occurred.

    "},{"location":"features/0035-report-problem/#messages-triggered-about-a-transaction","title":"Messages Triggered about a Transaction","text":""},{"location":"features/0035-report-problem/#examples_2","title":"Examples:","text":""},{"location":"features/0035-report-problem/#recommended-handling_2","title":"Recommended Handling","text":"

    These types of error scenarios represent a gray area between using the generic problem-report message format and using a message type that is part of the current transaction's message family. For example, \"Your credential has been revoked\" might well be included as part of the (TBD) standard Credentials Exchange message family. The \"more information\" example might be a generic error across a number of message families (and so should trigger a problem-report), or it might be specific to the ongoing thread (e.g. Credential Exchange) and so be better handled by a defined message within that thread and that message family.

    The current advice on which to use in a given scenario is to consider how the recipient will handle the message. If the handler will need to process the response in a specific way for the transaction, then a message family-specific message type should be used. If the error is cross-cutting such that a common handler can be used across transaction contexts, then a generic problem-report should be used.

    \"Current advice\" implies that as we gain more experience with Agent To Agent messaging, the recommendations could get more precise.

    "},{"location":"features/0035-report-problem/#messaging-channel-settings","title":"Messaging Channel Settings","text":""},{"location":"features/0035-report-problem/#examples_3","title":"Examples","text":""},{"location":"features/0035-report-problem/#recommended-handling_3","title":"Recommended Handling","text":"

    These types of messages might or might not be triggered during the receipt and processing of a message, but either way, they are unrelated to the message and are really about the communication channel between the entities. In such cases, the recommended approach is to use a (TBD) standard message family to notify and rectify the issue (e.g. change the attributes of a connection). The definition of that message family is outside the scope of this RFC.

    "},{"location":"features/0035-report-problem/#timeouts","title":"Timeouts","text":"

    A special generic class of errors that deserves mention is the timeout, where a Sender sends out a message and does not receive back a response in a given time. In a distributed environment such as Agent to Agent messaging, these are particularly likely - and particularly difficult to handle gracefully. The potential reasons for timeouts are numerous:

    "},{"location":"features/0035-report-problem/#recommended-handling_4","title":"Recommended Handling","text":"

    Appropriate timeout handling is extremely contextual, with two key parameters driving the handling - the length of the waiting period before triggering the timeout and the response to a triggered timeout.

    The time to wait for a response should be dynamic by at least type of message, and ideally learned through experience. Messages requiring human interaction should have an inherently longer timeout period than a message expected to be handled automatically. Beyond that, it would be good for Agents to track response times by message type (and perhaps other parameters) and adjust timeouts to match observed patterns.
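
One way to "learn through experience" as suggested above is an exponentially weighted moving average per message type, in the spirit of TCP's retransmission-timeout estimator. This class and its parameter values are hypothetical, not from the RFC:

```python
# Hypothetical adaptive-timeout tracker: learns a per-message-type timeout
# from observed response times; all defaults here are illustrative.
class TimeoutEstimator:
    def __init__(self, initial=30.0, alpha=0.125, margin=4.0):
        self.initial = initial      # fallback for message types never observed
        self.alpha = alpha          # EWMA smoothing factor
        self.margin = margin        # multiplier over the smoothed estimate
        self._avg = {}

    def observe(self, msg_type, seconds):
        prev = self._avg.get(msg_type)
        self._avg[msg_type] = (
            seconds if prev is None else (1 - self.alpha) * prev + self.alpha * seconds
        )

    def timeout(self, msg_type):
        avg = self._avg.get(msg_type)
        return self.initial if avg is None else self.margin * avg
```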

    When a timeout is received there are three possible responses, handled automatically or based on feedback from the user:

    An automated \"wait longer\" response might be used when first interacting with a particular message type or identity, as the response cadence is learned.

    If the decision is to retry, it would be good to have support in areas covered by other RFCs. First, it would be helpful (and perhaps necessary) for the threading decorator to support the concept of retries, so that a Recipient would know when a message is a retry of an already sent message. Next, on \"forward\" message types, Agents might want to know that a message was a retry such that they can consider refreshing DIDDoc/encryption key cache before sending the message along. It could also be helpful for a retry to interact with the Tracing facility so that more information could be gathered about why messages are not getting to their destination.

    Excessive retrying can exacerbate an existing system issue. If the reason for the timeout is because there is a \"too many messages to be processed\" situation, then sending retries simply makes the problem worse. As such, a reasonable backoff strategy should be used (e.g. exponentially increasing times between retries). As well, a strategy used at Uber is to flag and handle retries differently from regular messages. The analogy with Uber is not pure - that is a single-vendor system - but the notion of flagging retries such that retry messages can be handled differently is a good approach.
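
The backoff strategy and retry-flagging idea above might be sketched as follows. The jitter parameters are illustrative, and the `~retry` decorator shown is hypothetical, not a standard Aries decorator:

```python
import random

# Hypothetical backoff schedule: exponential growth with jitter and a cap.
def backoff_delays(base=1.0, factor=2.0, cap=60.0, attempts=5):
    for n in range(attempts):
        delay = min(cap, base * factor ** n)
        yield delay * random.uniform(0.5, 1.0)   # jitter avoids synchronized retries

# Illustrative only: "~retry" is NOT a standard decorator; it sketches the
# idea of flagging retries so handlers can treat them differently.
def mark_retry(message, attempt):
    return {**message, "~retry": {"attempt": attempt}}
```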

    "},{"location":"features/0035-report-problem/#caveat-problem-report-loops","title":"Caveat: Problem Report Loops","text":"

    Implementers should consider and mitigate the risk of an endless loop of error messages. For example:

    "},{"location":"features/0035-report-problem/#recommended-handling_5","title":"Recommended Handling","text":"

    How agents mitigate the risk of this problem is implementation specific, balancing loop-tracking overhead versus the likelihood of occurrence. For example, an agent implementation might have a counter on a connection object that is incremented when certain types of Problem Report messages are sent on that connection, and reset when any other message is sent. The agent could stop sending those types of Problem Report messages after the counter reaches a given value.
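
The counter-based mitigation described above might look like this in outline; the connection class and threshold are illustrative, not specified by the RFC:

```python
# Minimal sketch, assuming a per-connection state object and an
# arbitrary threshold of 3 problem-report messages.
class Connection:
    PROBLEM_REPORT_LIMIT = 3

    def __init__(self):
        self._pr_count = 0

    def should_send_problem_report(self):
        """Allow a problem-report only while under the per-connection limit."""
        if self._pr_count >= self.PROBLEM_REPORT_LIMIT:
            return False
        self._pr_count += 1
        return True

    def on_other_message_sent(self):
        self._pr_count = 0   # any non-problem-report traffic resets the counter
```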

    "},{"location":"features/0035-report-problem/#reference","title":"Reference","text":"

    TBD

    "},{"location":"features/0035-report-problem/#drawbacks","title":"Drawbacks","text":"

    In many cases, a specific problem-report message is necessary, so formalizing the format of the message is preferred over leaving it to individual implementations. There is no drawback to specifying that format now.

    As experience is gained with handling distributed errors, the recommendations provided in this RFC will have to evolve.

    "},{"location":"features/0035-report-problem/#rationale-and-alternatives","title":"Rationale and alternatives","text":"

    The error type specification mechanism builds on the same approach used by the message type specifications. It's possible that additional capabilities could be gained by making runtime use of the error type specification - e.g. for the broader internationalization of the error messages.

    The main alternative to a formally defined error type format is leaving it to individual implementations to handle error notifications, which will not lead to an effective solution.

    "},{"location":"features/0035-report-problem/#prior-art","title":"Prior art","text":"

    A brief search was done for error handling in messaging systems with few useful results found. Perhaps the best was the Uber article referenced in the \"Timeout\" section above.

    "},{"location":"features/0035-report-problem/#unresolved-questions","title":"Unresolved questions","text":""},{"location":"features/0035-report-problem/#implementations","title":"Implementations","text":"

    The following lists the implementations (if any) of this RFC. Please do a pull request to add your implementation. If the implementation is open source, include a link to the repo or to the implementation within the repo. Please be consistent in the \"Name\" field so that a mechanical processing of the RFCs can generate a list of all RFCs supported by an Aries implementation.

    Name / Link Implementation Notes RFC 0036: Issue Credential Protocol The problem-report message is adopted by this protocol. MISSING test results RFC 0037: Present Proof Protocol The problem-report message is adopted by this protocol. MISSING test results Trinsic.id Commercial mobile and web app built using Aries Framework - .NET MISSING test results"},{"location":"features/0036-issue-credential/","title":"Aries RFC 0036: Issue Credential Protocol 1.0","text":""},{"location":"features/0036-issue-credential/#version-change-log","title":"Version Change Log","text":""},{"location":"features/0036-issue-credential/#11propose-credential","title":"1.1/propose-credential","text":"

    In version 1.1 of the propose-credential message, the following optional fields were added: schema_name, schema_version, and issuer_did.

    The previous version is 1.0/propose-credential.

    "},{"location":"features/0036-issue-credential/#summary","title":"Summary","text":"

    Formalizes messages used to issue a credential--whether the credential is JWT-oriented, JSON-LD-oriented, or ZKP-oriented. The general flow is similar, and this protocol intends to handle all of them. If you are using a credential type that doesn't fit this protocol, please raise a GitHub issue.

    "},{"location":"features/0036-issue-credential/#motivation","title":"Motivation","text":"

    We need a standard protocol for issuing credentials. This is the basis of interoperability between Issuers and Holders.

    "},{"location":"features/0036-issue-credential/#tutorial","title":"Tutorial","text":""},{"location":"features/0036-issue-credential/#roles","title":"Roles","text":"

    There are two roles in this protocol: Issuer and Holder. Technically, the latter role is only potential until the protocol completes; that is, the second party becomes a Holder of a credential by completing the protocol. However, we will use the term Holder throughout, to keep things simple.

    Note: When a holder of credentials turns around and uses those credentials to prove something, they become a Prover. In the sister RFC to this one, 0037: Present Proof, the Holder is therefore renamed to Prover. Sometimes in casual conversation, the Holder role here might be called \"Prover\" as well, but more formally, \"Holder\" is the right term at this phase of the credential lifecycle.

    "},{"location":"features/0036-issue-credential/#states","title":"States","text":"

    The choreography diagrams shown below detail how state evolves in this protocol, in a \"happy path.\" The states include:

    "},{"location":"features/0036-issue-credential/#states-for-issuer","title":"states for Issuer","text":""},{"location":"features/0036-issue-credential/#states-for-holder","title":"states for Holder","text":"

    Errors might occur in various places. For example, an Issuer might offer a credential for a price that the Holder is unwilling to pay. All errors are modeled with a problem-report message. Easy-to-anticipate errors reset the flow as shown in the diagrams, and use the code issuance-abandoned; more exotic errors (e.g., server crashed at Issuer headquarters in the middle of a workflow) may have different codes but still cause the flow to be abandoned in the same way. That is, in this version of the protocol, all errors cause the state of both parties (the sender and the receiver of the problem-report) to revert to null (meaning it is no longer engaged in the protocol at all). Future versions of the protocol may allow more granular choices (e.g., requesting and receiving a (re-)send of the issue-credential message if the Holder times out while waiting in the request-sent state).

    "},{"location":"features/0036-issue-credential/#messages","title":"Messages","text":"

    The Issue Credential protocol consists of these messages:

    In addition, the ack and problem-report messages are adopted into the protocol for confirmation and error handling.

    Note: This protocol is about the messages that must be exchanged to issue verifiable credentials, NOT about the specifics of particular verifiable credential schemes. DIDComm attachments are deliberately used in messages to isolate the protocol flow/semantics from the credential artifacts themselves as separate constructs. Attachments allow credential formats and this protocol to evolve through versioning milestones independently instead of in lockstep. Links are provided in the message descriptions below, to describe how the protocol adapts to specific verifiable credential implementations.

    "},{"location":"features/0036-issue-credential/#choreography-diagram","title":"Choreography Diagram","text":"Note: This diagram was made in draw.io. To make changes: - upload the drawing HTML from this folder to the [draw.io](https://draw.io) site (Import From...GitHub), - make changes, - export the picture and HTML to your local copy of this repo, and - submit a pull request.

    The protocol has 3 alternative beginnings:

    1. The Issuer can begin with an offer.
    2. The Holder can begin with a proposal.
    3. The Holder can begin with a request.

    The offer and proposal messages are part of an optional negotiation phase and may trigger back-and-forth counters. A request is not subject to negotiation; it can only be accepted or rejected.

    "},{"location":"features/0036-issue-credential/#propose-credential","title":"Propose Credential","text":"

    An optional message sent by the potential Holder to the Issuer to initiate the protocol, or in response to an offer-credential message when the Holder wants some adjustments made to the credential data offered by the Issuer.

    Note: In Hyperledger Indy, where the request-credential message can only be sent in response to an offer-credential message, the propose-credential message is the only way for a potential Holder to initiate the workflow.

    Schema:

    {\n    \"@type\": \"https://didcomm.org/issue-credential/1.1/propose-credential\",\n    \"@id\": \"<uuid-of-propose-message>\",\n    \"comment\": \"some comment\",\n    \"credential_proposal\": <json-ld object>,\n    \"schema_issuer_did\": \"DID of the proposed schema issuer\",\n    \"schema_id\": \"Schema ID string\",\n    \"schema_name\": \"Schema name string\",\n    \"schema_version\": \"Schema version string\",\n    \"cred_def_id\": \"Credential Definition ID string\",\n    \"issuer_did\": \"DID of the proposed issuer\"\n}\n

    Description of attributes:

    "},{"location":"features/0036-issue-credential/#offer-credential","title":"Offer Credential","text":"

    A message sent by the Issuer to the potential Holder, describing the credential they intend to offer and possibly the price they expect to be paid. In Hyperledger Indy, this message is required, because it forces the Issuer to make a cryptographic commitment to the set of fields in the final credential and thus prevents Issuers from inserting spurious data. In credential implementations where this message is optional, an Issuer can use the message to negotiate issuance following receipt of a request-credential message.

    Schema:

    {\n    \"@type\": \"https://didcomm.org/issue-credential/1.0/offer-credential\",\n    \"@id\": \"<uuid-of-offer-message>\",\n    \"comment\": \"some comment\",\n    \"credential_preview\": <json-ld object>,\n    \"offers~attach\": [\n        {\n            \"@id\": \"libindy-cred-offer-0\",\n            \"mime-type\": \"application/json\",\n            \"data\": {\n                \"base64\": \"<bytes for base64>\"\n            }\n        }\n    ]\n}\n

    Description of fields:

    The Issuer may add a ~payment-request decorator to this message to convey the need for payment before issuance. See the payment section below for more details.

    It is possible for an Issuer to add a ~timing.expires_time decorator to this message to convey the idea that the offer will expire at a particular point in the future. Such behavior is not a special part of this protocol, and support for it is not a requirement of conforming implementations; the ~timing decorator is simply a general possibility for any DIDComm message. We mention it here just to note that the protocol can be enriched in composable ways.

    "},{"location":"features/0036-issue-credential/#request-credential","title":"Request Credential","text":"

    This is a message sent by the potential Holder to the Issuer, to request the issuance of a credential. Where circumstances do not require a preceding Offer Credential message (e.g., there is no cost to issuance that the Issuer needs to explain in advance, and there is no need for cryptographic negotiation), this message initiates the protocol. In Hyperledger Indy, this message can only be sent in response to an Offer Credential message.

    Schema:

    {\n    \"@type\": \"https://didcomm.org/issue-credential/1.0/request-credential\",\n    \"@id\": \"<uuid-of-request-message>\",\n    \"comment\": \"some comment\",\n    \"requests~attach\": [\n        {\n            \"@id\": \"attachment id\",\n            \"mime-type\": \"application/json\",\n            \"data\": {\n                \"base64\": \"<bytes for base64>\"\n            }\n        },\n    ]\n}\n

    Description of Fields:

    This message may have a ~payment-receipt decorator to prove to the Issuer that the potential Holder has satisfied a payment requirement. See the payment section below.

    "},{"location":"features/0036-issue-credential/#issue-credential","title":"Issue Credential","text":"

    This message contains as attached payload the credentials being issued and is sent in response to a valid Request Credential message.

    Schema:

    {\n    \"@type\": \"https://didcomm.org/issue-credential/1.0/issue-credential\",\n    \"@id\": \"<uuid-of-issue-message>\",\n    \"comment\": \"some comment\",\n    \"credentials~attach\": [\n        {\n            \"@id\": \"libindy-cred-0\",\n            \"mime-type\": \"application/json\",\n            \"data\": {\n                \"base64\": \"<bytes for base64>\"\n            }\n        }\n    ]\n}\n

    Description of fields:

    If the issuer wants an acknowledgement that the issued credential was accepted, this message must be decorated with ~please-ack, and it is then best practice for the new Holder to respond with an explicit ack message as described in 0317: Please ACK Decorator.

    "},{"location":"features/0036-issue-credential/#encoding-claims-for-indy-based-verifiable-credentials","title":"Encoding Claims for Indy-based Verifiable Credentials","text":"

    Claims in Hyperledger Indy-based verifiable credentials are put into the credential in two forms, raw and encoded. raw is the actual data value, and encoded is the (possibly derived) integer value that is used in presentations. At this time, Indy does not take an opinion on the method used for encoding the raw value. This will change with the Rich Schema work that is underway in the Indy/Aries community, where the encoding method will be part of the credential metadata available from the public ledger.

    Until the Rich Schema mechanism is deployed, Aries issuers and verifiers must agree on the encoding method so that the verifier can check that the raw value returned in a presentation corresponds to the proven encoded value. The following is the encoding algorithm that MUST be used by Issuers when creating credentials and SHOULD be verified by Verifiers receiving presentations:
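
A sketch of the widely deployed encoding algorithm follows: values representable as 32-bit integers pass through unchanged, and everything else becomes the big-endian integer value of the SHA-256 hash of the UTF-8 string form. Verify against your framework's reference implementation before relying on it:

```python
import hashlib

I32_BOUND = 2 ** 31  # values in [-2^31, 2^31) pass through unchanged

# Sketch of the commonly used raw->encoded mapping for Indy claims
# (confirm against your framework's reference implementation).
def encode(raw):
    s = str(raw)
    try:
        i = int(s)
        if -I32_BOUND <= i < I32_BOUND:
            return str(i)       # e.g. "87121" -> "87121"
    except ValueError:
        pass
    # non-integers (and out-of-range integers) are hashed
    return str(int.from_bytes(hashlib.sha256(s.encode()).digest(), "big"))
```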

    An example implementation in Python can be found here.

    A gist of test value pairs can be found here.

    "},{"location":"features/0036-issue-credential/#preview-credential","title":"Preview Credential","text":"

    This is not a message but an inner object for other messages in this protocol. It is used to construct a preview of the data for the credential that is to be issued. Its schema follows:

    {\n    \"@type\": \"https://didcomm.org/issue-credential/1.0/credential-preview\",\n    \"attributes\": [\n        {\n            \"name\": \"<attribute name>\",\n            \"mime-type\": \"<type>\",\n            \"value\": \"<value>\"\n        },\n        // more attributes\n    ]\n}\n

    The main element is attributes. It is an array of (object) attribute specifications; the subsections below outline their semantics.

    "},{"location":"features/0036-issue-credential/#attribute-name","title":"Attribute Name","text":"

    The mandatory \"name\" key maps to the attribute name as a string.

    "},{"location":"features/0036-issue-credential/#mime-type-and-value","title":"MIME Type and Value","text":"

    The optional mime-type advises the issuer how to render a binary attribute, to judge its content for applicability before issuing a credential containing it. Its value parses case-insensitively in keeping with MIME type semantics of RFC 2045. If mime-type is missing, its value is null.

    The mandatory value holds the attribute value:

    "},{"location":"features/0036-issue-credential/#threading","title":"Threading","text":"

    Threading can be used to initiate a sub-protocol during an issue credential protocol instance. For example, during credential issuance, the Issuer may initiate a child message thread to execute the Present Proof sub-protocol to have the potential Holder (now acting as a Prover) prove attributes about themselves before issuing the credential. Depending on circumstances, this might be a best practice for preventing credential fraud at issuance time.

    If threading were added to all of the above messages, a ~thread decorator would be present, and later messages in the flow would reference the @id of earlier messages to stitch the flow into a single coherent sequence. Details about threading can be found in the 0008: Message ID and Threading RFC.
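
The stitching described above can be sketched with a small helper. This function is illustrative only (the RFC does not define it), and assumes the simple case where the reply's `~thread.thid` references the `@id` of the received message:

```python
import uuid

# Illustrative helper (not from the RFC): build a reply that threads to a
# received message by referencing its @id, per RFC 0008.
def threaded_reply(received, msg_type, body):
    return {
        "@type": msg_type,
        "@id": str(uuid.uuid4()),
        "~thread": {"thid": received["@id"]},  # stitches the flow into one sequence
        **body,
    }
```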

    "},{"location":"features/0036-issue-credential/#payments-during-credential-exchange","title":"Payments during credential exchange","text":"

    Credentialing ecosystems may wish to associate credential issuance with payments by fiat currency or tokens. This is common with non-digital credentials today; we pay a fee when we apply for a passport or purchase a plane ticket. Instead or in addition, some circumstances may fit a mode where payment is made each time a credential is used, as when a Verifier pays a Prover for verifiable medical data to be used in research, or when a Prover pays a Verifier as part of a workflow that applies for admittance to a university. For maximum flexibility, we mention payment possibilities here as well as in the sister 0037: Present Proof RFC.

    "},{"location":"features/0036-issue-credential/#payment-decorators","title":"Payment decorators","text":"

    Wherever they happen and whoever they involve, payments are accomplished with optional payment decorators. See 0075: Payment Decorators.

    "},{"location":"features/0036-issue-credential/#payment-flow","title":"Payment flow","text":"

    A ~payment-request may decorate a Credential Offer from Issuer to Holder. When they do, a corresponding ~payment-receipt should be provided on the Credential Request returned to the Issuer.

    During credential presentation, the Verifier may pay the Holder as compensation for disclosing data. This would require a ~payment-request in a Presentation Proposal message, and a corresponding ~payment-receipt in the subsequent Presentation Request. If such a workflow begins with the Presentation Request, the Prover may send back a Presentation (counter-)Proposal with the appropriate decorator inside it.

    "},{"location":"features/0036-issue-credential/#limitations","title":"Limitations","text":"

    The ecosystem may lack smart contracts, so the \"issue credential after payment received\" operation is not atomic. It is possible that a malicious Issuer will charge first and then fail to issue the credential. However, this situation should be easy to detect, and an appropriate penalty should be applied in networks of this type.

    "},{"location":"features/0036-issue-credential/#negotiation-and-preview","title":"Negotiation and Preview","text":"

    Negotiation prior to issuing the credential can be done using the offer-credential and propose-credential messages. A common negotiation use case would be about the data to go into the credential. For that, the credential_preview element is used.

    "},{"location":"features/0036-issue-credential/#reference","title":"Reference","text":""},{"location":"features/0036-issue-credential/#drawbacks","title":"Drawbacks","text":"

    Why should we not do this?

    "},{"location":"features/0036-issue-credential/#rationale-and-alternatives","title":"Rationale and alternatives","text":""},{"location":"features/0036-issue-credential/#prior-art","title":"Prior art","text":"

    A similar (but simplified) credential exchange was already implemented in von-anchor.

    "},{"location":"features/0036-issue-credential/#unresolved-questions","title":"Unresolved questions","text":""},{"location":"features/0036-issue-credential/#implementations","title":"Implementations","text":"

    The following lists the implementations (if any) of this RFC. Please do a pull request to add your implementation. If the implementation is open source, include a link to the repo or to the implementation within the repo. Please be consistent in the \"Name\" field so that a mechanical processing of the RFCs can generate a list of all RFCs supported by an Aries implementation.

    Name / Link Implementation Notes Streetcred.id Commercial mobile and web app built using Aries Framework - .NET MISSING test results"},{"location":"features/0037-present-proof/","title":"Aries RFC 0037: Present Proof Protocol 1.0","text":""},{"location":"features/0037-present-proof/#summary","title":"Summary","text":"

    Formalization and generalization of existing message formats used for presenting a proof according to existing RFCs about message formats.

    "},{"location":"features/0037-present-proof/#motivation","title":"Motivation","text":"

    We need to define a standard protocol for presenting a proof.

    "},{"location":"features/0037-present-proof/#tutorial","title":"Tutorial","text":"

    The present proof protocol consists of these messages:

    In addition, the ack and problem-report messages are adopted into the protocol for confirmation and error handling.

    This protocol is about the messages to support the presentation of verifiable claims, not about the specifics of particular verifiable presentation mechanisms. This is challenging, since at the time of writing this version of the protocol, there is only one supported verifiable presentation mechanism (Hyperledger Indy). DIDComm attachments are deliberately used in messages to try to make this protocol agnostic to the specific verifiable presentation mechanism payloads. Links are provided in the message data element descriptions to details of specific verifiable presentation implementation data structures.

    Diagrams in this protocol were made in draw.io. To make changes:

    "},{"location":"features/0037-present-proof/#states","title":"States","text":""},{"location":"features/0037-present-proof/#states-for-verifier","title":"states for Verifier","text":""},{"location":"features/0037-present-proof/#states-for-prover","title":"states for Prover","text":"

    For the most part, these states map onto the transitions shown in the choreography diagram in obvious ways. However, a few subtleties are worth highlighting:

    Errors might occur in various places. For example, a Verifier might time out waiting for the Prover to supply a presentation. Errors trigger a problem-report. In this version of the protocol, all errors cause the state of both parties (the sender and the receiver of the problem-report) to revert to null (meaning it is no longer engaged in the protocol at all). Future versions of the protocol may allow more granular choices.

    "},{"location":"features/0037-present-proof/#choreography-diagram","title":"Choreography Diagram:","text":""},{"location":"features/0037-present-proof/#propose-presentation","title":"Propose Presentation","text":"

    An optional message sent by the Prover to the Verifier to initiate a proof presentation process, or in response to a request-presentation message when the Prover wants to propose using a different presentation format. Schema:

    {\n    \"@type\": \"https://didcomm.org/present-proof/1.0/propose-presentation\",\n    \"@id\": \"<uuid-propose-presentation>\",\n    \"comment\": \"some comment\",\n    \"presentation_proposal\": <json-ld object>\n}\n

    Description of attributes:

    "},{"location":"features/0037-present-proof/#request-presentation","title":"Request Presentation","text":"

    From a verifier to a prover, the request-presentation message describes values that need to be revealed and predicates that need to be fulfilled. Schema:

    {\n    \"@type\": \"https://didcomm.org/present-proof/1.0/request-presentation\",\n    \"@id\": \"<uuid-request>\",\n    \"comment\": \"some comment\",\n    \"request_presentations~attach\": [\n        {\n            \"@id\": \"libindy-request-presentation-0\",\n            \"mime-type\": \"application/json\",\n            \"data\":  {\n                \"base64\": \"<bytes for base64>\"\n            }\n        }\n    ]\n}\n

    Description of fields:

    "},{"location":"features/0037-present-proof/#presentation","title":"Presentation","text":"

    This message is a response to a Presentation Request message and contains signed presentations. Schema:

    {\n    \"@type\": \"https://didcomm.org/present-proof/1.0/presentation\",\n    \"@id\": \"<uuid-presentation>\",\n    \"comment\": \"some comment\",\n    \"presentations~attach\": [\n        {\n            \"@id\": \"libindy-presentation-0\",\n            \"mime-type\": \"application/json\",\n            \"data\": {\n                \"base64\": \"<bytes for base64>\"\n            }\n        }\n    ]\n}\n

    Description of fields:

    "},{"location":"features/0037-present-proof/#verifying-claims-of-indy-based-verifiable-credentials","title":"Verifying Claims of Indy-based Verifiable Credentials","text":"

    Claims in Hyperledger Indy-based verifiable credentials are put into the credential in two forms, raw and encoded. raw is the actual data value, and encoded is the (possibly derived) integer value that is used in presentations. At this time, Indy does not take an opinion on the method used for encoding the raw value. This will change with the Rich Schema work that is underway in the Indy/Aries community, where the encoding method will be part of the credential metadata available from the public ledger.

    Until the Rich Schema mechanism is deployed, the Aries issuers and verifiers must agree on an encoding method so that the verifier can check that the raw value returned in a presentation corresponds to the proven encoded value. The following is the encoding algorithm that MUST be used by Issuers when creating credentials and SHOULD be verified by Verifiers receiving presentations:
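A sketch of this encoding as commonly implemented in the Aries community (values representable as 32-bit integers pass through unchanged; everything else is the SHA-256 digest of the stringified value, read as a big-endian integer); the function name is illustrative:

```python
import hashlib

I32_BOUND = 2 ** 31  # 32-bit signed integer range: [-2^31, 2^31)

def encode(raw) -> str:
    """Encode a raw claim value for use in an Indy-style presentation."""
    try:
        i = int(raw)
        # Pass through only if the value is a genuine int32 (and not,
        # say, a float or boolean whose string form differs).
        if -I32_BOUND <= i < I32_BOUND and str(raw) == str(i):
            return str(i)
    except (TypeError, ValueError):
        pass
    # Everything else: SHA-256 of the UTF-8 string form, as a decimal integer.
    digest = hashlib.sha256(str(raw).encode("utf-8")).digest()
    return str(int.from_bytes(digest, "big"))
```

This lets a verifier recompute the encoded value from the raw value returned in a presentation and confirm they correspond.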

    An example implementation in Python can be found here.

    A gist of test value pairs can be found here.

    "},{"location":"features/0037-present-proof/#presentation-preview","title":"Presentation Preview","text":"

    This is not a message but an inner object for other messages in this protocol. It is used to construct a preview of the data for the presentation. Its schema follows:

    {\n    \"@type\": \"https://didcomm.org/present-proof/1.0/presentation-preview\",\n    \"attributes\": [\n        {\n            \"name\": \"<attribute_name>\",\n            \"cred_def_id\": \"<cred_def_id>\",\n            \"mime-type\": \"<type>\",\n            \"value\": \"<value>\",\n            \"referent\": \"<referent>\"\n        },\n        // more attributes\n    ],\n    \"predicates\": [\n        {\n            \"name\": \"<attribute_name>\",\n            \"cred_def_id\": \"<cred_def_id>\",\n            \"predicate\": \"<predicate>\",\n            \"threshold\": <threshold>\n        },\n        // more predicates\n    ]\n}\n

    The preview identifies attributes and predicates to present.

    "},{"location":"features/0037-present-proof/#attributes","title":"Attributes","text":"

    The mandatory \"attributes\" key maps to a list (possibly empty to propose a presentation with no attributes) of specifications, one per attribute. Each such specification proposes its attribute's characteristics for creation within a presentation.

    "},{"location":"features/0037-present-proof/#attribute-name","title":"Attribute Name","text":"

    The mandatory \"name\" key maps to the name of the attribute.

    "},{"location":"features/0037-present-proof/#credential-definition-identifier","title":"Credential Definition Identifier","text":"

    The optional \"cred_def_id\" key maps to the credential definition identifier of the credential with the current attribute. Note that since it is the holder who creates the preview and the holder possesses the corresponding credential, the holder must know its credential definition identifier.

    If the key is absent, the preview specifies the attribute's posture in the presentation as a self-attested attribute. A self-attested attribute does not come from a credential, and hence any attribute specification without the \"cred_def_id\" key cannot use a \"referent\" key as per Referent below.

    "},{"location":"features/0037-present-proof/#mime-type-and-value","title":"MIME Type and Value","text":"

    The optional mime-type advises the verifier how to render a binary attribute, to judge its content for applicability before accepting a presentation containing it. Its value parses case-insensitively in keeping with MIME type semantics of RFC 2045. If mime-type is missing, its value is null.

    The optional value, when present, holds the value of the attribute to reveal in presentation:

    An attribute specification must specify a value, a cred_def_id, or both:

    "},{"location":"features/0037-present-proof/#referent","title":"Referent","text":"

    The optional referent can be useful in specifying multiple-credential presentations. Its value indicates which credential will supply the attribute in the presentation. Sharing a referent value between multiple attribute specifications indicates that the same credential of the holder supplies those attributes.

    Any attribute specification using a referent must also have a cred_def_id; any attribute specifications sharing a common referent value must all have the same cred_def_id value (see Credential Definition Identifier above).
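These rules can be checked mechanically. A hedged Python sketch (the function name and error messages are illustrative, not part of this RFC):

```python
def validate_attribute_specs(attributes):
    """Validate presentation-preview attribute specs against the rules:
    - each spec must have a "value", a "cred_def_id", or both
    - a "referent" requires a "cred_def_id"
    - specs sharing a referent must share the same cred_def_id
    """
    cred_def_by_referent = {}  # referent -> first cred_def_id seen
    for spec in attributes:
        if "value" not in spec and "cred_def_id" not in spec:
            raise ValueError("spec must have a value, a cred_def_id, or both")
        referent = spec.get("referent")
        if referent is not None:
            cred_def_id = spec.get("cred_def_id")
            if cred_def_id is None:
                raise ValueError("a referent requires a cred_def_id")
            seen = cred_def_by_referent.setdefault(referent, cred_def_id)
            if seen != cred_def_id:
                raise ValueError("shared referent implies same cred_def_id")
```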

    For example, a holder with multiple account credentials could use a presentation preview such as

    {\n    \"@type\": \"https://didcomm.org/present-proof/1.0/presentation-preview\",\n    \"attributes\": [\n        {\n            \"name\": \"account\",\n            \"cred_def_id\": \"BzCbsNYhMrjHiqZDTUASHg:3:CL:1234:tag\",\n            \"value\": \"12345678\",\n            \"referent\": \"0\"\n        },\n        {\n            \"name\": \"streetAddress\",\n            \"cred_def_id\": \"BzCbsNYhMrjHiqZDTUASHg:3:CL:1234:tag\",\n            \"value\": \"123 Main Street\",\n            \"referent\": \"0\"\n        },\n    ],\n    \"predicates\": [\n    ]\n}\n

    to prompt a verifier to request proof of account number and street address from the same account, rather than potentially an account number and street address from distinct accounts.

    "},{"location":"features/0037-present-proof/#predicates","title":"Predicates","text":"

    The mandatory \"predicates\" key maps to a list (possibly empty to propose a presentation with no predicates) of predicate specifications, one per predicate. Each such specification proposes its predicate's characteristics for creation within a presentation.

    "},{"location":"features/0037-present-proof/#attribute-name_1","title":"Attribute Name","text":"

    The mandatory \"name\" key maps to the name of the attribute.

    "},{"location":"features/0037-present-proof/#credential-definition-identifier_1","title":"Credential Definition Identifier","text":"

    The mandatory \"cred_def_id\" key maps to the credential definition identifier of the credential with the current attribute. Note that since it is the holder who creates the preview and the holder possesses the corresponding credential, the holder must know its credential definition identifier.

    "},{"location":"features/0037-present-proof/#predicate","title":"Predicate","text":"

    The mandatory \"predicate\" key maps to the predicate operator: \"<\", \"<=\", \">=\", \">\".

    "},{"location":"features/0037-present-proof/#threshold-value","title":"Threshold Value","text":"

    The mandatory \"threshold\" key maps to the threshold value for the predicate.

    "},{"location":"features/0037-present-proof/#negotiation-and-preview","title":"Negotiation and Preview","text":"

    Negotiation prior to the presentation can be done using the propose-presentation and request-presentation messages. A common negotiation use case would be about the data to go into the presentation. For that, the presentation-preview element is used.

    "},{"location":"features/0037-present-proof/#reference","title":"Reference","text":""},{"location":"features/0037-present-proof/#drawbacks","title":"Drawbacks","text":"

    The presentation preview as proposed above does not allow nesting of predicate logic along the lines of \"A and either B or C if D, otherwise A and B\", nor cross-credential-definition predicates such as proposing a legal name from either a financial institution or selected government entity.

    The presentation preview may be indy-centric, as it assumes the inclusion of at most one credential per credential definition. In addition, it prescribes exactly four predicates and assumes mutual understanding of their semantics (e.g., could \">=\" imply a lexicographic order for non-integer values, and if so, where to specify character collation algorithm?).

    Finally, the inclusion of non-revocation timestamps may become desirable at the preview stage; the standard as proposed does not accommodate such.

    "},{"location":"features/0037-present-proof/#rationale-and-alternatives","title":"Rationale and alternatives","text":""},{"location":"features/0037-present-proof/#prior-art","title":"Prior art","text":"

    Similar (but simplified) credential exchange was already implemented in von-anchor.

    "},{"location":"features/0037-present-proof/#unresolved-questions","title":"Unresolved questions","text":""},{"location":"features/0037-present-proof/#implementations","title":"Implementations","text":"

    The following lists the implementations (if any) of this RFC. Please do a pull request to add your implementation. If the implementation is open source, include a link to the repo or to the implementation within the repo. Please be consistent in the \"Name\" field so that a mechanical processing of the RFCs can generate a list of all RFCs supported by an Aries implementation.

    Name / Link Implementation Notes Streetcred.id Commercial mobile and web app built using Aries Framework - .NET MISSING test results"},{"location":"features/0042-lox/","title":"Aries RFC 0042: LOX -- A more secure pluggable framework for protecting wallet keys","text":""},{"location":"features/0042-lox/#summary","title":"Summary","text":"

    Wallets are protected by secrets that must live outside of the wallet. This document proposes the Lox framework for managing the wallet access key(s).

    "},{"location":"features/0042-lox/#motivation","title":"Motivation","text":"

    Wallets currently use a single key to access the wallet. The key is provided directly or derived from a password. However, this is prone to misuse, as most developers have little experience in key management. Right now there are no recommendations for protecting a key provided by Aries, forcing implementors to choose methods based on their company's or organization's policies or practices.

    Here Millennial Mike demonstrates this process.

    Some implementors have no policy or practice in place at all, leaving them to make bad decisions about wallet key storage and protection. For example, when creating an API token for Amazon's AWS, Amazon generates a secret key on the user's behalf, which is downloaded to a CSV file. Many programmers do not know how best to protect these downloaded credentials because they must be used in a program to make API calls. They don't know which of the following is the best option. They typically:

    The less commonly used or known solution involves keyrings, hardware security modules (HSM), trusted execution environments (TEE), and secure enclaves.

    "},{"location":"features/0042-lox/#keyrings","title":"Keyrings","text":"

    Keyrings come preinstalled with modern operating systems without requiring additional software, but other keyring software packages that function in a similar way can be installed. Operating systems protect keyring contents in encrypted files with access controls based on the logged-in user and the process accessing them. The keyring can only be unlocked if the same user, process, and keyring credentials are used as when the keyring was created. Keyring credentials can be any combination of passwords, pins, keys, cyber tokens, and biometrics. In principle, a system's keyring should be able to keep credentials away from root (as in, the attacker can use the credential as long as they have access, but they can't extract the credential for persistence, assuming no other attacks like Foreshadow). Mac OS X, Windows, Linux Gnome-Keyring and KWallet, Android, and iOS have built-in keyrings that are protected by the operating system.

    Some systems back keyrings with hardware to increase security. The following flow chart illustrates how a keyring functions.

    "},{"location":"features/0042-lox/#secure-enclaves","title":"Secure Enclaves","text":"

    Secure enclaves are used to describe HSMs, TPMs, and TEEs. An explanation of how secure enclaves work is detailed here.

    "},{"location":"features/0042-lox/#details","title":"Details","text":"

    To avoid repeating each of these terms when describing a highly secure environment, the term enclave will be used to refer to all of them. Enclaves are specially designed to safeguard secrets, but they can be complex to use, with varying APIs and libraries, and are accessed using various combinations of credentials. These complexities cause many developers to avoid using them.

    Where to put the wallet access credentials that are directly used by applications or people is called the top-level credential problem. Lox aims to provide guidance and aid in adopting best practices and developing code to address the top-level credential problem: the credential used to protect all others (the keys to the kingdom), a secret used directly by Aries that, if compromised, would yield disastrous consequences and give access to the wallet.

    "},{"location":"features/0042-lox/#tutorial","title":"Tutorial","text":"

    Lox is a layer that is designed to be an API for storing secrets, with pluggable backends that implement reasonable defaults but remain flexible enough to support various others that may be used.

    The default enclave will be the operating system keychain. Lox will also allow for many different enclaves that are optimal for storing keys like YubiKey, Hashicorp Vault, Intel SGX, or other methods supported by Hyperledger Ursa. Other hardware security modules can be plugged into the system via USB or accessed via the cloud. Trusted Platform Modules (TPMs) now come standard with many laptops and higher end tablets. Communication to enclaves can be done using drivers or over Unix or TCP sockets, or the Windows Communication Framework.

    The goal of Lox is to remove the complexity of the various enclaves by choosing the best secure defaults and hiding details that are prone to be misused or misunderstood, making it easier to secure the wallet.

    "},{"location":"features/0042-lox/#reference","title":"Reference","text":"

    Currently, there are two methods used to open a wallet: provide the wallet encryption key, or use a password-based key derivation function to derive the key. Neither of these methods is inherently terrible, but there are concerns. Where should the symmetric encryption key be stored? What settings should be chosen for Argon2id? Argon2id also does not scale horizontally, because settings that are secure for a desktop can be very slow to execute on a mobile device, on the order of tens of seconds to minutes. To make it usable, the settings must be dialed down for the mobile device, but this allows faster attacks from a hacker. Also, passwords are often weak in that they are short, easily guessed, and have low entropy.

    Lox, on the other hand, allows a wallet user to access a wallet by providing the ID of the credential that will be used to open the wallet, then letting the secure enclave handle authenticating the owner and securing access control. The enclave will restrict access to the currently logged-in user, so even an administrator cannot read the enclave contents or access the hardware or TEE.

    For example, a user creates a new wallet and instead of specifying a key, can specify an ID like youthful_burnell. The user is prompted to provide the enclave credentials like biometrics, pins, authenticator tokens, etc. If successful, Lox creates the wallet access key, stores it in the enclave, opens the wallet, and securely wipes the memory holding the key. The calling program can even store the ID value since this alone is not enough to access the enclave. The program must also be running as the same owner of the secret. This allows static agents to store the ID in a config file without having to store the key.

    Enclaves will usually remain unlocked until certain events occur like: when the system goes to sleep, after a set time interval passes, the user logs out. When the event occurs, the enclave reverts to its locked state which requires providing the credentials again. These settings can be modified if needed like only going to the locked state after system boot up.

    The benefits provided by Lox are:

    1. Avoid having Aries users reinvent the wheel by managing secrets in ways that mimic an enclave but are less secure.
    2. Securely create keys with sufficient entropy from trusted cryptographic sources.
    3. Safely use keys and wipe them from memory when finished to limit side-channel attacks.
    4. Support for pluggable enclave backends by providing a single API. This flexible architecture allows the wallet key to be protected by a different enclave on each system where it is stored.
    5. Hide various enclave implementations and complexity to increase misuse-resistance.

    The first API iteration proposal includes the following functions

    function lox_create_wallet(wallet_name: String, config: Map<String, ...>)\nfunction lox_open_wallet(wallet_name: String, config: Map<String, ...>)\n

    wallet_name can be any human-readable string. This value will vary based on the enclave's requirements, as some allow different characters than others. config can include anything needed to specify the enclave and access it, such as a service name, remote IP address, and other miscellaneous settings. If nothing is specified, the operating system's default enclave will be used.

    Lox will be used to access the wallet, while still allowing a raw key to be provided by those who do not want to use Lox and prefer to continue managing their own keys in their own way. Essentially, providing a raw key to the wallet sets the enclave backend to a null provider. The function for deriving the wallet key from a password should be deprecated for the reasons described earlier, with Lox used instead.
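The null-provider idea can be sketched as a degenerate backend behind the same pluggable interface. This is a hedged illustration; the class and method names are invented for the example and are not the proposed Lox API:

```python
class Enclave:
    """Minimal pluggable-backend interface: resolve a wallet name to its key."""
    def get_key(self, wallet_name: str, config: dict) -> bytes:
        raise NotImplementedError

class NullEnclave(Enclave):
    """Degenerate backend: the caller manages the key and passes it in
    directly via config, exactly as raw-key wallets work today."""
    def get_key(self, wallet_name: str, config: dict) -> bytes:
        return config["raw_key"]

class KeyringEnclave(Enclave):
    """Stand-in for an OS-keyring backend: the key never appears in the
    config; it is looked up by wallet name from protected storage."""
    def __init__(self, store: dict):
        self._store = store  # plays the role of the OS keyring here

    def get_key(self, wallet_name: str, config: dict) -> bytes:
        return self._store[wallet_name]
```

With this shape, `lox_open_wallet` can stay identical for every backend; only the configured `Enclave` changes.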

    "},{"location":"features/0042-lox/#drawbacks","title":"Drawbacks","text":"

    This adds another layer to wallet security. The APIs must be thought through to accommodate as many enclaves as possible. Hardware enclave vendor APIs are similar to one another but all differ, and they have not unified behind a common standard yet. Trying to account for all of these will be difficult and may require changes to the API.

    "},{"location":"features/0042-lox/#prior-art","title":"Prior art","text":"

    A brief overview of enclaves and their services has been given in the Indy wallet HIPE.

    "},{"location":"features/0042-lox/#unresolved-questions","title":"Unresolved questions","text":""},{"location":"features/0042-lox/#implementations","title":"Implementations","text":"

    The following lists the implementations (if any) of this RFC. Please do a pull request to add your implementation. If the implementation is open source, include a link to the repo or to the implementation within the repo. Please be consistent in the \"Name\" field so that a mechanical processing of the RFCs can generate a list of all RFCs supported by an Aries implementation.

    Name / Link Implementation Notes Reference Code Example rust code that implements Lox using OS keychains"},{"location":"features/0042-lox/reference_code/","title":"Lox","text":"

    A command line tool for accessing various keychains or secure enclaves.

    "},{"location":"features/0042-lox/reference_code/#the-problem","title":"The problem","text":"

    Applications today use several credentials to secure data locally and in transit. However, bad habits arise when safeguarding these credentials. For example, when creating an API token for Amazon's AWS, Amazon generates a secret key on the user's behalf, which is downloaded to a CSV file. Programmers do not know how best to secure these downloaded credentials, which must be used in a program to make API calls. They don't know which of the following is the best option. They can:

    Where to put the credential that is directly used by applications or people is called the top-level credential problem.

    There are services like LeakLooker that browse the internet looking for credentials that can be scraped, and they unfortunately often succeed. Some projects have documented how to test credentials to see if they have been revealed. See keyhacks.

    This document aims to provide guidance and aid in adopting best practices and developing code to address the top-level credential problem: the credential used to protect all others (the keys to the kingdom), a secret used directly by a program that, if compromised, would yield disastrous consequences.

    "},{"location":"features/0042-lox/reference_code/#the-solution","title":"The solution","text":"

    Lox is a layer that is designed to be a command line tool or API library for storing secrets. The default is to use the operating system keychain. The goal is to extend Lox to allow for many different enclaves that are optimal for storing the keys to the kingdom, like YubiKey, Intel SGX, or Arm TrustZone. In principle, a system's secure enclave should be able to keep some credentials away from root (as in, the attacker can use the credential as long as they have access, but they can't extract the credential for persistence), assuming no other attacks like Foreshadow.

    Mac OS X, Linux, and Android have built-in keychains that are guarded by the operating system. iOS and Android come with hardware secure enclaves or trusted execution environments for managing the secrets stored in the keychain.

    This first iteration uses the OS keychain or an equivalent, usable from the command line or as a C-callable API. Future work could allow for communication over Unix or TCP sockets with Lox running as a daemon process.

    Currently, Mac OS X offers a CLI tool and libraries, but they are complex to understand and can be prone to misuse due to misunderstandings. Lox removes the complexity by choosing secure defaults so developers can focus on their job.

    Lox is written in Rust and has no external dependencies to do its job except DBus on Linux.

    The program can be compiled from any OS to run on any OS. Lox-CLI is the command line tool while Lox is the library.

    "},{"location":"features/0042-lox/reference_code/#run-the-program","title":"Run the program","text":"

    Basic Usage

    Requires dbus library on linux.

    On Ubuntu, this is libdbus-1-3 when running. On Red Hat, this is dbus when running.

    Gnome-keyring or KWallet must also be installed on Linux.

    Lox can be run either using cargo run -- \\<args> or if it is already built from source using ./lox.

    Lox tries to determine if input is a file or text. If a file exists that matches the entered text, Lox will read the contents. Otherwise, it will prompt the user for either the id of the secret or to enter a secret.

    Lox stores secrets based on a service name and an ID. The service name is the name of the program or process that alone is allowed to access the secret with that ID. Secrets can be retrieved, stored, or deleted.

    When secrets are stored, care should be given to not pass the value over the command line as it could be stored in the command line history. For this reason, either put the value in a file or Lox will read it from STDIN. After Lox stores the secret, Lox will securely wipe it from memory.
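The addressing model and the STDIN rule can be sketched in a few lines of Python. This is a toy in-memory stand-in for the OS keyring, for illustration only (real keyrings additionally bind access to the logged-in user and process); all names are invented:

```python
import getpass

class ToyKeyring:
    """In-memory stand-in for the OS keyring: a secret is addressable
    only by the (service, id) pair it was stored under."""
    def __init__(self):
        self._secrets = {}

    def set(self, service: str, secret_id: str, value: str) -> None:
        self._secrets[(service, secret_id)] = value

    def get(self, service: str, secret_id: str):
        return self._secrets.get((service, secret_id))

    def delete(self, service: str, secret_id: str) -> None:
        self._secrets.pop((service, secret_id), None)

def read_secret() -> str:
    # Prompt without echo so the value never lands in shell history
    # or the process argument list.
    return getpass.getpass("secret: ")
```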

    "},{"location":"features/0042-lox/reference_code/#caveat","title":"Caveat","text":"

    One remaining problem is how to determine the service name provided to Lox. Ideally, Lox could compute it rather than have it supplied by the calling endpoint, which can lie about the name. We can imagine an attacker who wants access to the AWS credentials in the keychain needing only to know the service name and the ID of the secret to request it. Access is still blocked by the operating system if the attacker doesn't know the keychain credentials, similar to a password vault. If Lox could compute the service name, it would be harder for an attacker to retrieve targeted secrets. Even so, this is better than the secrets existing in plaintext in code, config files, or environment variables.

    "},{"location":"features/0042-lox/reference_code/#examples","title":"Examples","text":"

    Lox takes at least two arguments: service_name and ID. When storing a secret, an additional parameter is needed. If omitted (the preferred method) the value is read from STDIN.

    "},{"location":"features/0042-lox/reference_code/#storing-a-secret","title":"Storing a secret","text":"
    lox set aws 1qwasdrtyuhjnjyt987yh\nprompt> ...<Return>\nSuccess\n
    "},{"location":"features/0042-lox/reference_code/#retrieve-a-secret","title":"Retrieve a secret","text":"
    lox get aws 1qwasdrtyuhjnjyt987yh\n<Secret Value>\n
    "},{"location":"features/0042-lox/reference_code/#delete-a-secret","title":"Delete a secret","text":"
    lox delete aws 1qwasdrtyuhjnjyt987yh\n
    "},{"location":"features/0042-lox/reference_code/#list-all-secrets","title":"List all secrets","text":"

    Lox can read all values stored in the keyring. The list command prints the names of all values in the keyring without retrieving the values themselves.

    lox list\n

    {\"application\": \"lox\", \"id\": \"apikey\", \"service\": \"aws\", \"username\": \"mike\", \"xdg:schema\": \"org.freedesktop.Secret.Generic\"}\n{\"application\": \"lox\", \"id\": \"walletkey\", \"service\": \"indy\", \"username\": \"mike\", \"xdg:schema\": \"org.freedesktop.Secret.Generic\"}\n
    "},{"location":"features/0042-lox/reference_code/#peek-secrets","title":"Peek secrets","text":"

    Lox can retrieve all or a subset of secrets in the keyring. Peek without any arguments will pull out all keyring names and their values. Because Lox encrypts values before storing them in the keyring when it can, those values will be returned as hex instead of their associated plaintext. Peek filtering differs by operating system.

    For OSX, filtering is based on the kind that should be read. It can be generic or internet passwords. generic only requires the service and account labels. internet requires the server, account, protocol, authentication_type values. Filters are supplied as name value pairs separated by = and multiple pairs separated by a comma.

    lox peek service=aws,account=apikey\n

    For Linux, filtering is based on a subset of name value pairs of the attributes that match. For example, if the attributes in the keyring were like this

    {\"application\": \"lox\", \"id\": \"apikey\", \"service\": \"aws\", \"username\": \"mike\", \"xdg:schema\": \"org.freedesktop.Secret.Generic\"}\n{\"application\": \"lox\", \"id\": \"walletkey\", \"service\": \"indy\", \"username\": \"mike\", \"xdg:schema\": \"org.freedesktop.Secret.Generic\"}\n
    To filter based on id, run
    lox peek id=apikey\n
    To filter based on username AND service, run
    lox peek username=mike,service=aws\n

    For Windows, filtering is based on the credentials targetname and globbing. For example, if list returned

    {\"targetname\": \"MicrosoftAccount:target=SSO_POP_Device\"}\n{\"targetname\": \"WindowsLive:target=virtualapp/didlogical\"}\n{\"targetname\": \"LegacyGeneric:target=IEUser:aws:apikey\"}\n
    then filtering searches everything after \":target=\". In this case, if the value to be peeked is IEUser:aws:apikey, the following will return just that result
    lox.exe peek IE*\nlox.exe peek IE*apikey\nlox.exe peek IEUser:aws:apikey\n

    "},{"location":"features/0042-lox/reference_code/#build-from-source","title":"Build from source","text":"


    To make a distributable executable, run the following commands:

    1. On Linux install dbus library. On a debian based OS this is libdbus-1-dev. On a Redhat based OS this is dbus-devel.
    2. curl https://sh.rustup.rs -sSf | sh -s -- -y - installs the Rust compiler
    3. cd reference_code/
    4. cargo build --release - when this is finished the executable is target/release/lox.
    5. For *nix users cp target/release/lox /usr/local/lib and chmod +x /usr/local/lib/lox
    6. For Windows users copy target/release/lox.exe to a folder and add that folder to your %PATH variable.

    Liblox is the library that can be linked into programs to manage secrets. Use the library for the underlying operating system that meets your needs:

    1. liblox.dll - Windows
    2. liblox.so - Linux
    3. liblox.dylib - Mac OS X
    "},{"location":"features/0042-lox/reference_code/#future-work","title":"FUTURE WORK","text":"

    Allow for other enclaves like Hashicorp vault, LastPass, 1Password. Allow for steganography methods like using images or Microsoft Office files for storing the secrets.

    "},{"location":"features/0043-l10n/","title":"Aries RFC 0043: l10n (Locali[s|z]ation)","text":""},{"location":"features/0043-l10n/#summary","title":"Summary","text":"

    Defines how to send a DIDComm message in a way that facilitates interoperable localization, so humans communicating through agents can interact without natural language barriers.

    "},{"location":"features/0043-l10n/#motivation","title":"Motivation","text":"

    The primary use case for DIDComm is to support automated processing, as with messages that lead to credential issuance, proof exchange, and so forth. Automated processing may be the only way that certain agents can process messages, if they are devices or pieces of software run by organizations with no human intervention.

    However, humans are also a crucial component of the DIDComm ecosystem, and many interactions have them as either a primary or a secondary audience. In credential issuance, a human may need to accept terms and conditions from the issuer, even if their agent navigates the protocol. Some protocols, like a chat between friends, may be entirely human-centric. And in any protocol between agents, a human may have to interpret errors.

    When humans are involved, locale and potential translation into various natural languages becomes important. Normally, localization is the concern of individual software packages. However, in DIDComm, the participants may be using different software, and the localization may be a cross-cutting concern--Alice's software may need to send a localized message to Bob, who's running different software. It therefore becomes useful to explore a way to facilitate localization that allows interoperability without imposing undue burdens on any implementer or participant.

    NOTE: JSON-LD also describes a localization mechanism. We have chosen not to use it, for reasons enumerated in the RFC about JSON-LD compatibility.

    "},{"location":"features/0043-l10n/#tutorial","title":"Tutorial","text":"

    Here we introduce some flexible and easy-to-use conventions. Software that uses these conventions should be able to add localization value in several ways, depending on needs.

    "},{"location":"features/0043-l10n/#introducing-l10n","title":"Introducing ~l10n","text":"

    The default assumption about locale with respect to all DIDComm messages is that they are locale-independent, because they are going to be processed entirely by automation. Dates should be in ISO 8601 format, typically in UTC. Numbers should use JSON formatting rules (or, if embedded in strings, the \"C\" locale). Booleans and null values use JSON keywords.
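For example, a minimal Python sketch of these locale-independent conventions (illustrative only, not part of the RFC):

```python
import json
from datetime import datetime, timezone

# Locale-independent date: ISO 8601, in UTC, as the RFC recommends.
stamp = datetime(2018, 12, 15, 4, 29, 23, tzinfo=timezone.utc)
iso = stamp.isoformat().replace("+00:00", "Z")  # "2018-12-15T04:29:23Z"

# Locale-independent numbers, booleans, and null: plain JSON serialization.
payload = json.dumps({"count": 3, "ratio": 0.5, "ok": True, "note": None})
```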

    Strings tend to be somewhat more interesting. An agent message may contain many strings. Some will be keys; others may be values. Usually, keys do not need to be localized, as they will be interpreted by software (though see Advanced Use Case for an example that does). Among string values, some may be locale-sensitive, while others may not. For example, consider the following fictional message that proposes a meeting between Alice and Bob:

    Here, the string value named proposed_location need not be changed, no matter what language Bob speaks. But note might be worth localizing, in case Bob speaks French instead of English.

    We can't assume all text is localizable. This would result in silly processing, such as trying to translate the first_name field in a driver's license:

    The ~l10n decorator (so-named because \"localization\" has 10 letters between \"l\" and \"n\") may be added to the note field to meet this need:

    If you are not familiar with this notion of field decorators, please review the section about scope in the RFC on decorators.

    "},{"location":"features/0043-l10n/#decorator-at-message-scope","title":"Decorator at Message Scope","text":"

    The example above is minimal. It shows a French localized alternative for the string value of note in the note~l10n.fr field. Any number of these alternatives may be provided, for any set of locales. Deciding whether to use one depends on knowing the locale of the content that's already in note, so note~l10n.locale is also provided.
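A recipient's lookup might resolve these alternatives as in the following sketch (the field naming follows the examples in this RFC; the helper itself is illustrative):

```python
def localize(msg, field, preferred):
    """Return the best available text for `field`, given a preferred locale."""
    l10n = msg.get(field + "~l10n", {})
    if preferred in l10n:          # an explicit localized alternative exists
        return l10n[preferred]
    return msg[field]              # otherwise fall back to the original content

msg = {
    "note": "Let's have a picnic.",
    "note~l10n": {"locale": "en", "fr": "Allons pique-niquer."},
}
```

Here `localize(msg, "note", "fr")` yields the French alternative, while an unlisted locale such as `de` falls back to the original English content.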

But suppose we evolved our message type, and it ended up having 2 fields that were localization-worthy. Both would likely use the same locale in their values, but we don't really want to repeat that locale twice. The preferred way to handle this is to decorate the message with semantics that apply message-wide, and to decorate fields with semantics that apply just to field instances or to fields in the abstract. Following this pattern puts our example message into a more canonical form:

    "},{"location":"features/0043-l10n/#decorator-at-message-type-scope","title":"Decorator at Message Type Scope","text":"

    Now we are declaring, at message scope, that note and fallback_plan are localizable and that their locale is en.

It is worth noting that this information is probably true of all instances of messages of this type--not just this particular message. This raises the possibility of declaring the localization data at an even higher level of abstraction. We do this by moving the decorator from a message instance to a message type. Decorators on a message type are declared in a section of the associated RFC named Localization (or \"Localisation\", for folks that like a different locale's spelling rules :-). In our example, the relevant section of the RFC might look like this:

    This snippet contains one unfamiliar construct, catalogs, which will be discussed below. Ignore that for a moment and focus on the rest of the content. As this snippet mentions, the JSON fragment for ~l10n that's displayed in the running text of the RFC should also be checked in to github with the RFC's markdown as <message type name>~l10n.json, so automated tools can consume the content without parsing markdown.

    Notice that the markdown section is hyperlinked back to this RFC so developers unfamiliar with the mechanism will end up reading this RFC for more details.

    With this decorator on the message type, we can now send our original message, with no message or field decorators, and localization is still fully defined:

    Despite the terse message, its locale is known to be English, and the note field is known to be localizable, with current content also in English.

    One benefit of defining a ~l10n decorator for a message family is that developers can add localization support to their messages without changing field names or schema, and with only a minor semver revision to a message's version.

We expect most message types to use localization features in more or less this form. In fact, if localization settings have much in common across a message family, the Localization section of an RFC may be defined not just for a message type, but for a whole message family.

    "},{"location":"features/0043-l10n/#message-codes-and-catalogs","title":"Message Codes and Catalogs","text":"

    When the same text values are used over and over again (as opposed to the sort of unpredictable, human-provided text that we've seen in the note field thus far), it may be desirable to identify a piece of text by a code that describes its meaning, and to publish an inventory of these codes and their localized alternatives. By doing this, a message can avoid having to include a huge inventory of localized alternatives every time it is sent.

    We call this inventory of message codes and their localized alternatives a message catalog. Catalogs may be helpful to track a list of common errors (think of symbolic constants like EBADF and EBUSY, and the short explanatory strings associated with them, in Posix's <errno.h>). Catalogs let translation be done once, and reused globally. Also, the code for a message can be searched on the web, even when no localized alternative exists for a particular language. And the message text in a default language can undergo minor variation without invalidating translations or searches.

    If this usage is desired, a special subfield named code may be included inside the map of localized alternatives:

    Note, however, that a code for a localized message is not useful unless we know what that code means. To do that, we need to know where the code is defined. In other words, codes need a namespace or context. Usually, this namespace or context comes from the message family where the code is used, and codes are defined in the same RFC where the message family is defined.

    Message families that support localized text with predictable values should thus include or reference an official catalog of codes for those messages. A catalog is a dictionary of code \u2192 localized alternatives mappings. For example:
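A catalog might deserialize to a simple mapping, and lookup with graceful fallback could look like this sketch (the code and texts are hypothetical):

```python
# Hypothetical catalog content; real codes and texts live in an RFC's catalog file.
CATALOG = {
    "cant-find-route": {
        "en": "Unable to find a route to the specified agent.",
        "fr": "Impossible de trouver une route vers l'agent.",
    },
}

def lookup(code, lang, default_lang="en"):
    alternatives = CATALOG.get(code, {})
    # Prefer the requested language, then the default, then the bare code --
    # the code stays web-searchable even when no translation exists.
    return alternatives.get(lang) or alternatives.get(default_lang) or code
```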

    To associate this catalog with a message type, the RFC defining the message type should contain a \"Message Catalog\" section that looks like this:

Note the verbiage about an official, immutable URL. This is important because localized alternatives for a message code could be an attack vector if the message catalog isn't handled correctly. If a hacker is able to change the content of a catalog, they may be able to change how a message is interpreted by a human that's using localization support. For example, they could suggest that the en localized alternative for code \"warn-suspicious-key-in-use\" is \"Key has been properly verified and is trustworthy.\" By having a tamper-evident version of the catalog (e.g., in github or published on a blockchain), developers can write software that only deals with canonical text or dynamically translated text, never with something the hacker can manipulate.

    In addition, the following best practices are recommended to maximize catalog usefulness:

    1. Especially when displaying localized error text, software should also display the underlying code. (This is desirable anyway, as it allows searching the web for hints and discussion about the code.)

    2. Software that regularly deals with localizable fields of key messages should download a catalog of localizable alternatives in advance, rather than fetching it just in time.

    "},{"location":"features/0043-l10n/#connecting-code-with-its-catalog","title":"Connecting code with its catalog","text":"

We've described a catalog's structure and definition, but we haven't yet explained how it's referenced. This is done through the catalogs field inside a ~l10n decorator. There was an example above, in the example of a \"Localization\" section for an RFC. The field name, catalogs, is plural; its value is an array of URIs that reference specific catalog versions. Any catalogs listed in this array are searched, in the order given, to find the definition and corresponding localized alternatives for a given code.

    A catalogs field can be placed in a ~l10n decorator at various scopes. If it appears at the message or field level, the catalogs it lists are searched before the more general catalogs.
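The scope-ordered search can be sketched like this (illustrative helper; catalog contents are hypothetical):

```python
def resolve_code(code, lang, *catalog_scopes):
    """Search catalogs in scope order: field scope first, then message, then type."""
    for catalog in catalog_scopes:
        entry = catalog.get(code)
        if entry and lang in entry:
            return entry[lang]
    return None  # unknown code: caller should fall back to displaying the code itself

field_catalog = {"warn-x": {"en": "field-level text"}}
type_catalog = {"warn-x": {"en": "type-level text"}, "warn-y": {"en": "type-only text"}}
```

For a code defined in both, the field-scope catalog wins because it is searched before the more general one.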

    "},{"location":"features/0043-l10n/#advanced-use-case","title":"Advanced Use Case","text":"

    This section is not normative in this version of the RFC. It is considered experimental for now.

Let's consider a scenario that pushes the localization features to their limit. Suppose we have a family of DIDComm messages that's designed to exchange genealogical records. The main message type, record, has a fairly simple schema: it just contains record_type, record_date, and content. But content is designed to hold arbitrary sub-records from various archives: probate paperwork from France, military discharge records from Japan, christening certificates from Germany.

    Imagine that the UX we want to build on top of these messages is similar to the one at Ancestry.com:

    Notice that the names of fields in this UX are all given in English. But how likely is it that a christening certificate from Germany will have English field names like \"Birth Data\" and \"Marriage Date\" in its JSON?

    The record message behind data like this might be:

In order to translate this data, not just values but also keys need to have associated ~l10n data. We do this with a locales array. This allows us to specify very complex locale settings--including multiple locales in the same message, and locales on keys. We may still have the ~l10n.locale field and similar fields to establish defaults that are overridden in ~l10n.locales:

    \"~l10n\": {\n  \"locales\": {\n    \"de\": [\"content.key@*\", \"content.Geburtstag\", \"content.Heiratsdatum\"]\n  }\n}\n

    This says that all fields under content have names that are German, and that the content.Geburtstag and content.Heiratsdatum field values (which are of type date) are also represented in a German locale rather than the default ISO 8601.

    Besides supporting key localization, having a ~l10n.locales array on a message, message type, or message family scope is an elegant, concise way to cope with messages that have mixed field locales (fields in a variety of locales).

    "},{"location":"features/0043-l10n/#drawbacks","title":"Drawbacks","text":"

The major problem with this feature is that it introduces complexity. However, it is complexity that most developers can ignore unless or until they care about localization. Once that becomes a concern, the complexity provides important features--and it remains nicely encapsulated.

    "},{"location":"features/0043-l10n/#rationale-and-alternatives","title":"Rationale and alternatives","text":"

    We could choose not to support this feature.

    We could also use JSON-LD's @language feature. However, this feature has a number of limitations, as documented in the RFC about JSON-LD compatibility.

    "},{"location":"features/0043-l10n/#prior-art","title":"Prior art","text":"

    Java's property bundle mechanism, Posix's gettext() function, and many other localization techniques are well known. They are not directly applicable, mostly because they don't address the need to communicate with software that may or may not be using the same underlying mapping/localization mechanism.

    "},{"location":"features/0043-l10n/#unresolved-questions","title":"Unresolved questions","text":""},{"location":"features/0043-l10n/#implementations","title":"Implementations","text":"

    The following lists the implementations (if any) of this RFC. Please do a pull request to add your implementation. If the implementation is open source, include a link to the repo or to the implementation within the repo. Please be consistent in the \"Name\" field so that a mechanical processing of the RFCs can generate a list of all RFCs supported by an Aries implementation.

    Name / Link Implementation Notes RFC 0035: Report Problem Protocol Depends on this mechanism to localize the description field of an error. RFC 0036: Issue Credential Protocol Depends on this mechanism to localize the comment field of a propose-credential, offer-credential, request-credential, or issue-credential message. RFC 0037: Present Proof Protocol Depends on this mechanism to localize the comment field of a propose-presentation, request-presentation, or presentation message. RFC 0193: Coin Flip Protocol Uses this mechanism to localize the comment field, when human interaction around coin tosses is a goal."},{"location":"features/0043-l10n/localization-section/","title":"Localization section","text":""},{"location":"features/0043-l10n/localization-section/#localization","title":"Localization","text":"

    By default, all instances of this message type carry localization metadata in the form of an implicit ~l10n decorator that looks like this:

    This ~l10n JSON fragment is checked in next to the narrative content of this RFC as l10n.json.

    Individual messages can use the ~l10n decorator to supplement or override these settings.

    "},{"location":"features/0043-l10n/message-catalog-section/","title":"Message catalog section","text":""},{"location":"features/0043-l10n/message-catalog-section/#message-catalog","title":"Message Catalog","text":"

    By default, all instances of this message type assume the following catalog in their @l10n data:

    When referencing this catalog, please be sure you have the correct version. The official, immutable URL to this version of the catalog file is:

    https://github.com/x/y/blob/dc525a27d3b75/text/myfamily/catalog.json\n

    For more information, see the Message Catalog section of the localization RFC.

    "},{"location":"features/0044-didcomm-file-and-mime-types/","title":"Aries RFC 0044: DIDComm File and MIME Types","text":""},{"location":"features/0044-didcomm-file-and-mime-types/#summary","title":"Summary","text":"

    Defines the media (MIME) types and file types that hold DIDComm messages in encrypted, signed, and plaintext forms. Covers DIDComm V1, plus a little of V2 to clarify how DIDComm versions are detected.

    "},{"location":"features/0044-didcomm-file-and-mime-types/#motivation","title":"Motivation","text":"

    Most work on DIDComm so far has assumed HTTP as a transport. However, we know that DID communication is transport-agnostic. We should be able to say the same thing no matter which channel we use.

    An incredibly important channel or transport for messages is digital files. Files can be attached to messages in email or chat, can be carried around on a thumb drive, can be backed up, can be distributed via CDN, can be replicated on distributed file systems like IPFS, can be inserted in an object store or in content-addressable storage, can be viewed and modified in editors, and support a million other uses.

    We need to define how files and attachments can contain DIDComm messages, and what the semantics of processing such files will be.

    "},{"location":"features/0044-didcomm-file-and-mime-types/#tutorial","title":"Tutorial","text":""},{"location":"features/0044-didcomm-file-and-mime-types/#media-types","title":"Media Types","text":"

    Media types are based on the conventions of RFC6838. Similar to RFC7515, the application/ prefix MAY be omitted and the recipient MUST treat media types not containing / as having the application/ prefix present.
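That normalization rule is mechanical; a sketch:

```python
def normalize_media_type(value):
    """Treat a media type without '/' as having an implicit 'application/' prefix."""
    return value if "/" in value else "application/" + value
```

So an abbreviated `didcomm-envelope-enc` is processed as `application/didcomm-envelope-enc`, while values that already contain a `/` pass through unchanged.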

    "},{"location":"features/0044-didcomm-file-and-mime-types/#didcomm-v1-encrypted-envelope-dee","title":"DIDComm v1 Encrypted Envelope (*.dee)","text":"

    The raw bytes of an encrypted envelope may be persisted to a file without any modifications whatsoever. In such a case, the data will be encrypted and packaged such that only specific receiver(s) can process it. However, the file will contain a JOSE-style header that can be used by magic bytes algorithms to detect its type reliably.

The file extension associated with this filetype is dee, giving a globbing pattern of *.dee; this should be read as \"STAR DOT D E E\" or as \"D E E\" files.

    The name of this file format is \"DIDComm V1 Encrypted Envelope.\" We expect people to say, \"I am looking at a DIDComm V1 Encrypted Envelope\", or \"This file is in DIDComm V1 Encrypted Envelope format\", or \"Does my editor have a DIDComm V1 Encrypted Envelope plugin?\"

    Although the format of encrypted envelopes is derived from JSON and the JWT/JWE family of specs, no useful processing of these files will take place by viewing them as JSON, and viewing them as generic JWEs will greatly constrain which semantics are applied. Therefore, the recommended MIME type for *.dee files is application/didcomm-envelope-enc, with application/jwe as a fallback, and application/json as an even less desirable fallback. (In this, we are making a choice similar to the one that views *.docx files primarily as application/msword instead of application/xml.) If format evolution takes place, the version could become a parameter as described in RFC 1341: application/didcomm-envelope-enc;v=2.

    A recipient using the media type value MUST treat it as if \u201capplication/\u201d were prepended to any \"typ\" or \"cty\" value not containing a \u2018/\u2019 in compliance with the JWE /JWS family of specs.

    The default action for DIDComm V1 Encrypted Envelopes (what happens when a user double-clicks one) should be Handle (that is, process the message as if it had just arrived by some other transport), if the software handling the message is an agent. In other types of software, the default action might be to view the file. Other useful actions might include Send, Attach (to email, chat, etc), Open with agent, and Decrypt to *.dp.

    NOTE: The analog to this content type in DIDComm v2 is called a \"DIDComm Encrypted Message.\" Its format is slightly different. For more info, see Detecting DIDComm Versions below.

    "},{"location":"features/0044-didcomm-file-and-mime-types/#didcomm-v1-signed-envelopes-dse","title":"DIDComm V1 Signed Envelopes (*.dse)","text":"

    When DIDComm messages are signed, the signing uses a JWS signing envelope. Often signing is unnecessary, since authenticated encryption proves the sender of the message to the recipient(s), but sometimes when non-repudiation is required, this envelope is used. It is also required when the recipient of a message is unknown, but tamper-evidence is still required, as in the case of a public invitation.

    By convention, DIDComm Signed Envelopes contain plaintext; if encryption is used in combination with signing, the DSE goes inside the DEE.

The file extension associated with this filetype is dse, giving a globbing pattern of *.dse; this should be read as \"STAR DOT D S E\" or as \"D S E\" files.

    The name of this file format is \"DIDComm V1 Signed Envelope.\" We expect people to say, \"I am looking at a DIDComm V1 Signed Envelope\", or \"This file is in DIDComm V1 Signed Envelope format\", or \"Does my editor have a DIDComm V1 Signed Envelope plugin?\"

As with *.dee files, the best way to handle *.dse files is to map them to a custom MIME type. The recommendation is application/didcomm-sig-env, with application/jws as a fallback, and application/json as an even less desirable fallback.

    A recipient using the media type value MUST treat it as if \u201capplication/\u201d were prepended to any \"typ\" or \"cty\" value not containing a \u2018/\u2019 in compliance with the JWE /JWS family of specs.

The default action for DIDComm V1 Signed Envelopes (what happens when a user double-clicks one) should be Validate (that is, process the signature to see if it is valid).

    NOTE: The analog to this content type in DIDComm v2 is called a \"DIDComm Signed Message.\" Its format is slightly different. For more info, see Detecting DIDComm Versions below.

    "},{"location":"features/0044-didcomm-file-and-mime-types/#didcomm-v1-messages-dm","title":"DIDComm V1 Messages (*.dm)","text":"

    The plaintext representation of a DIDComm message--something like a credential offer, a proof request, a connection invitation, or anything else worthy of a DIDComm protocol--is JSON. As such, it should be editable by anything that expects JSON.

    However, all such files have some additional conventions, over and above the simple requirements of JSON. For example, key decorators have special meaning ( @id, ~thread, @trace , etc). Nonces may be especially significant. The format of particular values such as DID and DID+key references is important. Therefore, we refer to these messages generically as JSON, but we also define a file format for tools that are aware of the additional semantics.

    The file extension associated with this filetype is *.dm, and should be read as \"STAR DOT D M\" or \"D M\" files. If a format evolution takes place, a subsequent version could be noted by appending a digit, as in *.dm2 for second-generation dm files.

    The name of this file format is \"DIDComm V1 Message.\" We expect people to say, \"I am looking at a DIDComm V1 Message\", or \"This file is in DIDComm V1 Message format\", or \"Does my editor have a DIDComm V1 Message plugin?\" For extra clarity, it is acceptable to add the adjective \"plaintext\", as in \"DIDComm V1 Plaintext Message.\"

    The most specific MIME type of *.dm files is application/json;flavor=didcomm-msg--or, if more generic handling is appropriate, just application/json.

    A recipient using the media type value MUST treat it as if \u201capplication/\u201d were prepended to any \"typ\" or \"cty\" value not containing a \u2018/\u2019 in compliance with the JWE /JWS family of specs.

    The default action for DIDComm V1 Messages should be to View or Validate them. Other interesting actions might be Encrypt to *.dee, Sign to *.dse, and Find definition of protocol.

    NOTE: The analog to this content type in DIDComm v2 is called a \"DIDComm Plaintext Message.\" Its format is slightly different. For more info, see Detecting DIDComm Versions below.

    As a general rule, DIDComm messages that are being sent in production use cases of DID communication should be stored in encrypted form (*.dee) at rest. There are cases where this might not be preferred, e.g., providing documentation of the format of message or during a debugging scenario using message tracing. However, these are exceptional cases. Storing meaningful *.dm files decrypted is not a security best practice, since it replaces all the privacy and security guarantees provided by the DID communication mechanism with only the ACLs and other security barriers that are offered by the container.

    "},{"location":"features/0044-didcomm-file-and-mime-types/#native-object-representation","title":"Native Object representation","text":"

    This is not a file format, but rather an in-memory form of a DIDComm Message using whatever object hierarchy is natural for a programming language to map to and from JSON. For example, in python, the natural Native Object format is a dict that contains properties indexed by strings. This is the representation that python's json library expects when converting to JSON, and the format it produces when converting from JSON. In Java, Native Object format might be a bean. In C++, it might be a std::map<std::string, variant>...

    There can be more than one Native Object representation for a given programming language.

    Native Object forms are never rendered directly to files; rather, they are serialized to DIDComm Plaintext Format and then persisted (likely after also encrypting to DIDComm V1 Encrypted Envelope).
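In Python, for example, the round trip between the Native Object form and plaintext JSON is just (sketch):

```python
import json

# Python's natural Native Object form is a dict keyed by strings.
native = {
    "@type": "https://didcomm.org/trust_ping/1.0/ping",
    "@id": "518be002-de8e-456e-b3d5-8fe472477a86",
    "response_requested": True,
}

plaintext = json.dumps(native)     # serialize before encrypting and/or persisting
restored = json.loads(plaintext)   # parse back into the Native Object form
```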

    "},{"location":"features/0044-didcomm-file-and-mime-types/#negotiating-compatibility","title":"Negotiating Compatibility","text":"

    When parties want to communicate via DIDComm, a number of mechanisms must align. These include:

    1. The type of service endpoint used by each party
    2. The key types used for encryption and/or signing
    3. The format of the encryption and/or signing envelopes
    4. The encoding of plaintext messages
    5. The protocol used to forward and route
    6. The protocol embodied in the plaintext messages

    Although DIDComm allows flexibility in each of these choices, it is not expected that a given DIDComm implementation will support many permutations. Rather, we expect a few sets of choices that commonly go together. We call a set of choices that work well together a profile. Profiles are identified by a string that matches the conventions of IANA media types, but they express choices about plaintext, encryption, signing, and routing in a single value. The following profile identifiers are defined in this version of the RFC:

    "},{"location":"features/0044-didcomm-file-and-mime-types/#defined-profiles","title":"Defined Profiles","text":"

    Profiles are named in the accept section of a DIDComm service endpoint and in an out-of-band message. When Alice declares that she accepts didcomm/aip2;env=rfc19, she is making a declaration about more than her own endpoint. She is saying that all publicly visible steps in an inbound route to her will use the didcomm/aip2;env=rfc19 profile, such that a sender only has to use didcomm/aip2;env=rfc19 choices to get the message from Alice's outermost mediator to Alice's edge. It is up to Alice to select and configure mediators and internal routing in such a way that this is true for the sender.

    "},{"location":"features/0044-didcomm-file-and-mime-types/#detecting-didcomm-versions","title":"Detecting DIDComm Versions","text":"

    Because media types differ from DIDComm V1 to V2, and because media types are easy to communicate in headers and message fields, they are a convenient way to detect which version of DIDComm applies in a given context:

    Nature of Content V1 V2 encrypted application/didcomm-envelope-encDIDComm V1 Encrypted Envelope*.dee application/didcomm-encrypted+jsonDIDComm Encrypted Message*.dcem signed application/didcomm-sig-envDIDComm V1 Signed Envelope*.dse application/didcomm-signed+jsonDIDComm Signed Message*.dcsm plaintext application/json;flavor=didcomm-msgDIDComm V1 Message*.dm application/didcomm-plain+jsonDIDComm Plaintext Message*.dcpm
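The table's mapping lends itself to a simple detection routine (a sketch; the media type strings are taken verbatim from the table above):

```python
V1_MEDIA_TYPES = {
    "application/didcomm-envelope-enc",
    "application/didcomm-sig-env",
    "application/json;flavor=didcomm-msg",
}
V2_MEDIA_TYPES = {
    "application/didcomm-encrypted+json",
    "application/didcomm-signed+json",
    "application/didcomm-plain+json",
}

def didcomm_version(media_type):
    if media_type in V1_MEDIA_TYPES:
        return 1
    if media_type in V2_MEDIA_TYPES:
        return 2
    return None  # not a recognized DIDComm media type
```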

    It is also recommended that agents implementing Discover Features Protocol v2 respond to queries about supported DIDComm versions using the didcomm-version feature name. This allows queries about what an agent is willing to support, whereas the media type mechanism describes what is in active use. The values that should be returned from such a query are URIs that tell where DIDComm versions are developed:

    Version URI V1 https://github.com/hyperledger/aries-rfcs V2 https://github.com/decentralized-identity/didcomm-messaging"},{"location":"features/0044-didcomm-file-and-mime-types/#what-it-means-to-implement-this-rfc","title":"What it means to \"implement\" this RFC","text":"

    For the purposes of Aries Interop Profiles, an agent \"implements\" this RFC when:

    "},{"location":"features/0044-didcomm-file-and-mime-types/#reference","title":"Reference","text":"

    The file extensions and MIME types described here are also accompanied by suggested graphics. Vector forms of these graphics are available.

    "},{"location":"features/0044-didcomm-file-and-mime-types/#implementations","title":"Implementations","text":"

    The following lists the implementations (if any) of this RFC. Please do a pull request to add your implementation. If the implementation is open source, include a link to the repo or to the implementation within the repo. Please be consistent in the \"Name\" field so that a mechanical processing of the RFCs can generate a list of all RFCs supported by an Aries implementation.

    Name / Link Implementation Notes"},{"location":"features/0048-trust-ping/","title":"Aries RFC 0048: Trust Ping Protocol 1.0","text":""},{"location":"features/0048-trust-ping/#summary","title":"Summary","text":"

    Describe a standard way for agents to test connectivity, responsiveness, and security of a pairwise channel.

    "},{"location":"features/0048-trust-ping/#motivation","title":"Motivation","text":"

    Agents are distributed. They are not guaranteed to be connected or running all the time. They support a variety of transports, speak a variety of protocols, and run software from many different vendors.

This can make it very difficult to prove that two agents have a functional pairwise channel. Troubleshooting connectivity, responsiveness, and security is vital.

    "},{"location":"features/0048-trust-ping/#tutorial","title":"Tutorial","text":"

    This protocol is analogous to the familiar ping command in networking--but because it operates over agent-to-agent channels, it is transport agnostic and asynchronous, and it can produce insights into privacy and security that a regular ping cannot.

    "},{"location":"features/0048-trust-ping/#roles","title":"Roles","text":"

    There are two parties in a trust ping: the sender and the receiver. The sender initiates the trust ping. The receiver responds. If the receiver wants to do a ping of their own, they can, but this is a new interaction in which they become the sender.

    "},{"location":"features/0048-trust-ping/#messages","title":"Messages","text":"

    The trust ping interaction begins when sender creates a ping message like this:

    {\n  \"@type\": \"https://didcomm.org/trust_ping/1.0/ping\",\n  \"@id\": \"518be002-de8e-456e-b3d5-8fe472477a86\",\n  \"~timing\": {\n    \"out_time\": \"2018-12-15 04:29:23Z\",\n    \"expires_time\": \"2018-12-15 05:29:23Z\",\n    \"delay_milli\": 0\n  },\n  \"comment\": \"Hi. Are you listening?\",\n  \"response_requested\": true\n}\n

    Only @type and @id are required; ~timing.out_time, ~timing.expires_time, and ~timing.delay_milli are optional message timing decorators, and comment follows the conventions of localizable message fields. If present, it may be used to display a human-friendly description of the ping to a user that gives approval to respond. (Whether an agent responds to a trust ping is a decision for each agent owner to make, per policy and/or interaction with their agent.)

The response_requested field deserves special mention. The normal expectation of a trust ping is that it elicits a response. However, it may be desirable to do a unilateral trust ping at times--communicate information without any expectation of a reaction. In this case, \"response_requested\": false may be used. This might be useful, for example, to defeat correlation between request and response (to generate noise). Or agents A and B might agree that periodically A will ping B without a response, as a way of evidencing that A is up and functional. If response_requested is false, then the receiver MUST NOT respond.

    When the message arrives at the receiver, assuming that response_requested is not false, the receiver should reply as quickly as possible with a ping_response message that looks like this:

    {\n  \"@type\": \"https://didcomm.org/trust_ping/1.0/ping_response\",\n  \"@id\": \"e002518b-456e-b3d5-de8e-7a86fe472847\",\n  \"~thread\": { \"thid\": \"518be002-de8e-456e-b3d5-8fe472477a86\" },\n  \"~timing\": { \"in_time\": \"2018-12-15 04:29:28Z\", \"out_time\": \"2018-12-15 04:31:00Z\"},\n  \"comment\": \"Hi yourself. I'm here.\"\n}\n

    Here, @type and ~thread are required, and the rest is optional.
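    The threading relationship between a ping and its ping_response can be sketched as follows. This is an illustrative Python sketch, not a normative API; the helper name and defaults are assumptions:

```python
import uuid
from datetime import datetime, timezone

def make_ping_response(ping: dict, comment: str = "Hi yourself. I'm here.") -> dict:
    """Build a trust_ping/1.0/ping_response threaded to a received ping.

    Only @type and ~thread are required; ~timing and comment are optional.
    """
    now = datetime.now(timezone.utc).strftime("%Y-%m-%d %H:%M:%SZ")
    return {
        "@type": "https://didcomm.org/trust_ping/1.0/ping_response",
        "@id": str(uuid.uuid4()),
        "~thread": {"thid": ping["@id"]},              # thread back to the ping's @id
        "~timing": {"in_time": now, "out_time": now},  # optional timing decorator
        "comment": comment,                            # optional, localizable
    }

ping = {
    "@type": "https://didcomm.org/trust_ping/1.0/ping",
    "@id": "518be002-de8e-456e-b3d5-8fe472477a86",
    "response_requested": True,
}
response = make_ping_response(ping)
```

Note that a receiver would only build this response when response_requested is not false.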

    "},{"location":"features/0048-trust-ping/#trust","title":"Trust","text":"

    This is the \"trust ping protocol\", not just the \"ping protocol.\" The \"trust\" in its name comes from several features that the interaction gains by virtue of its use of standard agent-to-agent conventions:

    1. Messages should be associated with a message trust context that allows sender and receiver to evaluate how much trust can be placed in the channel. For example, both sender and receiver can check whether messages are encrypted with suitable algorithms and keys.

    2. Messages may be targeted at any known agent in the other party's sovereign domain, using cross-domain routing conventions, and may be encrypted and packaged to expose exactly and only the information desired, at each hop along the way. This allows two parties to evaluate the completeness of a channel and the alignment of all agents that maintain it.

    3. This interaction may be traced using the general message tracing mechanism.

    "},{"location":"features/0048-trust-ping/#implementations","title":"Implementations","text":"

    The following lists the implementations (if any) of this RFC. Please do a pull request to add your implementation. If the implementation is open source, include a link to the repo or to the implementation within the repo. Please be consistent in the \"Name\" field so that a mechanical processing of the RFCs can generate a list of all RFCs supported by an Aries implementation.

    Name / Link Implementation Notes Indy Cloud Agent - Python Reference agent implementation contributed by Sovrin Foundation and Community; MISSING test results Aries Framework - .NET .NET framework for building agents of all types; MISSING test results Streetcred.id Commercial mobile and web app built using Aries Framework - .NET; MISSING test results Aries Cloud Agent - Python Contributed by the government of British Columbia.; MISSING test results Aries Static Agent - Python Useful for cron jobs and other simple, automated use cases.; MISSING test results Aries Protocol Test Suite MISSING test results"},{"location":"features/0056-service-decorator/","title":"Aries RFC 0056: Service Decorator","text":""},{"location":"features/0056-service-decorator/#summary","title":"Summary","text":"

    The ~service decorator describes a DID service endpoint inline to a message.

    "},{"location":"features/0056-service-decorator/#motivation","title":"Motivation","text":"

    This allows messages to self-contain endpoint and routing information that would normally be found in a DID Document. This comes in handy when DIDs or DID Documents have not yet been exchanged.

    Examples include the Connect Protocol and Challenge Protocols.

    The ~service decorator on a message contains the service definition that you might expect to find in a DID Document. These values function the same way.

    "},{"location":"features/0056-service-decorator/#tutorial","title":"Tutorial","text":"

    Usage looks like this, with the contents defined in the Service Endpoint section of the DID Spec:

    {\n  \"@type\": \"somemessagetype\",\n  \"~service\": {\n    \"recipientKeys\": [\"8HH5gYEeNc3z7PYXmd54d4x6qAfCNrqQqEB3nS7Zfu7K\"],\n    \"routingKeys\": [\"8HH5gYEeNc3z7PYXmd54d4x6qAfCNrqQqEB3nS7Zfu7K\"],\n    \"serviceEndpoint\": \"https://example.com/endpoint\"\n  }\n}\n

    "},{"location":"features/0056-service-decorator/#reference","title":"Reference","text":"

    The contents of the ~service decorator are defined by the Service Endpoint section of the DID Spec.

    The decorator should not be used when the message recipient already has a service endpoint.
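    Attaching the decorator is mechanically simple, as this Python sketch shows (the helper name is illustrative; the field names come from the DID Spec service endpoint section referenced above):

```python
def attach_service_decorator(message: dict, endpoint: str,
                             recipient_keys: list, routing_keys=None) -> dict:
    """Return a copy of the message with an inline ~service decorator.

    A sketch only: use when the recipient's service endpoint is not
    already known via a DID Document.
    """
    decorated = dict(message)  # shallow copy; leave the caller's message intact
    decorated["~service"] = {
        "recipientKeys": recipient_keys,
        "routingKeys": routing_keys or [],
        "serviceEndpoint": endpoint,
    }
    return decorated

msg = attach_service_decorator(
    {"@type": "somemessagetype"},
    "https://example.com/endpoint",
    ["8HH5gYEeNc3z7PYXmd54d4x6qAfCNrqQqEB3nS7Zfu7K"],
)
```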

    "},{"location":"features/0056-service-decorator/#drawbacks","title":"Drawbacks","text":"

    The current service block definition is not very compact, and could cause problems when attempting to transfer a message via QR code.

    "},{"location":"features/0056-service-decorator/#rationale-and-alternatives","title":"Rationale and alternatives","text":""},{"location":"features/0056-service-decorator/#prior-art","title":"Prior art","text":"

    The Connect Protocol had previously included this same information as an attribute of the messages themselves.

    "},{"location":"features/0056-service-decorator/#unresolved-questions","title":"Unresolved questions","text":""},{"location":"features/0056-service-decorator/#implementations","title":"Implementations","text":"

    The following lists the implementations (if any) of this RFC. Please do a pull request to add your implementation. If the implementation is open source, include a link to the repo or to the implementation within the repo. Please be consistent in the \"Name\" field so that a mechanical processing of the RFCs can generate a list of all RFCs supported by an Aries implementation.

    Name / Link Implementation Notes"},{"location":"features/0066-non-repudiable-cryptographic-envelope/","title":"Aries RFC 0066: Non-Repudiable Signature for Cryptographic Envelope","text":""},{"location":"features/0066-non-repudiable-cryptographic-envelope/#summary","title":"Summary","text":"

    This RFC is intended to highlight the ways that a non-repudiable signature can be added to a message field or message family using the JSON Web Signatures (JWS) format.

    "},{"location":"features/0066-non-repudiable-cryptographic-envelope/#motivation","title":"Motivation","text":"

    Non-repudiable digital signatures serve as a beneficial method to provide proof of provenance of a message. There are many use cases where non-repudiable signatures are necessary and provide value; one example is a bank keeping a signed record of a mortgage agreement. Some of the early use cases where this will be of value are the connection initiation protocol and the ephemeral challenge protocol. The expected outcome of this RFC is to define a method for using non-repudiable digital signatures in the cryptographic envelope layer of DID Communications.

    "},{"location":"features/0066-non-repudiable-cryptographic-envelope/#tutorial","title":"Tutorial","text":""},{"location":"features/0066-non-repudiable-cryptographic-envelope/#json-web-signatures","title":"JSON Web Signatures","text":"

    The JSON Web Signatures specification is written to define how to represent content secured with digital signatures or Message Authentication Codes (MACs) using JavaScript Object Notation (JSON) based data structures.

    Our particular interest is in the use of non-repudiable digital signature using the ed25519 curve with edDSA signatures to sign invitation messages as well as sign full content layer messages.

    "},{"location":"features/0066-non-repudiable-cryptographic-envelope/#when-should-non-repudiable-signatures-be-used","title":"When should non-repudiable signatures be used?","text":"

    As highlighted in the repudiation RFC #0049, non-repudiable signatures are not always necessary and SHOULD NOT be used by default. The primary instances where a non-repudiable digital signature should be used is when a signer expects and considers it acceptable that a receiver can prove the sender sent the message.

    If Alice is entering into a borrower:lender relationship with Carol, Carol needs to prove to third parties that Alice, and only Alice, incurred the legal obligation.

    A good rule of thumb for a developer to decide when to use a non-repudiable signature is:

    \"Does the Receiver need to be able to prove who created the message to another person?\"

    In most cases, the answer to this is likely no. The few cases where it does make sense is when a message is establishing some burden of legal liability.

    "},{"location":"features/0066-non-repudiable-cryptographic-envelope/#reference","title":"Reference","text":"


    At a high level, the usage of a digital signature should occur before a message is encrypted. There's some cases where this may not make sense. This RFC will highlight a few different examples of how non-repudiable digital signatures could be used.

    "},{"location":"features/0066-non-repudiable-cryptographic-envelope/#connect-protocol-example","title":"Connect protocol example","text":"

    Starting with an initial connections/1.0/invitation message like this:

    {\n    \"@type\": \"https://didcomm.org/connections/1.0/invitation\",\n    \"@id\": \"12345678900987654321\",\n    \"label\": \"Alice\",\n    \"recipientKeys\": [\"8HH5gYEeNc3z7PYXmd54d4x6qAfCNrqQqEB3nS7Zfu7K\"],\n    \"serviceEndpoint\": \"https://example.com/endpoint\",\n    \"routingKeys\": [\"8HH5gYEeNc3z7PYXmd54d4x6qAfCNrqQqEB3nS7Zfu7K\"]\n}\n

    We would then base64URL encode this message like this:

    eyJAdHlwZSI6ImRpZDpzb3Y6QnpDYnNOWWhNcmpIaXFaRFRVQVNIZztzcGVjL2Nvbm5lY3Rpb25zLzEuMC9pbnZpdGF0aW9uIiwiQGlkIjoiMTIzNDU2Nzg5MDA5ODc2NTQzMjEiLCJsYWJlbCI6IkFsaWNlIiwicmVjaXBpZW50S2V5cyI6WyI4SEg1Z1lFZU5jM3o3UFlYbWQ1NGQ0eDZxQWZDTnJxUXFFQjNuUzdaZnU3SyJdLCJzZXJ2aWNlRW5kcG9pbnQiOiJodHRwczovL2V4YW1wbGUuY29tL2VuZHBvaW50Iiwicm91dGluZ0tleXMiOlsiOEhINWdZRWVOYzN6N1BZWG1kNTRkNHg2cUFmQ05ycVFxRUIzblM3WmZ1N0siXX0=\n

    This base64URL encoded string would then become the payload in the JWS.

    Using the compact serialization format, our JOSE Header would look like this:

    {\n    \"alg\":\"EdDSA\",\n    \"kid\":\"FYmoFw55GeQH7SRFa37dkx1d2dZ3zUF8ckg7wmL7ofN4\"\n}\n

    alg: specifies the signature algorithm used. kid: specifies the key identifier; in the case of DIDComm, this will be a base58 encoded ed25519 key.

    To sign, we would combine the JOSE Header with the payload and separate it using a period. This would be the resulting data that would be signed:

    ewogICAgImFsZyI6IkVkRFNBIiwKICAgICJraWQiOiJGWW1vRnc1NUdlUUg3U1JGYTM3ZGt4MWQyZFozelVGOGNrZzd3bUw3b2ZONCIKfQ==.eyJAdHlwZSI6ImRpZDpzb3Y6QnpDYnNOWWhNcmpIaXFaRFRVQVNIZztzcGVjL2Nvbm5lY3Rpb25zLzEuMC9pbnZpdGF0aW9uIiwiQGlkIjoiMTIzNDU2Nzg5MDA5ODc2NTQzMjEiLCJsYWJlbCI6IkFsaWNlIiwicmVjaXBpZW50S2V5cyI6WyI4SEg1Z1lFZU5jM3o3UFlYbWQ1NGQ0eDZxQWZDTnJxUXFFQjNuUzdaZnU3SyJdLCJzZXJ2aWNlRW5kcG9pbnQiOiJodHRwczovL2V4YW1wbGUuY29tL2VuZHBvaW50Iiwicm91dGluZ0tleXMiOlsiOEhINWdZRWVOYzN6N1BZWG1kNTRkNHg2cUFmQ05ycVFxRUIzblM3WmZ1N0siXX0=\n

    and the resulting signature would be:

    cwKY4Qhz0IFG9rGqNjcR-6K1NJqgyoGhso28ZGYkOPNI3C8rO6lmjwYstY0Fa2ew8jaFB-jWQN55kOTL5oHVDQ==\n

    The final output would then produce this:

    ewogICAgImFsZyI6IkVkRFNBIiwKICAgICJraWQiOiJGWW1vRnc1NUdlUUg3U1JGYTM3ZGt4MWQyZFozelVGOGNrZzd3bUw3b2ZONCIKfQ==.eyJAdHlwZSI6ImRpZDpzb3Y6QnpDYnNOWWhNcmpIaXFaRFRVQVNIZztzcGVjL2Nvbm5lY3Rpb25zLzEuMC9pbnZpdGF0aW9uIiwiQGlkIjoiMTIzNDU2Nzg5MDA5ODc2NTQzMjEiLCJsYWJlbCI6IkFsaWNlIiwicmVjaXBpZW50S2V5cyI6WyI4SEg1Z1lFZU5jM3o3UFlYbWQ1NGQ0eDZxQWZDTnJxUXFFQjNuUzdaZnU3SyJdLCJzZXJ2aWNlRW5kcG9pbnQiOiJodHRwczovL2V4YW1wbGUuY29tL2VuZHBvaW50Iiwicm91dGluZ0tleXMiOlsiOEhINWdZRWVOYzN6N1BZWG1kNTRkNHg2cUFmQ05ycVFxRUIzblM3WmZ1N0siXX0=.cwKY4Qhz0IFG9rGqNjcR-6K1NJqgyoGhso28ZGYkOPNI3C8rO6lmjwYstY0Fa2ew8jaFB-jWQN55kOTL5oHVDQ==\n
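    The assembly of the compact serialization above can be sketched in Python. The signing function itself is left as a stand-in (any Ed25519/EdDSA signer would do); note that the examples in this RFC retain base64 padding, so the sketch does too:

```python
import base64
import json

def b64url(data: bytes) -> str:
    # This RFC's examples keep base64url padding, so we keep it here as well.
    return base64.urlsafe_b64encode(data).decode()

def compact_jws(header: dict, payload: dict, sign) -> str:
    """Assemble header.payload.signature in JWS compact serialization.

    `sign` is a stand-in for an EdDSA signer taking bytes and returning
    the raw signature bytes; it is an assumption of this sketch.
    """
    signing_input = b64url(json.dumps(header).encode()) + "." + \
                    b64url(json.dumps(payload).encode())
    return signing_input + "." + b64url(sign(signing_input.encode()))

token = compact_jws(
    {"alg": "EdDSA", "kid": "FYmoFw55GeQH7SRFa37dkx1d2dZ3zUF8ckg7wmL7ofN4"},
    {"@type": "https://didcomm.org/connections/1.0/invitation",
     "@id": "12345678900987654321"},
    sign=lambda data: b"<64-byte ed25519 signature>",  # placeholder signer
)
```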
    "},{"location":"features/0066-non-repudiable-cryptographic-envelope/#basic-message-protocol-example","title":"Basic Message protocol example","text":""},{"location":"features/0066-non-repudiable-cryptographic-envelope/#sign-and-encrypt-process","title":"Sign and encrypt process","text":"

    Next is an example that showcases what a basic message would look like. Since this message would utilize a connection to encrypt the message, we will produce a JWS first, and then encrypt the outputted compact JWS.

    We would first encode our JOSE Header which looks like this:

    {\n    \"alg\": \"edDSA\",\n    \"kid\": \"7XVZJUuKtfYeN1W4Dq2Tw2ameG6gC1amxL7xZSsZxQCK\"\n}\n

    and when base64url encoded it would be converted to this:

    eyJhbGciOiAiZWREU0EiLCAia2lkIjogIjdYVlpKVXVLdGZZZU4xVzREcTJUdzJhbWVHNmdDMWFteEw3eFpTc1p4UUNLIn0=\n

    Next we'll take our content layer message, which, as an example, is the JSON provided below:

    {\n    \"@id\": \"123456780\",\n    \"@type\": \"https://didcomm.org/basicmessage/1.0/message\",\n    \"~l10n\": { \"locale\": \"en\" },\n    \"sent_time\": \"2019-01-15 18:42:01Z\",\n    \"content\": \"Your hovercraft is full of eels.\"\n}\n

    and now we'll base64url encode this message which results in this output:

    eyJjb250ZW50IjogIllvdXIgaG92ZXJjcmFmdCBpcyBmdWxsIG9mIGVlbHMuIiwgInNlbnRfdGltZSI6ICIyMDE5LTAxLTE1IDE4OjQyOjAxWiIsICJAdHlwZSI6ICJkaWQ6c292OkJ6Q2JzTlloTXJqSGlxWkRUVUFTSGc7c3BlYy9iYXNpY21lc3NhZ2UvMS4wL21lc3NhZ2UiLCAiQGlkIjogIjEyMzQ1Njc4MCIsICJ-bDEwbiI6IHsibG9jYWxlIjogImVuIn19\n

    Next, they should be concatenated using a period (.) as a delimiter character which would produce this output:

    eyJhbGciOiAiZWREU0EiLCAia2lkIjogIjdYVlpKVXVLdGZZZU4xVzREcTJUdzJhbWVHNmdDMWFteEw3eFpTc1p4UUNLIn0=.eyJjb250ZW50IjogIllvdXIgaG92ZXJjcmFmdCBpcyBmdWxsIG9mIGVlbHMuIiwgInNlbnRfdGltZSI6ICIyMDE5LTAxLTE1IDE4OjQyOjAxWiIsICJAdHlwZSI6ICJkaWQ6c292OkJ6Q2JzTlloTXJqSGlxWkRUVUFTSGc7c3BlYy9iYXNpY21lc3NhZ2UvMS4wL21lc3NhZ2UiLCAiQGlkIjogIjEyMzQ1Njc4MCIsICJ-bDEwbiI6IHsibG9jYWxlIjogImVuIn19\n

    The signature over this signing input is:

    FV7Yyz7i31EKoqS_cycQRr2pN59Q5Ojoxnr7uf6yZBqylnUZW2jCk_LesgWy5ZEux2K6dkrZh7q9pUs9dEsJBQ==\n

    The signature should be concatenated to the signed data above resulting in this final string:

    eyJhbGciOiAiZWREU0EiLCAia2lkIjogIjdYVlpKVXVLdGZZZU4xVzREcTJUdzJhbWVHNmdDMWFteEw3eFpTc1p4UUNLIn0=.eyJjb250ZW50IjogIllvdXIgaG92ZXJjcmFmdCBpcyBmdWxsIG9mIGVlbHMuIiwgInNlbnRfdGltZSI6ICIyMDE5LTAxLTE1IDE4OjQyOjAxWiIsICJAdHlwZSI6ICJkaWQ6c292OkJ6Q2JzTlloTXJqSGlxWkRUVUFTSGc7c3BlYy9iYXNpY21lc3NhZ2UvMS4wL21lc3NhZ2UiLCAiQGlkIjogIjEyMzQ1Njc4MCIsICJ-bDEwbiI6IHsibG9jYWxlIjogImVuIn19.FV7Yyz7i31EKoqS_cycQRr2pN59Q5Ojoxnr7uf6yZBqylnUZW2jCk_LesgWy5ZEux2K6dkrZh7q9pUs9dEsJBQ==\n

    The last step is to encrypt this base64URL encoded string as the message in pack which will complete the cryptographic envelope.

    The output of this message then becomes:

    {\"protected\":\"eyJlbmMiOiJ4Y2hhY2hhMjBwb2x5MTMwNV9pZXRmIiwidHlwIjoiSldNLzEuMCIsImFsZyI6IkF1dGhjcnlwdCIsInJlY2lwaWVudHMiOlt7ImVuY3J5cHRlZF9rZXkiOiJac2dYVWdNVGowUk9lbFBTT09lRGxtaE9sbngwMkVVYjZCbml4QjBESGtEZFRLaGc3ZlE1Tk1zcjU3bzA5WDZxIiwiaGVhZGVyIjp7ImtpZCI6IjRXenZOWjJjQUt6TXM4Nmo2S1c5WGZjMmhLdTNoaFd4V1RydkRNbWFSTEFiIiwiaXYiOiJsOWJHVnlyUnRseUNMX244UmNEakJVb1I3eU5sdEZqMCIsInNlbmRlciI6Imh4alZMRWpXcmY0RFplUGFsRGJnYzVfNmFMN2ltOGs1WElQWnBqTURlUzZaUS1jcEFUaGNzNVdiT25uaVFBM2Z0ZnlYWDJkVUc0dVZ3WHhOTHdMTXRqV3lxNkNKeDdUWEdBQW9ZY0RMMW1aaTJxd2xZMGlDQ2N0dHdNVT0ifX1dfQ==\",\"iv\":\"puCgKCfsOb5gRG81\",\"ciphertext\":\"EpHaC0ZMXQakM4n8Fxbedq_3UhiJHq6vd_I4NNz3N7aDbq7-0F6OXi--VaR7xoTqAyJjrOTYmy1SqivSkGmKaCcpFwC9Shdo_vcMFzIxu90_m3MG1xKNsvDmQBFnD0qgjPPXxmxTlmmYLSdA3JaHpEx1K9gYgGqv4X5bgWZqzFCoevyOlD5a2bDZBY5Mn__IT1pVzjbMbDeSgM2nOztWyF0baXwrqczBW-Msx-uP5HNlLdz02FPbMnRP6MYyw6q0wI0EqwzzwH81bZzHKrTVHT2-M_aIEQp9lKGLhnSW3-aIOpSzonGOriyDukfTpvsCUZEd_X1u0G3iZKxYCbIKaj_ARLbb6idlRngVGW9LYYaw7Xay83exp22gflvLmmN25Xzo1vLlaDaFr9h-J_QAvFebCHgWjl1kcodBRc2jhoMVSpEXJHoI5qMrlVvh45PLTEjxy7y5FHQ1L8klwWZN5EIwui3ExIOA8RwYDlp8-HLib_uqB7hNzVUYC0iPd1KTiNIcidYVdAoPpdtLDOh-KCmPB9RkjVUqSlwNYUAAnfY8OJXuBLHP2nWiYUDA6VDbvrv4npW88VMdsFDk_QzvDRvg7gkW8x8jNd8=\",\"tag\":\"B4UilbBNSUr3QcALtVxTEw==\"}\n
    "},{"location":"features/0066-non-repudiable-cryptographic-envelope/#decrypt-and-verify-process","title":"Decrypt and Verify process","text":"

    To decrypt and verify the JWS, first unpack the message, which provides this result:

    {\n    \"message\":\"eyJhbGciOiAiZWREU0EiLCAia2lkIjogIjdYVlpKVXVLdGZZZU4xVzREcTJUdzJhbWVHNmdDMWFteEw3eFpTc1p4UUNLIn0=.eyJjb250ZW50IjogIllvdXIgaG92ZXJjcmFmdCBpcyBmdWxsIG9mIGVlbHMuIiwgInNlbnRfdGltZSI6ICIyMDE5LTAxLTE1IDE4OjQyOjAxWiIsICJAdHlwZSI6ICJkaWQ6c292OkJ6Q2JzTlloTXJqSGlxWkRUVUFTSGc7c3BlYy9iYXNpY21lc3NhZ2UvMS4wL21lc3NhZ2UiLCAiQGlkIjogIjEyMzQ1Njc4MCIsICJ-bDEwbiI6IHsibG9jYWxlIjogImVuIn19.FV7Yyz7i31EKoqS_cycQRr2pN59Q5Ojoxnr7uf6yZBqylnUZW2jCk_LesgWy5ZEux2K6dkrZh7q9pUs9dEsJBQ==\",\n    \"recipient_verkey\":\"4WzvNZ2cAKzMs86j6KW9Xfc2hKu3hhWxWTrvDMmaRLAb\",\n    \"sender_verkey\":\"7XVZJUuKtfYeN1W4Dq2Tw2ameG6gC1amxL7xZSsZxQCK\"\n}\n

    Parse the message field, splitting on the second period (.). You should then have this as the signed data:

    eyJhbGciOiAiZWREU0EiLCAia2lkIjogIjdYVlpKVXVLdGZZZU4xVzREcTJUdzJhbWVHNmdDMWFteEw3eFpTc1p4UUNLIn0=.eyJjb250ZW50IjogIllvdXIgaG92ZXJjcmFmdCBpcyBmdWxsIG9mIGVlbHMuIiwgInNlbnRfdGltZSI6ICIyMDE5LTAxLTE1IDE4OjQyOjAxWiIsICJAdHlwZSI6ICJkaWQ6c292OkJ6Q2JzTlloTXJqSGlxWkRUVUFTSGc7c3BlYy9iYXNpY21lc3NhZ2UvMS4wL21lc3NhZ2UiLCAiQGlkIjogIjEyMzQ1Njc4MCIsICJ-bDEwbiI6IHsibG9jYWxlIjogImVuIn19\n

    and the signature will be base64URL encoded and look like this:

    FV7Yyz7i31EKoqS_cycQRr2pN59Q5Ojoxnr7uf6yZBqylnUZW2jCk_LesgWy5ZEux2K6dkrZh7q9pUs9dEsJBQ==\n

    Now base64URL-decode the signature, convert the signature and signed data to bytes, and verify them with the crypto.crypto_verify() API in the Indy SDK.

    Your message has now been verified.

    To get the original message, you'll again parse the JWS, this time taking only the second section, which looks like this:

    eyJjb250ZW50IjogIllvdXIgaG92ZXJjcmFmdCBpcyBmdWxsIG9mIGVlbHMuIiwgInNlbnRfdGltZSI6ICIyMDE5LTAxLTE1IDE4OjQyOjAxWiIsICJAdHlwZSI6ICJkaWQ6c292OkJ6Q2JzTlloTXJqSGlxWkRUVUFTSGc7c3BlYy9iYXNpY21lc3NhZ2UvMS4wL21lc3NhZ2UiLCAiQGlkIjogIjEyMzQ1Njc4MCIsICJ-bDEwbiI6IHsibG9jYWxlIjogImVuIn19\n

    Now Base64URL decode that section and you'll get the original message:

    {\n    \"content\": \"Your hovercraft is full of eels.\",\n    \"sent_time\": \"2019-01-15 18:42:01Z\",\n    \"@type\": \"https://didcomm.org/basicmessage/1.0/message\",\n    \"@id\": \"123456780\",\n    \"~l10n\": {\"locale\": \"en\"}\n}\n
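    The parsing portion of this decrypt-and-verify walkthrough can be sketched as follows. This handles only the splitting and decoding; signature verification itself is delegated to whatever Ed25519 verify API is in use:

```python
import base64
import json

def split_compact_jws(jws: str):
    """Split a compact JWS into decoded header, payload, signature, and the
    signed data (header.payload) that the verifier must check against.

    A parsing sketch only; it assumes padded base64url as in this RFC's examples.
    """
    header_b64, payload_b64, sig_b64 = jws.split(".")
    signed_data = (header_b64 + "." + payload_b64).encode()
    header = json.loads(base64.urlsafe_b64decode(header_b64))
    payload = json.loads(base64.urlsafe_b64decode(payload_b64))
    signature = base64.urlsafe_b64decode(sig_b64)
    return header, payload, signature, signed_data
```

A verifier would pass signature and signed_data to its crypto library, then use the decoded payload as the original content layer message.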
    "},{"location":"features/0066-non-repudiable-cryptographic-envelope/#modifications-to-packunpack-api","title":"Modifications to pack()/unpack() API","text":""},{"location":"features/0066-non-repudiable-cryptographic-envelope/#drawbacks","title":"Drawbacks","text":"

    Through the choice of a JWS formatted structure we imply that an off-the-shelf library will support this structure. However, it's uncommon for libraries to support the edDSA signature algorithm even though it's a valid algorithm based on the IANA registry. This means that most implementations that support this will either need to add this signature algorithm to an existing JWS library or implement the JWS handling themselves.

    "},{"location":"features/0066-non-repudiable-cryptographic-envelope/#rationale-and-alternatives","title":"Rationale and alternatives","text":""},{"location":"features/0066-non-repudiable-cryptographic-envelope/#prior-art","title":"Prior art","text":"

    The majority of prior art discussions are mentioned above in the rationale and alternatives section. Some prior art that was considered when selecting this system is how closely it aligns with OpenID Connect systems. This has the possibility to converge with Self Issued OpenID Connect systems when running over HTTP, but doesn't specifically constrain to a particular transport mechanism. This is a distinct advantage for backward compatibility.

    "},{"location":"features/0066-non-repudiable-cryptographic-envelope/#unresolved-questions","title":"Unresolved questions","text":""},{"location":"features/0066-non-repudiable-cryptographic-envelope/#is-limiting-to-only-ed25519-key-types-an-unnecessary-restraint-given-the-broad-support-needed-for-didcomm","title":"Is limiting to only ed25519 key types an unnecessary restraint given the broad support needed for DIDComm?","text":""},{"location":"features/0066-non-repudiable-cryptographic-envelope/#implementations","title":"Implementations","text":"

    The following lists the implementations (if any) of this RFC. Please do a pull request to add your implementation. If the implementation is open source, include a link to the repo or to the implementation within the repo. Please be consistent in the \"Name\" field so that a mechanical processing of the RFCs can generate a list of all RFCs supported by an Aries implementation.

    Name / Link Implementation Notes"},{"location":"features/0067-didcomm-diddoc-conventions/","title":"Aries RFC 0067: DIDComm DID document conventions","text":""},{"location":"features/0067-didcomm-diddoc-conventions/#summary","title":"Summary","text":"

    Explain the DID document conventions required to enable DID communications.

    "},{"location":"features/0067-didcomm-diddoc-conventions/#motivation","title":"Motivation","text":"

    Standardization of these conventions is essential to promoting interoperability of DID communications.

    "},{"location":"features/0067-didcomm-diddoc-conventions/#tutorial","title":"Tutorial","text":""},{"location":"features/0067-didcomm-diddoc-conventions/#did-documents","title":"DID documents","text":"

    A DID document is the data model associated with a DID; it contains important cryptographic information and a declaration of the capabilities the DID supports.

    Of particular interest to this RFC is the definition of service endpoints. The primary objective of this RFC is to document the DID communication service type and describe the associated conventions.

    "},{"location":"features/0067-didcomm-diddoc-conventions/#service-conventions","title":"Service Conventions","text":"

    As referenced above, the DID specification contains a section on service endpoints. This section of the DID document is reserved for any type of service the entity wishes to advertise, including decentralized identity management services for further discovery, authentication, authorization, or interaction.

    When a DID document wishes to express support for DID communications, the following service definition is used.

    {\n  \"service\": [{\n    \"id\": \"did:example:123456789abcdefghi#did-communication\",\n    \"type\": \"did-communication\",\n    \"priority\" : 0,\n    \"recipientKeys\" : [ \"did:example:123456789abcdefghi#1\" ],\n    \"routingKeys\" : [ \"did:example:123456789abcdefghi#1\" ],\n    \"accept\": [\n      \"didcomm/aip2;env=rfc587\",\n      \"didcomm/aip2;env=rfc19\"\n    ],\n    \"serviceEndpoint\": \"https://agent.example.com/\"\n  }]\n}\n

    Notes 1. The keys featured in this array must resolve to keys of the same type; for example, a mix of Ed25519VerificationKey2018 and RsaVerificationKey2018 keys in the same array is invalid.
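    A consumer of such a DID document might sanity-check a service entry before using it. This Python sketch is not a full DID Core validator; it only checks the fields these conventions rely on, and the function name is illustrative:

```python
REQUIRED_FIELDS = {"id", "type", "recipientKeys", "serviceEndpoint"}

def looks_like_did_comm_service(service: dict) -> bool:
    """Loose sanity check for a did-communication service definition.

    Checks only the fields this RFC's conventions depend on; it does not
    resolve or type-check the referenced keys.
    """
    if service.get("type") != "did-communication":
        return False
    if not REQUIRED_FIELDS.issubset(service):
        return False
    keys = service["recipientKeys"]
    return isinstance(keys, list) and len(keys) > 0
```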

    "},{"location":"features/0067-didcomm-diddoc-conventions/#message-preparation-conventions","title":"Message Preparation Conventions","text":"

    Below describes the process under which a DID communication message is prepared and sent to a DID, based on the conventions declared in the associated DID document. The scenario below is predicated on the following conditions: the sender possesses the DID document for the intended recipient(s) of a DID communication message, and the sender has created a content level message that is now ready to be prepared for sending to the intended recipient(s).

    1. The sender resolves the relevant did-communication service of the intended recipient(s) DID document.
    2. The sender resolves the recipient keys present in the recipientKeys array of the service declaration.
    3. Using the resolved keys, the sender takes the content level message and packs it inside an encrypted envelope for the recipient keys. (note-2)
    4. The sender then inspects the routingKeys array; if it is found to be empty, the process skips to step 5. Otherwise, the sender prepares a content level message of type forward. The resolved keys from the recipientKeys array are set as the contents of the to field in the forward message, and the encrypted envelope from the previous step is set as the contents of the msg field in the forward message. Following this, for each element in the routingKeys array the following sub-process is repeated:
      1. The sender resolves the current key in the routing array and takes the outputted encrypted envelope from the previous step and packs it inside a new encrypted envelope for the current key.
      2. The sender prepares a content level message of type forward. The current key in the routing array is set as the contents of the to field in the forward message and the encrypted envelope from the previous step is set as the contents of the msg field in the forward message.
    5. Resolve the service endpoint:
      • If the endpoint is a valid DID URL, check that it resolves to another DID service definition. If the resolution is successful the process from step 2. is repeated using the message outputted from this process as the input message.
      • If the service endpoint is not a DID URL, send the message using the transport protocol declared by the URL's scheme.
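    The wrapping sequence in the steps above can be sketched as follows. Here pack is a stand-in for an encrypted-envelope function (e.g. authcrypt), and all names are illustrative:

```python
def prepare_message(content: dict, service: dict, pack) -> dict:
    """Wrap a content level message per the preparation steps above.

    `pack(message, to_keys)` is assumed to produce an encrypted envelope
    for the given keys; this sketch only models the nesting order.
    """
    recipient_keys = service["recipientKeys"]
    envelope = pack(content, recipient_keys)            # step 3: pack for recipients
    to = recipient_keys
    for routing_key in service.get("routingKeys", []):  # step 4: one forward per hop
        forward = {
            "@type": "https://didcomm.org/routing/1.0/forward",
            "to": to,
            "msg": envelope,
        }
        envelope = pack(forward, [routing_key])
        to = routing_key
    return envelope
```

With a single routing key, this yields the recipient-keyed envelope inside a forward message, packed for the mediator, matching the worked example later in this RFC.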

    Notes 1. There are two main situations that an agent will be in prior to preparing a new message.

    1. When preparing this envelope, the sender has two main choices to make about which properties to include in the envelope:
      • Whether to include sender information
      • Whether to include a non-repudiable signature
    "},{"location":"features/0067-didcomm-diddoc-conventions/#example-domain-and-did-document","title":"Example: Domain and DID document","text":"

    The following is an example of an arbitrary pair of domains that will be helpful in providing context to conventions defined above.

    In the diagram above:

    "},{"location":"features/0067-didcomm-diddoc-conventions/#bobs-did-document-for-his-relationship-with-alice","title":"Bob's DID document for his Relationship with Alice","text":"

    Bob\u2019s domain has 3 devices he uses for processing messages - two phones (4 and 5) and a cloud-based agent (6). As well, Bob has one agent that he uses as a mediator (3) that can hold messages for the two phones when they are offline. However, in Bob's relationship with Alice, he ONLY uses one phone (4) and the cloud-based agent (6). Thus the key for device 5 is left out of the DID document (see below). For further privacy preservation, Bob also elects to use a shared domain endpoint (agents-r-us), giving him an extra layer of isolation from correlation. This is represented by the serviceEndpoint in the service definition not directly resolving to an endpoint URI, but rather resolving to another did-communication service definition that is owned and controlled by the endpoint owner (agents-r-us).

    Bob's DID document given to Alice

    {\n  \"@context\": \"https://w3id.org/did/v1\",\n  \"id\": \"did:example:1234abcd\",\n  \"publicKey\": [\n    {\"id\": \"3\", \"type\": \"RsaVerificationKey2018\",  \"controller\": \"did:example:1234abcd\",\"publicKeyPem\": \"-----BEGIN PUBLIC X\u2026\"},\n    {\"id\": \"4\", \"type\": \"RsaVerificationKey2018\",  \"controller\": \"did:example:1234abcd\",\"publicKeyPem\": \"-----BEGIN PUBLIC 9\u2026\"},\n    {\"id\": \"6\", \"type\": \"RsaVerificationKey2018\",  \"controller\": \"did:example:1234abcd\",\"publicKeyPem\": \"-----BEGIN PUBLIC A\u2026\"}\n  ],\n  \"authentication\": [\n    {\"type\": \"RsaSignatureAuthentication2018\", \"publicKey\": \"did:example:1234abcd#4\"}\n  ],\n  \"service\": [\n    {\n      \"id\": \"did:example:123456789abcdefghi;did-communication\",\n      \"type\": \"did-communication\",\n      \"priority\" : 0,\n      \"recipientKeys\" : [ \"did:example:1234abcd#4\" ],\n      \"routingKeys\" : [ \"did:example:1234abcd#3\" ],\n      \"serviceEndpoint\" : \"did:example:xd45fr567794lrzti67;did-communication\"\n    }\n  ]\n}\n

    Agents r Us DID document - resolvable by Alice

    {\n  \"@context\": \"https://w3id.org/did/v1\",\n  \"id\": \"did:example:xd45fr567794lrzti67\",\n  \"publicKey\": [\n    {\"id\": \"1\", \"type\": \"RsaVerificationKey2018\",  \"controller\": \"did:example:xd45fr567794lrzti67\",\"publicKeyPem\": \"-----BEGIN PUBLIC X\u2026\"},\n  ],\n  \"authentication\": [\n    {\"type\": \"RsaSignatureAuthentication2018\", \"publicKey\": \"did:example:xd45fr567794lrzti67#1\"}\n  ],\n  \"service\": [\n    {\n      \"id\": \"did:example:xd45fr567794lrzti67;did-communication\",\n      \"type\": \"did-communication\",\n      \"priority\" : 0,\n      \"recipientKeys\" : [ \"did:example:xd45fr567794lrzti67#1\" ],\n      \"routingKeys\" : [ ],\n      \"serviceEndpoint\" : \"http://agents-r-us.com\"\n    }\n  ]\n}\n
    "},{"location":"features/0067-didcomm-diddoc-conventions/#message-preparation-example","title":"Message Preparation Example","text":"

    Alice's agent goes to prepare a message desired_msg for Bob.

    1. Alice's agent resolves the above DID document did:example:1234abcd for Bob and resolves the did-communication service definition.
    2. Alice's agent then packs desired_msg in an encrypted envelope message for the resolved keys defined in the recipientKeys array.
    3. Because the routingKeys array is not empty, a content level message of type forward is prepared where the to field of the forward message is set to the resolved keys and the msg field of the forward message is set to the encrypted envelope from the previous step.
    4. The resulting forward message from the previous step is then packed inside another encrypted envelope for the first and only key in the routingKeys array.
    5. Inspection of the service endpoint reveals it is a DID URL, which resolves to another did-communication service definition, this time owned and controlled by agents-r-us.
    6. Because the agents-r-us service definition contains a recipient key, a content level message of type forward is prepared, where the to field of the forward message is set to that recipient key and the msg field is set to the encrypted envelope from the previous step.
    7. This content message is then packed in an encrypted envelope for the recipient key in the agents-r-us service definition.
    8. Finally, as the endpoint listed in the serviceEndpoint field of the agents-r-us did-communication service definition is a valid endpoint URL, the message is transmitted in accordance with the URL's protocol.
    "},{"location":"features/0067-didcomm-diddoc-conventions/#reference","title":"Reference","text":""},{"location":"features/0067-didcomm-diddoc-conventions/#rationale-and-alternatives","title":"Rationale and alternatives","text":""},{"location":"features/0067-didcomm-diddoc-conventions/#prior-art","title":"Prior art","text":""},{"location":"features/0067-didcomm-diddoc-conventions/#unresolved-questions","title":"Unresolved questions","text":"

    The following remain unresolved:

    "},{"location":"features/0067-didcomm-diddoc-conventions/#implementations","title":"Implementations","text":"

    The following lists the implementations (if any) of this RFC. Please do a pull request to add your implementation. If the implementation is open source, include a link to the repo or to the implementation within the repo. Please be consistent in the \"Name\" field so that a mechanical processing of the RFCs can generate a list of all RFCs supported by an Aries implementation.

    Name / Link Implementation Notes"},{"location":"features/0075-payment-decorators/","title":"Aries RFC 0075: Payment Decorators","text":""},{"location":"features/0075-payment-decorators/#summary","title":"Summary","text":"

    Defines the ~payment_request, ~payment_internal_response, and ~payment_receipt decorators. These offer standard payment features in all DIDComm interactions, and let DIDComm take advantage of the W3C's Payment Request API in an interoperable way.

    "},{"location":"features/0075-payment-decorators/#motivation","title":"Motivation","text":"

    Instead of inventing custom messages for payments in each protocol, arbitrary messages can express payment semantics with payment decorators. Individual protocol specs should clarify on which messages and under which conditions the decorators are used.

    "},{"location":"features/0075-payment-decorators/#tutorial","title":"Tutorial","text":"

    The W3C's Payment Request API governs interactions between three parties:

    1. payer
    2. payee
    3. payment method

    The payer is usually imagined to be a person operating a web browser, the payee is imagined to be an online store, and the payment method might be something like a credit card processing service. The payee emits a PaymentRequest JSON structure (step 1 below); this causes the payer to be prompted (step 2, \"Render\"). The payer decides whether to pay, and if so, which payment method and options she prefers (step 3, \"Configure\"). The payer's choices are embodied in a PaymentResponse JSON structure (step 4). This is then used to select the appropriate codepath and inputs to invoke the desired payment method (step 5).

    Notice that this flow does not include anything coming back to the payer. In this API, the PaymentResponse structure embodies a response from the payer to the payer's own agent, expressing choices about which credit card to use and which shipping options are desired; it's not a response that crosses identity boundaries. That's reasonable because this is the Payment Request API, not a Payment Roundtrip API. It's only about requesting payments, not completing payments or reporting results. Also, each payment method will have unique APIs for fulfillment and receipts; the W3C Payment Request spec does not attempt to harmonize them, though some work in that direction is underway in the separate Payment Handler API spec.

    In DIDComm, the normal emphasis is on interactions between parties with different identities. This makes PaymentResponse and the communication that elicits it (steps 2-4) a bit unusual from a DIDComm perspective; normally DIDComm would use the word \"response\" for something that comes back from Bob, after Alice asks Bob a question. It also makes the scope of the W3C API feel incomplete, because we'd like to be able to model the entire flow, not just part of it.

    The DIDComm payment decorators map to the W3C API as follows:

    "},{"location":"features/0075-payment-decorators/#reference","title":"Reference","text":""},{"location":"features/0075-payment-decorators/#payment_request","title":"~payment_request","text":"

    Please see the PaymentRequest interface docs in the W3C spec for a full reference, or Section 2, Examples of Usage in the W3C spec for a narration that builds a PaymentRequest from first principles.

    The following is a sample ~payment_request decorator with some interesting details to suggest what's possible:

    {\n  \"~payment_request\": {\n    \"methodData\": [\n      {\n        \"supportedMethods\": \"basic-card\",\n        \"data\": {\n          \"supportedNetworks\": [\"visa\", \"mastercard\"],\n          \"payeeId\": \"12345\"\n        },\n      },\n      {\n        \"supportedMethods\": \"sovrin\",\n        \"data\": {\n          \"supportedNetworks\": [\"sov\", \"sov:test\", \"ibm-indy\"],\n          \"payeeId\": \"XXXX\"\n        },\n      }\n    ],\n    \"details\": {\n      \"id\": \"super-store-order-123-12312\",\n      \"displayItems\": [\n        {\n          \"label\": \"Sub-total\",\n          \"amount\": { \"currency\": \"USD\", \"value\": \"55.00\" },\n        },\n        {\n          \"label\": \"Sales Tax\",\n          \"amount\": { \"currency\": \"USD\", \"value\": \"5.00\" },\n          \"type\": \"tax\"\n        },\n      ],\n      \"total\": {\n        \"label\": \"Total due\",\n        // The total is USD$65.00 here because we need to\n        // add shipping (below). 
The selected shipping\n        // costs USD$5.00.\n        \"amount\": { \"currency\": \"USD\", \"value\": \"65.00\" }\n      },\n      \"shippingOptions\": [\n        {\n          \"id\": \"standard\",\n          \"label\": \"Ground Shipping (2 days)\",\n          \"amount\": { \"currency\": \"USD\", \"value\": \"5.00\" },\n          \"selected\": true,\n        },\n        {\n          \"id\": \"drone\",\n          \"label\": \"Drone Express (2 hours)\",\n          \"amount\": { \"currency\": \"USD\", \"value\": \"25.00\" }\n        }\n      ],\n      \"modifiers\": [\n        {\n          \"additionalDisplayItems\": [{\n            \"label\": \"Card processing fee\",\n            \"amount\": { \"currency\": \"USD\", \"value\": \"3.00\" },\n          }],\n          \"supportedMethods\": \"basic-card\",\n          \"total\": {\n            \"label\": \"Total due\",\n            \"amount\": { \"currency\": \"USD\", \"value\": \"68.00\" },\n          },\n          \"data\": {\n            \"supportedNetworks\": [\"visa\"],\n          },\n        },\n        {\n          \"supportedMethods\": \"sovrin\",\n          \"total\": {\n            \"label\": \"Total due\",\n            \"amount\": { \"currency\": \"SOV\", \"value\": \"2254\" },\n          },\n        },\n      ]\n    },\n    \"options\": {\n      \"requestPayerEmail\": false,\n      \"requestPayerName\": true,\n      \"requestPayerPhone\": false,\n      \"requestShipping\": true\n    }\n  }\n}\n

    The details.id field contains an invoice number, shopping cart ID, or similar identifier that unambiguously identifies the goods and services for which payment is requested. The payeeId field would contain a payment address for cryptocurrency payment methods, or a merchant ID for credit cards. The modifiers section shows how the requested payment amount should be modified if the basic-card method is selected. That specific example is discussed in greater detail in the W3C spec. It also shows how the currency could be changed if a token-based method is selected instead of a fiat-based method. See the separate W3C spec on Payment Method IDs.

    Note that standard DIDComm localization can be used to provide localized alternatives to the label fields; this is a DIDComm-specific extension.

    This example shows options where the payee is requesting self-attested data from the payer. DIDComm offers the option of replacing this simple approach with a sophisticated presentation request based on verifiable credentials. The simple approach is fine where self-attested data is enough; the VC approach is useful when assurance of the data must be higher (e.g., a verified email address), or where fancy logic about what's required (Name plus either Email or Phone) is needed.

    The DIDComm ~payment_request decorator may be combined with the ~timing.expires_time decorator to express the idea that the payment must be made within a certain time period or else the price or availability of merchandise is not guaranteed.
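As an illustration of combining the payment request with ~timing.expires_time, the following sketch rejects a request whose deadline has passed. The helper name is hypothetical; the timestamp format follows the ~timing examples used in these RFCs.

```python
from datetime import datetime, timezone

def request_expired(message: dict) -> bool:
    """Return True if the message's ~timing.expires_time lies in the past."""
    expires = message.get("~timing", {}).get("expires_time")
    if expires is None:
        return False  # no deadline: the request does not expire
    # Timestamps in these examples use the form "2018-12-13T17:29:06+0000"
    deadline = datetime.strptime(expires, "%Y-%m-%dT%H:%M:%S%z")
    return datetime.now(timezone.utc) > deadline

msg = {"~payment_request": {}, "~timing": {"expires_time": "2018-12-13T17:29:06+0000"}}
print(request_expired(msg))  # a 2018 deadline has long passed, so True
```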

    "},{"location":"features/0075-payment-decorators/#payment_internal_response","title":"~payment_internal_response","text":"

    This decorator exactly matches PaymentResponse from the W3C API and will not be further described here. A useful example of a response is given in the related Basic Card Response doc.

    "},{"location":"features/0075-payment-decorators/#payment_receipt","title":"~payment_receipt","text":"

    This decorator on a message indicates that a payment has been made. It looks like this (note the snake_case since we are not matching a W3C spec):

    {\n  \"~payment_receipt\": {\n      \"request_id\": \"super-store-order-123-12312\",\n      \"selected_method\": \"sovrin\",\n      \"selected_shippingOption\": \"standard\",\n      \"transaction_id\": \"abc123\",\n      \"proof\": \"directly verifiable proof of payment\",\n      \"payeeId\": \"XXXX\",\n      \"amount\": { \"currency\": \"SOV\", \"value\": \"2254\" }\n  }\n}\n

    request_id: This contains the details.id of the ~payment_request that this payment receipt satisfies.

    selected_method: Which payment method was chosen to pay.

    selected_shippingOption: Which shipping option was chosen.

    transaction_id: A transaction identifier that can be checked by the payee to verify that funds were transferred, and that the transfer relates to this payment request instead of another. This might be a ledger's transaction ID, for example.

    proof: Optional. A base64url-encoded blob that contains directly verifiable proof that the transaction took place. This might be useful for payments enacted by a triple-signed receipt mechanism, for example. When this is present, transaction_id becomes optional. For ledgers that support state proofs, the state proof could be offered here.
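Tying the fields above together, here is a sketch of a payee-side consistency check between a ~payment_receipt and the ~payment_request it claims to satisfy. The function name is illustrative; real verification would also check transaction_id or proof against the selected payment method's ledger or processor.

```python
def receipt_matches_request(receipt: dict, request: dict) -> bool:
    """Check that a ~payment_receipt's id and amount line up with a ~payment_request."""
    details = request["details"]
    if receipt["request_id"] != details["id"]:
        return False
    expected = details["total"]["amount"]
    # A modifier may change the total for the selected payment method,
    # e.g. the sovrin modifier above restates the total as SOV 2254.
    for mod in details.get("modifiers", []):
        if mod.get("supportedMethods") == receipt["selected_method"]:
            expected = mod["total"]["amount"]
    paid = receipt["amount"]
    return (paid["currency"], paid["value"]) == (expected["currency"], expected["value"])
```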

    "},{"location":"features/0075-payment-decorators/#example","title":"Example","text":"

    Here is a rough description of how these decorators might be used in a protocol to issue credentials. We are not guaranteeing that the message details will remain up-to-date as that protocol evolves; this is only for purposes of general illustration.

    "},{"location":"features/0075-payment-decorators/#credential-offer","title":"Credential Offer","text":"

    This message is sent by the issuer; it indicates that payment is requested for the credential under discussion.

    {\n    \"@type\": \"https://didcomm.org/issue_credential/1.0/offer_credential\",\n    \"@id\": \"5bc1989d-f5c1-4eb1-89dd-21fd47093d96\",\n    \"cred_def_id\": \"KTwaKJkvyjKKf55uc6U8ZB:3:CL:59:tag1\",\n    \"~payment_request\": {\n        \"methodData\": [\n          {\n            \"supportedMethods\": \"ETH\",\n            \"data\": {\n              \"payeeId\": \"0xD15239C7e7dDd46575DaD9134a1bae81068AB2A4\"\n            },\n          }\n        ],\n        \"details\": {\n          \"id\": \"0a2bc4a6-1f45-4ff0-a046-703c71ab845d\",\n          \"displayItems\": [\n            {\n              \"label\": \"commercial driver's license\",\n              \"amount\": { \"currency\": \"ETH\", \"value\": \"0.0023\" },\n            }\n          ],\n          \"total\": {\n            \"label\": \"Total due\",\n            \"amount\": { \"currency\": \"ETH\", \"value\": \"0.0023\" }\n          }\n        }\n      },\n    \"credential_preview\": <json-ld object>,\n    ///...\n}\n
    "},{"location":"features/0075-payment-decorators/#example-credential-request","title":"Example Credential Request","text":"

    This Credential Request is sent to the issuer, indicating that they have paid the requested amount.

    {\n    \"@type\": \"https://didcomm.org/issue_credential/1.0/request_credential\",\n    \"@id\": \"94af9be9-5248-4a65-ad14-3e7a6c3489b6\",\n    \"~thread\": { \"thid\": \"5bc1989d-f5c1-4eb1-89dd-21fd47093d96\" },\n    \"cred_def_id\": \"KTwaKJkvyjKKf55uc6U8ZB:3:CL:59:tag1\",\n    \"~payment_receipt\": {\n      \"request_id\": \"0a2bc4a6-1f45-4ff0-a046-703c71ab845d\",\n      \"selected_method\": \"ETH\",\n      \"transaction_id\": \"0x5674bfea99c480e110ea61c3e52783506e2c467f108b3068d642712aca4ea479\",\n      \"payeeId\": \"0xD15239C7e7dDd46575DaD9134a1bae81068AB2A4\",\n      \"amount\": { \"currency\": \"ETH\", \"value\": \"0.0023\" }\n    }\n\n    ///...\n}\n
    "},{"location":"features/0075-payment-decorators/#drawbacks","title":"Drawbacks","text":"

    TBD

    "},{"location":"features/0075-payment-decorators/#rationale-and-alternatives","title":"Rationale and alternatives","text":""},{"location":"features/0075-payment-decorators/#prior-art","title":"Prior art","text":""},{"location":"features/0075-payment-decorators/#unresolved-questions","title":"Unresolved questions","text":""},{"location":"features/0075-payment-decorators/#implementations","title":"Implementations","text":"

    The following lists the implementations (if any) of this RFC. Please do a pull request to add your implementation. If the implementation is open source, include a link to the repo or to the implementation within the repo. Please be consistent in the \"Name\" field so that a mechanical processing of the RFCs can generate a list of all RFCs supported by an Aries implementation.

    Name / Link Implementation Notes"},{"location":"features/0092-transport-return-route/","title":"Aries RFC 0092: Transports Return Route","text":""},{"location":"features/0092-transport-return-route/#summary","title":"Summary","text":"

    Agents can indicate that an inbound message transmission may also be used as a return route for messages. This allows for more efficient use of transports and supports agents without an inbound route.

    "},{"location":"features/0092-transport-return-route/#motivation","title":"Motivation","text":"

    Inbound HTTP and WebSocket connections are used only for receiving messages by default. Return messages are sent using their own outbound connections. Including this decorator lets the receiving agent know that using the inbound connection as a return route is acceptable. This allows two-way communication with agents that may not have an inbound route available. Agents without an inbound route include mobile agents and agents that use a client (and not a server) for communication.

    This decorator is intended to facilitate message communication between a client-based agent (an agent that can only operate as a client, not a server) and the server-based agents it communicates with directly. Use on messages that will be forwarded is not allowed.

    "},{"location":"features/0092-transport-return-route/#tutorial","title":"Tutorial","text":"

    When you send a message through a connection, you can use the ~transport decorator on the message and specify return_route. The value of return_route is discussed in the Reference section of this document.

    {\n    \"~transport\": {\n        \"return_route\": \"all\"\n    }\n}\n
    "},{"location":"features/0092-transport-return-route/#reference","title":"Reference","text":"

    The ~transport decorator should be processed after unpacking and prior to routing the message to a message handler.

    For HTTP transports, the presence of this message decorator indicates that the receiving agent MAY hold onto the connection and use it to return messages as designated. HTTP transports can return at most one message per request; WebSocket transports are capable of returning multiple messages.

    Compliance with this indicator is optional for agents generally, but required for agents wishing to connect with client-based agents.
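As a sketch of the processing rule above, a receiving agent might decide whether to hold the inbound connection like this. The helper name is hypothetical, and only the "all" value from the tutorial example is handled; other return_route values would follow the same pattern.

```python
def use_inbound_for_return(message: dict) -> bool:
    """True when the sender asked for all return messages on this connection."""
    transport = message.get("~transport", {})
    return transport.get("return_route") == "all"

# A message carrying the decorator from the tutorial:
print(use_inbound_for_return({"~transport": {"return_route": "all"}}))  # True
```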

    "},{"location":"features/0092-transport-return-route/#drawbacks","title":"Drawbacks","text":""},{"location":"features/0092-transport-return-route/#rationale-and-alternatives","title":"Rationale and alternatives","text":""},{"location":"features/0092-transport-return-route/#prior-art","title":"Prior art","text":"

    The Decorators RFC describes scope of decorators. Transport isn't one of the scopes listed.

    "},{"location":"features/0092-transport-return-route/#implementations","title":"Implementations","text":"

    The following lists the implementations (if any) of this RFC. Please do a pull request to add your implementation. If the implementation is open source, include a link to the repo or to the implementation within the repo. Please be consistent in the \"Name\" field so that a mechanical processing of the RFCs can generate a list of all RFCs supported by an Aries implementation.

    Name / Link Implementation Notes Aries Cloud Agent - Python Contributed by the government of British Columbia. Aries Protocol Test Suite Used in Tests"},{"location":"features/0095-basic-message/","title":"Aries RFC 0095: Basic Message Protocol 1.0","text":""},{"location":"features/0095-basic-message/#summary","title":"Summary","text":"

    The BasicMessage protocol describes a stateless, easy to support user message protocol. It has a single message type used to communicate.

    "},{"location":"features/0095-basic-message/#motivation","title":"Motivation","text":"

    It is a useful feature to be able to communicate human-written messages. BasicMessage is the most basic form of this written message communication, explicitly excluding advanced features to make implementation easier.

    "},{"location":"features/0095-basic-message/#tutorial","title":"Tutorial","text":""},{"location":"features/0095-basic-message/#roles","title":"Roles","text":"

    There are two roles in this protocol: sender and receiver. It is anticipated that both roles are supported by agents that provide an interface for humans, but it is possible for an agent to act only as a sender (not processing received messages) or only as a receiver (never sending messages).

    "},{"location":"features/0095-basic-message/#states","title":"States","text":"

    There are not really states in this protocol, as sending a message leaves both parties in the same state they were before.

    "},{"location":"features/0095-basic-message/#out-of-scope","title":"Out of Scope","text":"

    There are many useful features of user messaging systems that we will not be adding to this protocol. We anticipate the development of more advanced and full-featured message protocols to fill these needs. Features that are considered out of scope for this protocol include:

    "},{"location":"features/0095-basic-message/#reference","title":"Reference","text":"

    Protocol: https://didcomm.org/basicmessage/1.0/

    message

    Example:

    {\n    \"@id\": \"123456780\",\n    \"@type\": \"https://didcomm.org/basicmessage/1.0/message\",\n    \"~l10n\": { \"locale\": \"en\" },\n    \"sent_time\": \"2019-01-15 18:42:01Z\",\n    \"content\": \"Your hovercraft is full of eels.\"\n}\n
    "},{"location":"features/0095-basic-message/#drawbacks","title":"Drawbacks","text":""},{"location":"features/0095-basic-message/#rationale-and-alternatives","title":"Rationale and alternatives","text":""},{"location":"features/0095-basic-message/#prior-art","title":"Prior art","text":"

    BasicMessage has parallels to SMS, which led to the later creation of MMS and even the still-under-development RCS.

    "},{"location":"features/0095-basic-message/#unresolved-questions","title":"Unresolved questions","text":""},{"location":"features/0095-basic-message/#implementations","title":"Implementations","text":"

    The following lists the implementations (if any) of this RFC. Please do a pull request to add your implementation. If the implementation is open source, include a link to the repo or to the implementation within the repo. Please be consistent in the \"Name\" field so that a mechanical processing of the RFCs can generate a list of all RFCs supported by an Aries implementation.

    Name / Link Implementation Notes Indy Cloud Agent - Python Reference agent implementation contributed by Sovrin Foundation and Community; MISSING test results Aries Framework - .NET .NET framework for building agents of all types; MISSING test results Streetcred.id Commercial mobile and web app built using Aries Framework - .NET; MISSING test results Aries Cloud Agent - Python Contributed by the government of British Columbia.; MISSING test results Aries Static Agent - Python Useful for cron jobs and other simple, automated use cases.; MISSING test results Aries Protocol Test Suite ; MISSING test results"},{"location":"features/0113-question-answer/","title":"Aries RFC 0113: Question Answer Protocol 0.9","text":""},{"location":"features/0113-question-answer/#summary","title":"Summary","text":"

    A simple protocol where a questioner asks a responder a question with at least one valid answer. The responder then replies with an answer or ignores the question.

    Note: While there is a need in the future for a robust negotiation protocol,\nthis is not it. This is for simple question/answer exchanges.\n
    "},{"location":"features/0113-question-answer/#motivation","title":"Motivation","text":"

    There are many instances where one party needs an answer to a specific question from another party. These can be related to consent, proof of identity, authentication, or choosing from a list of options. For example, when receiving a phone call a customer service representative can ask a question to the customer\u2019s phone to authenticate the caller, \u201cAre you on the phone with our representative?\u201d. The same could be done to authorize transactions, validate logins (2FA), accept terms and conditions, and any other simple, non-negotiable exchanges.

    "},{"location":"features/0113-question-answer/#tutorial","title":"Tutorial","text":"

    We'll describe this protocol in terms of a [Challenge/Response](https://en.wikipedia.org/wiki/Challenge%E2%80%93response_authentication) scenario where a customer service representative for Faber Bank questions its customer Alice, who is speaking with them on the phone, to answer whether it is really her.

    "},{"location":"features/0113-question-answer/#interaction","title":"Interaction","text":"

    Using an already established pairwise connection and agent-to-agent communication Faber will send a question to Alice with one or more valid responses with an optional deadline and Alice can select one of the valid responses or ignore the question. If she selects one of the valid responses she will respond with her answer.

    "},{"location":"features/0113-question-answer/#roles","title":"Roles","text":"

    There are two parties in a typical question/answer interaction. The first party (Questioner) issues the question with its valid answers and the second party (Responder) responds with the selected answer. The parties must have already exchanged pairwise keys and created a connection. These pairwise keys can be used to encrypt and verify the response. When the answer has been sent, the questioner can know with a high level of certainty that it was sent by the responder.

    In this tutorial Faber (the questioner) initiates the interaction and creates and sends the question to Alice. The question includes the valid responses, which can optionally be signed for non-repudiability.

    In this tutorial Alice (the responder) receives the packet and must respond to the question (or ignore it, which is not an answer) by encrypting either the positive or the negative response_code (signing both is invalid).

    "},{"location":"features/0113-question-answer/#messages","title":"Messages","text":"

    All messages in this protocol are part of the \"Question/Answer 1.0\" message family uniquely identified by this DID reference:

    https://didcomm.org/questionanswer/1.0\n

    The protocol begins when the questioner sends a question message to the responder:

    {\n  \"@type\": \"https://didcomm.org/questionanswer/1.0/question\",\n  \"@id\": \"518be002-de8e-456e-b3d5-8fe472477a86\",\n  \"question_text\": \"Alice, are you on the phone with Bob from Faber Bank right now?\",\n  \"question_detail\": \"This is optional fine-print giving context to the question and its various answers.\",\n  \"nonce\": \"<valid_nonce>\",\n  \"signature_required\": true,\n  \"valid_responses\" : [\n    {\"text\": \"Yes, it's me\"},\n    {\"text\": \"No, that's not me!\"}],\n  \"~timing\": {\n    \"expires_time\": \"2018-12-13T17:29:06+0000\"\n  }\n}\n

    The responder receives this message and chooses the answer. If the signature is required then she uses her private pairwise key to sign her response.
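Before answering, a responder can sanity-check that the chosen text is actually one of the question's valid_responses. A minimal sketch, with a hypothetical helper name:

```python
def is_valid_answer(question: dict, answer_text: str) -> bool:
    """True when answer_text matches one of the question's valid_responses."""
    return any(r.get("text") == answer_text
               for r in question.get("valid_responses", []))

question = {"valid_responses": [{"text": "Yes, it's me"},
                                {"text": "No, that's not me!"}]}
print(is_valid_answer(question, "Yes, it's me"))  # True
```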

    Note: Alice should sign the following: the question, the chosen answer,\nand the nonce: HASH(<question_text>+<answer_text>+<nonce>), this keeps a\nrecord of each part of the transaction.\n
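The sig_data construction in the note above, base64url(HASH(question_text + answer_text + nonce)), can be sketched as follows. The note does not pin down the hash function; SHA-512 is assumed here to match the ed25519Sha512_single signature type used below.

```python
import base64
import hashlib

def sig_data(question_text: str, answer_text: str, nonce: str) -> str:
    """base64url(HASH(question + answer + nonce)); SHA-512 is an assumption."""
    payload = (question_text + answer_text + nonce).encode("utf-8")
    digest = hashlib.sha512(payload).digest()
    return base64.urlsafe_b64encode(digest).decode("ascii")
```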

    The response message is then sent using the ~sig message decorator:

    {\n  \"@type\": \"https://didcomm.org/questionanswer/1.0/answer\",\n  \"~thread\": { \"thid\": \"518be002-de8e-456e-b3d5-8fe472477a86\", \"seqnum\": 0 },\n  \"response\": \"Yes, it's me\",\n  \"response~sig\": {\n    \"@type\": \"https://didcomm.org/signature/1.0/ed25519Sha512_single\"\n    \"signature\": \"<digital signature function output>\",\n    \"sig_data\": \"<base64url(HASH(\"Alice, are you on the phone with Bob?\"+\"Yes, it's me\"+\"<nonce>\"))>\",\n    \"signers\": [\"<responder_key>\"],\n    }\n  \"~timing\": {\n    \"out_time\": \"2018-12-13T17:29:34+0000\"\n  }\n}\n

    The questioner then checks the signature against the sig_data.

    "},{"location":"features/0113-question-answer/#optional-elements","title":"Optional Elements","text":"

    The \"question_detail\" field is optional. It can be used to give \"fine print\"-like context around the question and all of its valid responses. While this could always be displayed, some UIs may choose to only make it available on-demand, in a \"More info...\" kind of way.

    ~timing.expires_time is optional. response~sig is optional when \"signature_required\" is false.

    "},{"location":"features/0113-question-answer/#business-cases-and-auditing","title":"Business cases and auditing","text":"

    In the above scenario, Faber bank can audit the reply and prove that only Alice's pairwise key signed the response (a cryptographic API like Indy-SDK can be used to guarantee the responder's signature). Conversely, Alice can also use her key to prove or dispute the validity of the signature. The cryptographic guarantees central to agent-to-agent communication and digital signatures create a trustworthy protocol for obtaining a committed answer from a pairwise connection. This protocol can be used for approving wire transfers, accepting EULAs, or even selecting an item from a food menu. Of course, as with a real world signature, Alice should be careful about what she signs.

    "},{"location":"features/0113-question-answer/#invalid-replies","title":"Invalid replies","text":"

    The responder may send an invalid, incomplete, or unsigned response. In this case the questioner must decide what to do. As with normal verbal communication, if the response is not understood the question can be asked again, maybe with increased emphasis. Or the questioner may determine the lack of a valid response is a response in and of itself. This depends on the parties involved and the question being asked. For example, in the exchange above, if the question times out or the answer is not \"Yes, it's me\" then Faber would probably choose to discontinue the phone call.

    "},{"location":"features/0113-question-answer/#trust-and-constraints","title":"Trust and Constraints","text":"

    Using already established pairwise relationships allows each side to trust each other. The responder can know who sent the message and the questioner knows that only the responder could have encrypted the response. This response gives a high level of trust to the exchange.

    "},{"location":"features/0113-question-answer/#implementations","title":"Implementations","text":"

    The following lists the implementations (if any) of this RFC. Please do a pull request to add your implementation. If the implementation is open source, include a link to the repo or to the implementation within the repo. Please be consistent in the \"Name\" field so that a mechanical processing of the RFCs can generate a list of all RFCs supported by an Aries implementation.

    Name / Link Implementation Notes Connect.Me Free mobile app from Evernym. Installed via app store on iOS and Android. Verity Commercially licensed enterprise agent, SaaS or on-prem."},{"location":"features/0114-predefined-identities/","title":"Aries RFC 0114: Predefined Identities","text":""},{"location":"features/0114-predefined-identities/#summary","title":"Summary","text":"

    Documents some keys, DIDs, and DID Docs that may be useful for testing, simulation, and spec writing. The fake ones are the DIDComm / identity analogs to the reserved domain \"example.com\" that was allocated for testing purposes with DNS and other internet systems -- or to Microsoft's example Contoso database and website used to explore and document web development concepts.

    "},{"location":"features/0114-predefined-identities/#real-identities","title":"Real Identities","text":"

    The following real--NOT fake--identities are worth publicly documenting.

    "},{"location":"features/0114-predefined-identities/#aries-community","title":"Aries community","text":"

    The collective Aries developer community is represented by:

    did:sov:BzCbsNYhMrjHiqZDTUASHg -- verkey = 6zJ9dboyug451A8dtLgsjmjyguQcmq823y7vHP6vT2Eu\n

    This DID is currently allocated, but not actually registered on Sovrin's mainnet. You will see this DID in a number of RFCs, as the basis of a PIURI that identifies a community-defined protocol. You DO NOT have to actually resolve this DID or relate to a Sovrin identity to use Aries or its RFCs; think of this more like the opaque URNs that are sometimes used in XML namespacing. At some point it may be registered, but nothing else in the preceding summary will change.

    The community controls a second DID that is useful for defining message families that are not canonical (e.g., in the sample tic-tac-toe protocol). It is:

    did:sov:SLfEi9esrjzybysFxQZbfq -- verkey = Ep1puxjTDREwEyz91RYzn7arKL2iKQaDEB5kYDUUUwh5\n

    This community may create DIDs for itself from other DID methods, too. If so, we will publish them here.

    "},{"location":"features/0114-predefined-identities/#subgroups","title":"Subgroups","text":"

    The Aries community may create subgroups with their own DIDs. If so, we may publish such information here.

    "},{"location":"features/0114-predefined-identities/#allied-communities","title":"Allied communities","text":"

    Other groups such as DIF, the W3C Credentials Community Group, and so forth may wish to define identities and announce their associated DIDs here.

    "},{"location":"features/0114-predefined-identities/#fake-identities","title":"Fake Identities","text":"

    The identity material shown below is not registered anywhere. This is because sometimes our tests or demos are about registering or connecting, and because the identity material is intended to be somewhat independent of a specific blockchain instance. Instead, we define values and give them names, permalinks, and semantics in this RFC. This lets us have a shared understanding of how we expect them to behave in various contexts.

    WARNING: Below you will see some published secrets. By disclosing private keys and/or their seeds, we are compromising the keypairs. This fake identity material is thus NOT trustworthy for anything; the world knows the secrets, and now you do, too. :-) You can test or simulate workflows with these keys. You might use them in debugging and development. But you should never use them as the basis of real trust.

    "},{"location":"features/0114-predefined-identities/#dids","title":"DIDs","text":""},{"location":"features/0114-predefined-identities/#alice-sov-1","title":"alice-sov-1","text":"

    This DID, the alice-sov-1 DID with value did:sov:UrDaZsMUpa91DqU3vrGmoJ, is associated with a very simplistic Indy/Sovrin identity. It has a single keypair (Key 1 below) that it uses for everything. In demos or tests, its genesis DID Doc looks like this:

    {\n    \"@context\": \"https://w3id.org/did/v0.11\",\n    \"id\": \"did:sov:UrDaZsMUpa91DqU3vrGmoJ\",\n    \"service\": [{\n        \"type\": \"did-communication\",\n        \"serviceEndpoint\": \"https://localhost:23456\"\n    }],\n    \"publicKey\": [{\n        \"id\": \"#key-1\",\n        \"type\": \"Ed25519VerificationKey2018\",\n        \"publicKeyBase58\": \"GBMBzuhw7XgSdbNffh8HpoKWEdEN6hU2Q5WqL1KQTG5Z\"\n    }],\n    \"authentication\": [\"#key-1\"]\n}\n
    "},{"location":"features/0114-predefined-identities/#bob-many-1","title":"bob-many-1","text":"

    This DID, the bob-many-1 DID with value did:sov:T9nQQ8CjAhk2oGAgAw1ToF, is associated with a much more flexible, complex identity than alice-sov-1. It places every test keypair except Key 1 in the authentication section of its DID Doc. This means you should be able to talk to Bob using the types of crypto common in many communities, not just Indy/Sovrin. Its genesis DID doc looks like this:

    {\n    \"@context\": \"https://w3id.org/did/v0.11\",\n    \"id\": \"did:sov:T9nQQ8CjAhk2oGAgAw1ToF\",\n    \"service\": [{\n        \"type\": \"did-communication\",\n        \"serviceEndpoint\": \"https://localhost:23457\"\n    }],\n    \"publicKey\": [{\n        \"id\": \"#key-2\",\n        \"type\": \"Ed25519VerificationKey2018\",\n        \"controller\": \"#id\",\n        \"publicKeyBase58\": \"FFhViHkJwqA15ruKmHQUoZYtc5ZkddozN3tSjETrUH9z\"\n      },\n      {\n        \"id\": \"#key-3\",\n        \"type\": \"Secp256k1VerificationKey2018\",\n        \"controller\": \"#id\",\n        \"publicKeyHex\": \"3056301006072a8648ce3d020106052b8104000a03420004a34521c8191d625ff811c82a24a60ff9f174c8b17a7550c11bba35dbf97f3f04392e6a9c6353fd07987e016122157bf56c487865036722e4a978bb6cd8843fa8\"\n      },\n      {\n        \"id\": \"#key-4\",\n        \"type\": \"RsaVerificationKey2018\",\n        \"controller\": \"#id\",\n        \"publicKeyPem\": \"-----BEGIN PUBLIC KEY-----\\r\\nMIGfMA0GCSqGSIb3DQEBAQUAA4GNADCBiQKBgQDlOJu6TyygqxfWT7eLtGDwajtN\\r\\nFOb9I5XRb6khyfD1Yt3YiCgQWMNW649887VGJiGr/L5i2osbl8C9+WJTeucF+S76\\r\\nxFxdU6jE0NQ+Z+zEdhUTooNRaY5nZiu5PgDB0ED/ZKBUSLKL7eibMxZtMlUDHjm4\\r\\ngwQco1KRMDSmXSMkDwIDAQAB\\r\\n-----END PUBLIC KEY-----\"\n      },\n      {\n        \"id\": \"#key-5\",\n        \"type\": \"RsaVerificationKey2018\",\n        \"controller\": \"#id\",\n        \"publicKeyPem\": \"-----BEGIN PUBLIC 
KEY-----\\r\\nMIICIjANBgkqhkiG9w0BAQEFAAOCAg8AMIICCgKCAgEAoZp7md4nkmmFvkoHhQMw\\r\\nN0lcpYeKfeinKir7zYWFLmpClZHawZKLkB52+nnY4w9ZlKhc4Yosrw/N0h1sZlVZ\\r\\nfOQBnzFUQCea6uK/4BKHPhiHpN73uOwu5TAY4BHS7fsXRLPgQFB6o6iy127o2Jfb\\r\\nUVpbNU/rJGxVI2K1BIzkfrXAJ0pkjkdP7OFE6yRLU4ZcATWSIPwGvlF6a0/QPC3B\\r\\nbTvp2+DYPDC4pKWxNF/qOwOnMWqxGq6ookn12N/GufA/Ugv3BTVoy7I7Q9SXty4u\\r\\nUat19OBJVIqBOMgXsyDz0x/C6lhBR2uQ1K06XRa8N4hbfcgkSs+yNBkLfBl7N80Q\\r\\n0Wkq2PHetzQU12dPnz64vvr6s0rpYIo20VtLzhYA8ZxseGc3s7zmY5QWYx3ek7Vu\\r\\nwPv9QQzcmtIQQsUbekPoLnKLt6wJhPIGEr4tPXy8bmbaThRMx4tjyEQYy6d+uD0h\\r\\nXTLSjZ1SccMRqLxoPtTWVNXKY1E84EcS/QkqlY4AthLFBL6r+lnm+DlNaG8LMwCm\\r\\ncz5NMag9ooM9IqgdDYhUpWYDSdOvDubtz1YZ4hjQhaofdC2AkPXRiQvMy/Nx9WjQ\\r\\nn4z387kz5PK5YbadoZYkwtFttmxJ/EQkkhGEDTXoSRTufv+qjXDsmhEsdaNkvcDP\\r\\n1uiCSY19UWe5LQhIMbR0u/0CAwEAAQ==\\r\\n-----END PUBLIC KEY-----\"\n      },\n      {\n        \"id\": \"#key-6\",\n        \"type\": \"Ed25519VerificationKey2018\",\n        \"controller\": \"#id\",\n        \"publicKeyBase58\": \"4zZJaPg26FYcLZmqm99K2dz99agHd5rkhuYGCcKntAZ4\"\n      }\n    ],\n    \"authentication\": [\"#key-2\", \"#key-3\", \"#key-4\", \"#key-5\", \"#key-6\"]\n}\n

    [TODO: define DIDs from other ecosystems that put the same set of keys in their DID docs -- maybe bob-many-2 is a did:eth using these same keys, and bob-many-3 is a did:btc using them...]

    "},{"location":"features/0114-predefined-identities/#keys","title":"Keys","text":""},{"location":"features/0114-predefined-identities/#key-1-ed25519","title":"Key 1 (Ed25519)","text":"

    This key is used by the alice-sov-1 DID, but could also be used with other DIDs defined elsewhere.

    signing key (private)\nGa3v3SyNsvv1QhSCrEAQfJiyxQYUdZzQARkCosSWrXbT\n\nhex seed (private; in a form usable by Indy CLI)\ne756c41c1b5c48d3be0f7b5c7aa576d2709f13b67c9078c7ded047fe87c8a79e\n\nverkey (public)\nGBMBzuhw7XgSdbNffh8HpoKWEdEN6hU2Q5WqL1KQTG5Z\n\nas a Sovrin DID\ndid:sov:UrDaZsMUpa91DqU3vrGmoJ\n
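For Indy/Sovrin material like the above, the value of the unqualified DID is conventionally the base58 encoding of the first 16 bytes of the 32-byte verkey. The sketch below (plain Python, no Aries or Indy libraries; the codec helpers are ours, not part of any official API) makes that relationship inspectable:

```python
# Minimal base58 codec (Bitcoin alphabet), enough to inspect how a
# verkey relates to its did:sov value.
ALPHABET = "123456789ABCDEFGHJKLMNPQRSTUVWXYZabcdefghijkmnopqrstuvwxyz"

def b58encode(data: bytes) -> str:
    n = int.from_bytes(data, "big")
    out = ""
    while n:
        n, r = divmod(n, 58)
        out = ALPHABET[r] + out
    # Leading zero bytes are encoded as leading '1' characters.
    pad = len(data) - len(data.lstrip(b"\x00"))
    return "1" * pad + out

def b58decode(s: str) -> bytes:
    n = 0
    for ch in s:
        n = n * 58 + ALPHABET.index(ch)
    body = n.to_bytes((n.bit_length() + 7) // 8, "big") if n else b""
    pad = len(s) - len(s.lstrip("1"))
    return b"\x00" * pad + body

verkey = "GBMBzuhw7XgSdbNffh8HpoKWEdEN6hU2Q5WqL1KQTG5Z"
# For Indy-generated material, this should print the DID value
# shown above (the base58 of the first 16 bytes of the verkey).
print(b58encode(b58decode(verkey)[:16]))
```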
    "},{"location":"features/0114-predefined-identities/#key-2-ed25519","title":"Key 2 (Ed25519)","text":"

    This key is used by the bob-many-1 DID, but could also be used with other DIDs defined elsewhere.

    signing key (private)\nFE2EYN25vcQmCU52MkiHuXHKqR46TwjFU4D4TGaYDRyd\n\nhex seed (private)\nd3598fea152e6a480faa676a76e545de7db9ac1093b9cee90b031d9625f3ce64\n\nverkey (public)\nFFhViHkJwqA15ruKmHQUoZYtc5ZkddozN3tSjETrUH9z\n\nas a Sovrin DID\ndid:sov:T9nQQ8CjAhk2oGAgAw1ToF\n
    "},{"location":"features/0114-predefined-identities/#key-3-secp256k1","title":"Key 3 (Secp256k1)","text":"

    This key is used by the bob-many-1 DID, but could also be used with other DIDs defined elsewhere.

    -----BEGIN EC PRIVATE KEY-----\nMHQCAQEEIMFcUvDujXt0/C48vm1Wfj8ADlrGsHCHzp//2mUARw79oAcGBSuBBAAK\noUQDQgAEo0UhyBkdYl/4EcgqJKYP+fF0yLF6dVDBG7o12/l/PwQ5LmqcY1P9B5h+\nAWEiFXv1bEh4ZQNnIuSpeLts2IQ/qA==\n-----END EC PRIVATE KEY-----\n\n-----BEGIN PUBLIC KEY-----\nMFYwEAYHKoZIzj0CAQYFK4EEAAoDQgAEo0UhyBkdYl/4EcgqJKYP+fF0yLF6dVDB\nG7o12/l/PwQ5LmqcY1P9B5h+AWEiFXv1bEh4ZQNnIuSpeLts2IQ/qA==\n-----END PUBLIC KEY-----\n\npublic key as hex\n3056301006072a8648ce3d020106052b8104000a03420004a34521c8191d625ff811c82a24a60ff9f174c8b17a7550c11bba35dbf97f3f04392e6a9c6353fd07987e016122157bf56c487865036722e4a978bb6cd8843fa8\n
    "},{"location":"features/0114-predefined-identities/#key-4-1024-bit-rsa","title":"Key 4 (1024-bit RSA)","text":"

    This key is used by the bob-many-1 DID, but could also be used with other DIDs defined elsewhere.

    -----BEGIN RSA PRIVATE KEY-----\nMIICXQIBAAKBgQDlOJu6TyygqxfWT7eLtGDwajtNFOb9I5XRb6khyfD1Yt3YiCgQ\nWMNW649887VGJiGr/L5i2osbl8C9+WJTeucF+S76xFxdU6jE0NQ+Z+zEdhUTooNR\naY5nZiu5PgDB0ED/ZKBUSLKL7eibMxZtMlUDHjm4gwQco1KRMDSmXSMkDwIDAQAB\nAoGAfY9LpnuWK5Bs50UVep5c93SJdUi82u7yMx4iHFMc/Z2hfenfYEzu+57fI4fv\nxTQ//5DbzRR/XKb8ulNv6+CHyPF31xk7YOBfkGI8qjLoq06V+FyBfDSwL8KbLyeH\nm7KUZnLNQbk8yGLzB3iYKkRHlmUanQGaNMIJziWOkN+N9dECQQD0ONYRNZeuM8zd\n8XJTSdcIX4a3gy3GGCJxOzv16XHxD03GW6UNLmfPwenKu+cdrQeaqEixrCejXdAF\nz/7+BSMpAkEA8EaSOeP5Xr3ZrbiKzi6TGMwHMvC7HdJxaBJbVRfApFrE0/mPwmP5\nrN7QwjrMY+0+AbXcm8mRQyQ1+IGEembsdwJBAN6az8Rv7QnD/YBvi52POIlRSSIM\nV7SwWvSK4WSMnGb1ZBbhgdg57DXaspcwHsFV7hByQ5BvMtIduHcT14ECfcECQATe\naTgjFnqE/lQ22Rk0eGaYO80cc643BXVGafNfd9fcvwBMnk0iGX0XRsOozVt5Azil\npsLBYuApa66NcVHJpCECQQDTjI2AQhFc1yRnCU/YgDnSpJVm1nASoRUnU8Jfm3Oz\nuku7JUXcVpt08DFSceCEX9unCuMcT72rAQlLpdZir876\n-----END RSA PRIVATE KEY-----\n\n-----BEGIN PUBLIC KEY-----\nMIGfMA0GCSqGSIb3DQEBAQUAA4GNADCBiQKBgQDlOJu6TyygqxfWT7eLtGDwajtN\nFOb9I5XRb6khyfD1Yt3YiCgQWMNW649887VGJiGr/L5i2osbl8C9+WJTeucF+S76\nxFxdU6jE0NQ+Z+zEdhUTooNRaY5nZiu5PgDB0ED/ZKBUSLKL7eibMxZtMlUDHjm4\ngwQco1KRMDSmXSMkDwIDAQAB\n-----END PUBLIC KEY-----\n
    "},{"location":"features/0114-predefined-identities/#key-5-4096-bit-rsa","title":"Key 5 (4096-bit RSA)","text":"

    This key is used by the bob-many-1 DID, but could also be used with other DIDs defined elsewhere.

    -----BEGIN RSA PRIVATE KEY-----\nMIIJKAIBAAKCAgEAoZp7md4nkmmFvkoHhQMwN0lcpYeKfeinKir7zYWFLmpClZHa\nwZKLkB52+nnY4w9ZlKhc4Yosrw/N0h1sZlVZfOQBnzFUQCea6uK/4BKHPhiHpN73\nuOwu5TAY4BHS7fsXRLPgQFB6o6iy127o2JfbUVpbNU/rJGxVI2K1BIzkfrXAJ0pk\njkdP7OFE6yRLU4ZcATWSIPwGvlF6a0/QPC3BbTvp2+DYPDC4pKWxNF/qOwOnMWqx\nGq6ookn12N/GufA/Ugv3BTVoy7I7Q9SXty4uUat19OBJVIqBOMgXsyDz0x/C6lhB\nR2uQ1K06XRa8N4hbfcgkSs+yNBkLfBl7N80Q0Wkq2PHetzQU12dPnz64vvr6s0rp\nYIo20VtLzhYA8ZxseGc3s7zmY5QWYx3ek7VuwPv9QQzcmtIQQsUbekPoLnKLt6wJ\nhPIGEr4tPXy8bmbaThRMx4tjyEQYy6d+uD0hXTLSjZ1SccMRqLxoPtTWVNXKY1E8\n4EcS/QkqlY4AthLFBL6r+lnm+DlNaG8LMwCmcz5NMag9ooM9IqgdDYhUpWYDSdOv\nDubtz1YZ4hjQhaofdC2AkPXRiQvMy/Nx9WjQn4z387kz5PK5YbadoZYkwtFttmxJ\n/EQkkhGEDTXoSRTufv+qjXDsmhEsdaNkvcDP1uiCSY19UWe5LQhIMbR0u/0CAwEA\nAQKCAgBWzqj+ajtPhqd1JEcNyDyqNhoyQLDAGa1SFWzVZZe46xOBTKv5t0KI1BSN\nT86VibVRCW97J8IA97hT2cJU5hv/3mqQnOro2114Nv1i3BER5hNXGP5ws04thryW\nAH0RoQNKwGUBpzl5mDEZUFZ7oncJKEQ+SwPAuQCy1V7vZs+G0RK7CFcjpmLkl81x\nkjl0UIQzkhdA6KCmsxXTdzggW2O/zaM9nXYKPxGwP+EEhVFJChlRjkI8Vv32z0vk\nh7A0ST16UTsL7Tix0rfLI/OrTn9LF5NxStmZNB1d5v30FwtiqXkGcQn/12QhGjxz\nrLbGDdU3p773AMJ1Ac8NhpKN0vXo7NOh9qKEq0KfLy+AD6CIDB9pjZIolajqFOmO\nRENAP9eY/dP7EJNTSU84GJn8csQ4imOIYqp0FkRhigshMbr7bToUos+/OlHYbMry\nr/I8VdMt4xazMK5PtGn9oBzfv/ovNyrQxv562rtx3G996HFF6+kCVC3mBtTHe0p2\nVKNJaXlQSkEyrYAOqhnMvIfIMuuG2+hIuv5LBBdCyv6YC4ER2RsaXHt4ZBfsbPfO\nTEP4YCJTuLc+Fyg1f01EsuboB0JmvzNyiK+lBp8FsxiqwpIExriBCPJgaxoWJMFh\nxrRzTXwBWkJaDhYVbc2bn8TtJE6uEC9m4B7IUQOrXXKyOTqUgQKCAQEAzJl16J3Y\nYjkeJORmvi2J1UbaaBJAeCB7jwXlarwAq8sdxEqdDoRB6cZhWX0VMH46oaUA+Ldx\nCoO2iMgOrs0p6dJOj1ybtIhiX9PJTzstd5WEltU/mov+DzlBiKg78dFi/B5HfE/F\nKIDx4gTcD//sahooMqbg78QfOO+JjLrvT7TljL/puOAM8LTytZqOaDIDwnblpSgZ\nJcCqochmz9b7f7NHbgVrBkXZTsgbH6Dw4H7T0WC4K4P4dJW8Js18r+xN3W8/ZhmY\nlxTDZy40LlUy7++Ha+v8vZ4cRJKq2sdTtt9Z/ZYDfpCDT5ZmGS/gDloGean9mivG\nlt/zgDswEUji9QKCAQEAyjPKsBitJ39S26H/hp5oZRad1MOXgXPpbu8ARmHooKP3\nQ0dtnreBLzIQxxIitp3GjzJFU9r/tqy/ylOhIGAt+340KoSye3gGpvxZImMAIIR9\ns03GE5AHJ4J5NIxQKX+g9o0fV44bVNrLzAnHaZh+Bi4xbLatB
JABgN2TnjA8lx7x\nlrqb99VpKLZP7DGxK7o0Ji4qerMPeIVoJ9RaUkTYguJaXG22nPeKfDiI13xlm1RU\nptulJG3CkRYp48Udmqb1b+67KMOxKL1ISGhuzqitOY+Ua1sM5SEFyukEhMuK6/uM\nSCAVl9aNHU5vx95D/T7onPAnxNqDObWeZi2HWoif6QKCAQEAxC4BmOKBMO2Dsewv\nd/tCRnaBxXh6yLScxS7qI8XQ/ujryeOhZOH8MaQ+hAgj4TOoFIaav+FlSqewxsbN\nDV876S/2lBBAXILJkQkJ5ibgGeIMGHSxYAcLvJ0x8U8e62fSedyuvsveSFAbnpT6\nTX0fuz0Jfkf1NvHe3kEQqxgzj0HtOWBrQxHSVpuqfeeM1OvgHv7Sg+JG+qQa+LWn\nn3KMBI5q11vqm0EudRP6rgEr9pallAYhkdggy+knWC2AeU8j+kdJiyTP403Nb4om\nDqczCE2slBbbaRXKFRZtLQojgx32s+i7wQfgYNfdXhlBxYEc5FvTB5kh+lkSqsoV\n9PzmYQKCAQBrQHGAWnZt/uEqUpFBDID/LbHmCyEvrxXgm7EfpAtKOe6LpzWD/H3v\nVLUFgp8bEjEh/146jm0YriTE4vsSOzHothZhfyVUzGNq62s0DCMjHGO4WcZ41eqV\nkGVN9CcI/AObA1veiygAKFX1EjLN1e7yxEm/Cl5XjzLc8aq9O4TH+8fVVYIpQO+Y\ngqt98xWwxgGnRtGNZ7ELEmgeyEpoXNAjDIE1iZRVShAQt8QN2JPkgiSspNDBs96C\nKqlpgUKkp26EQrLPeo1buJrAnXQ49ct8PqZRE2iRmKSD7nlRHs2/Qhw0naAWe905\n8ELmVwTlLRshM1lE10rHr4gnVnr3EIURAoIBAFXLQXV9CuLoV9nosprVYbhSWLMj\nO9ChjgGfCmqi3gQecJxctwNlo3l8f5W2ZBrIqgWFsrxzHd2Ll4k2k/IcFa4jtz9+\nPrSGZz8TEkM5ERSwDd1QXNE/P7AV6EDs/W/V0T5G1RE82YGkf0PNM+drJ/r/I4HS\nN0DDlZb8YwjkP1tT8x3I+vx9bLWczbsMhrwIEUPQJZxMSdZ+DMM45TwAXyp9aLzU\npa9CdL1gAtSLA7AmcafGeUIA7N1evRYuUVWhhSRjPX55hGBoO0u9fxZIPRTf0dcK\nHHK05KthUPh7W5TXSPbni/GyuNg3H7kavT7ANHOwI77CfaKFgxLrZan+sAk=\n-----END RSA PRIVATE KEY-----\n\n-----BEGIN PUBLIC 
KEY-----\nMIICIjANBgkqhkiG9w0BAQEFAAOCAg8AMIICCgKCAgEAoZp7md4nkmmFvkoHhQMw\nN0lcpYeKfeinKir7zYWFLmpClZHawZKLkB52+nnY4w9ZlKhc4Yosrw/N0h1sZlVZ\nfOQBnzFUQCea6uK/4BKHPhiHpN73uOwu5TAY4BHS7fsXRLPgQFB6o6iy127o2Jfb\nUVpbNU/rJGxVI2K1BIzkfrXAJ0pkjkdP7OFE6yRLU4ZcATWSIPwGvlF6a0/QPC3B\nbTvp2+DYPDC4pKWxNF/qOwOnMWqxGq6ookn12N/GufA/Ugv3BTVoy7I7Q9SXty4u\nUat19OBJVIqBOMgXsyDz0x/C6lhBR2uQ1K06XRa8N4hbfcgkSs+yNBkLfBl7N80Q\n0Wkq2PHetzQU12dPnz64vvr6s0rpYIo20VtLzhYA8ZxseGc3s7zmY5QWYx3ek7Vu\nwPv9QQzcmtIQQsUbekPoLnKLt6wJhPIGEr4tPXy8bmbaThRMx4tjyEQYy6d+uD0h\nXTLSjZ1SccMRqLxoPtTWVNXKY1E84EcS/QkqlY4AthLFBL6r+lnm+DlNaG8LMwCm\ncz5NMag9ooM9IqgdDYhUpWYDSdOvDubtz1YZ4hjQhaofdC2AkPXRiQvMy/Nx9WjQ\nn4z387kz5PK5YbadoZYkwtFttmxJ/EQkkhGEDTXoSRTufv+qjXDsmhEsdaNkvcDP\n1uiCSY19UWe5LQhIMbR0u/0CAwEAAQ==\n-----END PUBLIC KEY-----\n
    "},{"location":"features/0114-predefined-identities/#key-6-ed25519","title":"Key 6 (Ed25519)","text":"

    This key is used by the bob-many-1 DID, but could also be used with other DIDs defined elsewhere.

    signing key (private)\n9dTU6xawVQJprz7zYGCiTJCGjHdW5EcZduzRU4z69p64\n\nhex seed (private; in a form usable by Indy CLI)\n803454c9429467530b17e8e571df5442b6620ac06ab0172d943ab9e01f6d4e31\n\nverkey (public)\n4zZJaPg26FYcLZmqm99K2dz99agHd5rkhuYGCcKntAZ4\n\nas a Sovrin DID\ndid:sov:8KrDpiKkHsFyDm3ZM36Rwm\n
    "},{"location":"features/0114-predefined-identities/#tools-to-generate-your-own-identity-material","title":"Tools to generate your own identity material","text":""},{"location":"features/0114-predefined-identities/#implementations","title":"Implementations","text":"

    The following lists the implementations (if any) of this RFC. Please do a pull request to add your implementation. If the implementation is open source, include a link to the repo or to the implementation within the repo. Please be consistent in the \"Name\" field so that a mechanical processing of the RFCs can generate a list of all RFCs supported by an Aries implementation.

    Name / Link Implementation Notes"},{"location":"features/0116-evidence-exchange/","title":"Aries RFC 0116: Evidence Exchange Protocol 0.9","text":""},{"location":"features/0116-evidence-exchange/#summary","title":"Summary","text":"

    The goal of this protocol is to allow Holders to provide an inquiring Verifier with a secure and trusted mechanism for obtaining access to the foundational evidence that gave the Issuer the assurances necessary to create the Verifiable Credential(s) that the Holder has presented to the Verifier. To this end, a P2P evidence exchange protocol is required that will allow parties using Pair-wise Peer DIDs to exchange evidence in support of the issuance of Verifiable Credentials without any dependencies on a centralized storage facility.

    "},{"location":"features/0116-evidence-exchange/#motivation","title":"Motivation","text":"

    During the identity verification process, an entity may require access to the genesis documents used to establish digital credentials issued by a credential issuing entity or Credential Service Provider (CSP). In support of the transition from existing business verification processes to emerging business processes that rely on digitally verified credentials using protocols such as 0036-issue-credential and 0037-present-proof, we need to establish a protocol that allows entities to make this transition while remaining compliant with business and regulatory requirements. Therefore, we need a mechanism for Verifiers to obtain access to vetted evidence (physical or digital information or documentation) without requiring a relationship or interaction with the Issuer.

    While this protocol should be supported by all personas, its relevance to decentralized identity ecosystems is highly dependent on the business policies of a market segment of Verifiers. For more details see the Persona section.

    While technology advancements around identity verification are improving, business policies (most often grounded in risk mitigation) will not change at the same rate. For example, just because a financial institution in Singapore is willing to rely on the KYC due-diligence processing of another institution, we should not assume that banks in another geolocation (e.g., Hong Kong) can embrace the same level of trust. For this reason, we must enable Verifiers with the option to obtain evidence that backs any assertions made by digital credential issuers.

    Based on a web-of-trust and cryptographic processing techniques, Verifiers of digital credentials can fulfill their identity proofing workflow requirements. However, business policies and regulatory compliance may require them to have evidence for oversight activities such as but not limited to government mandated Anti-Money Laundering (AML) Compliance audits.

    Verifiers or relying parties (RPs) of digital credentials need to make informed decisions about the risk of accepting a digital identity before trusting the digital credential and granting associated privileges. To mitigate such risk, the Verifier may need to understand the strength of the identity proofing process. According to a December 2015 NIST Information Technology Laboratory workshop report, Measuring Strength of Identity Proofing, there are two (2) identity proofing methods that can be leveraged by a CSP:

    Proofing Method Description In-Person Identity Proofing Holder is required to present themselves and their documentation directly to a trained representative of an identity proofing agency. Remote Identity Proofing Holder is not expected to present themselves or their documents at a physical location. Validation and verification of presented data (including digital documents) is performed programmatically against one or more corroborating authoritative sources of data.

    If the In-Person Identity Proofing method is used, the strength can easily be determined by allowing the Verifier to gain access to any Original Documents used by the Issuer of a Derived Credential. In the situation where a Remote Identity Proofing method is used, confidence in the strength of the identity proofing process can be determined by allowing the Verifier to gain access to Digital Assertions used by the Issuer of a Derived Credential.

    "},{"location":"features/0116-evidence-exchange/#problem-scope","title":"Problem Scope","text":"

    This protocol is intended to address the following challenging questions:

    1. What evidence (information or documentation) was used to establish the level of certitude necessary to allow an Issuer to issue a Verifiable Credential?

    2. For each Identity Proofing Inquiry (challenge) such as Address, Identity, Photo and Achievement, which forms of evidence were used by the Issuer of the Verifiable Credential?

    3. When the Issuer's Examiner relies on an Identity Proofing Service Provider (IPSP) as part of its Remote Identity Proofing process:

        - Can the IPSP provide a Digital Assertion in association with the Identity Instrument they have vetted as part of their service to the Examiner?

        - Can the Issuer provide a Digital Assertion in association with its certitude in the reliability of its due-diligence activity that is dependent on 3rd parties?

    4. When the Issuer relies on trained examiners for its In-Person Identity Proofing process, can the Issuer provide access to the digitally scanned documents either by-value or by-reference?

    "},{"location":"features/0116-evidence-exchange/#assurance-levels","title":"Assurance Levels","text":"

    Organizations that implement Identity Proofing generally seek to balance cost, convenience, and security for both the Issuer and the Holder. Examples of these tradeoffs include:

    To mitigate the risk associated with such tradeoffs, the NIST 800-63A Digital Identity Guidelines outline three (3) levels of identity proofing assurance. These levels describe the degree of due-diligence performed during an Identity Proofing Process. See Section 5.2 Identity Assurance Levels Table 5-1.

    Users of this protocol will need to understand the type of evidence that was collected and how it was confirmed so that they can adhere to any business processes that require IAL2 or IAL3 assurance levels supported by Strong or Superior forms of evidence.

    "},{"location":"features/0116-evidence-exchange/#dematerialization-of-physical-documents","title":"Dematerialization of physical documents","text":"

    Today, entities (businesses, organizations, government agencies) maintain existing processes for the gathering, examination and archiving of physical documents. These entities may retain a copy of a physical document, a scanned digital copy or both. Using manual or automated procedures, the information encapsulated within these documents is extracted and stored as personal data attestations about the document presenter within a system of record (SOR).

    As decentralized identity technologies begin to be adopted, these entities can transform these attestations into Verifiable Credentials.

    "},{"location":"features/0116-evidence-exchange/#understanding-kyc","title":"Understanding KYC","text":"

    Know Your Customer (KYC) is a process by which entities (businesses, governments, organizations) obtain information about the identity and address of their customers. This process helps to ensure that the services that the entity provides are not misused. KYC procedures vary based on geolocation and industry. For example, the KYC documents required to open a bank account in India versus the USA may differ, but the basic intent of demonstrating proof of identity and address is similar. Additionally, the KYC documents necessary to meet business processing requirements for enrollment in a university may differ from those for onboarding a new employee.

    Regardless of the type of KYC processing performed by an entity, there may be regulatory or business best practice requirements that mandate access to any Original Documents presented as evidence during the KYC process. As entities transition from paper/plastic based identity proofing practices to Verifiable Credentials there may exist (albeit only for a transitional period) the need to gain access to the Identity Evidence that an Issuer examined before issuing credentials.

    This process is time-consuming and costly for the credential Issuer and often redundant and inconvenient for the Holder. Some industry attempts have been made to establish centrally controlled B2B sharing schemas to help reduce such impediments to the Issuer. These approaches are typically viewed as vital for the betterment of the Issuers and Verifiers and are not designed for or motivated by the data privacy concerns of the Holder. The purpose of this protocol is to place the Holder at the center of the P2P C2B exchange of Identity Evidence while allowing Verifiers to gain confidence in identity proofing assurance levels.

    "},{"location":"features/0116-evidence-exchange/#evidence-vetting-workflow","title":"Evidence Vetting Workflow","text":"

    The Verifiable Credentials Specification describes three key stakeholders in an ecosystem that manages digital credentials: Issuers, Holders and Verifiers. However, before an Issuer can attest to claims about a Holder, an Examiner must perform the required vetting, due diligence, regulatory compliance and other tasks needed to establish confidence in making a claim about an identity trait associated with a Holder. The actions of the Examiner may include physical validation of information (e.g., comparison of a real person to a photo) as well as reliance on third-party services as part of its vetting process. Depending on the situational context of a credential request or the type of privileges to be granted, the complexity of the vetting process taken by an examiner to confirm the truth about a specific trait may vary.

    An identity Holder may present Identity Evidence to an Examiner in the form of a physical document or other forms of Identity Instruments to resolve Identity Proofing Inquiries. The presentment of these types of evidence may come in a variety of formats:

    "},{"location":"features/0116-evidence-exchange/#evidence-access-matrix","title":"Evidence Access Matrix","text":"

    Note: The assumption herein is that original documents are never forfeited by an individual.

    Original Source Format Issuer Archived Format Verifier Business Process Format Protocol Requirement Paper/Plastic Paper-Copy n/a n/a Paper/Plastic Digital Copy Digital Copy Access by Value Paper/Plastic Digital Copy URL Access by Reference Digital Copy Digital Copy Digital Copy Access by Value Digital Copy Digital Copy URL Access by Reference Digital Scan Digital Copy Digital Copy Digital Assertion URL Digital Copy Digital Copy Access by Value URL Digital Copy URL Access by Reference"},{"location":"features/0116-evidence-exchange/#why-a-peer-did-protocol","title":"Why a Peer DID Protocol?","text":"

    In a decentralized identity ecosystem where peer relationships no longer depend on a centralized authority for the source of truth, why should a Verifier refer to some 3rd party or back to the issuing institution for capturing Identity Evidence?

    "},{"location":"features/0116-evidence-exchange/#solution-concepts","title":"Solution Concepts","text":""},{"location":"features/0116-evidence-exchange/#protocol-assumptions","title":"Protocol Assumptions","text":"
    1. The Holder must present Identity Evidence access to the Verifier such that the Verifier can be assured that the Issuer vetted the evidence.
    2. Some business processes and/or regulatory compliance requirements may demand that a Verifier gains access to the Original Documents vetted by a credential Issuer.
    3. Some Issuers may accept digital access links to documents as input into the vetting process. This is often associated with Issuers who will accept copies of the Original Documents.
    4. Some Issuers may accept Digital Assertions from IPSPs as evidence of their due-diligence process. Examples of such IPSPs are: Acuant, Au10tix, IWS, Onfido and 1Kosmos.
    "},{"location":"features/0116-evidence-exchange/#protocol-objectives","title":"Protocol Objectives","text":"

    In order for a Verifier to avoid or reduce evidence vetting expenses, it must be able to:

    This implies that the protocol must address the following evidence concerns:

    Interaction Type Challenge Protocol Approach Examiner-to-Holder How does Issuer provide Holder with proof that it has vetted Identity Evidence? Issuer signs hash of the evidence and presents signature to Holder. Holder-to-Verifier How does Holder present Verifier with evidence that the Issuer of a Credential vetted Identity Evidence? Holder presents verifier with digitally signed hash of evidence, public DID of Issuer and access to a copy of the digital evidence. Verifier-to-FileStorageProvider How does Verifier access the evidence in digital format (base64)? Issuer or Holder must provide secure access to a digital copy of the document. Verifier-to-Verifier How does Verifier validate that Issuer attests to the vetting of the Identity Evidence for personal data claims encapsulated in issued credentials? Verifier gains access to the digital evidence, fetches the public key associated with the Issuer's DID and validates Issuer's signature of document hash."},{"location":"features/0116-evidence-exchange/#protocol-outcome","title":"Protocol Outcome","text":"

    This protocol is intended to be a complement to the foundational (issuance, verification) protocols for credential lifecycle management in support of the Verifiable Credentials Specification. Over time, it is assumed that the exchange of Identity Evidence will no longer be necessary as digital credentials become ubiquitous. In the meantime, the trust in and access to Identity Evidence can be achieved in private peer-to-peer relationships using the Peer DID Spec.

    "},{"location":"features/0116-evidence-exchange/#persona","title":"Persona","text":"

    This protocol addresses the business policy needs of a market segment of Verifiers. Agent Software used by the following persona is required to support this market segment.

    Persona Applicability Examiner Entities that perform In-Person and/or Remote Identity Proofing processes and need to support potential requests for evidence in support of the issuance of Verifiable Credentials based on the results of such processes. Issuer Entities with the certitude to share with a Holder supporting evidence for the due-diligence performed in association with attestations backing an issued Verifiable Credential. Holder A recipient of a Verifiable Credential that desires to proactively gather supporting evidence of such a credential in case a Verifier should inquire. Verifier Entities that require access to Original Documents or Digital Assertions because they cannot (for business policy reasons) rely on the identity proofing due-diligence of others. These entities may refer to a Trust Score based on their own business heuristics associated with the type of evidence supplied: Original Documents, Digital Assertions."},{"location":"features/0116-evidence-exchange/#user-stories","title":"User Stories","text":"

    An example of the applicability of this protocol to real world user scenarios is discussed in the context of a decentralized digital notary where the credential issuing institution is not the issuer of the original source document(s) or digital assertions.

    "},{"location":"features/0116-evidence-exchange/#evidence-types","title":"Evidence Types","text":"

    In the context of this protocol, Identity Evidence represents physical or digital information-based artifacts that support answers to common Identity Proofing Inquiries (challenges):

    The following non-exhaustive list of physical information-based artifacts (documents) is used as evidence when responding to common identity-related inquiries. They are often accompanied by a recent photograph. Since this protocol is intended to be agnostic of business and regulatory processes, the types of acceptable documents will vary.

    Proof Type Sample Documents Address Passport, Voter\u2019s Identity Card, Utility Bill (Gas, Electric, Telephone/Mobile), Bank Account Statement, Letter from any recognized public authority or public servant, Credit Card Statement, House Purchase deed, Lease agreement along with last 3 months rent receipt, Employer\u2019s certificate for residence proof Identity Passport, PAN Card, Voter\u2019s Identity Card, Driving License, Photo identity proof of Central or State government, Ration card with photograph, Letter from a recognized public authority or public servant, Bank Pass Book bearing photograph, Employee identity card of a listed company or public sector company, Identity card of University or board of education Photo Passport, Pistol Permit, Photo identity proof of Central or State government Achievement Diploma, Certificate Privilege Membership/Loyalty Card, Health Insurance Card

    These forms of Identity Evidence are examples of trusted credentials that an Examiner relies on during their vetting process.

    "},{"location":"features/0116-evidence-exchange/#tutorial","title":"Tutorial","text":"

    The evidence exchange protocol builds on the attachment decorator within DIDComm using the Inlining Method for Digital Assertions and the Appending Method for Original Documents.

    The protocol is comprised of the following messages and associated actions:

    Interaction Type Message Process Actions Holder to Issuer Request Evidence Holder reviews the list of credentials it has received from the Issuer and sends an evidence_request message to Issuer's agent. Issuer to Holder Evidence Response Issuer collects Identity Evidence associated with each requested credential ID and sends an evidence_response message to Holder's agent. Upon receipt, the Holder stores evidence data in Wallet. Verifier to Holder Evidence Access Request Verifier builds and sends an evidence_access_request message to Holder's agent. Holder to Verifier Evidence Access Response Holder builds and sends an evidence_access_response message to the Verifier's agent. Verifier fetches requested Identity Evidence and performs digital signature validation on each. Verifier stores evidence in system of record.

    "},{"location":"features/0116-evidence-exchange/#request-evidence-message","title":"Request Evidence Message","text":"

    This message should be used as an accompaniment to an issue credential message. Upon receipt and storage of a credential, the Holder should compose an evidence_request message for each credential received from the Issuer. The Holder may use this message to get an update for new and existing credentials from the Issuer.

    {\n  \"@type\": \"https://didcomm.org/evidence_exchange/1.0/evidence_request\",\n  \"@id\": \"6a4986dd-f50e-4ed5-a389-718e61517207\",\n  \"for\": \"did:peer:1-F1220479cbc07c3f991725836a3aa2a581ca2029198aa420b9d99bc0e131d9f3e2cbe\",\n  \"as_of_time\": \"2019-07-23 18:05:06.123Z\",\n  \"credentials\": [\"cred-001\", \"cred-002\"],\n  \"request-type\": \"by-value\"\n}\n

    Description of attributes:

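A Holder-side sketch of assembling this message (the helper name and validation logic below are ours, not defined by the RFC; the request-type values follow the by-value and by-reference access modes from the Evidence Access Matrix):

```python
import uuid
from datetime import datetime, timezone

# Access modes from the Evidence Access Matrix (assumed to be the only
# legal values of the request-type attribute).
REQUEST_TYPES = {"by-value", "by-reference"}

def build_evidence_request(holder_did: str, credential_ids: list, request_type: str) -> dict:
    """Assemble an evidence_request message shaped like the example above."""
    if request_type not in REQUEST_TYPES:
        raise ValueError(f"unsupported request-type: {request_type}")
    return {
        "@type": "https://didcomm.org/evidence_exchange/1.0/evidence_request",
        "@id": str(uuid.uuid4()),
        "for": holder_did,
        # Millisecond-precision timestamp matching the example's format.
        "as_of_time": datetime.now(timezone.utc).strftime("%Y-%m-%d %H:%M:%S.%f")[:-3] + "Z",
        "credentials": credential_ids,
        "request-type": request_type,
    }
```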
    "},{"location":"features/0116-evidence-exchange/#evidence-response-message","title":"Evidence Response Message","text":"

    This message is required of an Issuer's agent in response to an evidence_request message. The format of the ~attach attribute will be determined by the value of the request-type attribute in the associated request message from the Holder. If the Issuer relied on one or more IPSPs during the Identity Proofing Process, then this message will also include an inline attachment using the examiner_assertions attribute.

    {\n  \"@type\": \"https://didcomm.org/evidence_exchange/1.0/evidence_response\",\n  \"@id\": \"1517207d-f50e-4ed5-a389-6a4986d718e6\",\n  \"~thread\": { \"thid\": \"6a4986dd-f50e-4ed5-a389-718e61517207\" },\n  \"for\": \"did:peer:1-F1220479cbc07c3f991725836a3aa2a581ca2029198aa420b9d99bc0e131d9f3e2cbe\",\n  \"as_of_time\": \"2019-07-23 18:05:06.123Z\",\n  \"credentials\": [\n    { \"@id\": \"cred-001\",\n      \"evidence\": [\n        {\"evidence_type\": \"Address\", \"evidence_ref\": [\"#kycdoc1\", \"#kycdoc4\"]},\n        {\"evidence_type\": \"Identity\", \"evidence_ref\": [\"#kycdoc2\"]},\n        {\"evidence_type\": \"Photo\", \"evidence_ref\": null}\n      ]\n    },\n    { \"@id\": \"cred-002\",\n      \"evidence\": [\n        {\"evidence_type\": \"Address\", \"evidence_ref\": [\"#kycdoc1\",\"#kycdoc3\"]},\n        {\"evidence_type\": \"Identity\", \"evidence_ref\": [\"#kycdoc3\"]},\n        {\"evidence_type\": \"Photo\", \"evidence_ref\": [\"#kycdoc1\"]}\n      ]\n    }\n  ],\n  \"examiner_assertions\": [ ... ],\n  \"~attach\": [ ... ]\n}\n

    Description of attributes:

    "},{"location":"features/0116-evidence-exchange/#examiner-assertions","title":"Examiner Assertions","text":"

    {\n  \"examiner_assertions\": [\n    {\n      \"@id\": \"kycdoc4\",\n      \"approval_timestamp\": \"2017-06-21 09:04:088\",\n      \"description\": \"driver's license\",\n      \"vetting_process\": {\n        \"method\": \"remote\",\n        \"technology\": \"api\"\n      },\n      \"ipsp_did\": \"~3d5nh7900fn4\",\n      \"ipsp_claim\": <base64url(file)>,\n      \"ipsp_claim_sig\": \"3vvvb68b53d5nh7900fn499040cd9e89fg3kkh0f099c0021233728cf67945faf\",\n      \"examinerSignature\": \"f67945faf9e89fg3kkh3vvvb68b53d5nh7900fn499040cd3728c0f099c002123\"\n    }\n  ]\n}\n
    Description of attributes:

    "},{"location":"features/0116-evidence-exchange/#by-value-attachments","title":"By-value Attachments","text":"
    {\n  \"~attach\": [\n    {\n      \"@id\": \"kycdoc1\",\n      \"mime-type\": \"image/png\",\n      \"filename\": \"nys_dl.png\",\n      \"lastmod_time\": \"2017-06-21 09:04:088\",\n      \"description\": \"driver's license\",\n      \"vetting_process\": {\n        \"method\": \"in-person\",\n        \"technology\": \"barcode\"\n      },\n      \"data\": {\n        \"base64\": <base64url(file)>\n      },\n      \"examinerSignature\": \"f67945faf9e89fg3kkh3vvvb68b53d5nh7900fn499040cd3728c0f099c002123\"\n    },\n    {\n      \"@id\": \"kycdoc2\",\n      \"mime-type\": \"application/pdf\",\n      \"filename\": \"con_ed.pdf\",\n      \"lastmod_time\": \"2017-11-18 10:44:068\",\n      \"description\": \"ACME Electric Utility Bill\",\n      \"vetting_process\": {\n        \"method\": \"in-person\",\n        \"technology\": \"human-visual\"\n      },\n      \"data\": {\n        \"base64\": <base64url(file)>\n      },\n      \"examinerSignature\": \"945faf9e8999040cd3728c0f099c002123f67fg3kkh3vvvb68b53d5nh7900fn4\"\n    },\n    {\n      \"@id\": \"kycdoc3\",\n      \"mime-type\": \"image/jpg\",\n      \"filename\": \"nysccp.jpg\",\n      \"lastmod_time\": \"2015-03-19 14:35:062\",\n      \"description\": \"State Concealed Carry Permit\",\n      \"vetting_process\": {\n        \"method\": \"in-person\",\n        \"technology\": \"barcode\"\n      },\n      \"data\": {\n        \"sha256\": \"1d9eb668b53d99c002123f1ffa4db0cd3728c0f0945faf525c5ee4a2d4289904\",\n        \"base64\": <base64url(file)>\n      },\n      \"examinerSignature\": \"5nh7900fn499040cd3728c0f0945faf9e89kkh3vvvb68b53d99c002123f67fg3\"\n    }\n  ]\n}\n

    This message adheres to the attribute content formats outlined in the Aries Attachments RFC with the following additions:

    "},{"location":"features/0116-evidence-exchange/#by-reference-attachments","title":"By-reference Attachments","text":"
    {\n  \"~attach\": [\n    {\n      \"@id\": \"kycdoc1\",\n      \"mime-type\": \"image/png\",\n      \"filename\": \"nys_dl.png\",\n      \"lastmod_time\": \"2017-06-21 09:04:088\",\n      \"description\": \"driver's license\",\n      \"vetting_process\": {\n        \"method\": \"in-person\",\n        \"technology\": \"barcode\"\n      },\n      \"data\": {\n        \"sha256\": \"1d9eb668b53d99c002123f1ffa4db0cd3728c0f0945faf525c5ee4a2d4289904\",\n        \"links\": [\n          { \"url\": \"https://www.dropbox.com/s/r8rjizriaHw8T79hlidyAfe4DbWFcJYocef5/myDL.png\",\n            \"accesscode\": \"some_secret\"\n          }\n        ]\n      },\n      \"examinerSignature\": \"f67945faf9e89fg3kkh3vvvb68b53d5nh7900fn499040cd3728c0f099c002123\"\n    },\n    {\n      \"@id\": \"kycdoc2\",\n      \"mime-type\": \"application/pdf\",\n      \"filename\": \"con_ed.pdf\",\n      \"lastmod_time\": \"2017-11-18 10:44:068\",\n      \"description\": \"ACME Electric Utility Bill\",\n      \"vetting_process\": {\n        \"method\": \"remote\",\n        \"technology\": \"api\"\n      },\n      \"data\": {\n        \"sha256\": \"1d4db525c5ee4a2d42899040cd3728c0f0945faf9eb668b53d99c002123f1ffa\",\n        \"links\": [\n          { \"url\": \"https://mySSIAgent.com/w8T7AfkeyJYo4DbWFcmyocef5eyH\",\n            \"accesscode\": \"some_secret\"\n          }\n        ]\n      },\n      \"examinerSignature\": \"945faf9e8999040cd3728c0f099c002123f67fg3kkh3vvvb68b53d5nh7900fn4\"\n    },\n    {\n      \"@id\": \"kycdoc3\",\n      \"mime-type\": \"image/jpg\",\n      \"filename\": \"nysccp.jpg\",\n      \"lastmod_time\": \"2015-03-19 14:35:062\",\n      \"description\": \"State Concealed Carry Permit\",\n      \"vetting_process\": {\n        \"method\": \"in-person\",\n        \"technology\": \"barcode\"\n      },\n      \"data\": {\n        \"sha256\": \"b53d99c002123f1ffa2d42899040cd3728c0f0945fa1d4db525c5ee4af9eb668\",\n        \"links\": [\n          { \"url\": 
\"https://myssiAgent.com/mykeyoyHw8T7Afe4DbWFcJYocef5\",\n            \"accesscode\": null\n          }\n        ]\n      },\n      \"examinerSignature\": \"5nh7900fn499040cd3728c0f0945faf9e89kkh3vvvb68b53d99c002123f67fg3\"\n    }\n  ]\n}\n

    This message adheres to the attribute content formats outlined in the Aries Attachments RFC and builds on the By Value Attachments with the following additions:

    Upon completion of the Evidence Request and Response exchange, the Holder's Agent is now able to present any Verifier that has accepted a specific Issuer credential with the supporting evidence from the Issuer. This evidence, depending on the Holder's preferences, may be direct or via a link to an external resource. For example, regardless of the delivery method used between the Issuer and Holder, the Holder's Agent may decide to fetch all documents and store them itself and then provide Verifiers with by-reference access upon request.
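Because by-reference attachments carry a sha256 digest, a recipient that fetches the linked content can verify its integrity before trusting it. A minimal sketch (the HTTP fetch itself is elided; the function name is an assumption):

```python
import hashlib

def verify_attachment_hash(fetched_bytes: bytes, expected_sha256: str) -> bool:
    """Compare the SHA-256 digest of fetched content against the
    attachment's hex-encoded `data.sha256` field."""
    return hashlib.sha256(fetched_bytes).hexdigest() == expected_sha256

# Simulate a fetched document and its advertised digest
content = b"example document bytes"
digest = hashlib.sha256(content).hexdigest()
ok = verify_attachment_hash(content, digest)
tampered = verify_attachment_hash(b"tampered bytes", digest)
```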

    "},{"location":"features/0116-evidence-exchange/#evidence-access-request-message","title":"Evidence Access Request Message","text":"

    Upon the successful processing of a credential proof presentation message, a Verifier may desire to request supporting evidence for the processed credential. This evidence_access_request message is built by the Verifier and sent to the Holder's agent. Similar to the evidence_request message, the Verifier may use this message to get an update for new and existing credentials associated with the Holder. The intent of this message is for the Verifier to establish trust by obtaining a copy of the available evidence and performing the necessary content validation.

    {\n  \"@type\": \"https://didcomm.org/evidence_exchange/1.0/evidence_access_request\",\n  \"@id\": \"7c3f991836-4ed5-f50e-7207-718e6151a389\",\n  \"for\": \"did:peer:1-F1220479cbc07c3f991725836a3aa2a581ca2029198aa420b9d99bc0e131d9f3e2cbe\",\n  \"as_of_time\": \"2019-07-23 18:05:06.123Z\",\n  \"credentials\": [\n      { \"@id\": \"cred-001\", \"issuerDID\": \"~BzCbsNYhMrjHiqZD\" },\n      { \"@id\": \"cred-002\", \"issuerDID\": \"~BzCbsNYhMrjHiqZD\" }\n  ]\n}\n

    Description of attributes:

    This protocol is intended to be flexible and applicable to a variety of use cases. While our discussion has centered on the use of the protocol as a follow-up to the processing of a credential proof presentment flow, the protocol can be used at any point after a Pair-wise DID Exchange has been successfully established and is therefore in the complete state as defined by the DID Exchange Protocol. An IssuerDID (or the DID of an entity that is one of the two parties in a private pair-wise relationship) is assumed to be known under all possible conditions once the relationship is in the complete state.

    "},{"location":"features/0116-evidence-exchange/#evidence-access-response-message","title":"Evidence Access Response Message","text":"

    This message is required from a Holder Agent in response to an evidence_access_request message. The format of the ~attach attribute will be determined by the storage management preferences of the Holder's Agent. As such, the Holder can respond by-value or by-reference. To build the response, the Holder will validate that the supplied Issuer DID corresponds to the credential represented by the supplied ID. If the Issuer relied on one or more IPSPs during the Identity Proofing Process, then this message will also include an inline attachment using the examiner_assertions attribute. Upon successful processing of an evidence_access_response message, the Verifier will store evidence details in its system of record.

    {\n  \"@type\": \"https://didcomm.org/evidence_exchange/1.0/evidence_access_response\",\n  \"@id\": \"1517207d-f50e-4ed5-a389-6a4986d718e6\",\n  \"~thread\": { \"thid\": \"7c3f991836-4ed5-f50e-7207-718e6151a389\" },\n  \"for\": \"did:peer:1-F1220479cbc07c3f991725836a3aa2a581ca2029198aa420b9d99bc0e131d9f3e2cbe\",\n  \"as_of_time\": \"2019-07-23 18:05:06.123Z\",\n  \"credentials\": [\n    { \"@id\": \"cred-001\",\n      \"evidence\": [\n        {\"evidence_type\": \"Address\", \"evidence_ref\": [\"#kycdoc1\", \"#kycdoc4\"]},\n        {\"evidence_type\": \"Identity\", \"evidence_ref\": [\"#kycdoc2\"]},\n        {\"evidence_type\": \"Photo\", \"evidence_ref\": null}\n      ]\n    },\n    { \"@id\": \"cred-002\",\n      \"evidence\": [\n        {\"evidence_type\": \"Address\", \"evidence_ref\": [\"#kycdoc1\",\"#kycdoc3\"]},\n        {\"evidence_type\": \"Identity\", \"evidence_ref\": [\"#kycdoc3\"]},\n        {\"evidence_type\": \"Photo\", \"evidence_ref\": [\"#kycdoc1\"]}\n      ]\n    }\n  ],\n  \"examiner_assertions\": [ ... ],\n  \"~attach\": [ ...\n  ]\n}\n
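The validation step described above, matching each requested credential against the Issuer DID stored in the Holder's wallet, can be sketched as follows (the wallet index shape and function name are assumptions for illustration):

```python
def select_credentials(wallet_index, requested):
    """Return evidence entries only for requested credentials whose
    stored issuer DID matches the DID supplied by the Verifier.

    `wallet_index` maps credential id -> {"issuerDID": ..., "evidence": [...]};
    `requested` is the `credentials` array of an evidence_access_request.
    """
    results = []
    for entry in requested:
        stored = wallet_index.get(entry["@id"])
        if stored and stored["issuerDID"] == entry["issuerDID"]:
            results.append({"@id": entry["@id"], "evidence": stored["evidence"]})
    return results

wallet = {
    "cred-001": {
        "issuerDID": "~BzCbsNYhMrjHiqZD",
        "evidence": [{"evidence_type": "Photo", "evidence_ref": ["#kycdoc1"]}],
    }
}
requested = [
    {"@id": "cred-001", "issuerDID": "~BzCbsNYhMrjHiqZD"},
    {"@id": "cred-002", "issuerDID": "~notTheRightDID"},  # no match; excluded
]
matched = select_credentials(wallet, requested)
```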

    This message adheres to the attribute content formats outlined in the Aries Attachments RFC and leverages the same Evidence Response Message attribute descriptions.

    "},{"location":"features/0116-evidence-exchange/#reference","title":"Reference","text":""},{"location":"features/0116-evidence-exchange/#drawbacks","title":"Drawbacks","text":"

    This protocol does not vary much from a generic document exchange protocol. It can be argued that a special KYC Document exchange protocol is not needed. However, given the emphasis placed on KYC compliance during the early days of DIDComm adoption, we want to make sure that any special cases are addressed upfront so that we avoid adoption derailment factors.

    "},{"location":"features/0116-evidence-exchange/#rationale-and-alternatives","title":"Rationale and alternatives","text":"

    As noted in the references section, there are a number of trending KYC Document proofing options that are being considered. Many leverage the notion of a centralized blockchain ledger for sharing documents. This effectively places control outside of the Holder and enables the sharing of documents in a B2B manner. Such approaches do not capitalize on the advantages of Pair-wise Peer DIDs.

    "},{"location":"features/0116-evidence-exchange/#prior-art","title":"Prior art","text":"

    This protocol builds on the foundational capabilities of DIDComm messages, most notably the attachment decorator within DIDComm.

    "},{"location":"features/0116-evidence-exchange/#unresolved-questions","title":"Unresolved questions","text":"
    1. Should this be a separate protocol or an update to issuer-credential?
    2. What is the best way to handle access control for by-reference attachments?
    3. Are there best practices to be considered for when/why/how a Holder's Agent should store and manage attachments?
    4. Can this protocol help bootstrap a prototype for a Digital Notary and thereby demonstrate to the broader ecosystem the unnecessary attention being placed on alternative domain specific credential solutions like ISO-18013-5(mDL)?
    "},{"location":"features/0116-evidence-exchange/#implementations","title":"Implementations","text":"

    The following lists the implementations (if any) of this RFC. Please do a pull request to add your implementation. If the implementation is open source, include a link to the repo or to the implementation within the repo. Please be consistent in the \"Name\" field so that a mechanical processing of the RFCs can generate a list of all RFCs supported by an Aries implementation.

    Name / Link Implementation Notes"},{"location":"features/0116-evidence-exchange/digital_notary_usecase/","title":"Decentralized Digital Notary","text":""},{"location":"features/0116-evidence-exchange/digital_notary_usecase/#preface","title":"Preface","text":"

    The intent of this document is to describe the concepts of a Decentralized Digital Notary with respect to the bootstrapping of the decentralized identity ecosystem and to demonstrate using example user stories1 the applicability of the Evidence Exchange Protocol.

    "},{"location":"features/0116-evidence-exchange/digital_notary_usecase/#overview","title":"Overview","text":""},{"location":"features/0116-evidence-exchange/digital_notary_usecase/#problem-statement","title":"Problem Statement","text":"

    How do we bootstrap the digital credential ecosystem when many of the issuing institutions responsible for foundational credentials (e.g., birth certificate, driver's license, etc.) tend to be laggards2 when it comes to the adoption of emerging technology? What if we did not need to rely on these issuing institutions and instead leveraged the attestations of trusted third parties?

    "},{"location":"features/0116-evidence-exchange/digital_notary_usecase/#concept","title":"Concept","text":"

    During the identity verification process, an entity may require access to the genesis documents from the Issuers of Origination before issuing credentials. We see such requirements in some of the routine identity instrument interactions of our daily lives such as obtaining a Driver's License or opening a Bank Account.

    We assume that government agencies such as the DMV (driver's license) and Vital Records (birth certificate) will not be early adopters of digital credentials, yet their associated Tier 1 Proofs are critical to the creation of a network effect for the digital credential ecosystem.

    We therefore need a forcing function that will disrupt behavior. Imagine a trusted business entity, a Decentralized Digital Notary (DDN), that would take the responsibility of vouching for the existence of Original Documents (or Digital Assertions) and have the certitude to issue verifiable credentials attesting to personal data claims made by the Issuer of Origination.

    Today (blue shaded activity), an individual receives Original Documents from issuing institutions and presents these as evidence to each Verifier. Moving forward (beige shaded activity), as a wide range of businesses consider acting as DDNs, our reliance on Issuers of Origination to be the on-ramps for an individual's digital identity experience diminishes. Over time, our dependency on the proactive nature of such institutions becomes moot. Furthermore, the more successful DDNs become, the more reactionary the laggards will need to be to protect their value in the ecosystem.

    "},{"location":"features/0116-evidence-exchange/digital_notary_usecase/#applicable-businesses","title":"Applicable Businesses","text":"

    Any entity that has the breadth and reach to connect with consumers at scale would be an ideal candidate for the role of a DDN. Some examples include:

    The monetization opportunities for such businesses will also vary. The linkages between proof-of-identity and proof-of-value can be achieved in several manners:

    1. Individual pays for issuance of certificates
    2. Verifier pays the underwriter with a payment instrument (e.g., fiat or cryptocurrency). The payment is for the service of underwriting the screening of an individual so that the Verifier does not have to do it.
    "},{"location":"features/0116-evidence-exchange/digital_notary_usecase/#stories","title":"Stories","text":"

    Presented herein are a series of user stories that incorporate the concepts of a DDN and the ability of a verifier to gain access to Issuer vetted Identity Evidence using the Evidence Exchange Protocol.

    The stories focus on the daily lifecycle activities of two individuals who need to open a brokerage account and/or update a Life Insurance Policy.

    "},{"location":"features/0116-evidence-exchange/digital_notary_usecase/#persona","title":"Persona","text":"Name Role Eric An individual that desires to open a brokerage account. Stacey An individual that desires to open a brokerage account and also apply for a Life Insurance Policy. Retail Bank DDN (Issuer) Thomas Notary at the Retail Bank familiar with the DDN Process. Brokerage Firm Verifier Dropbox Document Management Service iCertitude A hypothetical IPSP that provides a convenient mobile identity verification service that is fast, trusted and reliable. Financial Cooperative A small local financial institution that is owned and operated by its members. It has positioned itself as a DDN (Issuer) by OEMing the iCertitude platform."},{"location":"features/0116-evidence-exchange/digital_notary_usecase/#identity-proofing-examination-process","title":"Identity Proofing (Examination) Process","text":""},{"location":"features/0116-evidence-exchange/digital_notary_usecase/#financial-cooperative-ddn-awareness","title":"Financial Cooperative (DDN Awareness)","text":"

    Eric is a member of his neighborhood Financial Cooperative. He received an email notification that as a new member benefit, the bank is now offering members the ability to begin their digital identity journey. Eric is given access to literature describing the extent of the bank's offering and a video of the process for how to get started. Eric watches the video, reads the online material and decides to take advantage of the bank's offer.

    "},{"location":"features/0116-evidence-exchange/digital_notary_usecase/#financial-cooperative-3rd-party-remote-vetting-process","title":"Financial Cooperative (3rd Party Remote Vetting Process)","text":"

    Following his bank's instructions, Eric downloads, installs and configures a Wallet App on his smartphone from the list of apps recommended by the bank. He also downloads the bank's iCertitude app. He uses the iCertitude Mobile app to step through a series of ID Proofing activities that allow the bank to establish a NIST IAL3 assurance rating. These steps include the scanning of some biometrics as well as his plastic driver's license. Upon completion of these activities, which are all performed using his smartphone without any human interaction with the bank, Eric receives an invite in his Wallet App to accept a new verifiable credential which is referred to as a Basic Assurance Credential. Eric opens the Wallet App, accepts the new credential and inspects it.

    "},{"location":"features/0116-evidence-exchange/digital_notary_usecase/#retail-bank-ddn-awareness","title":"Retail Bank (DDN Awareness)","text":"

    Stacey is a member of her neighborhood Retail Bank. She received an email notification that as a new member benefit, the bank is now offering members the ability to begin their digital identity journey. Stacey is given access to literature describing the extent of the bank's offering and a video of the process for how to get started. Stacey watches the video, reads the online material and decides to make an appointment with her local bank notary and fill out the preliminary online forms.

    "},{"location":"features/0116-evidence-exchange/digital_notary_usecase/#retail-bank-paper-vetting-process","title":"Retail Bank (Paper Vetting Process)","text":"

    Stacey attends her appointment with Thomas. She came prepared to request digital credentials for the following Official Documents: SSN, Birth Certificate, proof of employment (paystub) and proof of address (utility bill). Thomas explains to Stacey that given the types of KYC Documents she desires to be digitally notarized, bank policy is to issue a single digital credential that attests to all the personal data she is prepared to present. The bank refers to this verifiable credential as the Basic KYC Credential and uses a common schema shared by many DDNs in the Sovrin ecosystem.

    Note: This story depicts one approach. Clearly, the bank's policy could be to have a schema and credential for each Original Document.

    Stacey supplied Thomas with the paper-based credentials for each of the aforementioned documents. Thomas scans each document and performs the necessary vetting process according to business policies. Thomas explains that while the bank can issue Stacey her new digital credential for a fee of $10 USD renewable annually, access to her scanned documents would only be possible if she opts in to the digital document management service on her online banking account. Through this service she is able to provide digital access to the scanned copies of her paper credentials that were vetted by the bank. Stacey agrees to opt in.

    While Stacey is waiting for her documents to be digitally notarized, she downloads, installs and configures a Wallet App on her smartphone from the list of apps recommended by the bank. Upon completion of the vetting process, Thomas returns all Original Documents back to Stacey and explains to her where she can now request the delivery of her new digital credential in her online account. Stacey leaves the bank with her first digital credential on her device.

    "},{"location":"features/0116-evidence-exchange/digital_notary_usecase/#retail-bank-hybrid-vetting-process","title":"Retail Bank (Hybrid Vetting Process)","text":"

    During Stacey's preparation activity, when she was filling out the preliminary online forms before her appointment with Thomas, she remembered that she had scanned her recent proof of employment (paystub) and proof of address (utility bill) at home and stored them on her Dropbox account. She decides to use the section of the form to grant the bank access (url and password) to these files. When she attends her appointment with Thomas, the meeting is altered only by the fact that she has reduced the number of physical documents she must present. However, Thomas does explain to her that bank policy prohibits the use of remote links in its digital document management service. Instead, the bank uses the Dropbox link to obtain a copy, performs the vetting process, stores the copy in-house, and allows Stacey to gain access to a link for the document stored at the bank.

    "},{"location":"features/0116-evidence-exchange/digital_notary_usecase/#credential-management","title":"Credential Management","text":"

    Later that evening, Stacey decides to explore her new Digital Credential features within her online bank account. She sees that she has the ability to request access to the vetted resources the bank has used to vouch for her digital identity. She opens her Wallet App and sends an evidence_request message to the bank. Within a few seconds she receives and processes the bank's evidence_response message. Her Wallet App allows her to view the evidence available to her:

    Issuer Credential Evidence Type Original Document Retail Bank Basic KYC Credential Address Utility Bill Retail Bank Basic KYC Credential Address Employment PayStub Retail Bank Basic KYC Credential Identity SSN Retail Bank Basic KYC Credential Identity Birth Certificate Retail Bank Basic KYC Credential Photo Bank Member Photo

    Recalling his review of the bank's new digital identity journey benefits, Eric decides to use his Wallet App to request access to the vetted resources the bank used to vouch for his new Basic Assurance Credential. He uses the Wallet App to initiate an evidence_request and evidence_response message flow.

    "},{"location":"features/0116-evidence-exchange/digital_notary_usecase/#verification-process","title":"Verification Process","text":""},{"location":"features/0116-evidence-exchange/digital_notary_usecase/#brokerage-account-digital-assertion-evidence","title":"Brokerage Account (Digital Assertion Evidence)","text":"

    Eric decides to open a new brokerage account with a local Brokerage Firm. He opens the firm's account registration page using his laptop web browser. The firm allows him to establish a new account and obtain a brokerage member credential if he can provide digitally verifiable proof of identity and address. Eric clicks to begin the onboarding process. He scans a QRCode using his Wallet App and accepts a connection request from the firm. He then receives a proof request from the firm; his Wallet App parses the request and suggests he can respond using attributes from his Basic Assurance Credential. He responds to the proof request. Upon verification of his proof response, the firm sends Eric an offer for a Brokerage Membership Credential which he accepts. The firm also sends him an evidence_access_request and an explanation that the firm's policy for regulatory reasons is to obtain access to proof that the proper due-diligence was performed for Address, Identity and Photo. Eric uses his Wallet App to instruct his Cloud Agent to send an evidence_access_response. Upon processing of Eric's response, the firm establishes a Trust Score based on their policy for evidence based only on Digital Assertions and Remote Proofing processes.

    "},{"location":"features/0116-evidence-exchange/digital_notary_usecase/#brokerage-account-document-evidence","title":"Brokerage Account (Document Evidence)","text":"

    Stacey decides she will open a new brokerage account with a local Brokerage Firm. She opens the firm's account registration page using her laptop web browser. The firm allows her to establish a new account and obtain a brokerage member credential if she can provide digitally verifiable proof of identity, address and employment. Stacey clicks to begin the onboarding process. She scans a QRCode using her Wallet App and accepts a connection request from the firm. Using her Wallet App she responds to the proof request using digital credentials from her employer and her Retail Bank. Upon verification of her proof response, the firm sends Stacey an offer for a Brokerage Membership Credential which she accepts. The firm also sends her an evidence_access_request and an explanation that the firm's policy for regulatory reasons is to obtain access to the proof that the proper due-diligence was performed for Address, Identity, Photo and Employment. Stacey uses her Wallet App to instruct her Cloud Agent to send an evidence_access_response. Upon processing of Stacey's response, the firm establishes a Trust Score based on their policy for evidence based on Original Documents and In-person Proofing processes.

    "},{"location":"features/0116-evidence-exchange/digital_notary_usecase/#life-insurance-policy-didcomm-doc-sharing","title":"Life Insurance Policy (DIDComm Doc Sharing)","text":"

    Stacey receives notification from her Insurance Company that they require an update to her life insurance policy account. The firm has undertaken a digital transformation strategy that impacts her 15-year-old account. She has been given access to a new online portal and choices on how to supply digital copies of her SSN and Birth Certificate. Stacey is too busy to take time to visit the Insurance Company to provide Original Documents for their vetting and digitization. She decides to submit her notarized digital copies. She opens the company's account portal page using her laptop web browser. Stacey registers, signs in and scans a QRCode using her Wallet App. She accepts a connection request from the firm. She then responds to an evidence_access_request for proof that KYC due-diligence was performed for Identity and Photo. Stacey uses her Wallet App to instruct her Cloud Agent to send an evidence_access_response.

    "},{"location":"features/0116-evidence-exchange/digital_notary_usecase/#commentary","title":"Commentary","text":"
    1. The concepts of a digital notary can be applied today in application domains such as (but not limited to) indirect auto lending and title management (auto, recreational vehicle, etc).
    2. Since 2015, AAMVA in conjunction with the ISO JTC1/SC27/WG10 18013-5 mDL Team has been working on a single credential solution for cross-jurisdictional use amongst DMVs. This public sector activity is a key source of IAM industry motivation for alternative solutions to Credential Lifecycle Management. Government agencies will eventually need to address discussions around technical debt investments and de facto open source standards.
    "},{"location":"features/0116-evidence-exchange/eep_glossary/","title":"Evidence Exchange Protocol Glossary","text":"

    The following terms are either derived from terminology found in the NIST 800-63A Digital Identity Guidelines or introduced to help reinforce protocol concepts and associated use cases.

    Term Definition Credential Service Provider (CSP) A trusted entity that issues verifiable credentials. Either an Issuer of Origination for an Original Document or an Issuer of a Derived Credential. A DDN is an example of a CSP that issues Derived Credentials. Decentralized Digital Notary (DDN) A trusted third party that enables digital interactions between Holders and Verifiers. As an issuer of digitally verifiable credentials, it creates permanent evidence that an Original Document existed in a certain form at a particular point in time. This role will be especially important to address scalability and the bootstrapping of the decentralized identity ecosystem since many Issuers of Origination may be laggards. DDN Insurer An entity (party) in an insurance contract that underwrites insurance risks associated with the activities of a DDN. This includes a willingness to pay compensation for any negligence on the part of the DDN for failure to perform the necessary due-diligence associated with the examination and vetting of Original Documents. Derived Credential An issued verifiable credential based on an identity proofing process over Original Documents or other Derived Credentials. Digital Assertion A non-physical (digital) form of evidence. Often in the form of a Digital Signature. A CSP may leverage the services of an IPSP and may then require the IPSP to digitally sign the content that is the subject of the assertion. Original Document Any issued artifact that satisfies the Original-Document Rule in accordance with principles of evidence law. The original artifact may be in writing or a mechanical, electronic form of publication. Such a document may also be referred to as a Foundational Document. Identity Evidence Information or documentation provided by the Holder to support the issuance of an Original Document. Identity evidence may be physical (e.g. 
    a driver's license) or a Digital Assertion. Identity Instrument Digital or physical, paper or plastic renderings of some subset of our personal data as defined by the providers of the instruments. The traditional physical object is an identification card. Many physical identity instruments contain public and encoded information about an entity. The encoded information, which is often stored using machine readable technologies like magnetic strips or barcodes, represents another rendering format of an individual\u2019s personal data. Digital identity instruments pertain to an individual\u2019s personal data in a form that can be processed by a software program. Identity Proofing The process by which a CSP collects, validates, and verifies Identity Evidence. This process yields the attestations (claims of confidence) which a CSP is then able to use to issue a Verifiable Credential. The sole objective of this process is to ensure the Holder of Identity Evidence is who they claim to be with a stated level of certitude. Identity Proofing Inquiry The objectives of an identity verification exercise. Some inquiries are focused on topics such as address verification while others may be on achievement. Identity Proofing Service Provider (IPSP) A remote 3rd party service provider that carries out one or more aspects of an Identity Proofing process. Issuer of Origination The entity (business, organization, individual or government) that is the original publisher of an Original Document. Mobile Field Agents Location-based service providers that allow agencies to bring their services to remote (rural) customers. Tier 1 Proofs A category of foundational credentials (Original Documents) that are often required to prove identity and address during KYC or onboarding processes. 
Trust Framework Certification Authority An entity that adheres to a governance framework for a specific ecosystem and is responsible for overseeing and auditing the Level of Assurance a DDN (Relying Party) has within the ecosystem. Verifiable Credential A digital credential that is compliant with the W3C Verifiable Credential Specification."},{"location":"features/0124-did-resolution-protocol/","title":"Aries RFC 0124: DID Resolution Protocol 0.9","text":""},{"location":"features/0124-did-resolution-protocol/#summary","title":"Summary","text":"

    Describes a DIDComm request-response protocol that can send a request to a remote DID Resolver to resolve DIDs and dereference DID URLs.

    "},{"location":"features/0124-did-resolution-protocol/#motivation","title":"Motivation","text":"

    DID Resolution is an important feature of Aries. It is a prerequisite for the unpack() function in DIDComm, especially in Cross-Domain Messaging, since cryptographic keys must be discovered from DIDs in order to enable trusted communication between the agents associated with DIDs. DID Resolution is also required for other operations, e.g. for verifying credentials or for discovering DIDComm service endpoints.

    Ideally, DID Resolution should be implemented as a local API (TODO: link to other RFC?). In some cases however, the DID Resolution function may be provided by a remote service. This RFC describes a DIDComm request-response protocol for such a remote DID Resolver.

    "},{"location":"features/0124-did-resolution-protocol/#tutorial","title":"Tutorial","text":"

    DID Resolution is a function that returns a DID Document for a DID. This function can be accessed via \"local\" bindings (e.g. SDK calls, command line tools) or \"remote\" bindings (e.g. HTTP(S), DIDComm).

    A DID Resolver MAY invoke another DID Resolver in order to delegate (part of) the DID Resolution and DID URL Dereferencing algorithms. For example, a DID Resolver may be invoked via a \"local\" binding (such as an Aries library call), which in turn invokes another DID Resolver via a \"remote\" binding (such as HTTP(S) or DIDComm).

    "},{"location":"features/0124-did-resolution-protocol/#name-and-version","title":"Name and Version","text":"

    This defines the did_resolution protocol, version 0.1, as identified by the following PIURI:

    https://didcomm.org/did_resolution/0.1\n
    "},{"location":"features/0124-did-resolution-protocol/#key-concepts","title":"Key Concepts","text":"

    DID Resolution is the process of obtaining a DID Document for a given DID. This is one of four required operations that can be performed on any DID (\"Read\"; the other ones being \"Create\", \"Update\", and \"Deactivate\"). The details of these operations differ depending on the DID method. Building on top of DID Resolution, DID URL Dereferencing is the process of obtaining a resource for a given DID URL. Software and/or hardware that is able to execute these processes is called a DID Resolver.

    "},{"location":"features/0124-did-resolution-protocol/#roles","title":"Roles","text":"

    There are two parties and two roles (one for each party) in the did_resolution protocol: a requester and a resolver.

    The requester wishes to resolve DIDs or dereference DID URLs.

    The resolver conforms with the DID Resolution Specification. It is capable of resolving DIDs for at least one DID method.

    "},{"location":"features/0124-did-resolution-protocol/#states","title":"States","text":""},{"location":"features/0124-did-resolution-protocol/#states-for-requester-role","title":"States for requester role","text":"EVENTS: send resolve receive resolve_result STATES preparing-request transition to \"awaiting-response\" different interaction awaiting-response impossible transition to \"done\" done"},{"location":"features/0124-did-resolution-protocol/#states-for-resolver-role","title":"States for resolver role","text":"EVENTS: receive resolve send resolve_result STATES awaiting-request transition to \"resolving\" impossible resolving new interaction transition to \"done\" done"},{"location":"features/0124-did-resolution-protocol/#states-for-requester-role-in-a-failure-scenario","title":"States for requester role in a failure scenario","text":"EVENTS: send resolve receive resolve_result STATES preparing-request transition to \"awaiting-response\" different interaction awaiting-response impossible error reporting problem reported"},{"location":"features/0124-did-resolution-protocol/#states-for-resolver-role-in-a-failure-scenario","title":"States for resolver role in a failure scenario","text":"EVENTS: receive resolve send resolve_result STATES awaiting-request transition to \"resolving\" impossible resolving new interaction error reporting problem reported"},{"location":"features/0124-did-resolution-protocol/#messages","title":"Messages","text":"

    All messages in this protocol are part of the \"did_resolution 0.1\" message family uniquely identified by this DID reference: https://didcomm.org/did_resolution/0.1

    "},{"location":"features/0124-did-resolution-protocol/#resolve-message","title":"resolve message","text":"

    The protocol begins when the requester sends a resolve message to the resolver. It looks like this:

    {\n    \"@type\": \"https://didcomm.org/did_resolution/0.1/resolve\",\n    \"@id\": \"xhqMoTXfqhvAgtYxUSfaxbSiqWke9t\",\n    \"did\": \"did:sov:WRfXPg8dantKVubE3HX8pw\",\n    \"input_options\": {\n        \"result_type\": \"did-document\",\n        \"no_cache\": false\n    }\n}\n

    @id is required here, as it establishes a message thread that makes it possible to connect a subsequent response to this request.

    did is required.

    input_options is optional.

    For further details on the did and input_options fields, see Resolving a DID in the DID Resolution Spec.
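    As a non-normative sketch, a requester could assemble a resolve message like this in Python. The field names come from the example above; generating the required @id with uuid is an assumption of this sketch, not something the RFC mandates:

```python
import uuid

def build_resolve_message(did, result_type="did-document", no_cache=False):
    """Sketch of a did_resolution/0.1 resolve message.
    Field names are taken from the example in this RFC."""
    return {
        "@type": "https://didcomm.org/did_resolution/0.1/resolve",
        "@id": uuid.uuid4().hex,  # required: threads the response to this request
        "did": did,               # required
        "input_options": {        # optional
            "result_type": result_type,
            "no_cache": no_cache,
        },
    }

msg = build_resolve_message("did:sov:WRfXPg8dantKVubE3HX8pw")
```

The resolver echoes the @id back in the ~thread.thid of its resolve_result, which is how the requester correlates the two messages.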

    "},{"location":"features/0124-did-resolution-protocol/#resolve_result-message","title":"resolve_result message","text":"

    The resolve_result is the only allowed direct response to the resolve message. It represents the result of the DID Resolution function and contains a DID Document.

    It looks like this:

    {\n    \"@type\": \"https://didcomm.org/did_resolution/0.1/resolve_result\",\n    \"~thread\": { \"thid\": \"xhqMoTXfqhvAgtYxUSfaxbSiqWke9t\" },\n    \"did_document\": {\n        \"@context\": \"https://w3id.org/did/v0.11\",\n        \"id\": \"did:sov:WRfXPg8dantKVubE3HX8pw\",\n        \"service\": [{\n            \"type\": \"did-communication\",\n            \"serviceEndpoint\": \"https://agent.example.com/\"\n        }],\n        \"publicKey\": [{\n            \"id\": \"did:sov:WRfXPg8dantKVubE3HX8pw#key-1\",\n            \"type\": \"Ed25519VerificationKey2018\",\n            \"publicKeyBase58\": \"~P7F3BNs5VmQ6eVpwkNKJ5D\"\n        }],\n        \"authentication\": [\"did:sov:WRfXPg8dantKVubE3HX8pw#key-1\"]\n    }\n}\n

    If the input_options field of the resolve message contains an entry result_type with value resolution-result, then the resolve_result message contains a more extensive DID Resolution Result, which includes a DID Document plus additional metadata:

    {\n    \"@type\": \"https://didcomm.org/did_resolution/0.1/resolve_result\",\n    \"~thread\": { \"thid\": \"xhqMoTXfqhvAgtYxUSfaxbSiqWke9t\" },\n    \"did_document\": {\n        \"@context\": \"https://w3id.org/did/v0.11\",\n        \"id\": \"did:sov:WRfXPg8dantKVubE3HX8pw\",\n        \"service\": [{\n            \"type\": \"did-communication\",\n            \"serviceEndpoint\": \"https://agent.example.com/\"\n        }],\n        \"publicKey\": [{\n            \"id\": \"did:sov:WRfXPg8dantKVubE3HX8pw#key-1\",\n            \"type\": \"Ed25519VerificationKey2018\",\n            \"publicKeyBase58\": \"~P7F3BNs5VmQ6eVpwkNKJ5D\"\n        }],\n        \"authentication\": [\"did:sov:WRfXPg8dantKVubE3HX8pw#key-1\"]\n    },\n    \"resolver_metadata\": {\n        \"driverId\": \"did:sov\",\n        \"driver\": \"HttpDriver\",\n        \"retrieved\": \"2019-07-09T19:73:24Z\",\n        \"duration\": 1015\n    },\n    \"method_metadata\": {\n        \"nymResponse\": { ... },\n        \"attrResponse\": { ... }\n    }\n}\n
    "},{"location":"features/0124-did-resolution-protocol/#problem-report-failure-message","title":"problem-report failure message","text":"

    A resolve_result message is also used to report a failure when a DID cannot be resolved. In that case it acts as a problem report, indicating that the resolver could not resolve the DID and the reason for the failure. It looks like this:

    {\n    \"@type\": \"https://didcomm.org/did_resolution/0.1/resolve_result\",\n    \"~thread\": { \"thid\": \"xhqMoTXfqhvAgtYxUSfaxbSiqWke9t\" },\n    \"explain_ltxt\": \"Could not resolve DID did:sov:WRfXPg8dantKVubE3HX8pw not found by resolver xxx\",\n        ...\n}\n
    "},{"location":"features/0124-did-resolution-protocol/#reference","title":"Reference","text":""},{"location":"features/0124-did-resolution-protocol/#messages_1","title":"Messages","text":"

    In the future, additional messages dereference and dereference_result may be defined in addition to resolve and resolve_result (see Unresolved questions).

    "},{"location":"features/0124-did-resolution-protocol/#message-catalog","title":"Message Catalog","text":"

    Status and error codes will be inherited from the DID Resolution Spec.

    "},{"location":"features/0124-did-resolution-protocol/#drawbacks","title":"Drawbacks","text":"

    Using a remote DID Resolver should only be considered a fallback when a local DID Resolver cannot be used. Relying on a remote DID Resolver raises questions about who operates it, whether its responses can be trusted, and whether MITM or other attacks can occur. There is essentially a chicken-and-egg problem insofar as the purpose of DID Resolution is to discover metadata needed for trustable interaction with an entity, but the precondition is that interaction with a DID Resolver must itself be trustable.

    Furthermore, the use of remote DID Resolvers may introduce central bottlenecks and undermine important design principles such as decentralization.

    See Binding Architectures and w3c-ccg/did-resolution#28 for additional thoughts.

    The security and trust issues may outweigh the benefits. Defining and implementing this RFC may lead developers to underestimate or ignore these issues associated with remote DID Resolvers.

    "},{"location":"features/0124-did-resolution-protocol/#rationale-and-alternatives","title":"Rationale and alternatives","text":"

    Despite the drawbacks of remote DID Resolvers, in some situations they can be useful, for example to support DID methods that are hard to implement in local agents with limited hardware and software capabilities.

    A special case of remote DID Resolvers occurs in the case of the Peer DID Method, where each party of a relationship essentially acts as a remote DID Resolver for other parties, i.e. each party fulfills both the requester and resolver roles defined in this RFC.

    An alternative to the DIDComm binding defined by this RFC is an HTTP(S) binding, which is defined by the DID Resolution Spec.

    "},{"location":"features/0124-did-resolution-protocol/#prior-art","title":"Prior art","text":"

    Resolution and dereferencing of identifiers have always played a key role in digital identity infrastructure.

    "},{"location":"features/0124-did-resolution-protocol/#unresolved-questions","title":"Unresolved questions","text":"

    This RFC inherits a long list of unresolved questions and issues that currently exist in the DID Resolution Spec.

    We need to decide whether the DID Resolution and DID URL Dereferencing functions (resolve() and dereference()) should be exposed as the same message type, or as two different message types (including two different responses).

    "},{"location":"features/0124-did-resolution-protocol/#implementations","title":"Implementations","text":"

    The following lists the implementations (if any) of this RFC. Please do a pull request to add your implementation. If the implementation is open source, include a link to the repo or to the implementation within the repo. Please be consistent in the \"Name\" field so that a mechanical processing of the RFCs can generate a list of all RFCs supported by an Aries implementation.

    Name / Link Implementation Notes"},{"location":"features/0160-connection-protocol/","title":"0160: Connection Protocol","text":""},{"location":"features/0160-connection-protocol/#summary","title":"Summary","text":"

    This RFC describes the protocol to establish connections between agents.

    "},{"location":"features/0160-connection-protocol/#motivation","title":"Motivation","text":"

    Indy agent developers want to create agents that are able to establish connections with each other and exchange secure information over those connections. For this to happen there must be a clear connection protocol.

    "},{"location":"features/0160-connection-protocol/#tutorial","title":"Tutorial","text":"

    We will explain how a connection is established, with the roles, states, and messages required.

    "},{"location":"features/0160-connection-protocol/#roles","title":"Roles","text":"

    Connection uses two roles: inviter and invitee.

    The inviter is the party that initiates the protocol with an invitation message. This party must already have an agent and be capable of creating DIDs and endpoints at which they are prepared to interact. It is desirable but not strictly required that inviters have the ability to help the invitee with the process and/or costs associated with acquiring an agent capable of participating in the ecosystem. For example, inviters may often be sponsoring institutions. The inviter sends a connection-response message at the end of the share phase.

    The invitee has fewer preconditions; the only requirement is that this party be capable of receiving invitations over traditional communication channels of some type, and acting on them in a way that leads to successful interaction. The invitee sends a connection-request message at the beginning of the share phase.

    In cases where both parties already possess SSI capabilities, deciding who plays the role of inviter and invitee might be a casual matter of whose phone is handier.

    "},{"location":"features/0160-connection-protocol/#states","title":"States","text":""},{"location":"features/0160-connection-protocol/#null","title":"null","text":"

    No connection exists or is in progress

    "},{"location":"features/0160-connection-protocol/#invited","title":"invited","text":"

    The invitation has been shared with the intended invitee(s), and they have not yet sent a connection_request.

    "},{"location":"features/0160-connection-protocol/#requested","title":"requested","text":"

    A connection_request has been sent by the invitee to the inviter based on the information in the invitation.

    "},{"location":"features/0160-connection-protocol/#responded","title":"responded","text":"

    A connection_response has been sent by the inviter to the invitee based on the information in the connection_request.

    "},{"location":"features/0160-connection-protocol/#complete","title":"complete","text":"

    The connection is valid and ready for use.

    "},{"location":"features/0160-connection-protocol/#errors","title":"Errors","text":"

    There are no errors in this protocol during the invitation phase. For the request and response, there are two error messages possible for each phase: one for an active rejection and one for an unknown error. These errors are sent using a problem_report message type specific to the connection message family. The following list details problem-codes that may be sent:

    request_not_accepted - The error indicates that the request has been rejected for a reason listed in the error_report. Typical reasons include not accepting the method of the provided DID, unknown endpoint protocols, etc. The request can be resent after the appropriate corrections have been made.

    request_processing_error - This error is sent when the inviter was processing the request with the intent to accept the request, but some processing error occurred. This error indicates that the request should be resent as-is.

    response_not_accepted - The error indicates that the response has been rejected for a reason listed in the error_report. Typical reasons include not accepting the method of the provided DID, unknown endpoint protocols, invalid signature, etc. The response can be resent after the appropriate corrections have been made.

    response_processing_error - This error is sent when the invitee was processing the response with the intent to accept the response, but some processing error occurred. This error indicates that the response should be resent as-is.

    No errors are sent in timeout situations. If the inviter or invitee wishes to retract the messages they sent, they record this locally and return a request_not_accepted or response_not_accepted error when the other party sends a request or response.

    "},{"location":"features/0160-connection-protocol/#error-message-example","title":"Error Message Example","text":"
    {\n  \"@type\": \"https://didcomm.org/connections/1.0/problem_report\",\n  \"@id\": \"5678876542345\",\n  \"~thread\": { \"thid\": \"<@id of message related to problem>\" },\n  \"~i10n\": { \"locale\": \"en\"},\n  \"problem-code\": \"request_not_accepted\", // matches codes listed above\n  \"explain\": \"Unsupported DID method for provided DID.\"\n}\n
    "},{"location":"features/0160-connection-protocol/#error-message-attributes","title":"Error Message Attributes","text":""},{"location":"features/0160-connection-protocol/#flow-overview","title":"Flow Overview","text":"

    The inviter gives provisional connection information to the invitee. The invitee uses provisional information to send a DID and DID document to the inviter. The inviter uses received DID document information to send a DID and DID document to the invitee. The invitee sends the inviter an ack or any other message that confirms the response was received.

    "},{"location":"features/0160-connection-protocol/#0-invitation-to-connect","title":"0. Invitation to Connect","text":"

    An invitation to connect may be transferred using any method that can reliably transmit text. The result must be the essential data necessary to initiate a Connection Request message. A connection invitation is an agent message in agent plaintext format, but it is an out-of-band communication and therefore not communicated using wire level encoding or encryption. The necessary data that an invitation to connect must result in is either a suggested label and a public DID, OR a suggested label, one or more recipientKeys, a serviceEndpoint, and optional routingKeys.

    This information is used to create a provisional connection to the inviter. That connection will be made complete in the connection_response message.

    These attributes were chosen to parallel the attributes of a DID document for increased meaning. It is worth noting that recipientKeys and routingKeys must be inline keys, not DID key references when contained in an invitation. As in the DID document with Ed25519VerificationKey2018 key types, the key must be base58 encoded.

    When considering routing and options for invitations, keep in mind that the more detail is in the connection invitation, the longer the URL will be and (if used) the more dense the QR code will be. Dense QR codes can be harder to scan.

    The inviter will either use an existing invitation DID, or provision a new one according to the DID method spec. They will then create the invitation message in one of the following forms.

    Invitation Message with Public Invitation DID:

    {\n    \"@type\": \"https://didcomm.org/connections/1.0/invitation\",\n    \"@id\": \"12345678900987654321\",\n    \"label\": \"Alice\",\n    \"did\": \"did:sov:QmWbsNYhMrjHiqZDTUTEJs\"\n}\n

    Invitation Message with Keys and URL endpoint:

    {\n    \"@type\": \"https://didcomm.org/connections/1.0/invitation\",\n    \"@id\": \"12345678900987654321\",\n    \"label\": \"Alice\",\n    \"recipientKeys\": [\"8HH5gYEeNc3z7PYXmd54d4x6qAfCNrqQqEB3nS7Zfu7K\"],\n    \"serviceEndpoint\": \"https://example.com/endpoint\",\n    \"routingKeys\": [\"8HH5gYEeNc3z7PYXmd54d4x6qAfCNrqQqEB3nS7Zfu7K\"]\n}\n

    Invitation Message with Keys and DID Service Endpoint Reference:

    {\n    \"@type\": \"https://didcomm.org/connections/1.0/invitation\",\n    \"label\": \"Alice\",\n    \"recipientKeys\": [\"8HH5gYEeNc3z7PYXmd54d4x6qAfCNrqQqEB3nS7Zfu7K\"],\n    \"serviceEndpoint\": \"did:sov:A2wBhNYhMrjHiqZDTUYH7u;routeid\",\n    \"routingKeys\": [\"8HH5gYEeNc3z7PYXmd54d4x6qAfCNrqQqEB3nS7Zfu7K\"]\n}\n
    "},{"location":"features/0160-connection-protocol/#implicit-invitation","title":"Implicit Invitation","text":"

    Any Public DID serves as an implicit invitation. If an invitee wishes to connect to any Public DID, they designate their own label and skip to the end of the Invitation Processing step. There is no need to encode or transmit the invitation.

    "},{"location":"features/0160-connection-protocol/#routing-keys","title":"Routing Keys","text":"

    If routingKeys is present and non-empty, additional forwarding wrapping will be necessary for the request message. See the explanation in the Request section.

    "},{"location":"features/0160-connection-protocol/#agency-endpoint","title":"Agency Endpoint","text":"

    The endpoint for the connection is either present in the invitation or available in the DID document of a presented DID. If the endpoint is not a URI but a DID itself, that DID refers to an Agency.

    In that case, the serviceEndpoint of the DID must be a URI, and the recipientKeys must contain a single key. That key is appended to the end of the list of routingKeys for processing. For more information about message forwarding and routing, see RFC 0094.
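    The Agency-endpoint rule above can be sketched as follows. This is illustrative only; `resolve_did` is a hypothetical resolver that returns the Agency DID's service entry as a dict with serviceEndpoint and recipientKeys:

```python
def resolve_connection_endpoint(service_endpoint, routing_keys, resolve_did):
    """If the endpoint is itself a DID, it refers to an Agency: resolve it,
    use its serviceEndpoint (which must be a URI), and append its single
    recipient key to the end of the routingKeys list."""
    if not service_endpoint.startswith("did:"):
        return service_endpoint, list(routing_keys)
    agency = resolve_did(service_endpoint)      # hypothetical resolver
    endpoint = agency["serviceEndpoint"]        # must be a URI
    (agency_key,) = agency["recipientKeys"]     # must contain exactly one key
    return endpoint, list(routing_keys) + [agency_key]
```

With an Agency DID as the endpoint, the returned routing key list ends with the Agency's recipient key, ready for the forward wrapping described later for the request message.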

    "},{"location":"features/0160-connection-protocol/#standard-invitation-encoding","title":"Standard Invitation Encoding","text":"

    Using a standard invitation encoding allows for easier interoperability between multiple projects and software platforms. Using a URL for that standard encoding provides a built-in fallback flow for users who are unable to automatically process the invitation. Those new users will load the URL in a browser as a default behavior, and will be presented with instructions on how to install software capable of processing the invitation. Already-onboarded users will be able to process the invitation without loading it in a browser, via mobile app URL capture or via capability detection after it is loaded in a browser.

    The standard invitation format is a URL with a Base64URLEncoded json object as a query parameter.

    The Invitation URL format is as follows, with some elements described below:

    https://<domain>/<path>?c_i=<invitationstring>\n

    <domain> and <path> should be kept as short as possible, and the full URL should return human readable instructions when loaded in a browser. This is intended to aid new users. The c_i query parameter is required and is reserved to contain the invitation string. Additional path elements or query parameters are allowed, and can be leveraged to provide coupons or other promise of payment for new users.

    The <invitationstring> is an agent plaintext message (not a wire level message) that has been base64 url encoded. For brevity, the json encoding should minimize unnecessary white space.

    invitation_string = b64urlencode(<invitation_message>)\n

    During encoding, whitespace from the json string should be eliminated to keep the resulting invitation string as short as possible.
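    The encoding step can be sketched in Python; `encode_invitation_url` is a name chosen for this example, not part of the protocol:

```python
import base64
import json

def encode_invitation_url(invitation: dict, base_url: str) -> str:
    """Encode an invitation message as the c_i query parameter of a URL.
    Whitespace is stripped from the JSON to keep the URL short, then the
    result is base64url encoded."""
    compact = json.dumps(invitation, separators=(",", ":"))  # no whitespace
    b64 = base64.urlsafe_b64encode(compact.encode()).decode()
    return f"{base_url}?c_i={b64}"

url = encode_invitation_url({"label": "Alice"}, "https://example.com/ssi")
```

Note that Python's urlsafe_b64encode emits padded output; per the processing rules below, receivers must accept both padded and unpadded input.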

    "},{"location":"features/0160-connection-protocol/#example-invitation-encoding","title":"Example Invitation Encoding","text":"

    Invitation:

    {\n    \"@type\": \"https://didcomm.org/connections/1.0/invitation\",\n    \"@id\": \"12345678900987654321\",\n    \"label\": \"Alice\",\n    \"recipientKeys\": [\"8HH5gYEeNc3z7PYXmd54d4x6qAfCNrqQqEB3nS7Zfu7K\"],\n    \"serviceEndpoint\": \"https://example.com/endpoint\",\n    \"routingKeys\": [\"8HH5gYEeNc3z7PYXmd54d4x6qAfCNrqQqEB3nS7Zfu7K\"]\n}\n

    Base 64 URL Encoded, with whitespace removed:

    eyJAdHlwZSI6ImRpZDpzb3Y6QnpDYnNOWWhNcmpIaXFaRFRVQVNIZztzcGVjL2Nvbm5lY3Rpb25zLzEuMC9pbnZpdGF0aW9uIiwiQGlkIjoiMTIzNDU2Nzg5MDA5ODc2NTQzMjEiLCJsYWJlbCI6IkFsaWNlIiwicmVjaXBpZW50S2V5cyI6WyI4SEg1Z1lFZU5jM3o3UFlYbWQ1NGQ0eDZxQWZDTnJxUXFFQjNuUzdaZnU3SyJdLCJzZXJ2aWNlRW5kcG9pbnQiOiJodHRwczovL2V4YW1wbGUuY29tL2VuZHBvaW50Iiwicm91dGluZ0tleXMiOlsiOEhINWdZRWVOYzN6N1BZWG1kNTRkNHg2cUFmQ05ycVFxRUIzblM3WmZ1N0siXX0=\n

    Example URL:

    http://example.com/ssi?c_i=eyJAdHlwZSI6ImRpZDpzb3Y6QnpDYnNOWWhNcmpIaXFaRFRVQVNIZztzcGVjL2Nvbm5lY3Rpb25zLzEuMC9pbnZpdGF0aW9uIiwiQGlkIjoiMTIzNDU2Nzg5MDA5ODc2NTQzMjEiLCJsYWJlbCI6IkFsaWNlIiwicmVjaXBpZW50S2V5cyI6WyI4SEg1Z1lFZU5jM3o3UFlYbWQ1NGQ0eDZxQWZDTnJxUXFFQjNuUzdaZnU3SyJdLCJzZXJ2aWNlRW5kcG9pbnQiOiJodHRwczovL2V4YW1wbGUuY29tL2VuZHBvaW50Iiwicm91dGluZ0tleXMiOlsiOEhINWdZRWVOYzN6N1BZWG1kNTRkNHg2cUFmQ05ycVFxRUIzblM3WmZ1N0siXX0=\n

    Invitation URLs can be transferred via any method that can send text, including an email, SMS, posting on a website, or via a QR Code.

    Example URL encoded as a QR Code:

    "},{"location":"features/0160-connection-protocol/#invitation-publishing","title":"Invitation Publishing","text":"

    The inviter will then publish or transmit the invitation URL in a manner available to the intended invitee. After publishing, we have entered the invited state.

    "},{"location":"features/0160-connection-protocol/#invitation-processing","title":"Invitation Processing","text":"

    When the invitee receives the invitation URL, there are two possible user flows that depend on the SSI preparedness of the individual. If the individual is new to the SSI universe, they will likely load the URL in a browser. The resulting page will contain instructions on how to get started by installing software or a mobile app. That install flow will transfer the invitation message to the newly installed software. A user that already has those steps accomplished will have the URL received by software directly. That software will base64URL decode the string and can read the invitation message directly out of the c_i query parameter, without loading the URL.

    NOTE: In receiving the invitation, the base64url decode implementation used MUST correctly decode padded and unpadded base64URL encoded data.
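    A decoder tolerant of both padded and unpadded input can be sketched by restoring any stripped padding before decoding (the helper name is illustrative):

```python
import base64
import json

def decode_invitation(c_i: str) -> dict:
    """Decode a c_i invitation string. Accepts both padded and unpadded
    base64url input, as the NOTE above requires."""
    padding = "=" * (-len(c_i) % 4)  # restore '=' padding if it was stripped
    raw = base64.urlsafe_b64decode(c_i + padding)
    return json.loads(raw)
```

This matters in practice because some encoders strip the trailing `=` characters to shorten the URL, while strict base64 decoders reject unpadded input.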

    If the invitee wants to accept the connection invitation, they will use the information present in the invitation message to prepare the request.

    "},{"location":"features/0160-connection-protocol/#1-connection-request","title":"1. Connection Request","text":"

    The connection request message is used to communicate the DID document of the invitee to the inviter using the provisional connection information present in the connection_invitation message.

    The invitee will provision a new DID according to the DID method spec. For a Peer DID, this involves creating a matching peer DID and key. The newly provisioned DID and DID document are presented in the connection_request message as follows:

    "},{"location":"features/0160-connection-protocol/#example","title":"Example","text":"
    {\n  \"@id\": \"5678876542345\",\n  \"@type\": \"https://didcomm.org/connections/1.0/request\",\n  \"label\": \"Bob\",\n  \"connection\": {\n    \"DID\": \"B.did@B:A\",\n    \"DIDDoc\": {\n        \"@context\": \"https://w3id.org/did/v1\"\n        // DID document contents here.\n    }\n  }\n}\n
    "},{"location":"features/0160-connection-protocol/#attributes","title":"Attributes","text":""},{"location":"features/0160-connection-protocol/#diddoc-example","title":"DIDDoc Example","text":"

    An example of the DID document contents is the following JSON. This format was implemented in some early agents as the DIDComm DIDDoc Conventions RFC was being formalized, and so does not match that RFC exactly, for example in its use of the IndyAgent service endpoint type. Future versions of this protocol will align precisely with that RFC.

    {\n  \"@context\": \"https://w3id.org/did/v1\",\n  \"id\": \"did:sov:QUmsj7xwB82QAuuzfmvhAi\",\n  \"publicKey\": [\n    {\n      \"id\": \"did:sov:QUmsj7xwB82QAuuzfmvhAi#1\",\n      \"type\": \"Ed25519VerificationKey2018\",\n      \"controller\": \"did:sov:QUmsj7xwB82QAuuzfmvhAi\",\n      \"publicKeyBase58\": \"DoDMNYwMrSN8ygGKabgz5fLA9aWV4Vi8SLX6CiyN2H4a\"\n    }\n  ],\n  \"authentication\": [\n    {\n      \"type\": \"Ed25519SignatureAuthentication2018\",\n      \"publicKey\": \"did:sov:QUmsj7xwB82QAuuzfmvhAi#1\"\n    }\n  ],\n  \"service\": [\n    {\n      \"id\": \"did:sov:QUmsj7xwB82QAuuzfmvhAi;indy\",\n      \"type\": \"IndyAgent\",\n      \"priority\": 0,\n      \"recipientKeys\": [\n        \"DoDMNYwMrSN8ygGKabgz5fLA9aWV4Vi8SLX6CiyN2H4a\"\n      ],\n      \"serviceEndpoint\": \"http://192.168.65.3:8030\"\n    }\n  ]\n}\n
    "},{"location":"features/0160-connection-protocol/#request-transmission","title":"Request Transmission","text":"

    The Request message is encoded according to the standards of the Agent Wire Level Protocol, using the recipientKeys present in the invitation.

    If the routingKeys attribute was present and non-empty in the invitation, each key must be used to wrap the message in a forward request, then encoded according to the Agent Wire Level Protocol. This processing is in order of the keys in the list, with the last key in the list being the one for which the serviceEndpoint possesses the private key.
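    The nesting order can be sketched structurally as follows. This is illustrative only: there is no real pack()/encryption here, whereas a real agent would authcrypt each forward layer for the corresponding routing key, with the outermost envelope packed for the last key in the list (the one whose private key the serviceEndpoint owner holds):

```python
def wrap_with_forwards(packed_msg, recipient_key, routing_keys):
    """Structural sketch of forward wrapping for a request message.
    Keys are processed in list order; each pass wraps the current
    message in a forward addressed to the previous hop."""
    msg, to = packed_msg, recipient_key
    for key in routing_keys:
        msg = {
            "@type": "https://didcomm.org/routing/1.0/forward",
            "to": to,      # the inner recipient of this hop
            "msg": msg,    # in reality: the packed (encrypted) inner message
        }
        to = key  # a real agent would encrypt this layer for `key`
    return msg
```

With two routing keys, the innermost forward is addressed to the invitation's recipient key and the whole structure is sent to the serviceEndpoint.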

    The message is then transmitted to the serviceEndpoint.

    We are now in the requested state.

    "},{"location":"features/0160-connection-protocol/#request-processing","title":"Request processing","text":"

    After receiving the connection request, the inviter evaluates the provided DID and DID document according to the DID Method Spec.

    The inviter should check the information presented with the keys used in the wire-level message transmission to ensure they match.

    If the inviter wishes to accept the connection, they will persist the received information in their wallet. They will then either update the provisional connection information to rotate the key, or provision a new DID entirely. The choice here will depend on the nature of the DID used in the invitation.

    The inviter will then craft a connection response using the newly updated or provisioned information.

    "},{"location":"features/0160-connection-protocol/#request-errors","title":"Request Errors","text":"

    See Error Section above for message format details.

    request_rejected

    Possible reasons:

    request_processing_error

    "},{"location":"features/0160-connection-protocol/#2-connection-response","title":"2. Connection Response","text":"

    The connection response message is used to complete the connection. This message is required in the flow, as it updates the provisional information presented in the invitation.

    "},{"location":"features/0160-connection-protocol/#example_1","title":"Example","text":"
    {\n  \"@type\": \"https://didcomm.org/connections/1.0/response\",\n  \"@id\": \"12345678900987654321\",\n  \"~thread\": {\n    \"thid\": \"<@id of request message>\"\n  },\n  \"connection\": {\n    \"DID\": \"A.did@B:A\",\n    \"DIDDoc\": {\n      \"@context\": \"https://w3id.org/did/v1\"\n      // DID document contents here.\n    }\n  }\n}\n

    The above message is required to be signed as described in RFC 0234 Signature Decorator. The connection attribute above will be base64URL encoded and included as part of the sig_data attribute of the signed field. The result looks like this:

    {\n  \"@type\": \"https://didcomm.org/connections/1.0/response\",\n  \"@id\": \"12345678900987654321\",\n  \"~thread\": {\n    \"thid\": \"<@id of request message>\"\n  },\n  \"connection~sig\": {\n    \"@type\": \"https://didcomm.org/signature/1.0/ed25519Sha512_single\",\n    \"signature\": \"<digital signature function output>\",\n    \"sig_data\": \"<base64URL(64bit_integer_from_unix_epoch||connection_attribute)>\",\n    \"signer\": \"<signing_verkey>\"\n  }\n}\n

    The connection attribute has been removed and its contents combined with the timestamp and encoded into the sig_data field of the new connection~sig attribute.
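    A Python sketch of producing the sig_data value described above. This is illustrative only; RFC 0234 governs the exact rules, and the big-endian byte order of the 64-bit timestamp is an assumption here:

```python
import base64
import json
import struct
import time

def encode_sig_data(connection: dict) -> str:
    # Assumption: 64-bit big-endian UNIX timestamp; RFC 0234 defines the real byte order
    timestamp = struct.pack(">Q", int(time.time()))
    payload = timestamp + json.dumps(connection).encode()
    # base64URL-encode; padding is stripped, as is common for DIDComm fields
    return base64.urlsafe_b64encode(payload).decode().rstrip("=")
```

    The verifier reverses the process: base64URL-decode, split off the first 8 bytes as the timestamp, and parse the remainder as the connection attribute.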

    Upon receipt, the signed attribute will be automatically unpacked and the signature verified. Signature information will be stored as message context, and the connection attribute will be restored to its original format before processing continues.

    The signature data must be used to verify against the invitation's recipientKeys for continuity.

    "},{"location":"features/0160-connection-protocol/#attributes_1","title":"Attributes","text":"

    In addition to a new DID, the associated DID document might contain a new endpoint. This new DID and endpoint are to be used going forward in the connection.

    "},{"location":"features/0160-connection-protocol/#response-transmission","title":"Response Transmission","text":"

    The message should be packaged in the wire level format, using the keys from the request, and the new keys presented in the internal DID document.

    When the message is transmitted, we are now in the responded state.

    "},{"location":"features/0160-connection-protocol/#response-processing","title":"Response Processing","text":"

    When the invitee receives the response message, they will verify the sig_data provided. After validation, they will update their wallet with the new connection information. If the endpoint was changed, they may wish to execute a Trust Ping to verify that new endpoint.

    "},{"location":"features/0160-connection-protocol/#response-errors","title":"Response Errors","text":"

    See Error Section above for message format details.

    response_rejected

    Possible reasons:

    response_processing_error

    "},{"location":"features/0160-connection-protocol/#3-connection-acknowledgement","title":"3. Connection Acknowledgement","text":"

    After the Response is received, the connection is technically complete, but it remains unconfirmed to the inviter. The invitee SHOULD send a message to the inviter. Since any message will confirm the connection, any message will do.

    Frequently, the parties of the connection will want to trade credentials to establish trust. In such a flow, those messages will serve the function of acknowledging the connection without an extra confirmation message.

    If no message is needed immediately, a trust ping can be used to allow both parties to confirm the connection.

    After a message is sent, the invitee is in the complete state. Receipt of a message puts the inviter into the complete state.

    "},{"location":"features/0160-connection-protocol/#next-steps","title":"Next Steps","text":"

    The connection between the inviter and the invitee is now established. This connection has no trust associated with it. The next step should be the exchange of proofs to build trust sufficient for the purpose of the relationship.

    "},{"location":"features/0160-connection-protocol/#connection-maintenance","title":"Connection Maintenance","text":"

    Upon establishing a connection, it is likely that both Alice and Bob will want to perform some relationship maintenance such as key rotations. Future RFC updates will add these maintenance features.

    "},{"location":"features/0160-connection-protocol/#reference","title":"Reference","text":""},{"location":"features/0160-connection-protocol/#drawbacks","title":"Drawbacks","text":""},{"location":"features/0160-connection-protocol/#prior-art","title":"Prior art","text":""},{"location":"features/0160-connection-protocol/#unresolved-questions","title":"Unresolved questions","text":""},{"location":"features/0160-connection-protocol/#implementations","title":"Implementations","text":"

    The following lists the implementations (if any) of this RFC. Please do a pull request to add your implementation. If the implementation is open source, include a link to the repo or to the implementation within the repo. Please be consistent in the \"Name\" field so that a mechanical processing of the RFCs can generate a list of all RFCs supported by an Aries implementation.

    Name / Link Implementation Notes Aries Framework - .NET passed agent connectathon tests, Feb 2019; MISSING test results Streetcred.id passed agent connectathon tests, Feb 2019; MISSING test results Aries Cloud Agent - Python ported from VON codebase that passed agent connectathon tests, Feb 2019; MISSING test results Aries Static Agent - Python implemented July 2019; MISSING test results Aries Protocol Test Suite ported from Indy Agent codebase that provided agent connectathon tests, Feb 2019; MISSING test results Indy Cloud Agent - Python passed agent connectathon tests, Feb 2019; MISSING test results"},{"location":"features/0183-revocation-notification/","title":"Aries RFC 0183: Revocation Notification 1.0","text":""},{"location":"features/0183-revocation-notification/#summary","title":"Summary","text":"

    This RFC defines the message format which an issuer uses to notify a holder that a previously issued credential has been revoked.

    "},{"location":"features/0183-revocation-notification/#change-log","title":"Change Log","text":""},{"location":"features/0183-revocation-notification/#motivation","title":"Motivation","text":"

    We need a standard protocol for an issuer to notify a holder that a previously issued credential has been revoked.

    For example, suppose a passport agency revokes Alice's passport. The passport agency (an issuer) may want to notify Alice (a holder) that her passport has been revoked so that she knows that she will be unable to use her passport to travel.

    "},{"location":"features/0183-revocation-notification/#tutorial","title":"Tutorial","text":"

    The Revocation Notification protocol is a very simple protocol consisting of a single message:

    This simple protocol allows an issuer to choose to notify a holder that a previously issued credential has been revoked.

    It is the issuer's prerogative whether or not to notify the holder that a credential has been revoked. It is not a security risk if the issuer does not notify the holder that the credential has been revoked, nor if the message is lost. The holder will still be unable to use a revoked credential without this notification.

    "},{"location":"features/0183-revocation-notification/#roles","title":"Roles","text":"

    There are two parties involved in a Revocation Notification: issuer and holder. The issuer sends the revoke message to the holder.

    "},{"location":"features/0183-revocation-notification/#messages","title":"Messages","text":"

    The revoke message sent by the issuer to the holder is as follows:

    {\n  \"@type\": \"https://didcomm.org/revocation_notification/1.0/revoke\",\n  \"@id\": \"<uuid-revocation-notification>\",\n  \"thread_id\": \"<thread_id>\",\n  \"comment\": \"Some comment\"\n}\n

    Description of fields:

    "},{"location":"features/0183-revocation-notification/#reference","title":"Reference","text":""},{"location":"features/0183-revocation-notification/#drawbacks","title":"Drawbacks","text":"

    If we later added support for more general event subscription and notification message flows, this would be redundant.

    "},{"location":"features/0183-revocation-notification/#rationale-and-alternatives","title":"Rationale and alternatives","text":""},{"location":"features/0183-revocation-notification/#prior-art","title":"Prior art","text":""},{"location":"features/0183-revocation-notification/#unresolved-questions","title":"Unresolved questions","text":""},{"location":"features/0183-revocation-notification/#implementations","title":"Implementations","text":"

    The following lists the implementations (if any) of this RFC. Please do a pull request to add your implementation. If the implementation is open source, include a link to the repo or to the implementation within the repo. Please be consistent in the \"Name\" field so that a mechanical processing of the RFCs can generate a list of all RFCs supported by an Aries implementation.

    Name / Link Implementation Notes"},{"location":"features/0193-coin-flip/","title":"Aries RFC 0193: Coin Flip Protocol 1.0","text":""},{"location":"features/0193-coin-flip/#summary","title":"Summary","text":"

    Specifies a safe way for two parties who are remote from one another and who do not trust one another to pick a random, binary outcome that neither can manipulate.

    "},{"location":"features/0193-coin-flip/#change-log","title":"Change Log","text":""},{"location":"features/0193-coin-flip/#motivation","title":"Motivation","text":"

    To guarantee fairness, it is often important to pick one party in a protocol to make a choice about what to do next. We need a way to do this that more or less mirrors the randomness of flipping a coin.

    "},{"location":"features/0193-coin-flip/#tutorial","title":"Tutorial","text":""},{"location":"features/0193-coin-flip/#name-and-version","title":"Name and Version","text":"

    This defines the coinflip protocol, version 1.x, as identified by the following PIURI:

    https://github.com/hyperledger/aries-rfcs/features/0193-coin-flip/1.0\n
    "},{"location":"features/0193-coin-flip/#roles","title":"Roles","text":"

    There are 2 roles in the protocol: Recorder and Caller. These role names parallel the roles in a physical coin flip: the Recorder performs a process that freezes/records the state of a flipped coin, and the Caller announces the state that they predict, before the state is known. If the caller predicts the state correctly, then the caller chooses what happens next; otherwise, the recorder chooses.

    "},{"location":"features/0193-coin-flip/#algorithm","title":"Algorithm","text":"

    Before describing the messages, let's review the algorithm that will be used. This algorithm is not new; it is a simple commitment scheme described on Wikipedia and implemented in various places. This RFC merely formalizes that scheme for DIDComm in a way that lets the Caller choose a side without knowing whether it is win or lose.

    1. Recorder chooses a random UUID. A version 4 UUID is recommended, though any UUID version should be accepted. Note that the UUID is represented in lower case, with hyphens, and without enclosing curly braces. Suppose this value is 01bf7abd-aa80-4389-bf8c-dba0f250bb1b. This UUID is called salt.

    2. Recorder builds two side strings by salting win and lose with the salt -- i.e., win01bf7abd-aa80-4389-bf8c-dba0f250bb1b and lose01bf7abd-aa80-4389-bf8c-dba0f250bb1b. Recorder then computes a SHA256 hash of each side string (0C192E004440D8D6D6AF06A7A03A2B182903E9F048D4E7320DF6301DF0C135A5 and C587E50CB48B1B0A3B5136BA9D238B739A6CD599EE2D16994537B75CA595C091 for our example) and randomly selects one side string as side1 and the other as side2. Recorder sends them to Caller using the propose message described below. These hashes commit Recorder to all inputs without revealing which one is win or lose; they are the Recorder's way of asking the Caller, \"underside or topside\"?

    3. Caller announces their committed choice -- for instance, side2, using the 'call' message described below. This commits Caller to a particular side of the virtual coin.

    4. Recorder uses a 'reveal' message to reveal the salt. Caller is now able to rebuild both side strings, and both parties discover whether Caller guessed the win side. If Caller guessed the win side, Caller won; otherwise Recorder won. Neither party is able to manipulate the outcome: Caller can verify that Recorder proposed two valid options (one winning side and one losing side), and Recorder must not reveal the salt before Caller has made their choice.
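    The four steps above can be sketched in Python. The helper names are illustrative, not part of the protocol:

```python
import hashlib
import random
import uuid

def propose():
    # Step 1: Recorder picks a random salt (lower case UUID, hyphens, no braces)
    salt = str(uuid.uuid4())
    # Step 2: hash the salted side strings and randomly assign side1/side2
    hashes = [hashlib.sha256((outcome + salt).encode()).hexdigest().upper()
              for outcome in ("win", "lose")]
    random.shuffle(hashes)
    return salt, hashes  # hashes go to Caller; salt stays secret until the reveal

def resolve(salt, hashes, choice):
    # Step 4: after the reveal, either party can recompute the winning hash;
    # choice is 0 for side1 or 1 for side2 (Step 3, Caller's call message)
    win_hash = hashlib.sha256(("win" + salt).encode()).hexdigest().upper()
    return "caller" if hashes[choice] == win_hash else "recorder"
```

    Because the hashes are fixed before the call, the Recorder cannot change the mapping afterward, and because the salt is secret until the reveal, the Caller cannot tell which side wins when choosing.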

    "},{"location":"features/0193-coin-flip/#states","title":"States","text":"

    The algorithm and the corresponding states are pictured in the following diagram:

    Note: This diagram was made in draw.io. To make changes: - upload the drawing HTML from this folder to the [draw.io](https://draw.io) site (Import From... Device), - make changes, - export the picture as PNG and HTML to your local copy of this repo, and - submit a pull request.

    This diagram only depicts the so-called \"happy path\". It is possible to experience problems for various reasons. If either party detects such an event, they should abandon the protocol and emit a problem-report message to the other party. The problem-report message is adopted into this protocol for that purpose. Some values of code that may be used in such messages include:

    "},{"location":"features/0193-coin-flip/#reference","title":"Reference","text":""},{"location":"features/0193-coin-flip/#messages","title":"Messages","text":""},{"location":"features/0193-coin-flip/#propose","title":"propose","text":"

    The protocol begins when Recorder sends to Caller a propose message that embodies Steps 1 and 2 in the algorithm above. It looks like this:

    {\n  \"@type\": \"https://github.com/hyperledger/aries-rfcs/features/0193-coin-flip/1.0/propose\",\n  \"@id\": \"518be002-de8e-456e-b3d5-8fe472477a86\",\n  \"side1\": \"C587E50CB48B1B0A3B5136BA9D238B739A6CD599EE2D16994537B75CA595C091\",\n  \"side2\": \"0C192E004440D8D6D6AF06A7A03A2B182903E9F048D4E7320DF6301DF0C135A5\",\n  \"comment\": \"Make your choice and let's see who goes first.\",\n  \"choice-id\": \"did:sov:SLfEi9esrjzybysFxQZbfq;spec/tictactoe/1.0/who-goes-first\",\n  \"caller-wins\": \"did:example:abc123\",  // Meaning of value defined in superprotocol\n  \"recorder-wins\": \"did:example:xyz456\", // Meaning of value defined in superprotocol\n  // Optional; connects to superprotocol\n  \"~thread\": { \n    \"pthid\": \"a2be4118-4f60-bacd-c9a0-dfb581d6fd96\" \n  }\n}\n

    The @type and @id fields are standard for DIDComm. The side1 and side2 fields convey the data required by Step 2 of the algorithm. The optional comment field follows localization conventions and is irrelevant unless the coin flip intends to invite human participation. The ~thread.pthid decorator is optional but should be common; it identifies the thread of the parent interaction (the superprotocol).

    The choice-id field formally names a choice that a superprotocol has defined, and tells how the string values of the caller-wins and recorder-wins fields will be interpreted. In the example above, the choice is defined in the Tic-Tac-Toe Protocol, which also specifies that caller-wins and recorder-wins will contain DIDs of the parties playing the game. Some other combinations that might make sense include:

    The ~timing.expires_time decorator may be used to impose a time limit on the processing of this message. If used, the protocol must restart if the subsequent call message is not received by this time limit.

    "},{"location":"features/0193-coin-flip/#call","title":"call","text":"

    This message is sent from Caller to Recorder, and embodies Step 3 of the algorithm. It looks like this:

    {\n  \"@type\": \"https://github.com/hyperledger/aries-rfcs/features/0193-coin-flip/1.0/call\",\n  \"@id\": \"1173fe5f-86c9-47d7-911b-b8eac7d5f2ad\",\n  \"choice\": \"side2\",\n  \"comment\": \"I pick side 2.\",\n  \"~thread\": { \n    \"thid\": \"518be002-de8e-456e-b3d5-8fe472477a86\",\n    \"sender_order\": 1 \n  }\n}\n

    Note the use of ~thread.thid and sender_order: 1 to connect this call to the preceding propose.

    The ~timing.expires_time decorator may be used to impose a time limit on the processing of this message. If used, the protocol must restart if the subsequent reveal message is not received by this time limit.

    "},{"location":"features/0193-coin-flip/#reveal","title":"reveal","text":"

    This message is sent from Recorder to Caller, and embodies Step 4 of the algorithm. It looks like this:

    {\n  \"@type\": \"https://github.com/hyperledger/aries-rfcs/features/0193-coin-flip/1.0/reveal\",\n  \"@id\": \"e2a9454d-783d-4663-874e-29ad10776115\",\n  \"salt\": \"01bf7abd-aa80-4389-bf8c-dba0f250bb1b\",\n  \"winner\": \"caller\",\n  \"comment\": \"You win.\",\n  \"~thread\": { \n    \"thid\": \"518be002-de8e-456e-b3d5-8fe472477a86\",\n    \"sender_order\": 1 \n  }\n}\n

    Note the use of ~thread.thid and sender_order: 1 to connect this reveal to the preceding call.

    The Caller should validate this message as follows:

    Having validated the message thus far, Caller determines the winner by checking whether the self-computed hash of win<salt> equals the hash given in the propose message at the position chosen in the call message. If it does, the value of the winner field must be caller; if not, it must be recorder. The winner field must be present in the message, and its value must be correct, for the reveal message to be deemed fully valid. This confirms that both parties understand the outcome, and it prevents a Recorder from asserting a false outcome that is accepted by careless validation logic on the Caller side.
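    The Caller-side validation can be sketched as follows; the helper name is hypothetical, and the upper-case hex digests match the examples earlier in this RFC:

```python
import hashlib

def validate_reveal(salt, side1, side2, choice, claimed_winner):
    # Rebuild both side strings from the revealed salt
    digest = lambda s: hashlib.sha256((s + salt).encode()).hexdigest().upper()
    win_hash, lose_hash = digest("win"), digest("lose")
    # Recorder must have proposed exactly one winning and one losing side
    if {side1, side2} != {win_hash, lose_hash}:
        return False
    chosen = side1 if choice == "side1" else side2
    actual = "caller" if chosen == win_hash else "recorder"
    # The winner field must be present and must match the recomputed outcome
    return actual == claimed_winner
```
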

    The ~timing.expires_time decorator may be used to impose a time limit on the processing of this message. If used, the protocol must restart if the subsequent ack or the next message in the superprotocol is not received before the time limit.

    "},{"location":"features/0193-coin-flip/#drawbacks","title":"Drawbacks","text":"

    The protocol is a bit chatty.

    "},{"location":"features/0193-coin-flip/#rationale-and-alternatives","title":"Rationale and alternatives","text":"

    It may be desirable to pick among more than 2 alternatives. This RFC could be extended easily to provide more options than win and lose. The algorithm itself would not change.

    "},{"location":"features/0193-coin-flip/#prior-art","title":"Prior art","text":"

    As mentioned in the introduction, the algorithm used in this protocol is a simple and well known form of cryptographic commitment, and is documented on Wikipedia. It is not new to this RFC.

    "},{"location":"features/0193-coin-flip/#unresolved-questions","title":"Unresolved questions","text":""},{"location":"features/0193-coin-flip/#implementations","title":"Implementations","text":"

    The following lists the implementations (if any) of this RFC. Please do a pull request to add your implementation. If the implementation is open source, include a link to the repo or to the implementation within the repo. Please be consistent in the \"Name\" field so that a mechanical processing of the RFCs can generate a list of all RFCs supported by an Aries implementation.

    Name / Link Implementation Notes"},{"location":"features/0211-route-coordination/","title":"0211: Mediator Coordination Protocol","text":""},{"location":"features/0211-route-coordination/#summary","title":"Summary","text":"

    A protocol to coordinate mediation configuration between a mediating agent and the recipient.

    "},{"location":"features/0211-route-coordination/#application-scope","title":"Application Scope","text":"

    This protocol is needed when using an edge agent and a mediator agent from different vendors. Edge agents and mediator agents from the same vendor may use whatever protocol they wish without sacrificing interoperability.

    "},{"location":"features/0211-route-coordination/#motivation","title":"Motivation","text":"

    Use of the forward message in the Routing Protocol requires an exchange of information. The Recipient must know which endpoint and routing key(s) to share, and the Mediator needs to know which keys should be routed via this relationship.

    "},{"location":"features/0211-route-coordination/#protocol","title":"Protocol","text":"

    Name: coordinate-mediation

    Version: 1.0

    Base URI: https://didcomm.org/coordinate-mediation/1.0/

    "},{"location":"features/0211-route-coordination/#roles","title":"Roles","text":"

    mediator - The agent that will be receiving forward messages on behalf of the recipient. recipient - The agent for whom the forward message payload is intended.

    "},{"location":"features/0211-route-coordination/#flow","title":"Flow","text":"

    A recipient may discover an agent capable of routing using the Feature Discovery Protocol. If the protocol is supported with the mediator role, a recipient may send a mediate-request to initiate a routing relationship.

    First, the recipient sends a mediate-request message to the mediator. If the mediator is willing to route messages, it will respond with a mediate-grant message. The recipient will share the routing information in the grant message with other contacts.

    When a new key is used by the recipient, it must be registered with the mediator to enable route identification. This is done with a keylist-update message.

    The keylist-update and keylist-query methods are used over time to identify and remove keys that are no longer in use by the recipient.
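    The key registration step can be sketched in Python; the helper name and signature are illustrative, not part of the protocol:

```python
def keylist_update(base_uri, msg_id, keys, action="add"):
    # Build a keylist-update message registering (or removing) recipient keys
    # with the mediator; base_uri is the protocol base URI
    return {
        "@id": msg_id,
        "@type": f"{base_uri}/keylist-update",
        "updates": [{"recipient_key": k, "action": action} for k in keys],
    }
```
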

    "},{"location":"features/0211-route-coordination/#reference","title":"Reference","text":"

    Note on terms: Early versions of this protocol included the concept of terms for mediation. This concept has been removed from this version due to a need for further discussion on representing terms in DIDComm in general and lack of use of these terms in current implementations.

    "},{"location":"features/0211-route-coordination/#mediation-request","title":"Mediation Request","text":"

    This message serves as a request from the recipient to the mediator, asking for permission (and routing information) to publish the endpoint as a mediator.

    {\n    \"@id\": \"123456781\",\n    \"@type\": \"<baseuri>/mediate-request\"\n}\n
    "},{"location":"features/0211-route-coordination/#mediation-deny","title":"Mediation Deny","text":"

    This message serves as notification of the mediator denying the recipient's request for mediation.

    {\n    \"@id\": \"123456781\",\n    \"@type\": \"<baseuri>/mediate-deny\"\n}\n
    "},{"location":"features/0211-route-coordination/#mediation-grant","title":"Mediation Grant","text":"

    A route grant message is a signal from the mediator to the recipient that permission is given to distribute the included information as an inbound route.

    {\n    \"@id\": \"123456781\",\n    \"@type\": \"<baseuri>/mediate-grant\",\n    \"endpoint\": \"http://mediators-r-us.com\",\n    \"routing_keys\": [\"did:key:z6Mkfriq1MqLBoPWecGoDLjguo1sB9brj6wT3qZ5BxkKpuP6\"]\n}\n

    endpoint: The endpoint reported to mediation client connections.

    routing_keys: List of keys in intended routing order. Key used as recipient of forward messages.

    "},{"location":"features/0211-route-coordination/#keylist-update","title":"Keylist Update","text":"

    Used to notify the mediator of keys in use by the recipient.

    {\n    \"@id\": \"123456781\",\n    \"@type\": \"<baseuri>/keylist-update\",\n    \"updates\":[\n        {\n            \"recipient_key\": \"did:key:z6MkpTHR8VNsBxYAAWHut2Geadd9jSwuBV8xRoAnwWsdvktH\",\n            \"action\": \"add\"\n        }\n    ]\n}\n

    recipient_key: Key subject of the update.

    action: One of add or remove.

    "},{"location":"features/0211-route-coordination/#keylist-update-response","title":"Keylist Update Response","text":"

    Confirmation of requested keylist updates.

    {\n    \"@id\": \"123456781\",\n    \"@type\": \"<baseuri>/keylist-update-response\",\n    \"updated\": [\n        {\n            \"recipient_key\": \"did:key:z6MkpTHR8VNsBxYAAWHut2Geadd9jSwuBV8xRoAnwWsdvktH\",\n            \"action\": \"\", // \"add\" or \"remove\"\n            \"result\": \"\" // [client_error | server_error | no_change | success]\n        }\n    ]\n}\n

    recipient_key: Key subject of the update.

    action: One of add or remove.

    result: One of client_error, server_error, no_change, success; describes the resulting state of the keylist update.

    "},{"location":"features/0211-route-coordination/#key-list-query","title":"Key List Query","text":"

    Query mediator for a list of keys registered for this connection.

    {\n    \"@id\": \"123456781\",\n    \"@type\": \"<baseuri>/keylist-query\",\n    \"paginate\": {\n        \"limit\": 30,\n        \"offset\": 0\n    }\n}\n

    paginate is optional.

    "},{"location":"features/0211-route-coordination/#key-list","title":"Key List","text":"

    Response to key list query, containing retrieved keys.

    {\n    \"@id\": \"123456781\",\n    \"@type\": \"<baseuri>/keylist\",\n    \"keys\": [\n        {\n            \"recipient_key\": \"did:key:z6MkpTHR8VNsBxYAAWHut2Geadd9jSwuBV8xRoAnwWsdvktH\"\n        }\n    ],\n    \"pagination\": {\n        \"count\": 30,\n        \"offset\": 30,\n        \"remaining\": 100\n    }\n}\n

    pagination is optional.

    "},{"location":"features/0211-route-coordination/#encoding-of-keys","title":"Encoding of keys","text":"

    All keys are encoded using the did:key method as per RFC0360.
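    For illustration, the did:key encoding of an Ed25519 verkey can be sketched as follows. This is a minimal, unaudited sketch; production code should use a vetted multicodec/multibase library:

```python
B58_ALPHABET = "123456789ABCDEFGHJKLMNPQRSTUVWXYZabcdefghijkmnopqrstuvwxyz"

def b58encode(data: bytes) -> str:
    # Minimal base58btc encoder (no checksum)
    n = int.from_bytes(data, "big")
    out = ""
    while n:
        n, r = divmod(n, 58)
        out = B58_ALPHABET[r] + out
    # Preserve leading zero bytes as '1' characters
    return "1" * (len(data) - len(data.lstrip(b"\x00"))) + out

def did_key_from_ed25519(pubkey: bytes) -> str:
    # multicodec ed25519-pub prefix (0xed 0x01), multibase base58btc marker 'z'
    return "did:key:z" + b58encode(b"\xed\x01" + pubkey)
```
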

    "},{"location":"features/0211-route-coordination/#prior-art","title":"Prior art","text":"

    There was an Indy HIPE that never made it past the PR process that described a similar approach. That HIPE led to a partial implementation of this inside the Aries Cloud Agent Python.

    "},{"location":"features/0211-route-coordination/#future-considerations","title":"Future Considerations","text":""},{"location":"features/0211-route-coordination/#unresolved-questions","title":"Unresolved questions","text":""},{"location":"features/0211-route-coordination/#implementations","title":"Implementations","text":"

    The following lists the implementations (if any) of this RFC. Please do a pull request to add your implementation. If the implementation is open source, include a link to the repo or to the implementation within the repo. Please be consistent in the \"Name\" field so that a mechanical processing of the RFCs can generate a list of all RFCs supported by an Aries implementation.

    Name / Link Implementation Notes Aries Cloud Agent - Python Added in ACA-Py 0.6.0 MISSING test results DIDComm mediator Open source cloud-based mediator."},{"location":"features/0212-pickup/","title":"0212: Pickup Protocol","text":""},{"location":"features/0212-pickup/#summary","title":"Summary","text":"

    A protocol to coordinate routing configuration between a routing agent and the recipient.

    "},{"location":"features/0212-pickup/#motivation","title":"Motivation","text":"

    Messages can be picked up simply by sending a message to the message holder with a return_route decorator specified. This mechanism is implicit, and lacks some desired behavior made possible by more explicit messages. This protocol is the explicit companion to the implicit method of picking up messages.

    "},{"location":"features/0212-pickup/#tutorial","title":"Tutorial","text":""},{"location":"features/0212-pickup/#roles","title":"Roles","text":"

    message_holder - The agent that has messages waiting for pickup by the recipient. recipient - The agent who is picking up messages. batch_sender - A message_holder that is capable of returning messages in a batch. batch_recipient - A recipient that is capable of receiving and processing a batch message.

    "},{"location":"features/0212-pickup/#flow","title":"Flow","text":"

    status can be used to see how many messages are pending. batch retrieval can be executed when many messages ...

    "},{"location":"features/0212-pickup/#reference","title":"Reference","text":""},{"location":"features/0212-pickup/#statusrequest","title":"StatusRequest","text":"

    Sent by the recipient to the message_holder to request a status message.

    {\n    \"@id\": \"123456781\",\n    \"@type\": \"https://didcomm.org/messagepickup/1.0/status-request\"\n}\n
    "},{"location":"features/0212-pickup/#status","title":"Status","text":"

    Status details about pending messages:

    {\n    \"@id\": \"123456781\",\n    \"@type\": \"https://didcomm.org/messagepickup/1.0/status\",\n    \"message_count\": 7,\n    \"duration_waited\": 3600,\n    \"last_added_time\": \"2019-05-01 12:00:00Z\",\n    \"last_delivered_time\": \"2019-05-01 12:00:01Z\",\n    \"last_removed_time\": \"2019-05-01 12:00:01Z\",\n    \"total_size\": 8096\n}\n

    message_count is the only required attribute. The others may be present if offered by the message_holder.

    "},{"location":"features/0212-pickup/#batch-pickup","title":"Batch Pickup","text":"

    A request to have multiple waiting messages sent inside a batch message.

    {\n    \"@id\": \"123456781\",\n    \"@type\": \"https://didcomm.org/messagepickup/1.0/batch-pickup\",\n    \"batch_size\": 10\n}\n
    "},{"location":"features/0212-pickup/#batch","title":"Batch","text":"

    A message that contains multiple waiting messages.

    {\n    \"@id\": \"123456781\",\n    \"@type\": \"https://didcomm.org/messagepickup/1.0/batch\",\n    \"messages~attach\": [\n        {\n            \"@id\" : \"06ca25f6-d3c5-48ac-8eee-1a9e29120c31\",\n            \"message\" : \"{\n                ...\n            }\"\n        },\n        {\n            \"@id\" : \"344a51cf-379f-40ab-ab2c-711dab3f53a9a\",\n            \"message\" : \"{\n                ...\n            }\"\n        }\n    ]\n}\n
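    A recipient-side sketch of building the batch-pickup request; the helper name is illustrative, not from this RFC:

```python
def batch_pickup(msg_id, batch_size):
    # Ask the message_holder for up to batch_size waiting messages,
    # to be returned in a single batch message
    return {
        "@id": msg_id,
        "@type": "https://didcomm.org/messagepickup/1.0/batch-pickup",
        "batch_size": batch_size,
    }
```
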

    "},{"location":"features/0212-pickup/#message-query-with-message-id-list","title":"Message Query With Message Id List","text":"

    A request to read single or multiple messages with a message id array.

    {\n    \"@id\": \"123456781\",\n    \"@type\": \"https://didcomm.org/messagepickup/1.0/list-pickup\",\n    \"message_ids\": [\n        \"06ca25f6-d3c5-48ac-8eee-1a9e29120c31\",\n        \"344a51cf-379f-40ab-ab2c-711dab3f53a9a\"\n    ]\n}\n

    message_ids: message id array for picking up messages. Any message id in message_ids could be delivered to the recipient in several ways (push notification or with an enveloped message).
    "},{"location":"features/0212-pickup/#message-list-query-response","title":"Message List Query Response","text":"

    A response to a query with a message id list.

    {\n    \"@type\": \"https://didcomm.org/messagepickup/1.0/list-response\",\n    \"messages~attach\": [\n        {\n            \"@id\" : \"06ca25f6-d3c5-48ac-8eee-1a9e29120c31\",\n            \"message\" : \"{\n                ...\n            }\"\n        },\n        {\n            \"@id\" : \"344a51cf-379f-40ab-ab2c-711dab3f53a9a\",\n            \"message\" : \"{\n                ...\n            }\"\n        }\n    ]\n}\n

    "},{"location":"features/0212-pickup/#noop","title":"Noop","text":"

    Used to receive another message implicitly. This message has no expected behavior when received.

    {\n    \"@id\": \"123456781\",\n    \"@type\": \"https://didcomm.org/messagepickup/1.0/noop\"\n}\n

    "},{"location":"features/0212-pickup/#prior-art","title":"Prior art","text":"

    Concepts here borrow heavily from a document written by Andrew Whitehead of BCGov.

    "},{"location":"features/0212-pickup/#unresolved-questions","title":"Unresolved questions","text":""},{"location":"features/0212-pickup/#implementations","title":"Implementations","text":"

    The following lists the implementations (if any) of this RFC. Please do a pull request to add your implementation. If the implementation is open source, include a link to the repo or to the implementation within the repo. Please be consistent in the \"Name\" field so that a mechanical processing of the RFCs can generate a list of all RFCs supported by an Aries implementation.

    Name / Link Implementation Notes"},{"location":"features/0213-transfer-policy/","title":"0213: Transfer Policy Protocol","text":""},{"location":"features/0213-transfer-policy/#summary","title":"Summary","text":"

    A protocol to share and request changes to policy that relates to message transfer.

    "},{"location":"features/0213-transfer-policy/#motivation","title":"Motivation","text":"

    An explicit policy enables clear expectations.

    "},{"location":"features/0213-transfer-policy/#tutorial","title":"Tutorial","text":""},{"location":"features/0213-transfer-policy/#roles","title":"Roles","text":"

    policy_holder - The agent that uses the policy to manage messages directed to the recipient. recipient - The agent the policy relates to.

    "},{"location":"features/0213-transfer-policy/#reference","title":"Reference","text":""},{"location":"features/0213-transfer-policy/#policy-publish","title":"Policy Publish","text":"

    Used to share current policy by policy holder. This can be sent unsolicited or in response to a policy_share_request. ```json= { \"@id\": \"123456781\", \"@type\": \"https://didcomm.org/transferpolicy/1.0/policy\", \"queue_max_duration\": 86400, \"message_count_limit\": 1000, \"message_size_limit\": 65536, \"queue_size_limit\": 65536000, \"pickup_allowed\": true, \"delivery_retry_count_limit\":5, \"delivery_retry_count_seconds\":86400, \"delivery_retry_backoff\": \"exponential\" }

    ### Policy Share Request\nUsed to ask for a `policy` message to be sent.\n```json=\n\n{\n    \"@id\": \"123456781\",\n    \"@type\": \"https://didcomm.org/transferpolicy/1.0/policy_share_request\"\n}\n

    "},{"location":"features/0213-transfer-policy/#policy-change-request","title":"Policy Change Request","text":"

    Sent to request a policy change. The expected response is a policy message.

    ```json=

    { \"@id\": \"123456781\", \"@type\": \"https://didcomm.org/transferpolicy/1.0/policy_change_request\", \"queue_max_duration\": 86400, \"message_count_limit\": 1000, \"message_size_limit\": 65536, \"queue_size_limit\": 65536000, \"pickup_allowed\": true, \"delivery_retry_count_limit\":5, \"delivery_retry_count_seconds\":86400, \"delivery_retry_backoff\": \"exponential\" } ``` Only attributes that you desire to change need to be included.
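The partial-update rule for a policy_change_request can be sketched as a simple merge, shown here in Python. This is an illustrative sketch, not part of the RFC: the function name `apply_change_request` is hypothetical, while the field names come from the example messages above.

```python
# Hypothetical helper showing how a policy holder might apply a
# policy_change_request in which only the attributes being changed are present.
# Field names are taken from the example policy messages above.

POLICY_FIELDS = {
    "queue_max_duration", "message_count_limit", "message_size_limit",
    "queue_size_limit", "pickup_allowed", "delivery_retry_count_limit",
    "delivery_retry_count_seconds", "delivery_retry_backoff",
}

def apply_change_request(current_policy: dict, change_request: dict) -> dict:
    """Return a new policy with only the recognized fields in the request applied."""
    updated = dict(current_policy)
    # dict.keys() supports set intersection, so @id/@type are ignored naturally.
    for field in POLICY_FIELDS & change_request.keys():
        updated[field] = change_request[field]
    return updated
```

The expected response to a policy_change_request is a `policy` message, which could simply carry the merged result.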

    "},{"location":"features/0213-transfer-policy/#prior-art","title":"Prior art","text":"

    Concepts here borrow heavily from a document written by Andrew Whitehead of BCGov.

    "},{"location":"features/0213-transfer-policy/#unresolved-questions","title":"Unresolved questions","text":""},{"location":"features/0213-transfer-policy/#implementations","title":"Implementations","text":"

    The following lists the implementations (if any) of this RFC. Please do a pull request to add your implementation. If the implementation is open source, include a link to the repo or to the implementation within the repo. Please be consistent in the \"Name\" field so that a mechanical processing of the RFCs can generate a list of all RFCs supported by an Aries implementation.

    Name / Link Implementation Notes"},{"location":"features/0214-help-me-discover/","title":"Aries RFC 0214: \"Help Me Discover\" Protocol","text":""},{"location":"features/0214-help-me-discover/#summary","title":"Summary","text":"

    Describes how one party can ask another party for help discovering an unknown person, organization, thing, or chunk of data.

    "},{"location":"features/0214-help-me-discover/#motivation","title":"Motivation","text":"

    Asking a friend to help us discover something is an extremely common human interaction: \"Dad, I need a good mechanic. Do you know one who lives near me?\"

Similar needs exist between devices in highly automated environments, as when a drone lands in a hangar and queries a dispatcher agent to find maintenance robots that can repair an ailing motor.

    We need a way to perform these workflows with DIDComm.

    "},{"location":"features/0214-help-me-discover/#tutorial","title":"Tutorial","text":""},{"location":"features/0214-help-me-discover/#name-and-version","title":"Name and version","text":"

    This is the \"Help Me Discover\" protocol, version 1.0. It is uniquely identified by the following PIURI:

    https://didcomm.org/help-me-discover/1.0\n
    "},{"location":"features/0214-help-me-discover/#roles-and-states","title":"Roles and States","text":"

    This protocol embodies a standard request-response pattern, and therefore has requester and responder roles. A request message describes what's wanted. A response message conveys whatever knowledge the responder wants to offer to be helpful. Standard state evolution applies:

    "},{"location":"features/0214-help-me-discover/#requirements","title":"Requirements","text":"

    The following requirements do not change this simple framework, but they introduce some complexity into the messages:

    "},{"location":"features/0214-help-me-discover/#messages","title":"Messages","text":""},{"location":"features/0214-help-me-discover/#request","title":"request","text":"

    A simple request message looks like this:

    {\n    \"@type\": \"https://didcomm.org/help-me-discover/1.0/request\",\n    \"@id\": \"a2248fb5-d46e-4898-a781-2f03e5f23964\"\n    // human-readable, localizable, optional\n    \"comment\": \"any ideas?\",\n    // please help me discover match for this\n    \"desired\": { \n        \"all\": [ // equivalent of boolean AND -- please match this list\n            // first criterion: profession must equal \"mechanic\"\n            {\"profession\": \"mechanic\", \"id\": \"prof\"},\n            // second criterion in \"all\" list: any of the following (boolean OR)\n            {\n                \"any\": [\n                    // average rating > 3.5\n                    {\"averageRating\": 3.5, \"op\": \">\", \"id\": \"rating\"},\n                    // list of certifications contains \"ASE\"\n                    {\"certifications\": \"ASE\", \"op\": \"CONTAINS\", \"id\": \"cert\"},\n                    // zipCode must be in this list\n                    {\"zipCode\": [\"12345\", \"12346\"], \"op\": \"IN\", \"id\": \"where\"}\n                ], // end of \"any\" list\n                \"n\": 2, // match at least 2 from the list\n                \"id\": \"2-of-3\"\n            }\n        ],\n        \"id\": \"everything\"\n    }\n}\n

    In plain language, this particular request says:

Please help me discover someone who's a mechanic, and who possesses at least 2 of the following 3 characteristics: they have an average rating above 3.5 stars; they have an ASE certification; they reside in zip code 12345 or 12346.

    The data type of desired is a criterion object. A criterion object can be of type all (boolean AND), type any (boolean OR), or op (a particular attribute is tested against a value with a specific operator). The all and any objects can nest one another arbitrarily deep.

Parsing these criteria, and performing matches against them, can be done with the SGL library, which has ports for JavaScript and Python. Other ports should be trivial; it's only a couple hundred lines of code. The hardest part of the work is giving the library an object model that contains candidates against which matching can be done.

    Notice that each criterion object has an id property. This is helpful because responses can now refer to the criteria by number to describe what they've matched.
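To make the criterion semantics concrete, here is a minimal, dependency-free Python sketch of an evaluator for the all/any/operator criterion objects described above. It is not the SGL library itself; the function name `matches` and its return convention (the set of satisfied criterion ids, suitable for a response's `matches` array) are illustrative assumptions.

```python
def matches(criterion: dict, candidate: dict) -> set:
    """Return the set of criterion ids that `candidate` satisfies."""
    matched = set()

    def check(c):
        if "all" in c:                       # boolean AND over the list
            results = [check(sub) for sub in c["all"]]
            ok = all(results)
        elif "any" in c:                     # boolean OR, with optional "n" threshold
            results = [check(sub) for sub in c["any"]]
            ok = sum(results) >= c.get("n", 1)
        else:
            # Operator criterion: the one key besides "op"/"id" names the attribute.
            attr = next(k for k in c if k not in ("op", "id"))
            expected, actual = c[attr], candidate.get(attr)
            op = c.get("op", "==")           # bare {"profession": "mechanic"} means equality
            if actual is None:
                ok = False
            elif op == "==":
                ok = actual == expected
            elif op == ">":
                ok = actual > expected
            elif op == "CONTAINS":
                ok = expected in actual
            elif op == "IN":
                ok = actual in expected
            else:
                ok = False
        if ok and "id" in c:
            matched.add(c["id"])
        return ok

    check(criterion)
    return matched
```

Note that sub-criterion ids are recorded even when an enclosing `all` fails, which is how a response can report partial matches for a candidate that is not an overall match.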

    See Reference for fancier examples of requests.

    "},{"location":"features/0214-help-me-discover/#response","title":"response","text":"

    A response message looks like this:

    {\n    \"@type\": \"https://didcomm.org/help-me-discover/1.0/response\",\n    \"@id\": \"5f2396b5-d84e-689e-78a1-2fa2248f03e4\"\n    \"~thread\": { \"thid\": \"a2248fb5-d46e-4898-a781-2f03e5f23964\" }\n    // human-readable, localizable, optional\n    \"comment\": \"here's the best I've got\", \n    \"candidates\": [\n        {\n            \"id\": \"Alice\",\n            \"@type\": \"person\",\n            \"matches\": [\"prof\",\"rating\",\"cert\",\"2-of-3\",\"everything\"]\n        },\n        {\n            \"id\": \"Bob\",\n            \"@type\": \"drone\",\n            \"matches\": [\"prof\",\"cert\",\"where\",\"2-of-3\",\"everything\"]\n        },\n        {\n            \"id\": \"Carol\",\n            \"matches\": [\"rating\",\"cert\",\"where\"]\n        }\n    ]\n}\n

    In plain language, this response says:

I found 3 candidates for you. One that I'll call \"Alice\" matches everything except your where criterion. One called \"Bob\" matches everything except your rating criterion. One called \"Carol\" matches your rating, cert, and where criteria, but because she didn't match prof, she wasn't an overall match.

    "},{"location":"features/0214-help-me-discover/#using-a-help-me-discover-response-in-subsequent-interactions","title":"Using a \"Help me discover\" response in subsequent interactions","text":"

A candidate in a response message like the one shown above can be referenced in subsequent interactions by using the RFC 0xxx: Linkable DIDComm Message Paths mechanism. For example, if Fred wanted to ask for an introduction to Bob after engaging in the sample request-response sequence shown above, he could send a request message in the Introduce Protocol, where to (the party to whom he'd like to be introduced) included a discovered property that referenced the candidate with id equal to \"Bob\":

    {\n  \"@type\": \"https://didcomm.org/introduce/1.0/request\",\n  \"@id\": \"df3b699d-3aa9-4fd0-bb67-49594da545bd\",\n  \"to\": {\n    \"discovered\": \"didcomm:///5f2396b5-d84e-689e-78a1-2fa2248f03e4/.candidates%7B.id+%3D%3D%3D+%22Bob%22%7D\"\n  }\n}\n
    "},{"location":"features/0214-help-me-discover/#accuracy-trustworthiness-and-best-effort","title":"Accuracy, Trustworthiness, and Best Effort","text":"

    As with these types of interactions in \"real life\", the \"help me discover\" protocol cannot make any guarantees about the suitability of the answers it generates. The responder could be malicious, misinformed, or simply lazy. The contract for the protocol is:

    The requester must verify results independently, if their need for trust is high.

    "},{"location":"features/0214-help-me-discover/#privacy-considerations","title":"Privacy Considerations","text":"

Just because Alice knows that Bob is a political dissident who uses a particular handle in online forums does not mean Alice should divulge that information to anybody who engages in the \"Help Me Discover\" protocol with her. When matching criteria focus on people, Alice should be careful and use good judgment about how much she honors a particular request for discovery. In particular, if Alice possesses data about Bob that was shared with her in a previous Present Proof Protocol, the terms of sharing may not permit her to divulge what she knows about Bob to an arbitrary third party. See the Data Consent Receipt RFC.

    These issues probably do not apply when the thing being discovered is not a private individual.

    "},{"location":"features/0214-help-me-discover/#reference","title":"Reference","text":""},{"location":"features/0214-help-me-discover/#discover-someone-who-can-prove","title":"Discover someone who can prove","text":"

    A request message can ask for someone that is capable of proving using verifiable credentials, as per RFC 0037:

    {\n    \"@type\": \"https://didcomm.org/help-me-discover/1.0/request\",\n    \"@id\": \"248fb52a-4898-a781-d46e-e5f239642f03\"\n    \"desired\": { \n        // either subjectRole or subjectDid:\n        //   - subjectRole has value of role in protocol\n        //   - subjectDid has value of a DID (useful in N-Wise settings)\n        \"verb\": \"prove\", \n        \"subjectRole\": \"introducer\", \n        \"car.engine.rating\": \"4\", \n        \"op\": \">\", \n        \"id\": \"engineRating\"\n    }\n}\n

    In plain language, this particular request says:

    Please help me discover someone who can act as introducer in a protocol, and can prove that a car's rating > 4.

    Another example might be:

    {\n    \"@id\": \"a2248fb5-d46e-4898-a781-2f03e5f23964\",\n    \"@type\": \"https://didcomm.org/help-me-discover/1.0/request\",\n    \"comment\": \"blood glucose\",\n    \"desired\": {\n        \"all\": [\n            {\n                \"id\": \"prof\",\n                \"profession\": \"medical-lab\"\n            },\n            {\n                \"id\": \"glucose\",\n                \"provides\": {\n                    \"from\": \"bloodtests\",\n                    \"just\": [\n                        \"glucose\"\n                    ],\n                    \"subject\": \"did:peer:introducer\"\n                }\n            }\n        ],\n        \"id\": \"everything\"\n    }\n}\n

    This says:

Please help me discover someone who has profession = \"medical-lab\" and can provide measurements of the introducer's blood-glucose levels"},{"location":"features/0214-help-me-discover/#drawbacks","title":"Drawbacks","text":"

    If we are not careful, this protocol could be used to discover attributes about third parties in a way that subverts privacy. See Privacy Considerations.

    "},{"location":"features/0214-help-me-discover/#unresolved-questions","title":"Unresolved questions","text":""},{"location":"features/0214-help-me-discover/#implementations","title":"Implementations","text":"

    The following lists the implementations (if any) of this RFC. Please do a pull request to add your implementation. If the implementation is open source, include a link to the repo or to the implementation within the repo. Please be consistent in the \"Name\" field so that a mechanical processing of the RFCs can generate a list of all RFCs supported by an Aries implementation.

    Name / Link Implementation Notes"},{"location":"features/0234-signature-decorator/","title":"Aries RFC 0234: Signature Decorator","text":""},{"location":"features/0234-signature-decorator/#rfc-archived","title":"RFC ARCHIVED","text":"

    DO NOT USE THIS RFC.

    Use the signed form of the attachment decorator (RFC 0017) instead of this decorator.

    "},{"location":"features/0234-signature-decorator/#summary","title":"Summary","text":"

    The ~sig field-level decorator enables non-repudiation by allowing an Agent to add a digital signature over a portion of a DIDComm message.

    "},{"location":"features/0234-signature-decorator/#motivation","title":"Motivation","text":"

While today we support a standard way of authenticating messages in a repudiable way, we also see the need for non-repudiable digital signatures for use cases where high authenticity is necessary, such as signing a bank loan. There are additional benefits to being able to prove the provenance of a piece of data with a digital signature. These are all use cases which would benefit from a standardized format for non-repudiable digital signatures.

    This RFC outlines a field-level decorator that can be used to provide non-repudiable digital signatures in DIDComm messages. It also highlights a standard way to encode data such that it can be deterministically verified later.

    "},{"location":"features/0234-signature-decorator/#tutorial","title":"Tutorial","text":"

    This RFC introduces a new field-level decorator named ~sig and maintains a registry of standard Signature Schemes applicable with it.

The ~sig field decorator may be used with any field of data. Its value MUST match the JSON object format of the chosen signature scheme.

    We'll use the following message as an example:

    {\n    \"@type\": \"https://didcomm.org/example/1.0/test_message\",\n    \"@id\": \"df3b699d-3aa9-4fd0-bb67-49594da545bd\",\n    \"msg\": {\n        \"text\": \"Hello World!\",\n        \"speaker\": \"Bob\"\n    }\n}\n

    Digitally signing the msg object with the ed25519sha256_single scheme results in a transformation of the original message to this:

    {\n    \"@type\": \"https://didcomm.org/example/1.0/test_message\",\n    \"@id\": \"df3b699d-3aa9-4fd0-bb67-49594da545bd\",\n    \"msg~sig\": {\n      \"@type\": \"https://didcomm.org/signature/1.0/ed25519Sha512_single\",\n      \"sig_data\": \"base64URL(64bit_integer_from_unix_epoch|msg_object)\",\n      \"signature\": \"base64URL(digital signature function output)\",\n      \"signer\": \"base64URL(inlined_signing_verkey)\"\n    }\n}\n

    The original msg object has been replaced with its ~sig-decorated counterpart in order to prevent message bloat.

    When an Agent receives a DIDComm message with a field decorated with ~sig, it runs the appropriate signature scheme algorithm and restores the DIDComm message's structure back to its original form.

    "},{"location":"features/0234-signature-decorator/#reference","title":"Reference","text":""},{"location":"features/0234-signature-decorator/#applying-the-digital-signature","title":"Applying the digital signature","text":"

    In general, the steps to construct a ~sig are:

    1. Choose a signature scheme. This determines the ~sig decorator's message type URI (the @type seen above) and the signature algorithm.
    2. Serialize the JSON object to be authenticated to a sequence of bytes (msg in the example above). This will be the plaintext input to the signature scheme.
    3. Construct the contents of the new ~sig object according to the chosen signature scheme with the plaintext as input.
    4. Replace the original object (msg in the example above) with the new ~sig object. The new object's label MUST be equal to the label of the original object appended with \"~sig\".
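The four construction steps above can be sketched as follows. This is an illustrative sketch, not a normative implementation: the function name `attach_sig` is hypothetical, and the actual ed25519 signing primitive is abstracted behind a `sign` callable so the example stays dependency-free.

```python
import base64
import json
import struct
import time

def b64url(data: bytes) -> str:
    return base64.urlsafe_b64encode(data).decode()

def attach_sig(message: dict, field: str, sign, verkey: bytes, scheme_type: str) -> dict:
    """Replace message[field] with a field~sig object (steps 1-4 above).

    `sign` is any callable bytes -> bytes; `scheme_type` is the chosen
    scheme's message type URI (step 1).
    """
    # Step 2: serialize the object to bytes; whatever serialization is used
    # here is exactly what gets signed and embedded, so no canonicalization
    # is needed on verification.
    plaintext = json.dumps(message[field]).encode()
    # sig_data is a 64-bit big-endian unix timestamp prefix plus the plaintext,
    # matching the "64bit_integer_from_unix_epoch|msg_object" notation above.
    sig_data = struct.pack(">Q", int(time.time())) + plaintext
    signed = dict(message)
    del signed[field]                       # step 4: original object is replaced
    signed[field + "~sig"] = {              # step 3: contents per the scheme
        "@type": scheme_type,
        "sig_data": b64url(sig_data),
        "signature": b64url(sign(sig_data)),
        "signer": b64url(verkey),
    }
    return signed
```

In practice `sign` would be an ed25519 signing operation from a real cryptography library and `verkey` the corresponding verification key.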
    "},{"location":"features/0234-signature-decorator/#verifying-the-digital-signature","title":"Verifying the digital signature","text":"

    The outcome of a successful signature verification is the replacement of the ~sig-decorated object with its original representation:

    1. Select the signature scheme according to the message type URI (ed25519sha256_single in the example above)
    2. Run the signature scheme's verification algorithm with the ~sig-decorated object as input.
    3. The software MUST cease further processing of the DIDComm message if the verification algorithm fails.
    4. Replace the ~sig-decorated object with the output of the scheme's verification algorithm.

    The end result MUST be semantically identical to the original DIDComm message before application of the signature scheme (eg. the original example message above).

    "},{"location":"features/0234-signature-decorator/#additional-considerations","title":"Additional considerations","text":"

The data to authenticate is base64URL-encoded and embedded as-is in order to prevent false-negative signature verifications, which could occur when sending JSON data because JSON has no easy way to canonicalize its structure. By including the exact data in base64URL encoding, the receiver can be certain that the data signed is the same as what was received.

    "},{"location":"features/0234-signature-decorator/#signature-schemes","title":"Signature Schemes","text":"

This decorator should support a specific set of signatures while remaining extensible. The list of currently supported schemes is outlined below.

    Signature Scheme Scheme Spec ed25519Sha512_single spec

    TODO provide template in this RFC directory.

    To add a new signature scheme to this registry, follow the template provided to detail the new scheme as well as provide some test cases to produce and verify the signature scheme is working.

    "},{"location":"features/0234-signature-decorator/#drawbacks","title":"Drawbacks","text":"

    Since digital signatures are non-repudiable, it's worth noting the privacy implications of using this functionality. In the event that a signer has chosen to share a message using a non-repudiable signature, they forgo the ability to prevent the verifier from sharing this signature on to other parties. This has potentially negative implications with regards to consent and privacy.

    Therefore, this functionality should only be used if non-repudiable digital signatures are absolutely necessary.

    "},{"location":"features/0234-signature-decorator/#rationale-and-alternatives","title":"Rationale and alternatives","text":"

    JSON Web Signatures are an alternative to this specification in widespread use. We diverged from this specification for the following reasons:

    "},{"location":"features/0234-signature-decorator/#prior-art","title":"Prior art","text":"

    IETF RFC 7515 (JSON Web Signatures)

    "},{"location":"features/0234-signature-decorator/#unresolved-questions","title":"Unresolved questions","text":"

Does there need to be a signature suite agreement protocol similar to TLS cipher suites? - No, rather the receiver of the message can send an error response if they're unable to validate the signature.

    How should multiple signatures be represented? - While not supported in this version, one solution would be to support [digital_sig1, digital_sig2] for signature and [verkey1, verkey2] for signer.

    "},{"location":"features/0234-signature-decorator/#implementations","title":"Implementations","text":"

    The following lists the implementations (if any) of this RFC. Please do a pull request to add your implementation. If the implementation is open source, include a link to the repo or to the implementation within the repo. Please be consistent in the \"Name\" field so that a mechanical processing of the RFCs can generate a list of all RFCs supported by an Aries implementation.

    Name / Link Implementation Notes Aries Static Agent - Python ed25519sha256_single Aries Framework - .NET ed25519sha256_single Aries Framework - Go ed25519sha256_single"},{"location":"features/0234-signature-decorator/ed25519sha256_single/","title":"The ed25519sha256_single signature scheme","text":""},{"location":"features/0234-signature-decorator/ed25519sha256_single/#tutorial","title":"Tutorial","text":""},{"location":"features/0234-signature-decorator/ed25519sha256_single/#application","title":"Application","text":"

    This scheme computes a single ed25519 digital signature over the input message. Its output is a ~sig object with the following contents:

    {\n    \"@type\": \"https://didcomm.org/signature/1.0/ed25519Sha512_single\",\n    \"sig_data\": \"base64URL(64bit_integer_from_unix_epoch|msg)\",\n    \"signature\": \"base64URL(ed25519 signature)\",\n    \"signer\": \"base64URL(inlined_ed25519_signing_verkey)\"\n}\n
    "},{"location":"features/0234-signature-decorator/ed25519sha256_single/#verification","title":"Verification","text":"

    The successful outcome of this scheme is the plaintext.

    1. base64URL-decode signer
    2. base64URL-decode signature
    3. Verify the ed25519 signature over sig_data with the key provided in signer
    4. Further processing is halted if verification fails and an \"authentication failure\" error is returned
    5. base64URL-decode the sig_data
    6. Strip out the first 8 bytes
    7. Return the remaining bytes
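The seven verification steps above can be sketched in the same style. Again illustrative, not normative: `restore_signed_field` is a hypothetical name, and the ed25519 verification primitive is abstracted behind a `verify(signer, signature, data) -> bool` callable.

```python
import base64
import json

def restore_signed_field(message: dict, field: str, verify) -> dict:
    """Verify message[field + '~sig'] and restore the original field."""
    sig = message[field + "~sig"]
    signer = base64.urlsafe_b64decode(sig["signer"])        # step 1
    signature = base64.urlsafe_b64decode(sig["signature"])  # step 2
    sig_data = base64.urlsafe_b64decode(sig["sig_data"])    # decode for steps 3 and 5
    if not verify(signer, signature, sig_data):             # steps 3-4
        raise ValueError("authentication failure")
    plaintext = sig_data[8:]                                # step 6: strip 8-byte timestamp
    restored = dict(message)
    del restored[field + "~sig"]
    restored[field] = json.loads(plaintext)                 # step 7: return the plaintext
    return restored
```

Raising on failure reflects the requirement that processing of the DIDComm message MUST cease if verification fails.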
    "},{"location":"features/0249-rich-schema-contexts/","title":"Aries RFC 0249: Aries Rich Schema Contexts","text":""},{"location":"features/0249-rich-schema-contexts/#summary","title":"Summary","text":"

    Every rich schema object may have an associated @context. Contexts are JSON or JSON-LD objects. They are the standard mechanism for defining shared semantic meaning among rich schema objects.

    Context objects are processed in a generic way defined in Rich Schema Objects Common.

    "},{"location":"features/0249-rich-schema-contexts/#motivation","title":"Motivation","text":"

    @context is JSON-LD\u2019s namespacing mechanism. Contexts allow rich schema objects to use a common vocabulary when referring to common attributes, i.e. they provide an explicit shared semantic meaning.

    "},{"location":"features/0249-rich-schema-contexts/#tutorial","title":"Tutorial","text":""},{"location":"features/0249-rich-schema-contexts/#intro-to-context","title":"Intro to @context","text":"

    @context is a JSON-LD construct that allows for namespacing and the establishment of a common vocabulary.

A Context object is immutable, so it's not possible to update an existing Context. If the Context needs to evolve, a new Context with a new version or name must be created.

A Context object may be stored in either JSON or JSON-LD format.

    "},{"location":"features/0249-rich-schema-contexts/#example-context","title":"Example Context","text":"

    An example of the content field of a Context object:

    {\n    \"@context\": [\n        \"did:sov:UVj5w8DRzcmPVDpUMr4AZhJ\",\n        \"did:sov:JjmmTqGfgvCBnnPJRas6f8xT\",\n        \"did:sov:3FtTB4kzSyApkyJ6hEEtxNH4\",\n        {\n            \"dct\": \"http://purl.org/dc/terms/\",\n            \"rdf\": \"http://www.w3.org/1999/02/22-rdf-syntax-ns#\",\n            \"rdfs\": \"http://www.w3.org/2000/01/rdf-schema#\",\n            \"Driver\": \"did:sov:2mCyzXhmGANoVg5TnsEyfV8\",\n            \"DriverLicense\": \"did:sov:36PCT3Vj576gfSXmesDdAasc\",\n            \"CategoryOfVehicles\": \"DriverLicense:CategoryOfVehicles\"\n        }\n    ]\n}\n

    "},{"location":"features/0249-rich-schema-contexts/#data-registry-storage","title":"Data Registry Storage","text":"

    Aries will provide a means for writing contexts to and reading contexts from a verifiable data registry (such as a distributed ledger).

    @context will be written to the ledger in a generic way defined in Rich Schema Objects Common.

    "},{"location":"features/0249-rich-schema-contexts/#aries-data-registry-interface","title":"Aries Data Registry Interface","text":"

    Aries Data Registry Interface methods for adding and retrieving @context from the ledger comply with the generic approach described in Rich Schema Objects Common.

    This means the following methods can be used: - write_rich_schema_object - read_rich_schema_object_by_id - read_rich_schema_object_by_metadata

    "},{"location":"features/0249-rich-schema-contexts/#reference","title":"Reference","text":"

    More information on the Verifiable Credential data model use of @context may be found here.

    More information on @context from the JSON-LD specification may be found here and here.

    "},{"location":"features/0249-rich-schema-contexts/#drawbacks","title":"Drawbacks","text":"

    Requiring a @context for each rich schema object introduces more complexity.

    "},{"location":"features/0249-rich-schema-contexts/#rationale-and-alternatives","title":"Rationale and alternatives","text":""},{"location":"features/0249-rich-schema-contexts/#unresolved-questions","title":"Unresolved questions","text":""},{"location":"features/0249-rich-schema-contexts/#implementations","title":"Implementations","text":"

    The following lists the implementations (if any) of this RFC. Please do a pull request to add your implementation. If the implementation is open source, include a link to the repo or to the implementation within the repo. Please be consistent in the \"Name\" field so that a mechanical processing of the RFCs can generate a list of all RFCs supported by an Aries implementation.

    Name / Link Implementation Notes"},{"location":"features/0281-rich-schemas/","title":"Aries RFC 0281: Aries Rich Schemas","text":""},{"location":"features/0281-rich-schemas/#summary","title":"Summary","text":"

    The proposed schemas are JSON-LD objects. This allows credentials issued according to the proposed schemas to have a clear semantic meaning, so that the verifier can know what the issuer intended. They support explicitly typed properties and semantic inheritance. A schema may include other schemas as property types, or extend another schema with additional properties. For example a schema for \"employee\" may inherit from the schema for \"person.\"

    Schema objects are processed in a generic way defined in Rich Schema Objects Common.

    "},{"location":"features/0281-rich-schemas/#motivation","title":"Motivation","text":"

Many organizations, such as HL7, who publish the FHIR standard for health care data exchange, have invested time and effort into creating data schemas that are already in use. Many schemas are shared publicly via web sites such as https://schema.org/, whose mission is, \"to create, maintain, and promote schemas for structured data on the Internet, on web pages, in email messages, and beyond.\"

    These schemas ought to be usable as the basis for verifiable credentials.

    Although verifiable credentials are the primary use case for schemas considered in this document, other future uses may include defining message formats or objects in a verifiable data registry.

    "},{"location":"features/0281-rich-schemas/#interoperability","title":"Interoperability","text":"

    Existing applications make use of schemas to organize and semantically describe data. Using those same schemas within Aries verifiable credentials provides a means of connecting existing applications with this emerging technology. This allows for an easy migration path for those applications to incorporate verifiable credentials and join the Aries ecosystem.

    Aries is only one of several verifiable credentials ecosystems. Using schemas which may be shared among these ecosystems allows for semantic interoperability between them, and enables a path toward true multi-lateral credential exchange.

    Using existing schemas, created in accordance with widely-supported common standards, allows Aries verifiable credentials to benefit from the decades of effort and thought that went into those standards and to work with other applications which also adhere to those standards.

    "},{"location":"features/0281-rich-schemas/#re-use","title":"Re-use","text":"

    Rich schemas can be re-used within the Aries credential ecosystem. Because these schemas are hierarchical and composable, even unrelated schemas may share partial semantic meaning due to the commonality of sub-schemas within both. For example, a driver license schema and an employee record are not related schemas, but both may include a person schema.

    A schema that was created for a particular use-case and accepted within a trust framework may be re-used within other trust frameworks for their use-cases. The visibility of these schemas across trust boundaries increases the ability of these schemas to be examined in greater detail and evaluated for fitness of purpose. Over time the schemas will gain reputation.

    "},{"location":"features/0281-rich-schemas/#extensibility","title":"Extensibility","text":"

    Applications that are built on top of the Aries frameworks can use these schemas as a basis for complex data objects for use within the application, or exposed through external APIs.

    "},{"location":"features/0281-rich-schemas/#immutability","title":"Immutability","text":"

    One important aspect of relying on schemas to provide the semantic meaning of data within a verifiable credential, is that the meaning of the credential properties should not change. It is not enough for entities within the ecosystem to have a shared understanding of the data in the present, it may be necessary for them to have an understanding of the credential at the time it was issued and signed. This depends on the trust framework within which the credential was issued and the needs of the parties involved. A verifiable data registry can provide immutable storage of schemas.

    "},{"location":"features/0281-rich-schemas/#tutorial","title":"Tutorial","text":""},{"location":"features/0281-rich-schemas/#intro-to-schemas","title":"Intro to Schemas","text":"

Schema objects are used to enforce structure and semantic meaning on a set of data. They allow Issuers to assert, and Holders and Verifiers to understand, a particular semantic meaning for the properties in a credential.

    Rich schemas are JSON-LD objects. Examples of the type of schemas supported here may be found at schema.org. At this time we do not support other schema representations such as RDFS, JSON Schema, XML Schema, OWL, etc.

    "},{"location":"features/0281-rich-schemas/#properties","title":"Properties","text":"

    Rich Schema properties follow the generic template defined in Rich Schema Common.

    Rich Schema's content field is a JSON-LD-serialized string with the following fields:

    "},{"location":"features/0281-rich-schemas/#id","title":"@id","text":"

    A rich schema must have an @id property. The value of this property must be equal to the id field which is a DID (see Identification of Rich Schema Objects).

    A rich schema may refer to the @id of another rich schema to define a parent schema. A property of a rich schema may use the @id of another rich schema as the value of its @type or @id property.

    A mapping object will contain the @id of the rich schema being mapped.

    A presentation definition will contain the @id of any schemas a holder may use to present proofs to a verifier.

    "},{"location":"features/0281-rich-schemas/#type","title":"@type","text":"

    A rich schema must have a @type property. The value of this property must be (or map to, via a context object) a URI.

    "},{"location":"features/0281-rich-schemas/#context","title":"@context","text":"

    A rich schema may have a @context property. If present, the value of this property must be a context object or a URI which can be dereferenced to obtain a context object.
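    The @id, @type, and @context rules above can be sanity-checked with a small helper. This is a hypothetical illustration (the function name and error strings are assumptions), not part of any Aries framework API:

    ```python
    import json

    def validate_rich_schema_content(content_json):
        """Apply the @id/@type/@context field rules to a rich schema's
        content field, which is a JSON-LD-serialized string."""
        content = json.loads(content_json)
        errors = []
        # @id is required and must be a DID.
        if "@id" not in content:
            errors.append("missing required @id")
        elif not content["@id"].startswith("did:"):
            errors.append("@id must be a DID")
        # @type is required; its value must be (or map to) a URI.
        if "@type" not in content:
            errors.append("missing required @type")
        # @context is optional; if present it must be a context object
        # or a URI that dereferences to one.
        ctx = content.get("@context")
        if ctx is not None and not isinstance(ctx, (dict, str)):
            errors.append("@context must be a context object or URI")
        return errors
    ```

    A schema with an @id that is not a DID, for instance, would fail the second check.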

    "},{"location":"features/0281-rich-schemas/#use-in-verifiable-credentials","title":"Use in Verifiable Credentials","text":"

    These schemas will be used in conjunction with the JSON-LD representation of the verifiable credentials data model to specify which properties may be included as part of the verifiable credential's credentialSubject property, as well as the types of the property values.

    The @id of a rich schema may be used as an additional value of the type property of a verifiable credential. Because the type values of a verifiable credential are not required to be dereferenced, in order for the rich schema to support assertion of the structure and semantic meaning of the claims in the credential, an additional reference to the rich schema should be made through the credentialSchema property. This may be done as a direct reference to the rich schema @id, or via another rich schema object which references the rich schema @id, such as a credential definition as would be the case for anonymous credentials, as discussed in the mapping section of the rich schema overview RFC.

    "},{"location":"features/0281-rich-schemas/#data-registry-storage","title":"Data Registry Storage","text":"

    Aries will provide a means for writing schema objects to and reading schema objects from a verifiable data registry (such as a distributed ledger).

    As discussed previously, the ability to specify the exact schema that was used to issue a verifiable credential, and the assurance that the meaning of that schema has not changed, may be critical for the trust framework. Verifiable data registries which provide immutability guarantees provide this assurance. Some alternative storage mechanisms do not. Hashlinks, which may be used to verify the hash of web-based schemas, are one example. These can be used to inform a verifier that a schema has changed, but they do not provide access to the original version of the schema in the event the original schema has been updated.

    "},{"location":"features/0281-rich-schemas/#example-schema","title":"Example Schema","text":"

    An example of the content field of a Rich Schema object:

       \"@id\": \"did:sov:2f9F8ZmxuvDqRiqqY29x6dx9oU4qwFTkPbDpWtwGbdUsrCD\",\n   \"@type\": \"rdfs:Class\",\n   \"@context\": {\n    \"schema\": \"http://schema.org/\",\n    \"bibo\": \"http://purl.org/ontology/bibo/\",\n    \"dc\": \"http://purl.org/dc/elements/1.1/\",\n    \"dcat\": \"http://www.w3.org/ns/dcat#\",\n    \"dct\": \"http://purl.org/dc/terms/\",\n    \"dcterms\": \"http://purl.org/dc/terms/\",\n    \"dctype\": \"http://purl.org/dc/dcmitype/\",\n    \"eli\": \"http://data.europa.eu/eli/ontology#\",\n    \"foaf\": \"http://xmlns.com/foaf/0.1/\",\n    \"owl\": \"http://www.w3.org/2002/07/owl#\",\n    \"rdf\": \"http://www.w3.org/1999/02/22-rdf-syntax-ns#\",\n    \"rdfa\": \"http://www.w3.org/ns/rdfa#\",\n    \"rdfs\": \"http://www.w3.org/2000/01/rdf-schema#\",\n    \"schema\": \"http://schema.org/\",\n    \"skos\": \"http://www.w3.org/2004/02/skos/core#\",\n    \"snomed\": \"http://purl.bioontology.org/ontology/SNOMEDCT/\",\n    \"void\": \"http://rdfs.org/ns/void#\",\n    \"xsd\": \"http://www.w3.org/2001/XMLSchema#\",\n    \"xsd1\": \"hhttp://www.w3.org/2001/XMLSchema#\"\n  },\n  \"@graph\": [\n    {\n      \"@id\": \"schema:recipeIngredient\",\n      \"@type\": \"rdf:Property\",\n      \"rdfs:comment\": \"A single ingredient used in the recipe, e.g. sugar, flour or garlic.\",\n      \"rdfs:label\": \"recipeIngredient\",\n      \"rdfs:subPropertyOf\": {\n        \"@id\": \"schema:supply\"\n      },\n      \"schema:domainIncludes\": {\n        \"@id\": \"schema:Recipe\"\n      },\n      \"schema:rangeIncludes\": {\n        \"@id\": \"schema:Text\"\n      }\n    },\n    {\n      \"@id\": \"schema:ingredients\",\n      \"schema:supersededBy\": {\n        \"@id\": \"schema:recipeIngredient\"\n      }\n    }\n  ]\n
    recipeIngredient schema from schema.org.

    "},{"location":"features/0281-rich-schemas/#data-registry-storage_1","title":"Data Registry Storage","text":"

    Aries will provide a means for writing contexts to and reading contexts from a verifiable data registry (such as a distributed ledger).

    A Schema will be written to the ledger in a generic way defined in Rich Schema Objects Common.

    "},{"location":"features/0281-rich-schemas/#aries-data-registry-interface","title":"Aries Data Registry Interface","text":"

    Aries Data Registry Interface methods for adding and retrieving a Schema from the ledger comply with the generic approach described in Rich Schema Objects Common.

    This means the following methods can be used: write_rich_schema_object, read_rich_schema_object_by_id, and read_rich_schema_object_by_metadata.
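    These three methods can be sketched with a minimal in-memory stand-in. Only the method names come from the text above; the class, storage scheme, and argument names are assumptions for illustration:

    ```python
    # A minimal in-memory stand-in for the generic data registry interface.
    # A real implementation would write to a verifiable data registry such
    # as a distributed ledger.
    class InMemoryRegistry:
        def __init__(self):
            self._by_id = {}
            self._by_meta = {}

        def write_rich_schema_object(self, obj):
            # Store the object, indexed both by its DID and by metadata.
            self._by_id[obj["id"]] = obj
            self._by_meta[(obj["name"], obj["version"])] = obj
            return obj["id"]

        def read_rich_schema_object_by_id(self, schema_id):
            # Retrieve by the object's DID.
            return self._by_id.get(schema_id)

        def read_rich_schema_object_by_metadata(self, name, version):
            # Retrieve by metadata such as name and version.
            return self._by_meta.get((name, version))
    ```

    A caller would write a schema once and may then resolve it either by its DID or by its (name, version) metadata pair.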

    "},{"location":"features/0281-rich-schemas/#reference","title":"Reference","text":"

    More information on the Verifiable Credential data model use of schemas may be found here.

    "},{"location":"features/0281-rich-schemas/#drawbacks","title":"Drawbacks","text":"

    Rich schema objects introduce more complexity.

    "},{"location":"features/0281-rich-schemas/#rationale-and-alternatives","title":"Rationale and alternatives","text":""},{"location":"features/0281-rich-schemas/#unresolved-questions","title":"Unresolved questions","text":""},{"location":"features/0281-rich-schemas/#implementations","title":"Implementations","text":"

    The following lists the implementations (if any) of this RFC. Please do a pull request to add your implementation. If the implementation is open source, include a link to the repo or to the implementation within the repo. Please be consistent in the \"Name\" field so that a mechanical processing of the RFCs can generate a list of all RFCs supported by an Aries implementation.

    Name / Link Implementation Notes"},{"location":"features/0303-v01-credential-exchange/","title":"Aries RFC 0303: V0.1 Credential Exchange (Deprecated)","text":""},{"location":"features/0303-v01-credential-exchange/#summary","title":"Summary","text":"

    The 0.1 version of the ZKP Credential Exchange protocol (based on Hyperledger Indy) covering both issuing credentials and presenting proof. These messages were implemented to enable demonstrating credential exchange amongst interoperating agents for IIW 28 in Mountain View, CA. The use of these message types continues today (November 2019), and so they are being added as an RFC for historical completeness and to enable reference in Aries Interop Profile.

    "},{"location":"features/0303-v01-credential-exchange/#motivation","title":"Motivation","text":"

    Enables the exchange of Indy ZKP-based verifiable credentials - issuing verifiable credentials and proving claims from issued verifiable credentials.

    "},{"location":"features/0303-v01-credential-exchange/#tutorial","title":"Tutorial","text":"

    This RFC defines minimal credential exchange protocols. For the details of complete credential exchange protocols, see the Issue Credentials and Present Proof RFCs.

    "},{"location":"features/0303-v01-credential-exchange/#issuing-a-credential","title":"Issuing a credential:","text":"
    1. The issuer sends the holder a Credential Offer
    2. The holder responds with a Credential Request to the issuer
    3. The issuer sends a Credential Issue to the holder, issuing the credential
    "},{"location":"features/0303-v01-credential-exchange/#presenting-a-proof","title":"Presenting a proof:","text":"
    1. The verifier sends the holder/prover a Proof Request
    2. The holder/prover constructs a proof to satisfy the proof requests and sends the proof to the verifier
    "},{"location":"features/0303-v01-credential-exchange/#reference","title":"Reference","text":"

    The following messages are supported in this credential exchange protocol.

    "},{"location":"features/0303-v01-credential-exchange/#issue-credential-protocol","title":"Issue Credential Protocol","text":"

    The process begins with a credential-offer. The thread decorator is implied for all messages except the first.

    The <libindy json string> element is used in most messages and is the string returned from libindy for the given purpose - an escaped JSON string. The agent must process the string if there is a need to extract a data element from the JSON - for example, to get the cred-def-id from the credential-offer.
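    Extracting such a data element is a plain JSON-decode of the carried string. A sketch, with a hypothetical minimal offer payload (not a real libindy structure):

    ```python
    import json

    def cred_def_id_from_offer(offer_json: str) -> str:
        # The message field carries an escaped JSON string returned by
        # libindy; decode it, then read the element of interest.
        offer = json.loads(offer_json)
        return offer["cred_def_id"]

    # Hypothetical minimal offer content for illustration.
    offer_json = '{"schema_id": "55:2:sch:1.0", "cred_def_id": "55:3:CL:127:tag"}'
    print(cred_def_id_from_offer(offer_json))  # 55:3:CL:127:tag
    ```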

    Acknowledgments and Errors should be signalled via adopting the standard ack and problem-report message types, respectively.

    "},{"location":"features/0303-v01-credential-exchange/#credential-offer","title":"Credential Offer","text":"
    {\n    \"@type\": \"https://didcomm.org/credential-issuance/0.1/credential-offer\",\n    \"@id\": \"<uuid-offer>\",\n    \"comment\": \"some comment\",\n    \"credential_preview\": <json-ld object>,\n    \"offer_json\": <libindy json string>\n}\n
    "},{"location":"features/0303-v01-credential-exchange/#credential-request","title":"Credential Request","text":"
    {\n    \"@type\": \"https://didcomm.org/credential-issuance/0.1/credential-request\",\n    \"@id\": \"<uuid-request>\",\n    \"comment\": \"some comment\",\n    \"request\": <libindy json string>\n}\n
    "},{"location":"features/0303-v01-credential-exchange/#credential-issue","title":"Credential Issue","text":"
    {\n    \"@type\": \"https://didcomm.org/credential-issuance/0.1/credential-issue\",\n    \"@id\": \"<uuid-credential>\",\n    \"issue\": <libindy json string>\n}\n
    "},{"location":"features/0303-v01-credential-exchange/#presentation-protocol","title":"Presentation Protocol","text":"

    The message family to initiate a presentation. The verifier initiates the process. The thread decorator is implied on every message other than the first message. The ack and problem-report messages are to be adopted by this message family.

    "},{"location":"features/0303-v01-credential-exchange/#presentation-request","title":"Presentation Request","text":"
    {\n    \"@type\": \"https://didcomm.org/credential-presentation/0.1/presentation-request\",\n    \"@id\": \"<uuid-request>\",\n    \"comment\": \"some comment\",\n    \"request\": <libindy json string>\n}\n
    "},{"location":"features/0303-v01-credential-exchange/#credential-presentation","title":"Credential Presentation","text":"
    {\n    \"@type\": \"https://didcomm.org/credential-presentation/0.1/credential-presentation\",\n    \"@id\": \"<uuid-presentation>\",\n    \"comment\": \"some comment\",\n    \"presentation\": <libindy json string>\n}\n
    "},{"location":"features/0303-v01-credential-exchange/#drawbacks","title":"Drawbacks","text":"

    The RFC is not technically needed, but is useful to have as an Archived RFC of a feature in common usage.

    "},{"location":"features/0303-v01-credential-exchange/#rationale-and-alternatives","title":"Rationale and alternatives","text":"

    N/A

    "},{"location":"features/0303-v01-credential-exchange/#prior-art","title":"Prior art","text":"

    N/A

    "},{"location":"features/0303-v01-credential-exchange/#unresolved-questions","title":"Unresolved questions","text":"

    N/A

    "},{"location":"features/0303-v01-credential-exchange/#implementations","title":"Implementations","text":"

    The following lists the implementations (if any) of this RFC. Please do a pull request to add your implementation. If the implementation is open source, include a link to the repo or to the implementation within the repo. Please be consistent in the \"Name\" field so that a mechanical processing of the RFCs can generate a list of all RFCs supported by an Aries implementation.

    Name / Link Implementation Notes Aries Framework - .NET .NET framework for building agents of all types; MISSING test results Streetcred.id Commercial mobile and web app built using Aries Framework - .NET; MISSING test results Aries Cloud Agent - Python Contributed by the government of British Columbia.; MISSING test results OSMA - Open Source Mobile Agent Open Source mobile app built on Aries Framework - .NET; MISSING test results"},{"location":"features/0309-didauthz/","title":"Aries RFC 0309: DIDAuthZ","text":""},{"location":"features/0309-didauthz/#summary","title":"Summary","text":"

    DIDAuthZ is an attribute-based resource discovery and authorization protocol for Layer 2 of the ToIP Stack[1]. It enables a requesting party to discover protected resources and obtain credentials that grant limited access to these resources with the authorization of their owner. These credentials can be configured such that the requesting party may further delegate unto others the authority to access these resources on their behalf.

    "},{"location":"features/0309-didauthz/#motivation","title":"Motivation","text":"

    In the online world, individuals frequently consent to a service provider gaining access to their protected resources located on a different service provider. Individuals are challenged with an authentication mechanism, informed of the resources being requested, consent to the use of their resources, and can later revoke access at any time. OAuth 2.0[6], and other frameworks built on top of it, were developed to address this need.

    A DIDComm protocol[2] can address these use cases and enhance them with secure end-to-end encryption[3] independent of the transport used. The risk of correlation of the individual's relationships with other parties can be mitigated with the use of peer DIDs[4]. With a sufficiently flexible grammar, the encoding of the access scope can be fine-grained down to the individual items that are requested, congruent with the principle of selective disclosure[5].

    It is expected that future higher-level protocols and governance frameworks[1] can leverage DIDAuthZ to enable authorized sharing of an identity owner's attributes held by a third party.

    "},{"location":"features/0309-didauthz/#tutorial","title":"Tutorial","text":""},{"location":"features/0309-didauthz/#roles","title":"Roles","text":"

    DIDAuthZ adapts the following roles from OAuth 2.0[6] and UMA 2.0[7]:

    Resource Server (RS): An agent holding the protected resources. These resources MAY be credentials of which the subject MAY be a third party identity owner. The RS is also a resource owner at the root of the chain of delegation. Resource Owner (RO): An agent capable of granting access to a protected resource. The RO is a delegate of the RS to the extent encoded in a credential issued by the RS. Authorization Server (AS): An agent that protects, on the resource owner's behalf, resources held by the resource server. The AS is a delegate of the RO capable of issuing and refreshing access credentials. Requesting Party (RP): An agent that requests access to the resources held by the resource server. The RP is a delegate of the AS to the extent encoded in a credential issued by the AS.

    "},{"location":"features/0309-didauthz/#transaction-flow","title":"Transaction Flow","text":"

    The requesting party initiates a transaction by communicating directly with the authorization server with prior knowledge of their location.

    (1) RP requests a resource

    The requesting party asks the authorization server for a resource. A description of the requested resource and the desired access is included in this request.

    TODO does the request need to also include proof of \"user engagement\"?

    (2) AS requests authorization from the RO

    The authorization server processes the request and determines if the resource owner's authorization is needed. The authorization server MUST obtain the resource owner's authorization if no previous grant of authorization is found in valid state. Otherwise, the authorization server MAY issue new access credentials to the requesting party without any interaction with the resource owner. In such a case, the authorization server MUST revoke all access credentials previously issued to the requesting party.

    The authorization server interacts with the resource owner through their existing DIDComm connection to obtain their authorization.

    (3) AS issues an access token to the RP

    The authorization server issues access credentials to the requesting party.

    (4) AS introduces the RP to the RS

    The authorization server connects the requesting party to the resource server via the Introduce Protocol[9].

    "},{"location":"features/0309-didauthz/#access-credentials","title":"Access Credentials","text":"

    Access credentials are chained delegate credentials[17] used to access the protected resources. Embedded in them is proof of the chain of delegation and authorization.

    (1) RS delegates unto RO

    The resource server issues a credential to the resource owner that identifies the latter as the owner of a subset of resources hosted by the former. It also identifies the resource owner's chosen authorization server in their respective role. The resources will also have been registered at the authorization server.

    (2) RO delegates unto AS

    The resource owner issues a grant-type credential to the authorization server at the end of each AS-RO interaction. This credential is derived from the one issued by the RS. It authorizes the AS to authorize access to the RP with a set scope.

    (3) AS issues access credential to RP

    The authorization server issues an access credential to the requesting party derived from the grant credential issued by the resource owner for this transaction. This credential encodes the same access scope as found in the parent credential.

    (4) RP presents proof of access credential to RS

    The requesting party shows proof of this access credential when attempting to access the resource on the resource server.

    "},{"location":"features/0309-didauthz/#revocation","title":"Revocation","text":"

    The resource server makes available a revocation registry and grants read/write access to both the resource owner and the authorization server.

    "},{"location":"features/0309-didauthz/#reference","title":"Reference","text":""},{"location":"features/0309-didauthz/#discovery-of-authorization-servers","title":"Discovery of authorization servers","text":"

    The resource owner advertises their chosen authorization server to other parties with a new type of service definition in their DID document[8]:

    {\n  \"@context\": [\"https://www.w3.org/ns/did/v1\", \"https://w3id.org/security/v1\"],\n  \"id\": \"did:example:123456789abcdefghi\",\n  \"publicKey\": [{\n    \"id\": \"did:example:123456789abcdefghi#keys-1\",\n    \"type\": \"Ed25519VerificationKey2018\",\n    \"controller\": \"did:example:123456789abcdefghi\",\n    \"publicKeyBase58\": \"H3C2AVvLMv6gmMNam3uVAjZpfkcJCwDwnZn6z3wXmqPV\"\n  }],\n  \"service\": [{\n    \"id\": \"did:example:123456789abcdefghi#did-authz\",\n    \"type\": \"did-authz\",\n    \"serviceEndpoint\": \"did:example:xyzabc456#auth-svc\"\n  }]\n}\n

    TODO define json-ld context for new service type

    The mechanisms by which the resource owner discovers authorization servers are beyond the scope of this specification.

    Authorization servers MUST make available a DID document containing metadata about their service endpoints and capabilities at a well-known location.

    TODO define \"well-known locations\" in several transports

    TODO register well-known URI for http transport as per IETF RFC 5785

    "},{"location":"features/0309-didauthz/#discovery-of-revocation-registry","title":"Discovery of revocation registry","text":"

    TODO

    "},{"location":"features/0309-didauthz/#resources","title":"Resources","text":""},{"location":"features/0309-didauthz/#describing-resources","title":"Describing resources","text":"

    TODO

    "},{"location":"features/0309-didauthz/#describing-access-scope","title":"Describing access scope","text":"

    TODO

    "},{"location":"features/0309-didauthz/#registering-resources","title":"Registering resources","text":"

    TODO

    "},{"location":"features/0309-didauthz/#requesting-resources","title":"Requesting resources","text":"

    TODO

    "},{"location":"features/0309-didauthz/#protocol-messages","title":"Protocol messages","text":"

    TODO

    "},{"location":"features/0309-didauthz/#gathering-consent-from-the-resource-owner","title":"Gathering consent from the resource owner","text":"

    TODO didcomm messages

    "},{"location":"features/0309-didauthz/#credentials","title":"Credentials","text":"

    TODO format of these credentials, JWTs or JSON-LDs?

    TODO

    "},{"location":"features/0309-didauthz/#drawbacks","title":"Drawbacks","text":"

    (None)

    "},{"location":"features/0309-didauthz/#rationale-and-alternatives","title":"Rationale and alternatives","text":""},{"location":"features/0309-didauthz/#prior-art","title":"Prior art","text":"

    Aries RFC 0167

    The Data Consent Lifecycle[10] is a reference implementation of data privacy agreements based on the GDPR framework[11]. The identity owner grants access to a verifier in the form of a proof of possession of a credential issued by the issuer. The identity owner may grant access to several verifiers in this manner. Access cannot be revoked on a per-verifier basis. To revoke access to a verifier, the identity owner's credential needs to be revoked, which in turn revokes all existing proofs the identity owner may have provided. The identity owner does not have the means to revoke access to a third party without directly involving the issuer.

    OAuth 2.0

    OAuth 2.0[6] is a role-based authorization framework in widespread use that enables a third-party to obtain limited access to HTTP services on behalf of a resource owner. The access token's scope is a simple data structure composed of space-delimited strings more suitable for a role-based authorization model than an attribute-based model.

    Although allowing for different types of tokens to be issued to clients as credentials, only the use of bearer tokens was formalized[12]. As a result, most implementations use bearer tokens as credentials. An expiry is optionally set on these tokens, but they nevertheless pose an unacceptable security risk in an SSI context and other contexts with high-value resources and need extra layers of security to address the risks of theft and impersonation. M. Jones and D. Hardt recommend the use of TLS to protect these tokens [12], but this transport is not guaranteed as a DIDComm message travels from the sender to the recipient. The specification for mac tokens[13] never materialized and its TLS Channel Binding Support was never specified, therefore not solving the issue of unwanted TLS termination in a hop. There is ongoing work in the draft for OAuth 2.0 Token Binding[14] that binds tokens to the cryptographic key material produced by the client, but it also relies on TLS as the means of transport.

    OpenID Connect 1.0

    OIDC[15] is \"a simple layer on top of the OAuth 2.0 protocol\" that standardizes simple data structures that contain claims about the end-user's identity.

    Being based upon OAuth 2.0, it suffers from the same security weaknesses - see the extensive section on Security Considerations that references the OAuth 2.0 Threat Model and Security Considerations[16].

    User-Managed Access 2.0

    UMA[7] is an extension to OAuth 2.0 that formalizes the authorization server's role as a delegate of the resource owner in order for the latter to grant access to requesting parties asynchronously and independently from the time of access. It relies on pre-defined resource scopes[18] and is thus more suited to role-based access control.

    "},{"location":"features/0309-didauthz/#unresolved-questions","title":"Unresolved questions","text":""},{"location":"features/0309-didauthz/#implementations","title":"Implementations","text":"

    The following lists the implementations (if any) of this RFC. Please do a pull request to add your implementation. If the implementation is open source, include a link to the repo or to the implementation within the repo. Please be consistent in the \"Name\" field so that a mechanical processing of the RFCs can generate a list of all RFCs supported by an Aries implementation.

    Name / Link Implementation Notes"},{"location":"features/0309-didauthz/#references","title":"References","text":"
    1. Matthew Davie, Dan Gisolfi, Daniel Hardman, John Jordan, Darrell O'Donnell, Drummond Reed: Aries RFC 0289: Trust over IP Stack, status PROPOSED
    2. Daniel Hardman: Aries RFC 0005 - DIDComm, status DEMONSTRATED
    3. Kyle Den Hartog, Stephen Curran, Mike Lodder: Aries RFC 0019 - Encryption Envelope, status ACCEPTED
    4. Oskar Deventer, Christian Lundkvist, Marton Csernai, Kyle Den Hartog, Markus Sabadello, Sam Curren, Dan Gisolfi, Mike Varley, Sven Hammann, John Jordan, Lovesh Harchandani, Devin Fisher, Tobias Looker, Brent Zundel, Stephen Curran: Peer DID Method Specification, W3C Editor's Draft 16 October 2019
    5. The Sovrin Foundation: The Sovrin Glossary, v2.0
    6. Ed. D. Hardt: IETF RFC 6749 - The OAuth 2.0 Authorization Framework, October 2012
    7. Ed. E. Maler, M. Machulak, J. Richer, T. Hardjono: IETF I-D - User-Managed Access (UMA) 2.0 Grant for OAuth 2.0 Authorization, February 2019
    8. Drummond Reed, Manu Sporny, Dave Longley, Christopher Allen, Ryan Grant, Markus Sabadello: Decentralized Identifiers (DIDs) v1.0, W3C First Public Working Draft 07 November 2019
    9. Daniel Hardman, Sam Curren, Stephen Curran, Tobias Looker, George Aristy: Aries RFC 0028 - Introduce Protocol 1.0, status PROPOSED
    10. Jan Lindquist, Dativa; Paul Knowles, Dativa; Mark Lizar, OpenConsent; Harshvardhan J. Pandit, ADAPT Centre, Trinity College Dublin: Aries RFC 0167 - Data Consent Lifecycle, status PROPOSED
    11. Intersoft Consulting: General Data Protection Regulation, November 2019
    12. M. Jones, D. Hardt: IETF RFC 6750 - The OAuth 2.0 Authorization Framework: Bearer Token Usage, October 2012
    13. J. Richer, W. Mills, P. Hunt: IETF I-D - OAuth 2.0 Message Authentication Code (MAC) Tokens, January 2014
    14. M. Jones, B. Campbell, J. Bradley, W. Denniss: IETF I-D - OAuth 2.0 Token Binding, October 2018
    15. N. Sakimura, J. Bradley, M. Jones, B. de Medeiros, C. Mortimore: OpenID Connect Core 1.0, November 2014
    16. T. Lodderstedt, M. McGloin, P. Hunt: OAuth 2.0 Threat Model and Security Considerations, January 2013
    17. Daniel Hardman, Lovesh Harchandani: Aries RFC 0104 - Chained Credentials, status PROPOSED
    18. E. Maler, M. Machulak, J. Richer, T. Hardjono: Federated Authorization for User-Managed Access (UMA) 2.0, February 2019
    "},{"location":"features/0317-please-ack/","title":"Aries RFC 0317: Please ACK Decorator","text":""},{"location":"features/0317-please-ack/#retirement-of-please_ack","title":"Retirement of ~please_ack","text":"

    The please_ack decorator was initially added to Aries Interop Protocol 2.0. However, this was done prior to any attempt at an implementation. When such an attempt was made, it was found that the decorator is not practical as a general purpose mechanism. The capability assumed that the feature was general purpose and could be applied outside of the protocols with which it was used; that assumption proved impossible to implement. The ~please_ack decorator cannot be implemented without altering every protocol with which it is used, and so it is not practical. Instead, any protocol that can benefit from such a feature can be extended to explicitly support it.

    For the \"on\": [\"OUTCOME\"] type of ACK, the problem manifests in two ways. First, the definition of OUTCOME is protocol (and in fact, protocol message) specific. The definition of \"complete\" for each message is specific to each message, so there is no \"general purpose\" way to know when an OUTCOME ACK is to be sent. Second, the addition of a ~please_ack decorator changes the protocol state machine for a given protocol, introducing additional states, and hence, additional state handling. Supporting \"on\": [\"OUTCOME\"] processing requires making changes to all protocols, which would be better handled on a per protocol basis, and where useful (which, it was found, is rare), adding messages and states. For example, what is the point of an extra ACK message on an OUTCOME in the middle of a protocol that itself results in the sending of the response message?

    Our experimentation found that it would be easier to achieve a general purpose \"on\": [\"RECEIPT\"] capability, but even then there were problems. Most notably, the capability is most useful when added to the last message of a protocol, where the message sender would like confirmation that the recipient got the message. However, it is precisely that use of the feature that also introduces breaking changes to the protocol state machine for the protocols to which it applies, requiring per protocol updates. So while the feature would be marginally useful in some cases, the complexity cost of the capability -- and the lack of demand for its creation -- led us to retire the entire RFC.

    For more details on the great work done by Alexander Sukhachev @alexsdsr, please see these pull requests, including both the changes proposed in the PRs and the subsequent conversations about the features.

    Many thanks to Alexander for the effort he put into trying to implement this capability.

    "},{"location":"features/0317-please-ack/#summary","title":"Summary","text":"

    Explains how one party can request an acknowledgment in order to clarify the status of a process.

    "},{"location":"features/0317-please-ack/#motivation","title":"Motivation","text":"

    An acknowledgment or ACK is one of the most common procedures in protocols of all types. The ACK message is defined in Aries RFC 0015-acks and is adopted into other protocols for use at explicit points in the execution of a protocol. In addition to receiving ACKs at predefined places in a protocol, agents also need the ability to request additional ACKs at other points in an instance of a protocol. Such requests may or may not be answered by the other party, hence the \"please\" in the name of the decorator.

    "},{"location":"features/0317-please-ack/#tutorial","title":"Tutorial","text":"

    If you are not familiar with the tutorial section of the ACK message, please review that first.

    Agents interact in very complex ways. They may use multiple transport mechanisms, across varied protocols, through long stretches of time. While we usually expect messages to arrive as sent, and to be processed as expected, a vital tool in the agent communication repertoire is the ability to request and receive acknowledgments to confirm a shared understanding.

    "},{"location":"features/0317-please-ack/#requesting-an-ack-please_ack","title":"Requesting an ack (~please_ack)","text":"

    A protocol may stipulate that an ack is always necessary in certain circumstances. Launch mechanics for spacecraft do this, because the stakes for a miscommunication are so high. In such cases, there is no need to request an ack, because it is hard-wired into the protocol definition. However, acks make a channel more chatty, and in doing so they may lead to more predictability and correlation for point-to-point communications. Requiring an ack is not always the right choice. For example, an ack should probably be optional at the end of credential issuance (\"I've received your credential. Thanks.\") or proving (\"I've received your proof, and it satisfied me. Thanks.\").

    In addition, circumstances at a given moment may make an ad hoc ack desirable even when it would normally NOT be needed. Suppose Alice likes to bid at online auctions. Normally she may submit a bid and be willing to wait for the auction to unfold organically to see the effect. But if she's bidding on a high-value item and is about to put her phone in airplane mode because her plane's ready to take off, she may want an immediate ACK that the bid was accepted.

    The dynamic need for acks is expressed with the ~please_ack message decorator. An example of the decorator looks like this:

    {\n  \"~please_ack\": {\n    \"on\": [\"RECEIPT\"]\n  }\n}\n

    This says, \"Please send me an ack as soon as you receive this message.\"

    "},{"location":"features/0317-please-ack/#examples","title":"Examples","text":"

    Suppose AliceCorp and Bob are involved in credential issuance. AliceCorp is the issuer; Bob wants to hold the issued credential.

    "},{"location":"features/0317-please-ack/#on-receipt","title":"On Receipt","text":"

    In the final required message of the issue-credential protocol, AliceCorp sends the credential to Bob. But AliceCorp wants to know for sure that Bob has received it, for its own accounting purposes. So it decorates the final message with an ack request:

    {\n  \"~please_ack\": {\n    \"on\": [\"RECEIPT\"]\n  }\n}\n

    Bob honors this request and returns an ack as soon as he receives the message and stores its payload.

    "},{"location":"features/0317-please-ack/#on-outcome","title":"On Outcome","text":"

    As in the previous example, AliceCorp wants an acknowledgement from Bob. However, in contrast to the previous example, which just requests an acknowledgement on receipt of the message, this time AliceCorp wants to know for sure that Bob verified the validity of the credential. To do this, AliceCorp decorates the issue-credential message with an ack request for the OUTCOME.

    {\n  \"~please_ack\": {\n    \"on\": [\"OUTCOME\"]\n  }\n}\n

    Bob honors this request and returns an ack as soon as he has verified the validity of the issued credential.

    "},{"location":"features/0317-please-ack/#reference","title":"Reference","text":""},{"location":"features/0317-please-ack/#please_ack-decorator","title":"~please_ack decorator","text":""},{"location":"features/0317-please-ack/#on","title":"on","text":"

    The only field of the ~please_ack decorator. Required array. Describes the circumstances under which an ack is desired. Possible values in this array include RECEIPT and OUTCOME.

    If both values are present, it means an acknowledgement is requested for both the receipt and the outcome of the message.
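    For example, a decorator requesting acknowledgements for both would look like this:

    {\n  \"~please_ack\": {\n    \"on\": [\"RECEIPT\", \"OUTCOME\"]\n  }\n}\n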

    "},{"location":"features/0317-please-ack/#receipt","title":"RECEIPT","text":"

    The RECEIPT acknowledgement is the simplest ack mechanism and requests that an ack is sent on receipt of the message. This way of requesting an ack verifies whether the other agent successfully received the message. It implicitly means the agent was able to unpack the message and see its contents.

    "},{"location":"features/0317-please-ack/#outcome","title":"OUTCOME","text":"

    The OUTCOME acknowledgement is the more advanced ack mechanism and requests that an ack is sent on the outcome of the message. By default, outcome means the message has been handled and processed without business logic playing a role in the decision.

    In the context of the issue credential protocol, by default, this would mean an ack is requested as soon as the received credential is verified to be valid. It doesn't mean the actual contents of the credential are acknowledged. For the issue credential protocol it makes more sense to send the acknowledgement after the contents of the credential have also been verified.

    Therefore, protocols can override the definition of outcome in the context of that protocol. Examples of protocols overriding this behavior are Issue Credential Protocol 2.0, Present Proof Protocol 2.0, and Revocation Notification Protocol 1.0.

    "},{"location":"features/0317-please-ack/#drawbacks","title":"Drawbacks","text":"

    None specified.

    "},{"location":"features/0317-please-ack/#rationale-and-alternatives","title":"Rationale and alternatives","text":"

    The first version of this RFC was a lot more advanced, but also introduced a lot of complexities. A lot of complex features have been removed so it could be included in AIP 2.0 in a simpler form. More advanced features from the initial RFC can be added back in when needed.

    "},{"location":"features/0317-please-ack/#prior-art","title":"Prior art","text":"

    None specified.

    "},{"location":"features/0317-please-ack/#unresolved-questions","title":"Unresolved questions","text":""},{"location":"features/0317-please-ack/#implementations","title":"Implementations","text":"

    The following lists the implementations (if any) of this RFC. Please do a pull request to add your implementation. If the implementation is open source, include a link to the repo or to the implementation within the repo. Please be consistent in the \"Name\" field so that a mechanical processing of the RFCs can generate a list of all RFCs supported by an Aries implementation.

    Name / Link Implementation Notes"},{"location":"features/0327-crypto-service/","title":"Aries RFC 0327: Crypto service Protocol 1.0","text":""},{"location":"features/0327-crypto-service/#summary","title":"Summary","text":"

    Within a decentralized data economy with a user-centric approach, the user is the one who should control data flows and all interactions, even on 3rd-party platforms. To achieve that, we start to talk about access to the data instead of ownership of it. Within the space we can identify services which deal with users' data but don't necessarily need to be able to see it. In this category we have services like archives, data vaults, data transportation (IM, email, sent files), etc. To better support privacy in such cases, this document proposes a protocol which uses underlying security mechanisms within agents, like Lox, to provide an API for cryptographic operations like asymmetric encryption/decryption, signature/verification, and delegation (proxy re-encryption), letting those services provide an additional security layer on their platforms in connection with SSI wallet agents.

    "},{"location":"features/0327-crypto-service/#motivation","title":"Motivation","text":"

    Identity management and key management are complex topics with which even big players have problems. To help companies and their products build secure and privacy-preserving services with SSI, they need a simple mechanism to get access to the cryptographic operations within the components of the wallets.

    "},{"location":"features/0327-crypto-service/#todays-best-practice-approach-to-cryptographically-secured-services","title":"Todays 'Best Practice' approach to cryptographically secured Services","text":"

    Many 3rd party services today provide solutions like secure storage, encrypted communication, and secure data transportation, and to achieve that they use secret keys to provide cryptography for their use cases. The problem is that in many cases those keys are generated and/or stored within the 3rd party services - either in the client app or in the backend - which requires the user's explicit trust in the 3rd party's good intentions.

    Even in the case that a 3rd party has the best possible intentions of keeping the user's secrets safe and private, there is still an increased risk of the user's keys leaking or being compromised while being stored with a (centralized) 3rd party service.

    Last but not least, the user's usage of multiple such cryptographically secured services would lead to the distribution of the user's secrets over different systems, where the user needs to keep track of them and manage them via different 3rd party tools.

    "},{"location":"features/0327-crypto-service/#vision-seperation-of-service-business-logic-and-identity-artefacts","title":"Vision - seperation of Service-(Business-)Logic and Identity Artefacts","text":"

    In the context of SSI and decentralized identity, the ideal solution is that the keys are generated within the user's agent and that the private (secret) key never leaves that place. This would be a clear separation of a service's business logic and the user's keys, which we also count among the user's unique sets of identifying information (identity artefacts).

    After separating these two domains, there follows the obvious need to provide a general crypto API to the user wallet, supporting generic use cases where a cryptographic layer is required in the 3rd party service's business logic, for example:

    The desired outcome would be an Agent which is able to expose a standardized Crypto Services API to external 3rd party services, which can then implement cryptographically secured applications without needing access to the actual user secrets.

    "},{"location":"features/0327-crypto-service/#tutorial","title":"Tutorial","text":""},{"location":"features/0327-crypto-service/#name-and-version","title":"Name and Version","text":"

    This defines the crypto-service protocol, version 1.0, as identified by the following PIURI:

    TODO: Add PIURI when ready\n
    "},{"location":"features/0327-crypto-service/#roles","title":"Roles","text":"

    The minimum set of roles involved in the crypto-service protocol is a sender and a receiver. The sender requests a specific cryptographic operation from the receiver, and the receiver provides the result in the form of a payload or an error. The protocol could include more roles (e.g. a proxy) which could be involved in processes like delegation (proxy re-encryption), etc.

    "},{"location":"features/0327-crypto-service/#constraints","title":"Constraints","text":"

    Each message which is sent to the agent requires an up-front established relationship between sender and receiver in the form of an authorization. This means that the sender is allowed to use only the specific keys which are meant for it. It should not be possible for the sender to trigger any operation with keys which were never used within its service.

    "},{"location":"features/0327-crypto-service/#reference","title":"Reference","text":""},{"location":"features/0327-crypto-service/#examples","title":"Examples","text":"

    Specific use case example:

    A platform providing secure document transportation between parties and archiving functionality.

    Actors:

    Here is how it could work:

    In this scenario DocuArch has no way to learn what is in the payload sent between Sender and Receiver, as only the person in possession of the private key is able to decrypt the payload - which is the Receiver. Therefore the decrypted payload is only available in the Receiver's client-side app, which communicates with the Agent on behalf of the user's DID identity.

    Such features within the Agent allow companies to build faster and more secure systems, as the identity management and key management parts come from Agents and they just interact with them via an API.

    "},{"location":"features/0327-crypto-service/#messages","title":"Messages","text":"

    Protocol: did:sov:1234;spec/crypto-service/1.0

    encrypt

        {\n        \"@id\": \"1234567889\",\n        \"@type\": \"did:sov:1234;spec/crypto-service/1.0/encrypt\",\n        \"payload\": \"Text to be encrypted\",\n        \"key_id\": \"did:example:123456789abcdefghi#keys-1\"\n\n    }\n

    decrypt

        {\n        \"@id\": \"1234567889\",\n        \"@type\": \"did:sov:1234;spec/crypto-service/1.0/decrypt\",\n        \"encryptedPayload\": \"ASDD@J(!@DJ!DASD!@F\",\n        \"key_id\": \"did:example:123456789abcdefghi#keys-1\"\n\n    }\n

    sign

        {\n        \"@id\": \"1234567889\",\n        \"@type\": \"did:sov:1234;spec/crypto-service/1.0/sign\",\n        \"payload\": \"I say so!\",\n        \"key_id\": \"did:example:123456789abcdefghi#keys-1\"\n\n    }\n

    verify

        {\n        \"@id\": \"1234567889\",\n        \"@type\": \"did:sov:1234;spec/crypto-service/1.0/verify\",\n        \"signature\": \"12312d8u182d812d9182d91827d179\",\n        \"key_id\": \"did:example:123456789abcdefghi#keys-1\"\n\n    }\n

    delegate

        {\n        \"@id\": \"1234567889\",\n        \"@type\": \"did:sov:1234;spec/crypto-service/1.0/delegate\",\n        \"delegate\": \"did:example:ihgfedcba987654321\",\n        \"key_id\": \"did:example:123456789abcdefghi#keys-1\"\n\n    }\n
    "},{"location":"features/0327-crypto-service/#message-catalog","title":"Message Catalog","text":"

    TODO: add error codes and response messages/statuses

    "},{"location":"features/0327-crypto-service/#responses","title":"Responses","text":"

    TODO

    "},{"location":"features/0327-crypto-service/#drawbacks","title":"Drawbacks","text":""},{"location":"features/0327-crypto-service/#-potentialy-expose-agent-for-different-types-of-attacts-eg-someone-would-try-to-decrypt-your-private-documents-without-you-being-notice-of-that","title":"- Potentialy expose Agent for different types of attacts: e.g. someone would try to decrypt your private documents without you being notice of that.","text":""},{"location":"features/0327-crypto-service/#rationale-and-alternatives","title":"Rationale and alternatives","text":"

    We cannot expect that every service will switch directly to DIDComm and the other features of the agents. Not all features are even desirable to have within an agent. But if the Agent can expose a base API for identity management and crypto operations, this would allow others to build much richer and more secure applications and platforms on top of it.

    We are not aware of any alternatives at the moment. Anyone?

    "},{"location":"features/0327-crypto-service/#prior-art","title":"Prior art","text":"

    A similar approach is taken in the HSM world, where an API is exposed to the outside world without exposing the keys. Here we take the same approach in the context of the KMS within an Agent.

    "},{"location":"features/0327-crypto-service/#unresolved-questions","title":"Unresolved questions","text":""},{"location":"features/0327-crypto-service/#implementations","title":"Implementations","text":"

    Implementation Notes

    Name / Link Implementation Notes"},{"location":"features/0334-jwe-envelope/","title":"Aries RFC 0334: JWE envelope 1.0","text":""},{"location":"features/0334-jwe-envelope/#summary","title":"Summary","text":"

    Agents need to use a common set of algorithms when exchanging and persisting data. This RFC supplies a cipher suite and examples for DIDComm envelopes.

    "},{"location":"features/0334-jwe-envelope/#motivation","title":"Motivation","text":"

    The goal of this RFC is to define cipher suites for Anoncrypt and Authcrypt such that we can achieve better compatibility with JOSE. We also aim to supply both a compliant suite and a constrained device suite. The compliant suite is suitable for implementations that contain AES hardware acceleration or desire to use NIST / FIPS algorithms (where possible).

    "},{"location":"features/0334-jwe-envelope/#encryption-algorithms","title":"Encryption Algorithms","text":"

    The next two sub-sections describe the encryption algorithms that must be supported. On devices with AES hardware acceleration or requiring compliance, AES GCM is the recommended algorithm. Otherwise, XChacha20Poly1305 should be used.

    "},{"location":"features/0334-jwe-envelope/#content-encryption-algorithms","title":"Content Encryption Algorithms","text":"

    The following table defines the supported content encryption algorithms for DIDComm JWE envelopes:

    Content Encryption Encryption Algorithm identifier Authcrypt/Anoncrypt Reference A256CBC-HS512 (512 bit) AES_256_CBC_HMAC_SHA_512 Authcrypt/Anoncrypt ECDH-1PU section 2.1 and RFC 7518 section 5.2.5 AES-GCM (256 bit) A256GCM Anoncrypt RFC7518 section 5.1 and more specifically RFC7518 section 5.3 XChacha20Poly1305 XC20P Anoncrypt xchacha draft 03"},{"location":"features/0334-jwe-envelope/#key-encryption-algorithms","title":"Key Encryption Algorithms","text":"

    The following table defines the supported key wrapping encryption algorithms for DIDComm JWE envelopes:

    Key Encryption Encryption algorithm identifier Anoncrypt/Authcrypt ECDH-ES + AES key wrap ECDH-ES+A256KW Anoncrypt ECDH-1PU + AES key wrap ECDH-1PU+A256KW Authcrypt"},{"location":"features/0334-jwe-envelope/#curves-support","title":"Curves support","text":"

    The following curves are supported:

    Curve Name Curve identifier X25519 (aka Curve25519) X25519 (default) NIST P256 (aka SECG secp256r1 and ANSI X9.62 prime256v1, ref here) P-256 NIST P384 (aka SECG secp384r1, ref here) P-384 NIST P521 (aka SECG secp521r1, ref here) P-521

    Other curves are optional.

    "},{"location":"features/0334-jwe-envelope/#security-consideration-for-curves","title":"Security Consideration for Curves","text":"

    As noted in the ECDH-1PU IETF draft security considerations section, all implementations must ensure the following:

    When performing an ECDH key agreement between a static private key\nand any untrusted public key, care should be taken to ensure that the\npublic key is a valid point on the same curve as the private key.\nFailure to do so may result in compromise of the static private key.\nFor the NIST curves P-256, P-384, and P-521, appropriate validation\nroutines are given in Section 5.6.2.3.3 of [NIST.800-56A]. For the\ncurves used by X25519 and X448, consult the security considerations\nof [RFC7748].\n

    "},{"location":"features/0334-jwe-envelope/#jwe-examples","title":"JWE Examples","text":"

    AES GCM encryption and key wrapping examples are found in Appendix C of the JSON Web Algorithm specs.

    The Proposed JWE Formats below lists a combination of content encryption and key wrapping algorithms formats.

    "},{"location":"features/0334-jwe-envelope/#drawbacks","title":"Drawbacks","text":""},{"location":"features/0334-jwe-envelope/#rationale-and-alternatives","title":"Rationale and alternatives","text":"

    Our approach for AuthCrypt compliance is to use the NIST approved One-Pass Unified Model for ECDH scheme described in SP 800-56A Rev. 3. The JOSE version is defined as ECDH-1PU in this IETF draft.

    Aries agents currently use the envelope described in RFC0019. This envelope uses libsodium (NaCl) encryption/decryption, which is based on Salsa20Poly1305 algorithm.

    Another prior effort towards enhancing JWE compliance is to use XChacha20Poly1305 encryption and ECDH-SS key wrapping mode. See Aries-RFCs issue-133 and the Go JWE Authcrypt package for implementation details. As ECDH-SS is not specified by JOSE, a new recipient header field, spk, was needed to contain the static encrypted public key of the sender. Additionally, (X)Chacha20Poly1305 key wrapping is also not specified by JOSE. For these reasons, this option is mentioned here as reference only.

    "},{"location":"features/0334-jwe-envelope/#jwe-formats","title":"JWE formats","text":""},{"location":"features/0334-jwe-envelope/#anoncrypt-using-ecdh-es-key-wrapping-mode-and-xc20p-content-encryption","title":"Anoncrypt using ECDH-ES key wrapping mode and XC20P content encryption","text":"
     {\n  \"protected\": base64url({\n      \"typ\": \"didcomm-envelope-enc\",\n      \"enc\": \"XC20P\", // or \"A256GCM\"\n  }),\n  \"recipients\": [\n    {\n      \"header\": {\n        \"kid\": base64url(recipient KID), // e.g: base64url(\"urn:123\") or base64url(jwk thumbprint as KID)\n        \"alg\": \"ECDH-ES+A256KW\",\n        \"epk\": { // defining X25519 key as an example JWK, but this can be EC key as well \n          \"kty\": \"OKP\",\n          \"crv\": \"X25519\",\n          \"x\": \"-3bLMSHYDG3_LVNh-MJvoYs_a2sAEPr4jwFfFjTrmUo\" // sender's ephemeral public key value raw (no padding) base64url encoded\n        },\n        \"apu\": base64url(epk.x value above),\n        \"apv\": base64url(recipients[*].header.kid)\n      },\n      \"encrypted_key\": \"Sls6zrMW335GJsJe0gJU4x1HYC4TRBZS1kTS1GATEHfH_xGpNbrYLg\"\n    }\n  ],\n  \"aad\": \"base64url(sha256(concat('.',sort([recipients[0].kid, ..., recipients[n].kid])))))\",\n  \"iv\": \"K0PfgxVxLiW0Dslx\",\n  \"ciphertext\": \"Sg\",\n  \"tag\": \"PP31yGbQGBz9zgq9kAxhCA\"\n}\n

    The typ header field is the DIDComm Transports value as mentioned in RFC-0025. That RFC states the prefix application/, but according to IANA media types the prefix is implied and therefore not needed here.

    "},{"location":"features/0334-jwe-envelope/#anoncrypt-using-ecdh-es-key-wrapping-mode-and-a256gcm-content-encryption","title":"Anoncrypt using ECDH-ES key wrapping mode and A256GCM content encryption","text":"
    {\n  \"protected\": base64url({\n          \"typ\": \"didcomm-envelope-enc\",\n          \"enc\": \"A256GCM\", // \"XC20P\"\n  }),\n  \"recipients\": [\n    {\n      \"header\": {\n        \"kid\": base64url(recipient KID),\n        \"alg\": \"ECDH-ES+XC20PKW\", // or \"ECDH-ES+A256KW\" with \"epk\" as EC key\n        \"epk\": {\n          \"kty\": \"OKP\",\n          \"crv\": \"X25519\",\n          \"x\": \"aOH-76BRwkHf0nbGokaBsO6shW9McEs6jqVXaF0GNn4\" // sender's ephemeral public key value raw (no padding) base64url encoded\n        },\n        \"apu\": base64url(epk.x value above),\n        \"apv\": base64url(recipients[*].header.kid)\n      },\n      \"encrypted_key\": \"wXzKi-XXb6fj_KSY5BR5hTUsZIiAQKrxblTo3d50B1KIeFwBR98fzQ\"\n    }\n  ],\n  \"aad\": \"base64url(sha256(concat('.',sort([recipients[0].kid, ..., recipients[n].kid])))))\",\n  \"iv\": \"9yjR8zvgeQDZFbIS\",\n  \"ciphertext\": \"EvIk_Rr6Nd-0PqQ1LGimSqbKyx_qZjGnmt6nBDdCWUcd15yp9GTeYqN_q_FfG7hsO8c\",\n  \"tag\": \"9wP3dtNyJERoR7FGBmyF-w\"\n}\n

    In the above two examples, apu is the encoded ephemeral key used to encrypt the cek stored in encrypted_key and apv is the encoded key id of the static public key of the recipient. Both are raw (no padding) base64Url encoded. kid is the value of a key ID in a DID document that should be resolvable to fetch the raw public key used.
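    The aad computation shown in these envelopes - base64url(sha256(concat('.',sort([recipients[0].kid, ..., recipients[n].kid])))) - can be sketched in Python. This is a minimal illustration, not normative; the function name and sample KIDs are not from this RFC:

```python
import base64
import hashlib


def compute_aad(recipient_kids):
    # Sort the recipient KIDs in ASCII order and join them with '.',
    # then hash, per base64url(sha256(concat('.', sort([kids])))).
    joined = ".".join(sorted(recipient_kids)).encode("utf-8")
    digest = hashlib.sha256(joined).digest()
    # Raw (no padding) base64url encoding, as used throughout these examples.
    return base64.urlsafe_b64encode(digest).rstrip(b"=").decode("ascii")
```

    Because the KID list is sorted before hashing, the result is independent of the order in which the recipients appear in the envelope.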

    "},{"location":"features/0334-jwe-envelope/#authcrypt-using-ecdh-1pu-key-wrapping-mode","title":"Authcrypt using ECDH-1PU key wrapping mode","text":"
    {\n    \"protected\": base64url({\n        \"typ\": \"didcomm-envelope-enc\",\n        \"enc\":\"A256CBC-HS512\", // or one of: \"A128CBC-HS256\", \"A192CBC-HS384\"\n        \"skid\": base64url(sender KID),\n        \"alg\": \"ECDH-1PU+A256KW\", // or \"ECDH-1PU+XC20P\" with \"epk\" as X25519 key\n        \"apu\": base64url(\"skid\" header value),\n        \"apv\": base64url(sha256(concat('.',sort([recipients[0].kid, ..., recipients[n].kid]))))),\n        \"epk\": {\n            \"kty\": \"EC\",\n            \"crv\": \"P-256\",\n            \"x\": \"gfdM68LgZWhHwdVyMAPh1oWqV_NcYGR4k7Bjk8uBGx8\",\n            \"y\": \"Gwtgz-Bl_2BQYdh4f8rd7y85LE7fyfdnb0cWyYCrAb4\"\n        }\n    }),\n    \"recipients\": [\n        {\n            \"header\": {\n                \"kid\": base64url(recipient KID)\n            },\n            \"encrypted_key\": \"base64url(encrypted CEK)\"\n        },\n       ...\n    ],\n    \"aad\": \"base64url(sha256(concat('.',sort([recipients[0].kid, ..., recipients[n].kid])))))\",\n    \"iv\": \"base64url(content encryption IV)\",\n    \"ciphertext\": \"base64url(XC20P(DIDComm payload, base64Url(json($protected)+'.'+$aad), content encryption IV, CEK))\"\n    \"tag\": \"base64url(AEAD Authentication Tag)\"\n}\n

    Here, the recipients headers represent an ephemeral key that can be used to derive the key used for AEAD decryption of the CEK, following the ECDH-1PU encryption scheme.

    The function XC20P in the example above is defined as the XChacha20Poly1305 cipher function. This can be replaced by the AES-CBC+HMAC_SHA family of cipher functions for authcrypt or the AES-GCM cipher function for anoncrypt.

    "},{"location":"features/0334-jwe-envelope/#concrete-examples","title":"Concrete examples","text":"

    See concrete anoncrypt and authcrypt examples

    "},{"location":"features/0334-jwe-envelope/#jwe-detached-mode-nested-envelopes","title":"JWE detached mode nested envelopes","text":"

    There are situations in DIDComm messaging where an envelope could be nested inside another envelope -- particularly RFC 46: Mediators and Relays. Normally nesting envelopes implies that the envelope payloads will incur additional encryption and encoding operations at each parent level in the nesting. This section describes a mechanism to extract the nested payloads outside the nesting structure to avoid these additional operations.

    "},{"location":"features/0334-jwe-envelope/#detached-mode","title":"Detached mode","text":"

    JWS defines detached mode where the payload can be removed. As stated in IETF RFC7515, this strategy has the following benefit:

    Note that this method needs no support from JWS libraries, as applications can use this method by modifying the inputs and outputs of standard JWS libraries.

    We will leverage a similar detached mode for JWE in the mechanism described below.

    "},{"location":"features/0334-jwe-envelope/#mechanism","title":"Mechanism","text":"

    Sender:

    1. Creates the \"final\" JWE intended for the recipient (normal JWE operation).
    2. Extracts the ciphertext and replaces it with an empty string.
    3. Creates the nested envelopes around the \"final\" JWE (but with the empty string ciphertext).
    4. Sends the nested envelope (normal JWE) plus the ciphertext from the \"final\" JWE.

    Mediator:

    1. Decrypts its layer (normal JWE operation). The detached ciphertexts are filtered out prior to invoking the JWE library (normal JWE structure).
    2. Removes the next detached ciphertext from the structure and inserts it back into the ciphertext field for the next nesting level.

    Receiver:

    1. Decrypts the \"final\" JWE (normal JWE operation).

    The detached ciphertext steps are repeated at each nesting level. In this case, an array of ciphertexts is sent along with the nested envelope.

    This solution has the following characteristics:

    "},{"location":"features/0334-jwe-envelope/#serialization","title":"Serialization","text":"

    The extracted ciphertext serialization format should be given additional thought for both compact and JSON modes. As a starting point:

    For illustration, the following compact serialization represents nesting due to two mediators (the second mediator being closest to the Receiver).

    First Mediator receives:

      BASE64URL(UTF8(JWE Protected Header for First Mediator)) || '.' ||\n  BASE64URL(JWE Encrypted Key for First Mediator) || '.' ||\n  BASE64URL(JWE Initialization Vector for First Mediator) || '.' ||\n  BASE64URL(JWE Ciphertext for First Mediator) || '.' ||\n  BASE64URL(JWE Authentication Tag for First Mediator) || '.' ||\n  BASE64URL(JWE Ciphertext for Receiver) || '.' ||\n  BASE64URL(JWE Ciphertext for Second Mediator)\n

    Second Mediator receives:

      BASE64URL(UTF8(JWE Protected Header for Second Mediator)) || '.' ||\n  BASE64URL(JWE Encrypted Key for Second Mediator) || '.' ||\n  BASE64URL(JWE Initialization Vector for Second Mediator) || '.' ||\n  BASE64URL(JWE Ciphertext for Second Mediator) || '.' ||\n  BASE64URL(JWE Authentication Tag for Second Mediator) || '.' ||\n  BASE64URL(JWE Ciphertext for Receiver)\n

    Finally, the Receiver has a normal JWE (as usual):

      BASE64URL(UTF8(JWE Protected Header for Receiver)) || '.' ||\n  BASE64URL(JWE Encrypted Key for Receiver) || '.' ||\n  BASE64URL(JWE Initialization Vector for Receiver) || '.' ||\n  BASE64URL(JWE Ciphertext for Receiver) || '.' ||\n  BASE64URL(JWE Authentication Tag for Receiver)\n

    This illustration extends the serialization shown in RFC 7516.
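    The mediator's reattachment step over this compact serialization can be sketched as follows. This is an illustrative sketch under the ordering shown above (Receiver's detached ciphertext first, the next hop's last); the function and variable names are assumptions, not part of the RFC:

```python
def reattach(inner_jwe, detached):
    # inner_jwe: compact JWE recovered from this hop's ciphertext, with an
    #            empty ciphertext segment (header.key.iv..tag).
    # detached:  trailing detached-ciphertext segments as received, ordered
    #            with the Receiver's first and the next hop's last.
    detached = list(detached)  # avoid mutating the caller's list
    parts = inner_jwe.split(".")
    assert len(parts) == 5 and parts[3] == "", "expected empty ciphertext slot"
    parts[3] = detached.pop()  # the last segment belongs to the next hop
    rest = "." + ".".join(detached) if detached else ""
    return ".".join(parts) + rest
```

    With two nesting levels remaining, reattach("hdr.key.iv..tag", ["ct_receiver", "ct_mediator2"]) yields "hdr.key.iv.ct_mediator2.tag.ct_receiver", matching what the Second Mediator receives above.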

    "},{"location":"features/0334-jwe-envelope/#prior-art","title":"Prior art","text":""},{"location":"features/0334-jwe-envelope/#unresolved-questions","title":"Unresolved questions","text":""},{"location":"features/0334-jwe-envelope/#implementations","title":"Implementations","text":"

    The following lists the implementations (if any) of this RFC. Please do a pull request to add your implementation. If the implementation is open source, include a link to the repo or to the implementation within the repo. Please be consistent in the \"Name\" field so that a mechanical processing of the RFCs can generate a list of all RFCs supported by an Aries implementation.

    Name / Link Implementation Notes

    Note: Aries Framework - Go is almost done with a first draft implementation of this RFC.

    "},{"location":"features/0334-jwe-envelope/anoncrypt-examples/","title":"Table of Contents","text":"

    Created by gh-md-toc

    "},{"location":"features/0334-jwe-envelope/anoncrypt-examples/#anoncrypt-jwe-concrete-examples","title":"Anoncrypt JWE Concrete examples","text":"

    The following examples are for the JWE anoncrypt packer encrypting the payload secret message, with the aad value set as the concatenation of the recipients' KIDs (ASCII sorted) joined by . for non-compact serializations (JWE compact serializations don't have AAD).

    "},{"location":"features/0334-jwe-envelope/anoncrypt-examples/#notes","title":"Notes","text":""},{"location":"features/0334-jwe-envelope/anoncrypt-examples/#1-a256gcm-content-encryption","title":"1 A256GCM Content Encryption","text":"

    The packer generates the following protected headers for A256GCM content encryption in the examples below:

    - Generated protected headers: {\"cty\":\"application/didcomm-plain+json\",\"enc\":\"A256GCM\",\"typ\":\"application/didcomm-encrypted+json\"}
    - Raw (no padding) base64URL encoded: eyJjdHkiOiJhcHBsaWNhdGlvbi9kaWRjb21tLXBsYWluK2pzb24iLCJlbmMiOiJBMjU2R0NNIiwidHlwIjoiYXBwbGljYXRpb24vZGlkY29tbS1lbmNyeXB0ZWQranNvbiJ9

    "},{"location":"features/0334-jwe-envelope/anoncrypt-examples/#11-multi-recipients-jwes","title":"1.1 Multi recipients JWEs","text":""},{"location":"features/0334-jwe-envelope/anoncrypt-examples/#111-nist-p-256-keys","title":"1.1.1 NIST P-256 keys","text":""},{"location":"features/0334-jwe-envelope/anoncrypt-examples/#112-nist-p-384-keys","title":"1.1.2 NIST P-384 keys","text":""},{"location":"features/0334-jwe-envelope/anoncrypt-examples/#113-nist-p-521-keys","title":"1.1.3 NIST P-521 keys","text":""},{"location":"features/0334-jwe-envelope/anoncrypt-examples/#114-x25519-keys","title":"1.1.4 X25519 keys","text":""},{"location":"features/0334-jwe-envelope/anoncrypt-examples/#12-single-recipient-jwes","title":"1.2 Single Recipient JWEs","text":"

    Packing a message with 1 recipient using the Flattened JWE JSON serialization and Compact JWE serialization formats as mentioned in the notes above.

    "},{"location":"features/0334-jwe-envelope/anoncrypt-examples/#121-nist-p-256-key","title":"1.2.1 NIST P-256 key","text":""},{"location":"features/0334-jwe-envelope/anoncrypt-examples/#122-nist-p-384-key","title":"1.2.2 NIST P-384 key","text":""},{"location":"features/0334-jwe-envelope/anoncrypt-examples/#123-nist-p-521-key","title":"1.2.3 NIST P-521 key","text":""},{"location":"features/0334-jwe-envelope/anoncrypt-examples/#124-x25519-key","title":"1.2.4 X25519 key","text":""},{"location":"features/0334-jwe-envelope/anoncrypt-examples/#2-xc20p-content-encryption","title":"2 XC20P content encryption","text":""},{"location":"features/0334-jwe-envelope/anoncrypt-examples/#21-multi-recipients-jwes","title":"2.1 Multi recipients JWEs","text":"

    The packer generates the following protected headers for XC20P content encryption in the examples below:

    - Generated protected headers: {\"cty\":\"application/didcomm-plain+json\",\"enc\":\"XC20P\",\"typ\":\"application/didcomm-encrypted+json\"}
    - Raw (no padding) base64URL encoded: eyJjdHkiOiJhcHBsaWNhdGlvbi9kaWRjb21tLXBsYWluK2pzb24iLCJlbmMiOiJYQzIwUCIsInR5cCI6ImFwcGxpY2F0aW9uL2RpZGNvbW0tZW5jcnlwdGVkK2pzb24ifQ

    The same notes above apply here.

    "},{"location":"features/0334-jwe-envelope/anoncrypt-examples/#211-nist-p-256-keys","title":"2.1.1 NIST P-256 keys","text":""},{"location":"features/0334-jwe-envelope/anoncrypt-examples/#212-nist-p-384-keys","title":"2.1.2 NIST P-384 keys","text":""},{"location":"features/0334-jwe-envelope/anoncrypt-examples/#213-nist-p-521-keys","title":"2.1.3 NIST P-521 keys","text":""},{"location":"features/0334-jwe-envelope/anoncrypt-examples/#214-x25519-keys","title":"2.1.4 X25519 keys","text":""},{"location":"features/0334-jwe-envelope/anoncrypt-examples/#22-single-recipient-jwes","title":"2.2 Single Recipient JWEs","text":"

    Packing a message for a single recipient using the Flattened JWE JSON serialization and the Compact JWE serialization formats, as mentioned in the notes above.

    "},{"location":"features/0334-jwe-envelope/anoncrypt-examples/#221-nist-p-256-key","title":"2.2.1 NIST P-256 key","text":""},{"location":"features/0334-jwe-envelope/anoncrypt-examples/#222-nist-p-384-key","title":"2.2.2 NIST P-384 key","text":""},{"location":"features/0334-jwe-envelope/anoncrypt-examples/#223-nist-p-521-key","title":"2.2.3 NIST P-521 key","text":""},{"location":"features/0334-jwe-envelope/anoncrypt-examples/#224-x25519-key","title":"2.2.4 X25519 key","text":""},{"location":"features/0334-jwe-envelope/authcrypt-examples/","title":"Table of Contents","text":"

    Created by gh-md-toc

    "},{"location":"features/0334-jwe-envelope/authcrypt-examples/#authcrypt-jwe-concrete-examples","title":"Authcrypt JWE Concrete examples","text":"

    The following examples use the JWE authcrypt packer to encrypt the payload secret message, with the aad value set to the concatenation of the recipients' KIDs (ASCII sorted) joined by . for non-compact serializations (JWE Compact serialization has no AAD).

    "},{"location":"features/0334-jwe-envelope/authcrypt-examples/#notes","title":"Notes","text":""},{"location":"features/0334-jwe-envelope/authcrypt-examples/#1-a256gcm-content-encryption","title":"1 A256GCM Content Encryption","text":""},{"location":"features/0334-jwe-envelope/authcrypt-examples/#11-multi-recipients-jwes","title":"1.1 Multi recipients JWEs","text":""},{"location":"features/0334-jwe-envelope/authcrypt-examples/#111-nist-p-256-keys","title":"1.1.1 NIST P-256 keys","text":"

    The packer generates the following protected headers that include the skid: - Generated protected headers: {\"cty\":\"application/didcomm-plain+json\",\"enc\":\"A256GCM\",\"skid\":\"6PBTUbcLB7-Z4fuAFn42oC1PaMsNmjheq1FeZEUgV_8\",\"typ\":\"application/didcomm-encrypted+json\"} - raw (no padding) base64URL encoded: eyJjdHkiOiJhcHBsaWNhdGlvbi9kaWRjb21tLXBsYWluK2pzb24iLCJlbmMiOiJBMjU2R0NNIiwic2tpZCI6IjZQQlRVYmNMQjctWjRmdUFGbjQyb0MxUGFNc05tamhlcTFGZVpFVWdWXzgiLCJ0eXAiOiJhcHBsaWNhdGlvbi9kaWRjb21tLWVuY3J5cHRlZCtqc29uIn0

    "},{"location":"features/0334-jwe-envelope/authcrypt-examples/#112-nist-p-384-keys","title":"1.1.2 NIST P-384 keys","text":"

    The packer generates the following protected headers that include the skid: - Generated protected headers: {\"cty\":\"application/didcomm-plain+json\",\"enc\":\"A256GCM\",\"skid\":\"0Bz8yRwu9eC8Gi7cYOwAKMJ8jysInhAtwH8k8m9MX04\",\"typ\":\"application/didcomm-encrypted+json\"} - raw (no padding) base64URL encoded: eyJjdHkiOiJhcHBsaWNhdGlvbi9kaWRjb21tLXBsYWluK2pzb24iLCJlbmMiOiJBMjU2R0NNIiwic2tpZCI6IjBCejh5Und1OWVDOEdpN2NZT3dBS01KOGp5c0luaEF0d0g4azhtOU1YMDQiLCJ0eXAiOiJhcHBsaWNhdGlvbi9kaWRjb21tLWVuY3J5cHRlZCtqc29uIn0 - Sender key JWK format:

    {\n  \"kty\": \"EC\",\n  \"crv\": \"P-384\",\n  \"x\": \"UW1LtMXuZdFS0gyp0_F19uxHqECvCcJA7SmeeuRSSc_PQfsbZWXt5L0KyLYpNIQb\",\n  \"y\": \"FBdPcUvanB7igwkX0NN5rOvH3OKZ1gQHhcad7cCy6QNYKKz7lBWUUOmzypee31pS\",\n  \"d\": \"wrXW0wsFKjvpTWqOAd1mohRublQs4P014-U4_K-eTRFmzhkyLJqNn91dH_AHUc4-\"\n}\n
    - Sender kid (jwk thumbprint raw base64 URL encoded): 0Bz8yRwu9eC8Gi7cYOwAKMJ8jysInhAtwH8k8m9MX04 - Recipient 1 key JWK format:
    {\n  \"kty\": \"EC\",\n  \"crv\": \"P-384\",\n  \"x\": \"k3W_RR59uUG3HFlhhqNNmBDyFdMtWHxKAsJaxLBqQgQer3d3aAN-lfdxzGnHtwj1\",\n  \"y\": \"VMSy5zxFEGaGRINailLTUH6NlP0JO2qu0j0_UbS7Ng1b8JkzHDnDbjGgsLqVJaMM\",\n  \"d\": \"iM5K8uqNvFYJnuToMDBguGwUIlu1erT-K0g7NYtJrQnHZOumS8yIC4MCNC60Ch91\"\n}\n
    - Recipient 1 kid (jwk thumbprint raw base64 URL encoded): obCHRLVDx634Cax_Kr3B8fd_-xj5kAj0r0Kvvvmq1z8 - Recipient 2 key JWK format:
    {\n  \"kty\": \"EC\",\n  \"crv\": \"P-384\",\n  \"x\": \"W3iUHCzh_PWzUKONKeHwIKcjWNN--c7OlL2H23lV13C9tlkqOleFUmioW-AeitEk\",\n  \"y\": \"CIzVD6KsuDLyKQPm0r62LPZikkT2kiXJpLjcVO3op2kgePQkZ31xniKE0VbUBnTH\",\n  \"d\": \"V_vQwOqHVCGxSjX_dN8H5VXvOGYDRTGI00mNXwB0I0mKDd8kqCJmNtGlf-eUrbub\"\n}\n
    - Recipient 2 kid (jwk thumbprint raw base64 URL encoded): PfuTIXG60dvOwnFOfMxJ0i59_L7vqNytROX_bLRR-3M - Recipient 3 key JWK format:
    {\n  \"kty\": \"EC\",\n  \"crv\": \"P-384\",\n  \"x\": \"bsX8qtEtj5IDLp9iDUKlgdu_3nluupFtFBrfIK1nza1bGZQRlZ3JG3PdBzVAoePz\",\n  \"y\": \"QX_2v0BHloNS7iWoB4CcO9UWHdtirMVmbNcB8ZGczCJOfUyjYcQxGr0RU_tGkFC4\",\n  \"d\": \"rQ-4ZmWn09CsCqRQJhpQhDeUZXeZ3cy_Pei-fchVPFTa2FnAzvjwEF2Nsm2f3MmR\"\n}\n
    - Recipient 3 kid (jwk thumbprint raw, no padding, base64 URL encoded): VTVlkyBsoW4ey0sh7TMJBErLGeBeKQsOttFRrXD6eqI - List of kids used for AAD for the above recipients (sorted kid values joined with .): PfuTIXG60dvOwnFOfMxJ0i59_L7vqNytROX_bLRR-3M.VTVlkyBsoW4ey0sh7TMJBErLGeBeKQsOttFRrXD6eqI.obCHRLVDx634Cax_Kr3B8fd_-xj5kAj0r0Kvvvmq1z8 - Resulting AAD value (sha256 of above list raw, no padding, base64 URL encoded): m72Q9j28hFk0imbFVzqY4KfTE77L8itJoX75N3hwiwA - Finally, packing the payload outputs the following JWE (pretty printed for readability):
    {\n  \"protected\": \"eyJjdHkiOiJhcHBsaWNhdGlvbi9kaWRjb21tLXBsYWluK2pzb24iLCJlbmMiOiJBMjU2R0NNIiwic2tpZCI6IjBCejh5Und1OWVDOEdpN2NZT3dBS01KOGp5c0luaEF0d0g4azhtOU1YMDQiLCJ0eXAiOiJhcHBsaWNhdGlvbi9kaWRjb21tLWVuY3J5cHRlZCtqc29uIn0\",\n  \"recipients\": [\n    {\n      \"header\": {\n        \"alg\": \"ECDH-1PU+A256KW\",\n        \"apu\": \"MEJ6OHlSd3U5ZUM4R2k3Y1lPd0FLTUo4anlzSW5oQXR3SDhrOG05TVgwNA\",\n        \"apv\": \"b2JDSFJMVkR4NjM0Q2F4X0tyM0I4ZmRfLXhqNWtBajByMEt2dnZtcTF6OA\",\n        \"kid\": \"obCHRLVDx634Cax_Kr3B8fd_-xj5kAj0r0Kvvvmq1z8\",\n        \"epk\": {\n          \"kty\": \"EC\",\n          \"crv\": \"P-384\",\n          \"x\": \"cjJHtV4VkMCww9ig94-_4e4yMfo2WI4Rh4dZh6NkYFvz-EGylA7RLSO5TRC-JJ_G\",\n          \"y\": \"RJe2QisAYpfuTWTV6KVeoLGshsJqYokbcSUqdMxrFGXSp4ZMNrW4yj410Xsn6hy6\"\n        }\n      },\n      \"encrypted_key\": \"o0ZZ_xNtmUPcpQAK3kzjOLp8xWBJ31tr-ORQjXtwpqgTuvM_nvhk_w\"\n    },\n    {\n      \"header\": {\n        \"alg\": \"ECDH-1PU+A256KW\",\n        \"apu\": \"MEJ6OHlSd3U5ZUM4R2k3Y1lPd0FLTUo4anlzSW5oQXR3SDhrOG05TVgwNA\",\n        \"apv\": \"b2JDSFJMVkR4NjM0Q2F4X0tyM0I4ZmRfLXhqNWtBajByMEt2dnZtcTF6OA\",\n        \"kid\": \"PfuTIXG60dvOwnFOfMxJ0i59_L7vqNytROX_bLRR-3M\",\n        \"epk\": {\n          \"kty\": \"EC\",\n          \"crv\": \"P-384\",\n          \"x\": \"u1HYhdUJGx49J6wSLYM_JLHTkJrkR7wMSm5uYZMH7ZpcC3qF8MUyKTuKN0FGCBcN\",\n          \"y\": \"K-XI-KAGd2jHebNq44yQrDA6Ubs5M99mIlre0chzI13bSLDOuUG4RJ8yjYjXysWF\"\n        }\n      },\n      \"encrypted_key\": \"iCV1_peiRwnsrrBQWmp7GOd-taee-Yk8t6XqJCZPGziglDpGBu_ZhA\"\n    },\n    {\n      \"header\": {\n        \"alg\": \"ECDH-1PU+A256KW\",\n        \"apu\": \"MEJ6OHlSd3U5ZUM4R2k3Y1lPd0FLTUo4anlzSW5oQXR3SDhrOG05TVgwNA\",\n        \"apv\": \"b2JDSFJMVkR4NjM0Q2F4X0tyM0I4ZmRfLXhqNWtBajByMEt2dnZtcTF6OA\",\n        \"kid\": \"VTVlkyBsoW4ey0sh7TMJBErLGeBeKQsOttFRrXD6eqI\",\n        \"epk\": {\n          \"kty\": \"EC\",\n          \"crv\": \"P-384\",\n          \"x\": 
\"Twps_QU6ShP18uQFNCcdOx9sU9YrHBznNnSbhQD474tLUcnslq5Trubq3ogp-LTX\",\n          \"y\": \"oSES1a5xve9e-lKQ3NMN5_CW9Sii9rTorqUMggDzodLsRGm0Jud3HAy2-uE956Xq\"\n        }\n      },\n      \"encrypted_key\": \"dLDKyXeZJDcB_i1Tnn_EUxqCc2ukneaummXF_FwcbpnMH8B0eVizvA\"\n    }\n  ],\n  \"aad\": \"m72Q9j28hFk0imbFVzqY4KfTE77L8itJoX75N3hwiwA\",\n  \"iv\": \"nuuuri2fyNl3jBo6\",\n  \"ciphertext\": \"DCWevJuEo5dx-MmqPvw\",\n  \"tag\": \"Pyt1S_Smg9Pnd1u_5Z7nbA\"\n}\n

    "},{"location":"features/0334-jwe-envelope/authcrypt-examples/#113-nist-p-521-keys","title":"1.1.3 NIST P-521 keys","text":"

    The packer generates the following protected headers that include the skid: - Generated protected headers: {\"cty\":\"application/didcomm-plain+json\",\"enc\":\"A256GCM\",\"skid\":\"oq-WBIGQm-iHiNRj6nId4-E1QtY8exzp8C56SziUfeU\",\"typ\":\"application/didcomm-encrypted+json\"} - raw (no padding) base64URL encoded: eyJjdHkiOiJhcHBsaWNhdGlvbi9kaWRjb21tLXBsYWluK2pzb24iLCJlbmMiOiJBMjU2R0NNIiwic2tpZCI6Im9xLVdCSUdRbS1pSGlOUmo2bklkNC1FMVF0WThleHpwOEM1NlN6aVVmZVUiLCJ0eXAiOiJhcHBsaWNhdGlvbi9kaWRjb21tLWVuY3J5cHRlZCtqc29uIn0 - Sender key JWK format:

    {\n  \"kty\": \"EC\",\n  \"crv\": \"P-521\",\n  \"x\": \"AXKDGPnD6hlQIre8aEeu33bQffkfl-eQfQXgzNXQX7XFYt5GKA1N6w4-f0_Ci7fQNKGkQuCoAu5-6CNk9M_cHiDi\",\n  \"y\": \"Ae4-APhoZAmM99MdY9io9IZA43dN7dA006wlFb6LJ9bcusJOi5R-o3o3FhCjt5KTv_JxYbo6KU4PsBwQ1eeKyJ0U\",\n  \"d\": \"AP9l2wmQ85P5XD84CkEQVWHaX_46EDvHxLWHEKsHFSQYjEh6BDSuyy1TUNv68v8kpbLCDjvsBc3cIBqC4_T1r4pU\"\n}\n
    - Sender kid (jwk thumbprint raw base64 URL encoded): oq-WBIGQm-iHiNRj6nId4-E1QtY8exzp8C56SziUfeU - Recipient 1 key JWK format:
    {\n  \"kty\": \"EC\",\n  \"crv\": \"P-521\",\n  \"x\": \"ALmUHkd9Gi2NApJojNzzA34Qdd1-KLnq6jd2UJ9wl-xJzTQ2leg8qi3-hrFs7NqNfxqO6vE5bBoWYFeAcf3LqJOU\",\n  \"y\": \"AN-MutmkAXGzlgzSQJRnctHDcjQQNpRek-8BeqyUDXdZKNGKSMEAzw6Hnl3VdvsvihQfrxcajpx5PSnwxbbdakHq\",\n  \"d\": \"AKv-YbKdI6y8NRMP-e17-RjZyRTfGf0Xh9Og5g7q7aq0xS2mO59ttIJ67XHW5SPTBQDbltdUcydKroWNUIGvhKNv\"\n}\n
    - Recipient 1 kid (jwk thumbprint raw base64 URL encoded): wax1T_hGUvM0NmlbFJi2RizQ_gWajumI5j0Hx7CbgAw - Recipient 2 key JWK format:
    {\n  \"kty\": \"EC\",\n  \"crv\": \"P-521\",\n  \"x\": \"ALmUHkd9Gi2NApJojNzzA34Qdd1-KLnq6jd2UJ9wl-xJzTQ2leg8qi3-hrFs7NqNfxqO6vE5bBoWYFeAcf3LqJOU\",\n  \"y\": \"AN-MutmkAXGzlgzSQJRnctHDcjQQNpRek-8BeqyUDXdZKNGKSMEAzw6Hnl3VdvsvihQfrxcajpx5PSnwxbbdakHq\",\n  \"d\": \"AKv-YbKdI6y8NRMP-e17-RjZyRTfGf0Xh9Og5g7q7aq0xS2mO59ttIJ67XHW5SPTBQDbltdUcydKroWNUIGvhKNv\"\n}\n
    - Recipient 2 kid (jwk thumbprint raw base64 URL encoded): XmLVV-CqMkTGQIe6-KecWZWtZVwORTMP2y5aqMPV7P4 - Recipient 3 key JWK format:
    {\n  \"kty\": \"EC\",\n  \"crv\": \"P-521\",\n  \"x\": \"AHCbpo-299Q0Fk71CtBoPu-40-Z0UOu4cGZfgtHHwcu3ciMWVR8IWF4bgvFpAPfKG8Dqx7JJWO8uEgLE67A7aQOL\",\n  \"y\": \"AQ_JBjS3lt8zz3njFhUoJwEdSJMyrSfGPCLpaWkKuRo25k3im-7IjY8T43gvzZXYwV3PKKR3iJ1jnQCrYmfRrmva\",\n  \"d\": \"ACgCw3U3eWTYD5vcygoOpoGPost9TojYJH9FllyRuqwlS3L8dkZu7vKhFyoEg6Bo8AqcOUj5Mtgxhd6Wu02YvqK3\"\n}\n
    - Recipient 3 kid (jwk thumbprint raw, no padding, base64 URL encoded): pRJtTY7V1pClPu8WEgEZonzaHq3K0El9Vcb8qmjucSg - List of kids used for AAD for the above recipients (sorted kid values joined with .): S8s7FFL7f0fUMXt93WOWC-3PJrV1iuAmB_ZlCDyjXqs.XmLVV-CqMkTGQIe6-KecWZWtZVwORTMP2y5aqMPV7P4.pRJtTY7V1pClPu8WEgEZonzaHq3K0El9Vcb8qmjucSg - Resulting AAD value (sha256 of above list raw, no padding, base64 URL encoded): tOS8nLSCERw2V9WOZVo6cenGuM4DJvHse1dsvTk8_As - Finally, packing the payload outputs the following JWE (pretty printed for readability):
    {\n  \"protected\": \"eyJjdHkiOiJhcHBsaWNhdGlvbi9kaWRjb21tLXBsYWluK2pzb24iLCJlbmMiOiJBMjU2R0NNIiwic2tpZCI6Im9xLVdCSUdRbS1pSGlOUmo2bklkNC1FMVF0WThleHpwOEM1NlN6aVVmZVUiLCJ0eXAiOiJhcHBsaWNhdGlvbi9kaWRjb21tLWVuY3J5cHRlZCtqc29uIn0\",\n  \"recipients\": [\n    {\n      \"header\": {\n        \"alg\": \"ECDH-1PU+A256KW\",\n        \"apu\": \"b3EtV0JJR1FtLWlIaU5SajZuSWQ0LUUxUXRZOGV4enA4QzU2U3ppVWZlVQ\",\n        \"apv\": \"WG1MVlYtQ3FNa1RHUUllNi1LZWNXWld0WlZ3T1JUTVAyeTVhcU1QVjdQNA\",\n        \"kid\": \"XmLVV-CqMkTGQIe6-KecWZWtZVwORTMP2y5aqMPV7P4\",\n        \"epk\": {\n          \"kty\": \"EC\",\n          \"crv\": \"P-521\",\n          \"x\": \"AP630J9yi2UFBfRWKucXB8eu9-SSKbbbD1fzFhLgbI3xTRTRNMGm-U5EGHbplMLsOfP2pNxtAgo2-d6abiZiD6gg\",\n          \"y\": \"AE1Grtp1iFvySLN4yHVvE0kYWChqVfkO_kHEMujjL6vVu_AAOvl3aogquLv1zgduitCPbKRTno89r3rv0L0Kuj0M\"\n        }\n      },\n      \"encrypted_key\": \"FSYpXFfgPlSfj91VFQ4zAs0Wb3CEpWcBcGeW4nld9szVfb_WRbqTtA\"\n    },\n    {\n      \"header\": {\n        \"alg\": \"ECDH-1PU+A256KW\",\n        \"apu\": \"b3EtV0JJR1FtLWlIaU5SajZuSWQ0LUUxUXRZOGV4enA4QzU2U3ppVWZlVQ\",\n        \"apv\": \"WG1MVlYtQ3FNa1RHUUllNi1LZWNXWld0WlZ3T1JUTVAyeTVhcU1QVjdQNA\",\n        \"kid\": \"S8s7FFL7f0fUMXt93WOWC-3PJrV1iuAmB_ZlCDyjXqs\",\n        \"epk\": {\n          \"kty\": \"EC\",\n          \"crv\": \"P-521\",\n          \"x\": \"Acw0XM1IZl63ltysb-ivw8zBhZ-Wz54SaXM_vGGea8Sa5w6VWdZflp1tibzHkfu4novFFpNbKtnCKi-28AqQnOYZ\",\n          \"y\": \"AajoBj0KMrlaIA17RKnShFNzIb1S81oLYZu5MXzAg-XvT8_q83dXajOCiYJLo3taUvHTlcPjkHMG3_8442DgWpU_\"\n        }\n      },\n      \"encrypted_key\": \"3ct4awH6xyp9BjA74Q_j6ot6F32okEYXbS2e6NIkiAgs-JGyEPWoxw\"\n    },\n    {\n      \"header\": {\n        \"alg\": \"ECDH-1PU+A256KW\",\n        \"apu\": \"b3EtV0JJR1FtLWlIaU5SajZuSWQ0LUUxUXRZOGV4enA4QzU2U3ppVWZlVQ\",\n        \"apv\": \"WG1MVlYtQ3FNa1RHUUllNi1LZWNXWld0WlZ3T1JUTVAyeTVhcU1QVjdQNA\",\n        \"kid\": \"pRJtTY7V1pClPu8WEgEZonzaHq3K0El9Vcb8qmjucSg\",\n        \"epk\": 
{\n          \"kty\": \"EC\",\n          \"crv\": \"P-521\",\n          \"x\": \"APu-ArpY-GUntHG7BzTvUauKVP_YpCcVnZFX6r_VvYY2iPbFSZYxvUdUbX3TGK-Q92rTHNaNnutjbPcrCaBpJecM\",\n          \"y\": \"AONhGq1vGU20Wdrx1FT5SBdLOIvqOK_pxhTJZhS0Vwi_JYQdKN6PHrX9GyJ23ZhaY3bBKX6V2uzRJzV8Qam1FUbz\"\n        }\n      },\n      \"encrypted_key\": \"U51txv9yfZASl8tlT7GbNtLjeAqTHUVT4O9MEqBKaYIdAcA7Qd7dnw\"\n    }\n  ],\n  \"aad\": \"tOS8nLSCERw2V9WOZVo6cenGuM4DJvHse1dsvTk8_As\",\n  \"iv\": \"LJl-9ygxPGMAmVHP\",\n  \"ciphertext\": \"HOfi-W7mcQv93scr1z8\",\n  \"tag\": \"zaM6OfzhVhYCsqD2VW5ztw\"\n}\n

    "},{"location":"features/0334-jwe-envelope/authcrypt-examples/#114-x25519-keys","title":"1.1.4 X25519 keys","text":"

    The packer generates the following protected headers that include the skid: - Generated protected headers: {\"cty\":\"application/didcomm-plain+json\",\"enc\":\"A256GCM\",\"skid\":\"X5INSMIv_w4Q7pljH7xjeUrRAKiBGHavSmOYyyiRugc\",\"typ\":\"application/didcomm-encrypted+json\"} - raw (no padding) base64URL encoded: eyJjdHkiOiJhcHBsaWNhdGlvbi9kaWRjb21tLXBsYWluK2pzb24iLCJlbmMiOiJBMjU2R0NNIiwic2tpZCI6Ilg1SU5TTUl2X3c0UTdwbGpIN3hqZVVyUkFLaUJHSGF2U21PWXl5aVJ1Z2MiLCJ0eXAiOiJhcHBsaWNhdGlvbi9kaWRjb21tLWVuY3J5cHRlZCtqc29uIn0 - Sender key JWK format:

    {\n  \"kty\": \"OKP\",\n  \"crv\": \"X25519\",\n  \"x\": \"WKkktGWkUB9hDITcqa1Z6MC8rcWy8fWtxuT7xwQF1lw\",\n  \"d\": \"-LEcVt6bW_ah9gY7H_WknTsg1MXq8yc42SrSJhqP0Vo\"\n}\n
    - Sender kid (jwk thumbprint raw base64 URL encoded): X5INSMIv_w4Q7pljH7xjeUrRAKiBGHavSmOYyyiRugc - Recipient 1 key JWK format:
    {\n  \"kty\": \"OKP\",\n  \"crv\": \"X25519\",\n  \"x\": \"NJzDtIa7vjz-isjaI-6GKGDe2EUx26-D44d6jLILeBI\",\n  \"d\": \"MEBNdr6Tpb0XfD60NeHby-Tkmlpgr7pvVe7Q__sBbGw\"\n}\n
    - Recipient 1 kid (jwk thumbprint raw base64 URL encoded): 2UR-nzYjVhsq0cZakWjE38-wUdG0S2EIrLZ8Eh0KVO0 - Recipient 2 key JWK format:
    {\n  \"kty\": \"OKP\",\n  \"crv\": \"X25519\",\n  \"x\": \"_aiA8rwrayc2k9EL-mkqtSh8onyl_-EzVif3L-q-R20\",\n  \"d\": \"ALBfdypF_lAbBtWXhwvq9Rs7TGjcLd-iuDh0s3yWr2Y\"\n}\n
    - Recipient 2 kid (jwk thumbprint raw base64 URL encoded): dvDd4h1rHj-onj-Xz9O1KRIgkMhh3u23d-94brHbBKo - Recipient 3 key JWK format:
    {\n  \"kty\": \"OKP\",\n  \"crv\": \"X25519\",\n  \"x\": \"zuHJfrIarLGFga0OwZqDlvlI5P1bb9DFhAtdnI54pwQ\",\n  \"d\": \"8BVFAqxPHXB5W-EBxr-EjdUmA4HqY1gwDjiYvt0UxUk\"\n}\n
    - Recipient 3 kid (jwk thumbprint raw, no padding, base64 URL encoded): hj57wbrmOygTc_ktMPqKMHdiL85FdiGJa5DKzoLIzeU - List of kids used for AAD for the above recipients (sorted kid values joined with .): 2UR-nzYjVhsq0cZakWjE38-wUdG0S2EIrLZ8Eh0KVO0.dvDd4h1rHj-onj-Xz9O1KRIgkMhh3u23d-94brHbBKo.hj57wbrmOygTc_ktMPqKMHdiL85FdiGJa5DKzoLIzeU - Resulting AAD value (sha256 of above list raw, no padding, base64 URL encoded): L-QV1cHI5u8U9BQa8_S4CFW-LhKNXHCjmqydtQYuSLw - Finally, packing the payload outputs the following JWE (pretty printed for readability):
    {\n  \"protected\": \"eyJjdHkiOiJhcHBsaWNhdGlvbi9kaWRjb21tLXBsYWluK2pzb24iLCJlbmMiOiJBMjU2R0NNIiwic2tpZCI6Ilg1SU5TTUl2X3c0UTdwbGpIN3hqZVVyUkFLaUJHSGF2U21PWXl5aVJ1Z2MiLCJ0eXAiOiJhcHBsaWNhdGlvbi9kaWRjb21tLWVuY3J5cHRlZCtqc29uIn0\",\n  \"recipients\": [\n    {\n      \"header\": {\n        \"alg\": \"ECDH-1PU+A256KW\",\n        \"apu\": \"WDVJTlNNSXZfdzRRN3Bsakg3eGplVXJSQUtpQkdIYXZTbU9ZeXlpUnVnYw\",\n        \"apv\": \"MlVSLW56WWpWaHNxMGNaYWtXakUzOC13VWRHMFMyRUlyTFo4RWgwS1ZPMA\",\n        \"kid\": \"2UR-nzYjVhsq0cZakWjE38-wUdG0S2EIrLZ8Eh0KVO0\",\n        \"epk\": {\n          \"kty\": \"OKP\",\n          \"crv\": \"X25519\",\n          \"x\": \"IcuAA7zPN0mLt4GSZLQJ6f8p3yPALQaSyupbSRpDnwA\"\n        }\n      },\n      \"encrypted_key\": \"_GoKcbrlbPR8hdgpDdpotO4WvAKOzyOEXo5A2RlxVaEb0enFej2DFQ\"\n    },\n    {\n      \"header\": {\n        \"alg\": \"ECDH-1PU+A256KW\",\n        \"apu\": \"WDVJTlNNSXZfdzRRN3Bsakg3eGplVXJSQUtpQkdIYXZTbU9ZeXlpUnVnYw\",\n        \"apv\": \"MlVSLW56WWpWaHNxMGNaYWtXakUzOC13VWRHMFMyRUlyTFo4RWgwS1ZPMA\",\n        \"kid\": \"dvDd4h1rHj-onj-Xz9O1KRIgkMhh3u23d-94brHbBKo\",\n        \"epk\": {\n          \"kty\": \"OKP\",\n          \"crv\": \"X25519\",\n          \"x\": \"_BVh0oInkDiqnTkHKLvNMa8cldr79TZS00MJCYwZo3Y\"\n        }\n      },\n      \"encrypted_key\": \"gacTLNP-U5mYAHJLG9F97R52aG244NfLeWg_Dj4Fy0C96oIIN-3psw\"\n    },\n    {\n      \"header\": {\n        \"alg\": \"ECDH-1PU+A256KW\",\n        \"apu\": \"WDVJTlNNSXZfdzRRN3Bsakg3eGplVXJSQUtpQkdIYXZTbU9ZeXlpUnVnYw\",\n        \"apv\": \"MlVSLW56WWpWaHNxMGNaYWtXakUzOC13VWRHMFMyRUlyTFo4RWgwS1ZPMA\",\n        \"kid\": \"hj57wbrmOygTc_ktMPqKMHdiL85FdiGJa5DKzoLIzeU\",\n        \"epk\": {\n          \"kty\": \"OKP\",\n          \"crv\": \"X25519\",\n          \"x\": \"alPo4cjEjondCmz8mw8tntYxlpGPSLaqe3SSI_wu11s\"\n        }\n      },\n      \"encrypted_key\": \"q2RpqrdZA9mvVBGTvMNHg3P6SysnuCpfraLWhRseiQ1ImJWdLq53TA\"\n    }\n  ],\n  \"aad\": \"L-QV1cHI5u8U9BQa8_S4CFW-LhKNXHCjmqydtQYuSLw\",\n  
\"iv\": \"J-OEJGFWvJ6rw9dX\",\n  \"ciphertext\": \"BvFi1vAzq0Uostj0_ms\",\n  \"tag\": \"C6itmqZ7ehMx9FF70fdGGQ\"\n}\n

    "},{"location":"features/0334-jwe-envelope/authcrypt-examples/#12-single-recipient-jwes","title":"1.2 Single Recipient JWEs","text":"

    Packing a message for a single recipient using the Flattened JWE JSON serialization and Compact JWE serialization formats, as mentioned in the notes above.

    "},{"location":"features/0334-jwe-envelope/authcrypt-examples/#121-nist-p-256-key","title":"1.2.1 NIST P-256 key","text":""},{"location":"features/0334-jwe-envelope/authcrypt-examples/#122-nist-p-384-key","title":"1.2.2 NIST P-384 key","text":""},{"location":"features/0334-jwe-envelope/authcrypt-examples/#123-nist-p-521-key","title":"1.2.3 NIST P-521 key","text":""},{"location":"features/0334-jwe-envelope/authcrypt-examples/#124-x25519-key","title":"1.2.4 X25519 key","text":""},{"location":"features/0334-jwe-envelope/authcrypt-examples/#2-xc20p-content-encryption","title":"2 XC20P content encryption","text":""},{"location":"features/0334-jwe-envelope/authcrypt-examples/#21-multi-recipients-jwes","title":"2.1 Multi recipients JWEs","text":""},{"location":"features/0334-jwe-envelope/authcrypt-examples/#211-nist-p-256-keys","title":"2.1.1 NIST P-256 keys","text":"

    The packer generates the following protected headers that include the skid: - Generated protected headers: {\"cty\":\"application/didcomm-plain+json\",\"enc\":\"XC20P\",\"skid\":\"T1jGtZoU-Xa_5a1QKexUU0Jq9WKDtS7TCowVvjoFH04\",\"typ\":\"application/didcomm-encrypted+json\"} - raw (no padding) base64URL encoded: eyJjdHkiOiJhcHBsaWNhdGlvbi9kaWRjb21tLXBsYWluK2pzb24iLCJlbmMiOiJYQzIwUCIsInNraWQiOiJUMWpHdFpvVS1YYV81YTFRS2V4VVUwSnE5V0tEdFM3VENvd1Z2am9GSDA0IiwidHlwIjoiYXBwbGljYXRpb24vZGlkY29tbS1lbmNyeXB0ZWQranNvbiJ9 - Sender key JWK format:

    {\n  \"kty\": \"EC\",\n  \"crv\": \"P-256\",\n  \"x\": \"46OXm1dUTO3MB-8zoxbn-9dk0khgeIqsKFO-nTJ9keM\",\n  \"y\": \"8IlrwB-dl5bFd5RT4YAbgAdj5Y-a9zhc9wCMnXDZDvA\",\n  \"d\": \"58GZDz9_opy-nEeaJ_cyEL63TO-l063aV5nLADCgsGY\"\n}\n
    - Sender kid (jwk thumbprint raw base64 URL encoded): T1jGtZoU-Xa_5a1QKexUU0Jq9WKDtS7TCowVvjoFH04 - Recipient 1 key JWK format:
    {\n  \"kty\": \"EC\",\n  \"crv\": \"P-256\",\n  \"x\": \"r9MRjEQ7CBxAgMyEG3ZjIlkGCuRX0rTaBdbkAcY17hA\",\n  \"y\": \"MRSgHQycDFPdSABGv5V0Qd-2q7ebs_x0_fNFyabGgXU\",\n  \"d\": \"LK9yfSxuET5n5uZDNO-64sJKWxJs7LTkqhA4mAuKQnE\"\n}\n
    - Recipient 1 kid (jwk thumbprint raw base64 URL encoded): dmfXisqWjRT-tFpODOD-G0CBF6zjHywNUjrrD3IFmcs - Recipient 2 key JWK format:
    {\n  \"kty\": \"EC\",\n  \"crv\": \"P-256\",\n  \"x\": \"PMhlaU_KNEWou004AEyAFoJi8vNOnY75ROiRzzjhDR0\",\n  \"y\": \"tEcJNRv2rqYlYWeRloRabcp2lRorRaZTLM0ZNBoEyN0\",\n  \"d\": \"t1-QysBdkbkpqEBDo_JPsi-6YqD24UoAGBrruI2XNhA\"\n}\n
    - Recipient 2 kid (jwk thumbprint raw base64 URL encoded): 2_Sf_YshIFhQ11NH9muAxLWwyFUvJnfXbYFOAC-8HTw - Recipient 3 key JWK format:
    {\n  \"kty\": \"EC\",\n  \"crv\": \"P-256\",\n  \"x\": \"V9dWH69KZ_bvrxdWgt5-o-KnZLcGuWjAKVWMueiQioM\",\n  \"y\": \"lvsUBieuXV6qL4R3L94fCJGu8SDifqh3fAtN2plPWX4\",\n  \"d\": \"llg97kts4YxIF-r3jn7wcZ-zV0hLcn_AydIKHDF-HJc\"\n}\n
    - Recipient 3 kid (jwk thumbprint raw, no padding, base64 URL encoded): mKtrI7SV3z2U9XyhaaTYlQFX1ANi6Wkli8b3NWVq4C4 - List of kids used for AAD for the above recipients (sorted kid values joined with .): 2_Sf_YshIFhQ11NH9muAxLWwyFUvJnfXbYFOAC-8HTw.dmfXisqWjRT-tFpODOD-G0CBF6zjHywNUjrrD3IFmcs.mKtrI7SV3z2U9XyhaaTYlQFX1ANi6Wkli8b3NWVq4C4 - Resulting AAD value (sha256 of above list raw, no padding, base64 URL encoded): PNKzNc6e0MtDtIGamjsx2fytSu6t8GygofQbzTrtMNA - Finally, packing the payload outputs the following JWE (pretty printed for readability):
    {\n  \"protected\": \"eyJjdHkiOiJhcHBsaWNhdGlvbi9kaWRjb21tLXBsYWluK2pzb24iLCJlbmMiOiJYQzIwUCIsInNraWQiOiJUMWpHdFpvVS1YYV81YTFRS2V4VVUwSnE5V0tEdFM3VENvd1Z2am9GSDA0IiwidHlwIjoiYXBwbGljYXRpb24vZGlkY29tbS1lbmNyeXB0ZWQranNvbiJ9\",\n  \"recipients\": [\n    {\n      \"header\": {\n        \"alg\": \"ECDH-1PU+XC20PKW\",\n        \"apu\": \"VDFqR3Rab1UtWGFfNWExUUtleFVVMEpxOVdLRHRTN1RDb3dWdmpvRkgwNA\",\n        \"apv\": \"ZG1mWGlzcVdqUlQtdEZwT0RPRC1HMENCRjZ6akh5d05VanJyRDNJRm1jcw\",\n        \"kid\": \"dmfXisqWjRT-tFpODOD-G0CBF6zjHywNUjrrD3IFmcs\",\n        \"epk\": {\n          \"kty\": \"EC\",\n          \"crv\": \"P-256\",\n          \"x\": \"80NGcUh0mIy_XrcaAqD7GCHF0FU2W5j4Jt-wfwxvJVs\",\n          \"y\": \"KpsNL9A-FGgL7S97ce8wcWYc9J1Q6_luxKAFIu7BNIw\"\n        }\n      },\n      \"encrypted_key\": \"wGQO8LX7o9JmYI0PIGUruU7i6ybZYefsTanZuo7hIDyn21ix6fSFPOmvgjPxZ8q_-hZF2yGYtudfLiuPzXlybWJkmTlP9PcY\"\n    },\n    {\n      \"header\": {\n        \"alg\": \"ECDH-1PU+XC20PKW\",\n        \"apu\": \"VDFqR3Rab1UtWGFfNWExUUtleFVVMEpxOVdLRHRTN1RDb3dWdmpvRkgwNA\",\n        \"apv\": \"ZG1mWGlzcVdqUlQtdEZwT0RPRC1HMENCRjZ6akh5d05VanJyRDNJRm1jcw\",\n        \"kid\": \"2_Sf_YshIFhQ11NH9muAxLWwyFUvJnfXbYFOAC-8HTw\",\n        \"epk\": {\n          \"kty\": \"EC\",\n          \"crv\": \"P-256\",\n          \"x\": \"4YrbAQCLLya1XqRvjfcYdonllWQulrLP7zE0ooclKXA\",\n          \"y\": \"B3tI8lsWHRwBQ19pAFzXiBkLgpE6leTeQT6b709gllE\"\n        }\n      },\n      \"encrypted_key\": \"5tY3t1JI8L6s974kmXbzKMaePHygNan2Qqpd1B0BiqBsjaHNUH2Unv1IMGiT3oQD0xXeVPAxQq7vNZgANitxBbgG_uxGiRld\"\n    },\n    {\n      \"header\": {\n        \"alg\": \"ECDH-1PU+XC20PKW\",\n        \"apu\": \"VDFqR3Rab1UtWGFfNWExUUtleFVVMEpxOVdLRHRTN1RDb3dWdmpvRkgwNA\",\n        \"apv\": \"ZG1mWGlzcVdqUlQtdEZwT0RPRC1HMENCRjZ6akh5d05VanJyRDNJRm1jcw\",\n        \"kid\": \"mKtrI7SV3z2U9XyhaaTYlQFX1ANi6Wkli8b3NWVq4C4\",\n        \"epk\": {\n          \"kty\": \"EC\",\n          \"crv\": \"P-256\",\n          \"x\": 
\"-e9kPGp2rmtpFs2zzTaY6xfeXjr1Xua1vHCQZRKJ54s\",\n          \"y\": \"Mc7b8U06KHV__1-XMaReilLxa63LcICqsPtkZGXEkEs\"\n        }\n      },\n      \"encrypted_key\": \"zVQUQytYv4EmQS0zye3IsXiN_2ol-Qn2nvyaJgEPvNdwFuzTFPOupTl-PeOhkRvxPfuLlw5TKnSRyPUejP8zyHbBgUZ6gDmz\"\n    }\n  ],\n  \"aad\": \"PNKzNc6e0MtDtIGamjsx2fytSu6t8GygofQbzTrtMNA\",\n  \"iv\": \"UKgm1XTPf1QFDXoRWlf-KrsBRQKSwpBA\",\n  \"ciphertext\": \"pbwy8HEnr1hPA0Jt5ho\",\n  \"tag\": \"nUazXvxpMXGoL1__92CAyA\"\n}\n

    "},{"location":"features/0334-jwe-envelope/authcrypt-examples/#212-nist-p-384-keys","title":"2.1.2 NIST P-384 keys","text":"

    The packer generates the following protected headers that include the skid: - Generated protected headers: {\"cty\":\"application/didcomm-plain+json\",\"enc\":\"XC20P\",\"skid\":\"xXdnS3M4Bb497A0ko9c6H0D4NNbj1XpwGr4Tk9Fcw7k\",\"typ\":\"application/didcomm-encrypted+json\"} - raw (no padding) base64URL encoded: eyJjdHkiOiJhcHBsaWNhdGlvbi9kaWRjb21tLXBsYWluK2pzb24iLCJlbmMiOiJYQzIwUCIsInNraWQiOiJ4WGRuUzNNNEJiNDk3QTBrbzljNkgwRDROTmJqMVhwd0dyNFRrOUZjdzdrIiwidHlwIjoiYXBwbGljYXRpb24vZGlkY29tbS1lbmNyeXB0ZWQranNvbiJ9 - Sender key JWK format:

    {\n  \"kty\": \"EC\",\n  \"crv\": \"P-384\",\n  \"x\": \"bfuATmVQ_jxLIgfuhKNYrNRNu-VnK4FzTCCVRvycgekS8fIuC4rZS9uQi6Q2Ujwd\",\n  \"y\": \"XkVJ93cLKpeZeCMEOsHRKk4rse1zXpzY6yUibEtwZG9nFWF05Ro8OQs5fZVK2TWC\",\n  \"d\": \"OVzGxGyyaHGJpx1MoSwPjmWPas28sfq1tj7UkYFoK3ENsujmzUduAW6HwyaBlXRW\"\n}\n
    - Sender kid (jwk thumbprint raw base64 URL encoded): xXdnS3M4Bb497A0ko9c6H0D4NNbj1XpwGr4Tk9Fcw7k - Recipient 1 key JWK format:
    {\n  \"kty\": \"EC\",\n  \"crv\": \"P-384\",\n  \"x\": \"xhk5K7x4xw9OJpkFhmsY39jceQqx57psvcZstiNZmKbXD7kT9ajfGKFA6YA-ali5\",\n  \"y\": \"7Hj32-JDMNDYWRGy3f-0E9lbUGp6yURMaZ9M36Q_FPgljKgHa9i0Fn1ogr_zEmO3\",\n  \"d\": \"Pc3r6eg15XZeKgTDMPcGjf_SvImZxG4bDzgCh3QShClAwMdmoNbzPZGhBByNrlvO\"\n}\n
    - Recipient 1 kid (jwk thumbprint raw base64 URL encoded): aIlhDTWJmT-_Atad5EBbvbZPkPnz2IYT85I6T44kcE4 - Recipient 2 key JWK format:
    {\n  \"kty\": \"EC\",\n  \"crv\": \"P-384\",\n  \"x\": \"wqW3DUkUAT0Cyk3hq0KVJbqtPSJOoqulp_Tqa29jBEPliIJ9rnq7cRkJyxArCYAj\",\n  \"y\": \"ZfBtdTTVRh9SeQDCwsgAo15cCX2I-7J6xdyxDPyH8LBhbUA_8npHvNquKGta9p8x\",\n  \"d\": \"krddjYsOD4YIIkNjWXTrYV9rOVlmLNaeoLHChJ5oUr4c21LHxGL4xTI1bEoXKgJ2\"\n}\n
    - Recipient 2 kid (jwk thumbprint raw base64 URL encoded): 02WdA5ip_Amam611KA6fdoTs533yZH-ovfpt8t9zVjg - Recipient 3 key JWK format:
    {\n  \"kty\": \"EC\",\n  \"crv\": \"P-384\",\n  \"x\": \"If6iEafrkcL53mVYCbm5rmnwAw3kjb13gUjBoDePggO7xMiSFyej4wbTabdCyfbg\",\n  \"y\": \"nLX6lEce-9r19NA_nI5mGK3YFLiX9IYRgXZZCUd_Br91PaE8Mr1JR01utAPoGx36\",\n  \"d\": \"jriJKFpQfzJtOrp7PhGvH0osHJQJbZrAKjD95itivioVawzMz9wcI_h9VsFV3ff0\"\n}\n
    - Recipient 3 kid (jwk thumbprint raw, no padding, base64 URL encoded): zeqnfYLFWtnJ_e5npBs7CtM5KkToyyM9kCKIFlcyId0 - List of kids used for AAD for the above recipients (sorted kid values joined with .): 02WdA5ip_Amam611KA6fdoTs533yZH-ovfpt8t9zVjg.aIlhDTWJmT-_Atad5EBbvbZPkPnz2IYT85I6T44kcE4.zeqnfYLFWtnJ_e5npBs7CtM5KkToyyM9kCKIFlcyId0 - Resulting AAD value (sha256 of above list raw, no padding, base64 URL encoded): CftHmHttuxR6mRrHe-zBXV2UEvL2wvZEt5yeFDhYSF8 - Finally, packing the payload outputs the following JWE (pretty printed for readability):
    {\n  \"protected\": \"eyJjdHkiOiJhcHBsaWNhdGlvbi9kaWRjb21tLXBsYWluK2pzb24iLCJlbmMiOiJYQzIwUCIsInNraWQiOiJ4WGRuUzNNNEJiNDk3QTBrbzljNkgwRDROTmJqMVhwd0dyNFRrOUZjdzdrIiwidHlwIjoiYXBwbGljYXRpb24vZGlkY29tbS1lbmNyeXB0ZWQranNvbiJ9\",\n  \"recipients\": [\n    {\n      \"header\": {\n        \"alg\": \"ECDH-1PU+XC20PKW\",\n        \"apu\": \"eFhkblMzTTRCYjQ5N0Ewa285YzZIMEQ0Tk5iajFYcHdHcjRUazlGY3c3aw\",\n        \"apv\": \"YUlsaERUV0ptVC1fQXRhZDVFQmJ2YlpQa1BuejJJWVQ4NUk2VDQ0a2NFNA\",\n        \"kid\": \"aIlhDTWJmT-_Atad5EBbvbZPkPnz2IYT85I6T44kcE4\",\n        \"epk\": {\n          \"kty\": \"EC\",\n          \"crv\": \"P-384\",\n          \"x\": \"k7SRlQ7EwCR8VZ-LF92zOgvpFDAed0mN3mmZeCHHDznZp5TLQShFT9TdnwgsvJFP\",\n          \"y\": \"ZHzkS9BD-I2DtNPhbXuTzf6vUnykdZPus9xZnRu1rWgxVtLQ8j-Jp4YoJgdQmcOu\"\n        }\n      },\n      \"encrypted_key\": \"BO597Rs1RU3ZU-WdzWPgRnPmRULcFBihZxE7Jvl3qw3VUmR5RUXY0Xy9k_dWRnuRCh9Yzxef7tXlqVMaL4KBCfaAbAEOReQw\"\n    },\n    {\n      \"header\": {\n        \"alg\": \"ECDH-1PU+XC20PKW\",\n        \"apu\": \"eFhkblMzTTRCYjQ5N0Ewa285YzZIMEQ0Tk5iajFYcHdHcjRUazlGY3c3aw\",\n        \"apv\": \"YUlsaERUV0ptVC1fQXRhZDVFQmJ2YlpQa1BuejJJWVQ4NUk2VDQ0a2NFNA\",\n        \"kid\": \"02WdA5ip_Amam611KA6fdoTs533yZH-ovfpt8t9zVjg\",\n        \"epk\": {\n          \"kty\": \"EC\",\n          \"crv\": \"P-384\",\n          \"x\": \"QT9Q_zU9VE3K9r_50mKh7iG8SxYeXVvwnhykphMAk8akfnTeB7FIRC2MzFat9JMT\",\n          \"y\": \"3HeQPqQ_BS5vy2e2L7kgMhHNwNQ2K1pmL9LImrBg8XROuc9EaAGnFSQ439bZXg9y\"\n        }\n      },\n      \"encrypted_key\": \"oKVlxrYhp8Bvr6s6CW7DxTSCMIFMkqLjDP9sCIkLoetHlXM5Mngq46CUqHusKTceHdSOL8sGUbeSBo6lXRKArywtjiVVyStW\"\n    },\n    {\n      \"header\": {\n        \"alg\": \"ECDH-1PU+XC20PKW\",\n        \"apu\": \"eFhkblMzTTRCYjQ5N0Ewa285YzZIMEQ0Tk5iajFYcHdHcjRUazlGY3c3aw\",\n        \"apv\": \"YUlsaERUV0ptVC1fQXRhZDVFQmJ2YlpQa1BuejJJWVQ4NUk2VDQ0a2NFNA\",\n        \"kid\": \"zeqnfYLFWtnJ_e5npBs7CtM5KkToyyM9kCKIFlcyId0\",\n        \"epk\": {\n          
\"kty\": \"EC\",\n          \"crv\": \"P-384\",\n          \"x\": \"GGFw14WnABx5S__MLwjy7WPgmPzCNbygbJikSqwx1nQ7APAiIyLeiAeZnAFQSr8C\",\n          \"y\": \"Bjev4lkaRbd4Ery0vnO8Ox4QgIDGbuflmFq0HhL-QHIe3KhqxrqZqbQYGlDNudEv\"\n        }\n      },\n      \"encrypted_key\": \"S8vnyPjW_19Hws3-igk-cVTSqVTY0_D9SWahnYnWBFBqTdx0b0e8hf06Oiou31Ww-Y3p8Z3O_okqQGzZMWUMLSxUPeCR2ZWx\"\n    }\n  ],\n  \"aad\": \"CftHmHttuxR6mRrHe-zBXV2UEvL2wvZEt5yeFDhYSF8\",\n  \"iv\": \"jTaCuNXs4QdX6HuWvl5AsqIEv4nh2JMP\",\n  \"ciphertext\": \"7y463zoRKgVfpKh3EBw\",\n  \"tag\": \"8YKdJpF2DnQQwEkBcbuEnw\"\n}\n

    "},{"location":"features/0334-jwe-envelope/authcrypt-examples/#213-nist-p-521-keys","title":"2.1.3 NIST P-521 keys","text":"

    The packer generates the following protected headers that include the skid: - Generated protected headers: {\"cty\":\"application/didcomm-plain+json\",\"enc\":\"XC20P\",\"skid\":\"bq3OI5517dSIMeD9K3lTqvkvvkmsRtifD6tvjlrKYsU\",\"typ\":\"application/didcomm-encrypted+json\"} - raw (no padding) base64URL encoded: eyJjdHkiOiJhcHBsaWNhdGlvbi9kaWRjb21tLXBsYWluK2pzb24iLCJlbmMiOiJYQzIwUCIsInNraWQiOiJicTNPSTU1MTdkU0lNZUQ5SzNsVHF2a3Z2a21zUnRpZkQ2dHZqbHJLWXNVIiwidHlwIjoiYXBwbGljYXRpb24vZGlkY29tbS1lbmNyeXB0ZWQranNvbiJ9 - Sender key JWK format:

    {\n  \"kty\": \"EC\",\n  \"crv\": \"P-521\",\n  \"x\": \"ACN9T83BbPNn1eRyo-TrL0GyC7kBNQvgUxk55fCeQKDSTVhbzCKia7WecCUshyEF-BOQbfEsOIUCq3g7xY3VEeth\",\n  \"y\": \"APDIfDv6abLQ-Zb_p8PxwJe1x3U0-PdgXLNbtS7evGuUROHt79SVkpfXcZ3UaEc6cMoFfd2oMvbmUjCMM4-Sgipn\",\n  \"d\": \"AXCGyR9uXY8vDr7D4HvMxep-d5biQzgHR6WsdOF4R5M9qYb8FhRIQCMbmDSZzCuqgGgXrPRMPm5-omvWVeYqwwa3\"\n}\n
    - Sender kid (jwk thumbprint raw base64 URL encoded): bq3OI5517dSIMeD9K3lTqvkvvkmsRtifD6tvjlrKYsU - Recipient 1 key JWK format:
    {\n  \"kty\": \"EC\",\n  \"crv\": \"P-521\",\n  \"x\": \"AZi-AxJkB09qw8dBnNrz53xM-wER0Y5IYXSEWSTtzI5Sdv_5XijQn9z-vGz1pMdww-C75GdpAzp2ghejZJSxbAd6\",\n  \"y\": \"AZzRvW8NBytGNbF3dyNOMHB0DHCOzGp8oYBv_ZCyJbQUUnq-TYX7j8-PlKe9Ce5acxZzrcUKVtJ4I8JgI5x9oXIW\",\n  \"d\": \"AHGOZNkAcQCdDZOpQRdbH-f89mpjY_kGmtEpTExd51CcRlHhXuuAr6jcgb8YStwy9FN7vCU1y5LnJfKhGUGrP2a4\"\n}\n
    - Recipient 1 kid (jwk thumbprint raw base64 URL encoded): 7icoqReWFlpF16dzZD3rBgK1cJ265WzfF9sJJXqOe0M - Recipient 2 key JWK format:
    {\n  \"kty\": \"EC\",\n  \"crv\": \"P-521\",\n  \"x\": \"ASRvWU-d_XI2S1UQb7PoXIXTLXd8vgLmFb-9mmmIrzTMmIXFXpsDN9_1-Xg_r3qkEg-zBjTi327GIseWFGMa0Mrp\",\n  \"y\": \"AJ0VyjDn4Rn6SKamFms4593mW5K936d4Jr7-J5OjJqTZtS6APgNkrwFjhKPHQfg7o8T4pmX7vlfFY5Flx7IOYJuw\",\n  \"d\": \"ALzWMohuwSqkiqqEhijiBoH6kJ580Dtxe7CfgqEboc5DG0pMtAUf-a91VbmR1U8bQox-B4_YRXoFLRns2tI_wPYz\"\n}\n
    - Recipient 2 kid (jwk thumbprint raw base64 URL encoded): BUEVQ3FlDsml4JYrLCwwsL5BUZt-hYwb2B0SoJ6dzHc - Recipient 3 key JWK format:
    {\n  \"kty\": \"EC\",\n  \"crv\": \"P-521\",\n  \"x\": \"AB2ke_2nVg95OP3Xb4Fg0Gg4KgfZZf3wBEYoOlGhXmHNCj56G10vnOe1hGRKIoD-JkPWuulcUtsIUO7r3Rz2mLP0\",\n  \"y\": \"AJTaqfF8d4cFv_fP4Uoqq-uCCObmyPsD1CphbCuCZumarfzjA5SpAQCdfz3No4Nhn53OqdcTkm654Yvfj1vOp5t6\",\n  \"d\": \"Af6Ba1x6i6glhRcR2RmZMZJ5BJXibpMB0TqjY_2Fe2LekS9QQK21JtrF20dj_gahxcrnfcn8oJ2xCrEMKaexgcsb\"\n}\n
    - Recipient 3 kid (jwk thumbprint raw, no padding, base64 URL encoded): C9iN-jkTFBbTz3Yv3FquR3dAsHYnAIg1_hT0jsefLDE - List of kids used for AAD for the above recipients (sorted kid values joined with .): 7icoqReWFlpF16dzZD3rBgK1cJ265WzfF9sJJXqOe0M.BUEVQ3FlDsml4JYrLCwwsL5BUZt-hYwb2B0SoJ6dzHc.C9iN-jkTFBbTz3Yv3FquR3dAsHYnAIg1_hT0jsefLDE - Resulting AAD value (sha256 of above list raw, no padding, base64 URL encoded): VBNrffp39h1F6sg0dzkArcd2WjpKeqEvqt6HNXaVfKU - Finally, packing the payload outputs the following JWE (pretty printed for readability):
    {\n  \"protected\": \"eyJjdHkiOiJhcHBsaWNhdGlvbi9kaWRjb21tLXBsYWluK2pzb24iLCJlbmMiOiJYQzIwUCIsInNraWQiOiJicTNPSTU1MTdkU0lNZUQ5SzNsVHF2a3Z2a21zUnRpZkQ2dHZqbHJLWXNVIiwidHlwIjoiYXBwbGljYXRpb24vZGlkY29tbS1lbmNyeXB0ZWQranNvbiJ9\",\n  \"recipients\": [\n    {\n      \"header\": {\n        \"alg\": \"ECDH-1PU+XC20PKW\",\n        \"apu\": \"YnEzT0k1NTE3ZFNJTWVEOUszbFRxdmt2dmttc1J0aWZENnR2amxyS1lzVQ\",\n        \"apv\": \"N2ljb3FSZVdGbHBGMTZkelpEM3JCZ0sxY0oyNjVXemZGOXNKSlhxT2UwTQ\",\n        \"kid\": \"7icoqReWFlpF16dzZD3rBgK1cJ265WzfF9sJJXqOe0M\",\n        \"epk\": {\n          \"kty\": \"EC\",\n          \"crv\": \"P-521\",\n          \"x\": \"ABd71Xomy3mv-mkAipKb18UQ-1xXt7tGDDwf0k5fpLADg1qK--Jhn8TdzyjTuve7rJQrlCJH4GjuQjCWVs4T7J_T\",\n          \"y\": \"ANrWrk69QRi4cr8ZbU2vF_0jSjTIUn-fQCHJtxLg3uuvLtzGW7oIEkUFJq_sTZXL_gaPdFIWlI4aIjKRgzOUP_ze\"\n        }\n      },\n      \"encrypted_key\": \"lZa-4LTyaDP01wmN8bvoD69MLl3VY2H_wNaNJ7kYzTFExlgYTPNrFJ5XL6T_h1DUULX0TYJVxbIWQeJ_x_7i-xSv7-BHbFcm\"\n    },\n    {\n      \"header\": {\n        \"alg\": \"ECDH-1PU+XC20PKW\",\n        \"apu\": \"YnEzT0k1NTE3ZFNJTWVEOUszbFRxdmt2dmttc1J0aWZENnR2amxyS1lzVQ\",\n        \"apv\": \"N2ljb3FSZVdGbHBGMTZkelpEM3JCZ0sxY0oyNjVXemZGOXNKSlhxT2UwTQ\",\n        \"kid\": \"BUEVQ3FlDsml4JYrLCwwsL5BUZt-hYwb2B0SoJ6dzHc\",\n        \"epk\": {\n          \"kty\": \"EC\",\n          \"crv\": \"P-521\",\n          \"x\": \"ALGN2OH1_DKtEZ-990uL1kzHYhYmZD-stOdL6_NMReCKEPZil7Z1tsq0g9l0HNi6DWuMjNyiJCfDd1erWpByFAOX\",\n          \"y\": \"AQgB2aE_3GltqbWzKbWbLa6Fdq6jO4A3LrYUnNDNIuHY6eRH9sRU0yWjmcmWCoukT98wksXJ3isHr9-NqFuZLehi\"\n        }\n      },\n      \"encrypted_key\": \"bybMPkSjuSz8lLAPFJHrxjl1buE8cfONEzvQ2U64h8L0QEZPLK_VewbXVflEPNrOo3oTWlI_878GIKvkxJ8cJOD6a0kZmr87\"\n    },\n    {\n      \"header\": {\n        \"alg\": \"ECDH-1PU+XC20PKW\",\n        \"apu\": \"YnEzT0k1NTE3ZFNJTWVEOUszbFRxdmt2dmttc1J0aWZENnR2amxyS1lzVQ\",\n        \"apv\": \"N2ljb3FSZVdGbHBGMTZkelpEM3JCZ0sxY0oyNjVXemZGOXNKSlhxT2UwTQ\",\n  
      \"kid\": \"C9iN-jkTFBbTz3Yv3FquR3dAsHYnAIg1_hT0jsefLDE\",\n        \"epk\": {\n          \"kty\": \"EC\",\n          \"crv\": \"P-521\",\n          \"x\": \"AZKyI6Mg8OdKUYqo3xuKjHiVrlV56_qGBzdwr86QSnebq3Y69Z0qETiTumQv5J3ECmZzs4DiETryRuzdHc2RkKBZ\",\n          \"y\": \"ARJJT7MWjTWWB7leblQgg7PYn_0deScO7AATlcnukFsLbzly0LHs1msVXaerQUCHPg2t-sYGxDP7w0iaDHB8k3Tj\"\n        }\n      },\n      \"encrypted_key\": \"nMGoNk1brn9uO9hlSa7NwVgFUMXnxpKKPkuFHSE2aM_N8q8wJbVBLC9rJ9sPIiSU20tq2sJXaAcoMteajOX6wj_Hzl1uRT1e\"\n    }\n  ],\n  \"aad\": \"VBNrffp39h1F6sg0dzkArcd2WjpKeqEvqt6HNXaVfKU\",\n  \"iv\": \"h0bbZygiAx9MMO2Huxym_QnwrXZHhdyQ\",\n  \"ciphertext\": \"LABYmf_sfPNGgls0wvk\",\n  \"tag\": \"z1rZOEgyryiW_3d5gxnMUQ\"\n}\n

    "},{"location":"features/0334-jwe-envelope/authcrypt-examples/#214-x25519-keys","title":"2.1.4 X25519 keys","text":"

    The packer generates the following protected headers that include the skid: - Generated protected headers: {\"cty\":\"application/didcomm-plain+json\",\"enc\":\"XC20P\",\"skid\":\"j8E-tcw1Z_eOCoKEH-7a9T532r8zXfcavbPZlofN0Ek\",\"typ\":\"application/didcomm-encrypted+json\"} - raw (no padding) base64URL encoded: eyJjdHkiOiJhcHBsaWNhdGlvbi9kaWRjb21tLXBsYWluK2pzb24iLCJlbmMiOiJYQzIwUCIsInNraWQiOiJqOEUtdGN3MVpfZU9Db0tFSC03YTlUNTMycjh6WGZjYXZiUFpsb2ZOMEVrIiwidHlwIjoiYXBwbGljYXRpb24vZGlkY29tbS1lbmNyeXB0ZWQranNvbiJ9 - Sender key JWK format:

    {\n  \"kty\": \"OKP\",\n  \"crv\": \"X25519\",\n  \"x\": \"g3Lpdd_DRgjK28qi0sR0-hI-zv7a1X52vpzKc6ZM1Qs\",\n  \"d\": \"cPU_Io7RRHNb_xkQ_D6u3ER4vSjvsILDCKwOj8kVHXQ\"\n}\n
    - Sender kid (jwk thumbprint raw base64 URL encoded): j8E-tcw1Z_eOCoKEH-7a9T532r8zXfcavbPZlofN0Ek - Recipient 1 key JWK format:
    {\n  \"kty\": \"OKP\",\n  \"crv\": \"X25519\",\n  \"x\": \"VlhpUXj-oGs9ge-VLrmYF7Xuzy73YchIfckaYcQefBw\",\n  \"d\": \"QFHCCy0wzgJ_AlGMnjetTd0tnDaZ_7yqJODSV0d-kkg\"\n}\n
    - Recipient 1 kid (jwk thumbprint raw base64 URL encoded): _DHSbVaMeZxriDJn5VoHXYXo6BJacwZx_fGIBfCiJ5c - Recipient 2 key JWK format:
    {\n  \"kty\": \"OKP\",\n  \"crv\": \"X25519\",\n  \"x\": \"y52sexwATOR5J5znNp94MFx19J0rkgzNyLESMVhkE2M\",\n  \"d\": \"6NwEk3_8lKOwLaZM2YkLdW9MF2zDqMjAx_G-uDoAAkw\"\n}\n
    - Recipient 2 kid (jwk thumbprint raw base64 URL encoded): n2MxD23PaCkz7vptma_1j9X2JdUoCFLzrtYuDvOA0Kc - Recipient 3 key JWK format:
    {\n  \"kty\": \"OKP\",\n  \"crv\": \"X25519\",\n  \"x\": \"BYL51mNvx1LKD2wDfga_7GZc0YYI82HhRmHtXfiz_ko\",\n  \"d\": \"MLd_nsRRb_CSzc6Ou8TZFm-A17ZpT1Aen6fIvC6ZuV8\"\n}\n
    - Recipient 3 kid (jwk thumbprint raw, no padding, base64 URL encoded): HHN2ZcES5ps7gCjK-06bCE4EjX_hh7nq2cWd-GfnI5s - List of kids used for AAD for the above recipients (sorted kid values joined with .): HHN2ZcES5ps7gCjK-06bCE4EjX_hh7nq2cWd-GfnI5s._DHSbVaMeZxriDJn5VoHXYXo6BJacwZx_fGIBfCiJ5c.n2MxD23PaCkz7vptma_1j9X2JdUoCFLzrtYuDvOA0Kc - Resulting AAD value (sha256 of above list raw, no padding, base64 URL encoded): K1oFStibrX4x6LplTB0-tO3cwGiZzMvG_6w0LfguVuI - Finally, packing the payload outputs the following JWE (pretty printed for readability):
    {\n  \"protected\": \"eyJjdHkiOiJhcHBsaWNhdGlvbi9kaWRjb21tLXBsYWluK2pzb24iLCJlbmMiOiJYQzIwUCIsInNraWQiOiJqOEUtdGN3MVpfZU9Db0tFSC03YTlUNTMycjh6WGZjYXZiUFpsb2ZOMEVrIiwidHlwIjoiYXBwbGljYXRpb24vZGlkY29tbS1lbmNyeXB0ZWQranNvbiJ9\",\n  \"recipients\": [\n    {\n      \"header\": {\n        \"alg\": \"ECDH-1PU+XC20PKW\",\n        \"apu\": \"ajhFLXRjdzFaX2VPQ29LRUgtN2E5VDUzMnI4elhmY2F2YlBabG9mTjBFaw\",\n        \"apv\": \"X0RIU2JWYU1lWnhyaURKbjVWb0hYWVhvNkJKYWN3WnhfZkdJQmZDaUo1Yw\",\n        \"kid\": \"_DHSbVaMeZxriDJn5VoHXYXo6BJacwZx_fGIBfCiJ5c\",\n        \"epk\": {\n          \"kty\": \"OKP\",\n          \"crv\": \"X25519\",\n          \"x\": \"77VAbpx5xn2iavmhzZATXwGnxjRyxjBbtNzojdWP7wo\"\n        }\n      },\n      \"encrypted_key\": \"dvBscDJj2H6kZJgfdqazZ9pXZxUzai-mcExsdr11-RNvxxPd4_Cy6rolLSsY6ugm1sCo9BgRhAW1e6vxgTnY3Ctv0_xZIhvr\"\n    },\n    {\n      \"header\": {\n        \"alg\": \"ECDH-1PU+XC20PKW\",\n        \"apu\": \"ajhFLXRjdzFaX2VPQ29LRUgtN2E5VDUzMnI4elhmY2F2YlBabG9mTjBFaw\",\n        \"apv\": \"X0RIU2JWYU1lWnhyaURKbjVWb0hYWVhvNkJKYWN3WnhfZkdJQmZDaUo1Yw\",\n        \"kid\": \"n2MxD23PaCkz7vptma_1j9X2JdUoCFLzrtYuDvOA0Kc\",\n        \"epk\": {\n          \"kty\": \"OKP\",\n          \"crv\": \"X25519\",\n          \"x\": \"sZtHwxjaS51BR2SBGC32jFvUgVlABZ7rkBFqJk8ktXM\"\n        }\n      },\n      \"encrypted_key\": \"2gIQKw_QpnfGbIOso_XesSGWC9ZKu4-ox1eqRu71aS-nBWAbFrdJPqSY7gzAOGUNqg_o6mC1q7coG69G9yen37DIjcoR6mD1\"\n    },\n    {\n      \"header\": {\n        \"alg\": \"ECDH-1PU+XC20PKW\",\n        \"apu\": \"ajhFLXRjdzFaX2VPQ29LRUgtN2E5VDUzMnI4elhmY2F2YlBabG9mTjBFaw\",\n        \"apv\": \"X0RIU2JWYU1lWnhyaURKbjVWb0hYWVhvNkJKYWN3WnhfZkdJQmZDaUo1Yw\",\n        \"kid\": \"HHN2ZcES5ps7gCjK-06bCE4EjX_hh7nq2cWd-GfnI5s\",\n        \"epk\": {\n          \"kty\": \"OKP\",\n          \"crv\": \"X25519\",\n          \"x\": \"48AJF8kNoxfHXpUtBApRMUcTf8B0Ho4i_6CvGT4arGY\"\n        }\n      },\n      \"encrypted_key\": 
\"o_toInYq_NP45UqqFg461O6ruUNSQNKrBXRDA06JQ-faMUUfMGRtzNHK-FzrhtodZLW5bRFFFry9aFjwg5aYloe2JG9-fEcw\"\n    }\n  ],\n  \"aad\": \"K1oFStibrX4x6LplTB0-tO3cwGiZzMvG_6w0LfguVuI\",\n  \"iv\": \"tcThx2bVV8jhteYknijC-vxSED_BKPF8\",\n  \"ciphertext\": \"DUZLQAnWzApBFdwlZDg\",\n  \"tag\": \"YLuHzCD4xSTDxe_0AWukyw\"\n}\n

    "},{"location":"features/0334-jwe-envelope/authcrypt-examples/#22-single-recipient-jwes","title":"2.2 Single Recipient JWEs","text":"

    The following examples pack a message for a single recipient, using the Flattened JWE JSON serialization and the Compact JWE serialization formats mentioned in the notes above.

    "},{"location":"features/0334-jwe-envelope/authcrypt-examples/#221-nist-p-256-key","title":"2.2.1 NIST P-256 key","text":""},{"location":"features/0334-jwe-envelope/authcrypt-examples/#222-nist-p-384-key","title":"2.2.2 NIST P-384 key","text":""},{"location":"features/0334-jwe-envelope/authcrypt-examples/#223-nist-p-521-key","title":"2.2.3 NIST P-521 key","text":""},{"location":"features/0334-jwe-envelope/authcrypt-examples/#224-x25519-key","title":"2.2.4 X25519 key","text":""},{"location":"features/0335-http-over-didcomm/","title":"0335: HTTP Over DIDComm","text":""},{"location":"features/0335-http-over-didcomm/#summary","title":"Summary","text":"

    Allows HTTP traffic to be routed over a DIDComm channel, so applications built to communicate over HTTP can make use of DID-based communication.

    "},{"location":"features/0335-http-over-didcomm/#motivation","title":"Motivation","text":"

    This protocol allows a client-server system that doesn't use DIDs or DIDComm to piggyback on top of a DID-based infrastructure, gaining the benefits of DIDs by using agents as HTTP proxies.

    Example use case: Carl wants to apply for a car loan from his friendly neighborhood used car dealer. The dealer wants a proof of his financial stability from his bank, but he doesn't want to expose the identity of his bank, and his bank doesn't want to develop a custom in-house system (using DIDs) for anonymity. HTTP over DIDComm allows Carl to introduce his car dealer to his bank, using Aries agents and protocols, while all they need to do is install a standard agent to carry arbitrary HTTP messages.

    HTTP over DIDComm turns a dev + ops problem, of redesigning and deploying your server and client to use DID communication, into an ops problem - deploying Aries infrastructure in front of your server and to your clients.

    Using HTTP over DIDComm as opposed to HTTPS between a client and server offers some key benefits: - The client and server can use methods provided by Aries agents to verify their trust in the other party - for example, by presenting verifiable credential proofs. In particular, this allows decentralized client verification and trust, as opposed to client certs. - The client and server can be blind to each others' identities (for example, using fresh peer DIDs and communicating through a router), even while using their agents to ensure trust.

    "},{"location":"features/0335-http-over-didcomm/#tutorial","title":"Tutorial","text":""},{"location":"features/0335-http-over-didcomm/#name-and-version","title":"Name and Version","text":"

    This is the HTTP over DIDComm protocol. It is uniquely identified by the URI:

    \"https://didcomm.org/http-over-didcomm/1.0\"\n
    "},{"location":"features/0335-http-over-didcomm/#concepts","title":"Concepts","text":"

    This RFC assumes that you are familiar with DID communication, and the ~purpose decorator.

    This protocol introduces a new message type which carries an HTTP message, and a method by which an Aries agent can serve as an HTTP proxy. The Aries agent determines the target agent to route the HTTP message through (for example, by parsing the HTTP message's request target), and when the target agent receives the message, it serves the message over HTTP.

    The specifics of determining the target agent or route are not part of this specification, allowing room for a wide array of uses: - A network of enterprise servers behind agents, with the agents being a known, managed pool, with message routing controlled by business logic. - A privacy mix network, with in-browser agents making requests, and routing agents sending messages on random walks through the network until an agent serves the request over the public internet. - A network of service providers behind a routing network, accessed by clients, with any provider able to handle the same class of requests, so routing is based on efficiency/load. - A network of service providers behind a routing network, accessed by clients, where the routing network hides the identity of the service provider and client from each other.

    "},{"location":"features/0335-http-over-didcomm/#protocol-flow","title":"Protocol Flow","text":"

    This protocol takes an HTTP request-response loop and stretches it out across DIDComm, with agents in the middle serving as DIDComm relays, passing along messages.

    The entities involved in this protocol are as follows: - The client and server: the HTTP client and server, which could communicate via HTTP, but in this protocol communicate over DIDComm. - The client agent: the Aries agent which receives the HTTP request from the client, converts it to a DIDComm message, sends it to the server agent, and translates the reply from the server agent into an HTTP response. - The server agent: the Aries agent which receives the DIDComm request message from the client agent, creates an HTTP request for the server, receives the HTTP response, and translates it into a DIDComm message which it sends to the client agent.

    Before a message can be sent, the server must register with its agent using the ~purpose decorator, registering on one or more purpose tags.

    When a client sends an HTTP request to a client agent, the agent may need to maintain an open connection, or store a record of the client's identity/IP address, so the client can receive the coming response.

    The client agent can include some logic to decide whether to send the message, and may need to include some logic to decide where to route the message (note that in some architectures, another agent along the route makes the decision, so the agent might always send to the same target). If it does, it constructs a request DIDComm message (defined below) and sends it to the chosen server agent.

    The route taken by the DIDComm message between the client and server agents is not covered by this RFC.

    The server agent receives the request DIDComm message. It can include some logic to decide whether to permit the message to continue to the server. If so, it makes an HTTP request using the data in the request DIDComm message, and sends it to the server.

    Note: in some use-cases, it might make sense for the server agent to act as a transparent proxy, so the server thinks it's talking directly to the client, while in others it might make sense to override client identity information so the server thinks it's connecting to the server agent, for example, as a gateway. In this case, the client agent could anonymize the request, rather than leaving it up to the server agent.

    This same anonymization can be done in the other direction as well.

    The communication happens in reverse when the server sends an HTTP response to its agent, which may again decide whether to permit it to continue. If so, the contents of the HTTP response are encoded into a response DIDComm message (defined below), sent to the client agent, which also makes a go/no-go decision, does some logic (for example, looking up its thread-id to client database) to figure out where the original request in this thread came from, encodes the response data into an HTTP response, and sends that response to the client.

    "},{"location":"features/0335-http-over-didcomm/#message-format","title":"Message Format","text":"

    DIDComm messages for this protocol look as follows:

    "},{"location":"features/0335-http-over-didcomm/#request","title":"request","text":"
    {\n  \"@type\": \"https://didcomm.org/http-over-didcomm/1.0/request\",\n  \"@id\": \"2a0ec6db-471d-42ed-84ee-f9544db9da4b\",\n  \"~purpose\": [],\n  \"method\": <method>,\n  \"resource-uri\": <resource uri value>,\n  \"version\": <version>,\n  \"headers\": [],\n  \"body\": b64enc(body)\n}\n

    The body field is optional.

    The resource-uri field is also optional - if omitted, the server agent needs to set the URI based on the server to which it sends the message.

    Each element of the headers array is an object with two elements: {\"name\": \"<header-name>\", \"value\": \"<header-value>\"}.
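    As a sketch of the mapping a client agent performs, the helper below (hypothetical; not defined by this RFC) flattens an HTTP request into the message shape above, turning a header dict into the name/value pair array and base64-encoding the optional body:

    ```python
    import base64
    import uuid

    def http_request_to_didcomm(method, uri, version, headers, body=None, purpose=None):
        """Flatten an HTTP request into the request message shape above.

        `headers` is a plain dict; each entry becomes a {"name", "value"} pair.
        """
        msg = {
            "@type": "https://didcomm.org/http-over-didcomm/1.0/request",
            "@id": str(uuid.uuid4()),
            "~purpose": purpose or [],
            "method": method,
            "resource-uri": uri,
            "version": version,
            "headers": [{"name": n, "value": v} for n, v in headers.items()],
        }
        if body is not None:  # the body field is optional
            msg["body"] = base64.b64encode(body).decode("ascii")
        return msg

    msg = http_request_to_didcomm("GET", "/index.html", "1.1", {"Accept": "text/html"})
    ```

    When no body is supplied, the optional body field is simply omitted, as permitted above.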

    "},{"location":"features/0335-http-over-didcomm/#response","title":"response","text":"
    {\n  \"@type\": \"https://didcomm.org/http-over-didcomm/1.0/response\",\n  \"@id\": \"63d6f6cf-b723-4eaf-874b-ae13f3e3e5c5\",\n  \"~thread\": {\n    \"thid\": \"2a0ec6db-471d-42ed-84ee-f9544db9da4b\",\n    \"sender_order\": 1\n  },\n  \"status\": {\n      \"code\":\"\",\n      \"string\":\"\"\n  },\n  \"version\": <version>,\n  \"headers\": [],\n  \"body\": b64enc(body)\n}\n

    Responses need to indicate their target - the client who sent the request. Response DIDComm messages must include a ~thread decorator so the client agent can correlate thread IDs with its stored HTTP connections.

    The body field is optional.

    Each element of the headers array is an object with two elements: {\"name\": \"<header-name>\", \"value\": \"<header-value>\"}.

    "},{"location":"features/0335-http-over-didcomm/#receiving-http-messages-over-didcomm","title":"Receiving HTTP Messages Over DIDComm","text":"

    Aries agents intended to receive HTTP-over-DIDComm messages have many options for how they handle them, with configuration dependent on intended use. For example: - Serve the message over the internet, configured to use a DNS, etc. - Send the message to a specific server, set in configuration, for an enterprise system where a single server is behind an agent. - Send the message to a server which registered for the message's purpose.

    In cases where a specific server or application is always the target of certain messages, the server/application should register with the server agent on the specific purpose decorator. In cases where the agent may need to invoke additional logic, the agent itself can register a custom handler.

    An agent may implement filtering to accept or reject requests based on any combination of the purpose, sender, and request contents.

    "},{"location":"features/0335-http-over-didcomm/#purpose-value","title":"Purpose Value","text":"

    The purpose values used in the message should be values whose meanings are agreed upon by the client and server. For example, the purpose value can: - indicate the required capabilities of the server that handles a request - contain an anonymous identifier for the server, which has previously been communicated to the client.

    For example, to support the use of DIDComm as a client-anonymizing proxy, agents could use a purpose value like \"web-proxy\" to indicate that the HTTP request (received by the server agent) should be made on the web.

    "},{"location":"features/0335-http-over-didcomm/#reference","title":"Reference","text":""},{"location":"features/0335-http-over-didcomm/#determining-the-recipient-did-by-the-resource-uri","title":"Determining the recipient DID by the Resource URI","text":"

    In an instance of the HTTP over DIDComm protocol, it is assumed that the client agent has the necessary information to be able to determine the DID of the server agent based on the resource-uri provided in the request. It's reasonable to implement a configuration API to allow a sender or administrator to specify the DID to be used for a particular URI.

    "},{"location":"features/0335-http-over-didcomm/#-alive-timeout","title":"Keep-Alive & Timeout","text":"

    The client agent should respect the timeout parameter of the keep-alive header if the request specifies a keep-alive connection.

    If a client making an HTTP request expects a response over the same HTTP connection, its agent should keep this connection alive while it awaits a DIDComm response from the server agent, which it should recognize by the ~thread decorator in the response message. Timing information can be provided in an optional ~timing decorator.

    Agents implementing this RFC can make use of the ~transport decorator to enable response along the same transport.

    "},{"location":"features/0335-http-over-didcomm/#when-the-client-agent-is-the-server-agent","title":"When the Client Agent is the Server Agent","text":"

    There is a degenerate case of this protocol where the client and server agents are the same agent. In this case, instead of constructing DIDComm messages, sending them to yourself, and then unpacking them, it would be reasonable to take incoming HTTP messages, apply any pre-send logic (filtering, etc), apply any post-receive logic, and then send them out over HTTP, as a simple proxy.

    To support this optimization/simplification, the client agent should recognize if the recipient DID is its own, after determining the DID from the resource URI.

    "},{"location":"features/0335-http-over-didcomm/#http-error-codes","title":"HTTP Error Codes","text":"

    Failures within the DIDComm protocol can inform the status code returned to the client.

    If the client agent waits for the time specified in the request keep-alive timeout field, it should respond with a standard 504 gateway timeout status.

    Error codes which are returned by the server will be transported over DIDComm as normal.

    "},{"location":"features/0335-http-over-didcomm/#why-http1x","title":"Why HTTP/1.x?","text":"

    The DIDComm messages in this protocol wrap HTTP/1(.x) messages for a few reasons: - Wire-level benefits of HTTP/2 are lost by wrapping in DIDComm and sending over another transport (which could itself be HTTP/2) - DIDComm is not, generally, intended to be a streaming or latency-critical transport layer, so HTTP responses, for example, can be sent complete, including their bodies, instead of being split into frames which are sent over DIDComm separately.

    The agents are free to support communicating with the client/server using HTTP/2 - the agents simply wait until they've received a complete request or response, before sending it onwards over DIDComm.

    "},{"location":"features/0335-http-over-didcomm/#https","title":"HTTPS","text":"

    The client and server can use HTTPS to communicate with their agents - this protocol only specifies that the messages sent over DIDComm are HTTP, not HTTPS.

    "},{"location":"features/0335-http-over-didcomm/#partial-use-of-http-over-didcomm","title":"Partial use of HTTP over DIDComm","text":"

    This protocol specifies the behaviour of clients, servers, and their agents. However, the client-side and server-side are decoupled by design, meaning a custom server or client, which obeys all the semantics in this RFC while diverging on technical details, can interoperate with other compliant applications.

    For example, a client-side agent can construct request messages based on internal logic rather than a request from an external application. On the server side, an agent can handle requests and send responses directly by registering its own listener on a purpose value, rather than having a separate application register.

    "},{"location":"features/0335-http-over-didcomm/#drawbacks","title":"Drawbacks","text":"

    The cost of wrapping messages into HTTP messages, and then wrapping those into DIDComm envelopes, may be too high. This cost includes the time it takes to wrap and unwrap payloads, as well as the increase in message size. Small messages and simple formats would benefit from being encoded as JSON payloads within custom DIDComm message formats, instead of being wrapped in HTTP messages within DIDComm messages. Large data might benefit from being sent over another channel, encrypted, with identification, decryption, and authentication information sent over DIDComm.

    "},{"location":"features/0335-http-over-didcomm/#rationale-and-alternatives","title":"Rationale and alternatives","text":"

    The main alternative to the method proposed in this RFC is to implement DIDComm in your non-DIDComm applications, if you want them to be able to communicate with each other over DIDComm.

    Another alternative to sending HTTP messages over DIDComm is sending HTTPS over DIDComm, by establishing a TLS connection between the client and server over the DIDComm transport. This offers some tradeoffs and drawbacks which make it an edge case - it identifies the server with a certificate, it breaks the anonymity offered by DIDComm, and it is not necessary for security since DIDComm itself is securely encrypted and authenticated, and DIDComm messages can be transported over HTTPS as well.

    "},{"location":"features/0335-http-over-didcomm/#prior-art","title":"Prior art","text":"

    VPNs and onion routing (like Tor) provide solutions for similar use cases, but none so far use DIDs, which enable more complex use cases with privacy preservation.

    TLS/HTTPS, being HTTP over TLS, provides a similar transport-layer secure channel to HTTP over DIDComm. Note, this is why this RFC doesn't specify a means to perform HTTPS over DIDComm - DIDComm serves the same role as TLS does in HTTPS, but offers additional benefits: - Verifiable yet anonymous authentication of the client, for example, using delegated credentials. - Access to DIDComm mechanisms, such as using the introduce protocol to connect the client and server.

    "},{"location":"features/0335-http-over-didcomm/#unresolved-questions","title":"Unresolved questions","text":""},{"location":"features/0335-http-over-didcomm/#implementations","title":"Implementations","text":"

    The following lists the implementations (if any) of this RFC. Please do a pull request to add your implementation. If the implementation is open source, include a link to the repo or to the implementation within the repo. Please be consistent in the \"Name\" field so that a mechanical processing of the RFCs can generate a list of all RFCs supported by an Aries implementation.

    Implementation Notes may need to include a link to test results.

    Name / Link Implementation Notes"},{"location":"features/0347-proof-negotiation/","title":"Aries RFC 0347: Proof Negotiation","text":""},{"location":"features/0347-proof-negotiation/#summary","title":"Summary","text":"

    This RFC proposes an extension to Aries RFC 0037: Present Proof Protocol 1.0 by taking the concept of groups out of the DID credential manifest and including them in the present proof protocol. In addition to the rules described in the credential manifest, an option to provide alternative attributes with a weight is introduced here. Also, the possibility of including not only attributes but also credentials and openid entries in a proof by using a \"type\" field was taken from the DID credential manifest. The goal is to make proof presentation more flexible, allowing attributes to be required or optional as well as allowing a choose-from-a-list scenario. Until now, a proof request had to be answered with a proof response containing all attributes listed in the request. This RFC adds a way to mark attributes as optional, so that a wallet can communicate them to its user as nice-to-have.

    "},{"location":"features/0347-proof-negotiation/#motivation","title":"Motivation","text":"

    We see a need in corporate identity and access management for a login process that handles not only user authentication against an application, but also determines which privileges the user is granted inside the application and which data the user must or may provide. Aries can provide this by combining a proof request with proof negotiation.

    "},{"location":"features/0347-proof-negotiation/#use-case-example","title":"Use Case Example","text":"

    A bank needs a customer to prove they are credit-worthy using Aries-based Self-Sovereign Identity. For this, the bank wants to make the proof of credit-worthiness flexible, in that an identity owner can offer different sets and combinations of credentials. For instance, this can be a choice between a certificate of credit-worthiness from another trusted bank or, alternatively, a set of credentials proving ownership of real estate and a large fortune in a bank account. Optionally, an identity owner can add certain credentials to the proof to further prove credit-worthiness in order to obtain larger loans.

    "},{"location":"features/0347-proof-negotiation/#tutorial","title":"Tutorial","text":"

    A proof request sent to an identity owner defines the attributes to be included in the proof response, i.e. the ones to prove. To add a degree of flexibility to the process, it is possible to request attributes as necessary (meaning they have to be included in the response for it to be valid) or to allow the identity owner to pick one or several attributes from a list. Furthermore, attributes can be marked as optional. For users, this procedure may look like the example of a privacy-friendly access permission process shown in the manifesto of Okuna, an open-source social network that is still in development at the time of this writing (click on \"continue with Okuna\" to see said example). Backend-wise, this may be implemented as follows:

    "},{"location":"features/0347-proof-negotiation/#proof-request-with-attribute-negotiation","title":"Proof Request with attribute negotiation","text":"

    This feature can be implemented building on top of the credential manifest developed by the Decentralized Identity Foundation. One feature the above concept lacks is a way of assigning a weight to attributes within the category \"one of\". Future implementations using this concept may want to prefer certain attributes over others in the same group if both are given, so it should be possible to assign such priorities to attributes. Below is the above example of a proof request to which a rule \"pick_weighted\" and a group D were added. Furthermore, the categories \"groups_required\" and \"groups_optional\" were added to differentiate between required and optional attributes, which the credential manifest does not support.
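    How a holder might resolve a \"pick_weighted\" group can be sketched as follows. This is a hypothetical helper; the weight values and the second schema URI are illustrative assumptions, not taken from the RFC.

    ```python
    def resolve_pick_weighted(group, available_schemas, count=1):
        # From the group's requested inputs that the holder can satisfy,
        # prefer the ones with the highest weight.
        usable = [inp for inp in group if inp["schema"] in available_schemas]
        usable.sort(key=lambda inp: inp.get("weight", 0), reverse=True)
        return usable[:count]

    # Illustrative group D: two alternatives with different weights
    # (the second schema URI is invented for this sketch).
    group_d = [
        {"schema": "https://some.login.com/someattribute", "weight": 0.8},
        {"schema": "https://other.example.com/otherattribute", "weight": 0.5},
    ]
    chosen = resolve_pick_weighted(group_d, {
        "https://some.login.com/someattribute",
        "https://other.example.com/otherattribute",
    })
    ```

    With both alternatives available, the holder picks the single highest-weight input, satisfying the group's count of 1.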

    Example of a proof presentation request (from verifier):

    {\n    \"@type\": \"https://didcomm.org/present-proof/1.0/request-presentation\",\n    \"@id\": \"98fd8d82-81a6-4409-acc2-c35ea39d0f28\",\n    \"comment\": \"some comment\",\n    \"request_presentations~attach\": [\n        {\n            \"@id\": \"libindy-request-presentation-0\",\n            \"mime-type\": \"application/json\",\n            \"data\":  {\n                \"base64\": \"<yaml-formatted string describing attachments, base64url-encoded because of libindy>\"\n            }\n        }\n    ]\n}\n
    The base64url-encoded content above decodes to the following data structure, a presentation preview:
    {\n    \"@type\": \"https://didcomm.org/present-proof/1.0/presentation-preview\",\n    \"@context\": \"https://path.to/schemas/credentials\",\n    \"comment\":\"some comment\",\n    \"~thread\": {\n        \"thid\": \"98fd8d82-81a6-4409-acc2-c35ea39d0f28\",\n        \"sender_order\": 0\n    },\n    \"credential\":\"proof_request\", // verifiable claims elements\n    \"groups_required\": [ // these groups are the key feature to this RFC\n            {\n                \"rule\":\"all\",\n                \"from\": [\"A\", \"B\"]\n            },\n            {\n                \"rule\": \"pick\",\n                \"count\": 1,\n                \"from\": [\"C\"]\n            },\n            {\n                \"rule\": \"pick_weighted\",\n                \"count\": 1,\n                \"from\": [\"D\"]\n            }\n        ],\n        \"groups_optional\": [\n            {\n                \"rule\": \"all\",\n                \"from\": [\"D\"]\n            }\n        ],\n    \"inputs\": [\n        {\n            \"type\": \"data\",\n            \"name\": \"routing_number\",\n            \"group\": [\"A\"],\n            \"cred_def_id\": \"<cred_def_id>\",\n            // \"mime-type\": \"<mime-type>\" is missing, so this defaults to a json-formatted string; if it was non-null, 'value' would be interpreted as a base64url-encoded string representing a binary BLOB with mime-type telling how to interpret it after base64url-decoding\n            \"value\": {\n                \"type\": \"string\",\n                \"maxLength\": 9\n            }\n        },\n        {\n            \"type\": \"data\",\n            \"name\": \"account_number\",\n            \"group\": [\"A\"], \n            \"cred_def_id\": \"<cred_def_id>\",\n            \"value\": {\n                \"type\": \"string\",\n                \"value\": \"12345678\"\n            }\n        },\n        {\n            \"type\": \"data\",\n            \"name\": \"current_residence_duration\",\n            \"group\": [\"A\"],\n  
          \"cred_def_id\": \"<cred_def_id>\",\n            \"value\": {\n                \"type\": \"number\",\n                \"maximum\": 150\n            }\n        },\n        {\n            \"type\": \"credential\",\n            \"group\": [\"C\"],\n            \"schema\": \"https://eu.com/claims/IDCard\",\n            \"constraints\": {\n                \"subset\": [\"prop1\", \"prop2.foo.bar\"],\n                \"issuers\": [\"did:foo:gov1\", \"did:bar:gov2\"]\n            }\n        },\n        {\n            \"type\": \"credential\",\n            \"group\": [\"C\"],\n            \"schema\": \"hub://did:foo:123/Collections/schema.us.gov/Passport\",\n            \"constraints\": {\n                \"issuers\": [\"did:foo:gov1\", \"did:bar:gov2\"]\n            }\n        },\n        {\n            \"type\": \"credential\",\n            \"group\": [\"B\"],\n            \"schema\": [\"https://claims.linkedin.com/WorkHistory\", \"https://about.me/WorkHistory\"],\n            \"constraints\": {\n                \"issuers\": [\"did:foo:auditor1\", \"did:bar:auditor2\"]\n            }\n        },\n        {\n            \"type\": \"credential\",\n            \"group\": [\"B\"],\n            \"schema\": \"https://claims.fico.org/CreditHistory\",\n            \"constraints\": {\n                \"issuers\": [\"did:foo:bank1\", \"did:bar:bank2\"]\n            }\n        },\n        {\n            \"type\": \"openid\",\n            \"group\": [\"A\"],\n            \"redirect\": \"https://login.microsoftonline.com/oauth/\",\n            \"parameters\": {\n                \"client_id\": \"dhfiuhsdre\",\n                \"scope\": \"openid+profile\"\n            }\n        },\n        {\n            \"type\": \"credential\",\n            \"group\": [\"D\"],\n            \"schema\": \"https://some.login.com/someattribute\",\n            \"constraints\": {\n                \"issuers\": [\"did:foo:iss1\", \"did:foo:iss2\"]\n            },\n            
\"weight\": 0.8\n        },\n        {\n            \"type\": \"credential\",\n            \"group\": [\"D\"],\n            \"schema\": \"https://some.otherlogin.com/someotherattribute\",\n            \"constraints\": {\n                \"issuers\": [\"did:foox:iss1\", \"did:foox:iss2\"]\n            },\n            \"weight\": 0.2\n        }\n    ],\n    \"predicates\": [\n        {\n            \"name\": \"<attribute_name>\",\n            \"cred_def_id\": \"<cred_def_id>\",\n            \"predicate\": \"<predicate>\",\n            \"threshold\": <threshold>\n        }\n    ]\n}\n

    "},{"location":"features/0347-proof-negotiation/#valid-proof-response-with-attribute-negotiation","title":"Valid Proof Response with attribute negotiation","text":"

    The following data structure is an example of a valid answer to the above credential request. It contains all attributes from groups A and B as well as one credential each from groups C and D. Note that the provided credential from group D is the one weighted 0.2, as the owner did not have, or was not willing to provide, the one weighted 0.8.

    Valid proof presentation:

    {\n    \"@type\": \"https://didcomm.org/present-proof/1.0/proof-presentation\",\n    \"@id\": \"98fd8d82-81a6-4409-acc2-c35ea39d0f28\",\n    \"comment\": \"some comment\",\n    \"presentations~attach\": [\n        {\n            \"@id\": \"libindy-presentation-0\",\n            \"mime-type\": \"application/json\",\n            \"data\": {\n                \"base64\": \"<yaml-formatted string describing attachments, base64url-encoded because of libindy>\"\n            }\n        }\n    ]\n}\n
    The base64url-encoded content above would decode to this data:
    {\n    \"@type\": \"https://didcomm.org/present-proof/1.0/presentation-preview\",\n    \"@context\": \"https://path.to/schemas/credentials\",\n    \"comment\":\"some comment\",\n    \"~thread\": {\n        \"thid\": \"98f38d22-71b6-4449-adc2-c33ea39d1f29\",\n        \"sender_order\": 1,\n        \"received_orders\": {\"did:sov:abcxyz\": 1}\n    },\n    \"credential\":\"proof_response\", // verifiable claims elements\n    \"inputs_provided\": [\n        {\n            \"type\": \"data\",\n            \"field\": \"routing_number\",\n            \"value\": \"123456\"\n        },\n        {\n            \"type\": \"data\",\n            \"field\": \"account_number\",\n            \"value\": \"12345678\"\n        },\n        {\n            \"type\": \"data\",\n            \"field\": \"current_residence_duration\",\n            \"value\": 8\n        },\n        {\n            \"type\": \"credential\",\n            \"schema\": [\"https://claims.linkedin.com/WorkHistory\", \"https://about.me/WorkHistory\"],\n            \"issuer\": \"did:foo:auditor1\"\n        },\n        {\n            \"type\": \"credential\",\n            \"schema\": \"https://claims.fico.org/CreditHistory\",\n            \"issuer\": \"did:foo:bank1\"\n        },\n        {\n            \"type\": \"openid\",\n            \"redirect\": \"https://login.microsoftonline.com/oauth/\",\n            \"client_id\": \"dhfiuhsdre\",\n            \"profile\": \"...\"\n        },\n        {\n            \"type\": \"credential\",\n            \"schema\": \"https://eu.com/claims/IDCard\",\n            \"issuer\": \"did:foo:gov1\"\n        },\n        {\n            \"type\": \"credential\",\n            \"group\": [\"D\"],\n            \"schema\": \"https://some.otherlogin.com/someotherattribute\",\n            \"issuer\": \"did:foox:iss1\"\n        }\n    ],\n    \"predicates\": [ // empty in this case\n    ]\n}\n
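    As a rough illustration of how a verifier might check a response against the group rules of the request above, consider the following sketch. The function name `groups_satisfied` and the group-to-count representation are assumptions for illustration only, not part of the RFC; the "all" rule is simplified here to require at least one supplied input per listed group.

```python
def groups_satisfied(groups_required, provided_counts):
    """Check a response against the groups_required rules of a request.

    provided_counts maps a group name to the number of inputs the prover
    supplied for that group (a simplification of full input matching).
    """
    for rule in groups_required:
        supplied = sum(provided_counts.get(g, 0) for g in rule["from"])
        if rule["rule"] == "all":
            # simplified: every group listed in "from" must be represented
            if any(provided_counts.get(g, 0) == 0 for g in rule["from"]):
                return False
        elif rule["rule"] in ("pick", "pick_weighted"):
            # at least "count" inputs across the listed groups
            if supplied < rule["count"]:
                return False
    return True

# The rules from the example request above:
rules = [
    {"rule": "all", "from": ["A", "B"]},
    {"rule": "pick", "count": 1, "from": ["C"]},
    {"rule": "pick_weighted", "count": 1, "from": ["D"]},
]
```

    Against these rules, the example response above (inputs from A and B, one credential from C, and one from D) would satisfy all required groups.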

    "},{"location":"features/0347-proof-negotiation/#reference","title":"Reference","text":"

    The \"@id\" tag and thread decorator in the above JSON messages are taken from RFC 0008.

    "},{"location":"features/0347-proof-negotiation/#drawbacks","title":"Drawbacks","text":"

    Requiring the user to choose from a list of credentials each time a proof request with a \"pick_one\"-rule arrives may be unpopular, as the process requires a significant amount of user interaction and, thereby, time. This could be mitigated by an 'optional'-rule which requests all of the options the 'pick one'-rule offers. Wallets can then offer two pre-settings: \"privacy first\", which shares as little data as possible at the cost of more user interaction, and \"usability first\", which automatically selects the 'optional'-rule and sends more data without asking the user every time. The example dialog from the Okuna manifesto referred to before shows a great way to implement this: it offers the user the most privacy-friendly option by default (which is what the GDPR requires) or the providing of optional data. Furthermore, the optional data can be customized to include or exclude specific data.

    "},{"location":"features/0347-proof-negotiation/#rationale-and-alternatives","title":"Rationale and alternatives","text":"

    Not implementing proof negotiation would mean that Aries-based distributed ledgers would be limited to a binary yes-or-no approach to authentication and authorization of a user, while proof negotiation adds flexibility. An alternative way of implementing proof negotiation is performing it ahead of the proof request in a separate request and response. Without this feature, a proof request may need to be repeated over and over with a different list of requested attributes each time, until a list is transferred to which the specific user can reply. That process would be unnecessarily complicated and is simplified by implementing the concept described here.

    "},{"location":"features/0347-proof-negotiation/#prior-art","title":"Prior art","text":"

    RFC0037-present-proof is the foundation on which this RFC builds, using groups from the credential manifest by the Decentralized Identity Foundation, a \"format that normalizes the definition of requirements for the issuance of a credential\".

    "},{"location":"features/0347-proof-negotiation/#unresolved-questions","title":"Unresolved questions","text":""},{"location":"features/0347-proof-negotiation/#implementations","title":"Implementations","text":"Name / Link Implementation Notes"},{"location":"features/0348-transition-msg-type-to-https/","title":"Aries RFC 0348: Transition Message Type to HTTPs","text":""},{"location":"features/0348-transition-msg-type-to-https/#summary","title":"Summary","text":"

    Per issue #225, the Aries community has agreed to change the prefix for protocol message types that currently use did:sov:BzCbsNYhMrjHiqZDTUASHg;spec/ to use https://didcomm.org/. Examples of the two message type forms are:

    This RFC follows the guidance in RFC 0345 about community-coordinated updates to (try to) ensure that independently deployed, interoperable agents remain interoperable throughout this transition.

    The transition from the old to new formats will occur in four steps:

    Note: Any RFCs that already use the new \"https\" message type should continue to use the new format in all cases\u2014accepting and sending. New protocols defined in new and updated RFCs should use the new \"https\" format.

    The community coordination triggers between the steps above will be as follows:

    "},{"location":"features/0348-transition-msg-type-to-https/#motivation","title":"Motivation","text":"

    To enable agent builders to independently update their code bases and deployed agents while maintaining interoperability.

    "},{"location":"features/0348-transition-msg-type-to-https/#tutorial","title":"Tutorial","text":"

    The general mechanism for this type of transition is documented in RFC 0345 about community-coordinated updates.

    The specific sequence of events to make this particular transition is outlined in the summary section of this RFC.

    "},{"location":"features/0348-transition-msg-type-to-https/#reference","title":"Reference","text":"

    See the summary section of this RFC for the details of this transition.

    "},{"location":"features/0348-transition-msg-type-to-https/#drawbacks","title":"Drawbacks","text":"

    None identified.

    "},{"location":"features/0348-transition-msg-type-to-https/#rationale-and-alternatives","title":"Rationale and alternatives","text":"

    This approach balances the speed of adoption with the need for independent deployment and interoperability.

    "},{"location":"features/0348-transition-msg-type-to-https/#prior-art","title":"Prior art","text":"

    The approach outlined in RFC 0345 about community-coordinated updates is a well-known pattern for using deprecation to make breaking changes in an ecosystem. That said, this is the first attempt to use this approach in Aries. Adjustments to the transition plan will be made as needed, and RFC 0345 will be updated based on lessons learned in executing this plan.

    "},{"location":"features/0348-transition-msg-type-to-https/#unresolved-questions","title":"Unresolved questions","text":""},{"location":"features/0348-transition-msg-type-to-https/#implementations","title":"Implementations","text":"

    The following table lists the status of various agent code bases and deployments with respect to the steps of this transition. Agent builders MUST update this table as they complete steps of the transition.

    Name / Link Implementation Notes Aries Protocol Test Suite No steps completed Aries Toolbox Completed Step 1 code change. Aries Framework - .NET Completed Step 1 code change Trinsic.id No steps completed Aries Cloud Agent - Python Completed Step 1 code change Aries Static Agent - Python No steps completed Aries Framework - Go Completed Step 2 Connect.Me No steps completed Verity No steps completed Pico Labs Completed Step 2 even though deprecated IBM Completed Step 1 code change IBM Agent Completed Step 1 Aries Cloud Agent - Pico Completed Step 2 code change Aries Framework JavaScript Completed Step 2 code change"},{"location":"features/0351-purpose-decorator/","title":"Aries RFC 0351: Purpose Decorator","text":""},{"location":"features/0351-purpose-decorator/#summary","title":"Summary","text":"

    This RFC allows Aries agents to serve as mediators or relays for applications that don't use DIDComm. It introduces: - A new decorator, the ~purpose decorator, which defines the intent, usage, or contents of a message - A means for a recipient, who is not DIDComm-enabled, to register with an agent for messages with a particular purpose - A means for a sender, who is not DIDComm-enabled, to send messages with a given purpose through its agent to a target agent - Guidance for creating a protocol which uses the ~purpose decorator to relay messages over DIDComm for non-DIDComm applications

    "},{"location":"features/0351-purpose-decorator/#motivation","title":"Motivation","text":"

    This specification allows applications that aren't Aries agents to communicate JSON messages over DIDComm using Aries agents analogously to mediators. Any agent which implements this protocol can relay arbitrary new types of message for clients - without having to be updated and redeployed.

    The purpose decorator can be used to implement client interfaces for Aries agents. For example: - A client application built using an Aries framework can use the purpose decorator for client-level messaging and protocols - Multiple client applications can connect to an agent, for example to process different types of messages, or to log for auditing purposes - A server with a remote API can include an Aries agent using the purpose decorator to provide a remote API over DIDComm - Multiple client applications can use a single agent to perform transactions on the agent owner's identity

    "},{"location":"features/0351-purpose-decorator/#tutorial","title":"Tutorial","text":"

    This RFC assumes familiarity with mediators and relays, attachments, and message threading.

    "},{"location":"features/0351-purpose-decorator/#the-purpose-decorator","title":"The ~purpose Decorator","text":"

    The ~purpose decorator is a JSON array which describes the semantics of a message - the role it plays within a protocol implemented using this RFC, for example, or the meaning of the data contained within. The purpose is the mechanism for determining which recipient(s) should be sent a message.

    Example: \"~purpose\": [\"team:01453\", \"delivery\", \"geotag\", \"cred\"]

    Each element of the purpose array is a string. An agent provides some means for recipients to register on a purpose, or class of purposes, by indicating the particular string values they are interested in.

    The particular registration semantics are TBD. Some possible formats include: - A tagging system, where if a recipient registers on a list \"foo\", \"bar\", it will be forwarded messages with purposes [\"foo\", \"quux\"] and [\"baz\", \"bar\"] - A hierarchical system, where if a recipient registers on a list \"foo\", \"bar\", it will receive any message with purpose [\"foo\", \"bar\", ...] but not [\"foo\", \"baz\", ...] or [\"baz\", \"foo\", \"bar\", ...] - A hierarchical system with wildcards: \"*\", \"foo\" might match any message with purpose [..., \"foo\", ...]
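    The candidate matching semantics above could be prototyped roughly as follows. This is a non-normative sketch (the semantics are explicitly TBD); the function names `tag_match` and `prefix_match` are illustrative inventions:

```python
def tag_match(registered, purpose):
    # Tagging system: any shared string between the registered list
    # and the message's ~purpose array is a match.
    return bool(set(registered) & set(purpose))

def prefix_match(registered, purpose):
    # Hierarchical system: the registered list must be a prefix of the
    # purpose array; "*" is treated as a single-element wildcard.
    if len(registered) > len(purpose):
        return False
    return all(r == "*" or r == p for r, p in zip(registered, purpose))
```

    Under these sketches, a registration on `["foo", "bar"]` tag-matches `["foo", "quux"]` and `["baz", "bar"]`, while the prefix rule matches `["foo", "bar", ...]` but not `["foo", "baz", ...]` or `["baz", "foo", "bar", ...]`, as described above.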

    "},{"location":"features/0351-purpose-decorator/#handling-multiple-listeners","title":"Handling Multiple Listeners","text":""},{"location":"features/0351-purpose-decorator/#priority","title":"Priority","text":"

    When multiple applications register for overlapping purposes, the agent needs a means to determine which application should receive the message, or which should receive it first. When an application registers on a purpose, it should set an integer priority. When the agent receives a message, it compares the priority of all matching listeners, choosing the lowest number value.

    "},{"location":"features/0351-purpose-decorator/#fall-through","title":"Fall-Through","text":"

    In some cases, an application that received a message can allow other listeners to process after it. In these cases, when the application is handling the message, it can indicate to the agent that it can fall-through, in which case the agent will provide the message to the next listener.

    Optionally, agents can support an always-falls-through configuration, for applications which: - Will always fall through on the messages they receive, and - Can always safely process concurrently with subsequent applications handling the same message.

    This allows the agent to send the message to such listeners concurrently with the next highest-priority listener that does not always-fall-through.
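    Putting priority and fall-through together, a dispatch loop might look like the following sketch. The `Listener` type and `dispatch` function are assumptions for illustration; a real agent could run always-falls-through listeners concurrently, whereas this sketch is sequential:

```python
from dataclasses import dataclass
from typing import Callable, List, Set

@dataclass
class Listener:
    purposes: Set[str]
    priority: int              # lower value = handled first
    handler: Callable          # returns True to fall through
    always_falls_through: bool = False

def dispatch(listeners: List[Listener], message: dict) -> None:
    # Match listeners whose registered purposes overlap the message's
    # ~purpose array (the "tagging system" option), then deliver in
    # priority order, stopping at the first handler that neither
    # falls through nor is marked always-falls-through.
    matched = [l for l in listeners if l.purposes & set(message["~purpose"])]
    for listener in sorted(matched, key=lambda l: l.priority):
        fall_through = listener.handler(message) or listener.always_falls_through
        if not fall_through:
            break
```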

    "},{"location":"features/0351-purpose-decorator/#example-protocol","title":"Example Protocol","text":"

    This is an example protocol which makes use of the ~purpose decorator and other Aries concepts to provide a message format that can carry arbitrary payloads for non-DIDComm edge applications.

    "},{"location":"features/0351-purpose-decorator/#key-concepts","title":"Key Concepts","text":"

    This RFC allows messages to be sent over DIDComm by applications that are not DIDComm-enabled, by using Aries agents as intermediaries. Both the sender and the recipient can be non-DIDComm applications.

    "},{"location":"features/0351-purpose-decorator/#non-didcomm-sender","title":"Non-DIDComm Sender","text":"

    If the sender of the message is not a DIDComm-enabled agent, then it must rely on an agent as a trusted intermediary. This agent is assumed to have configured settings for message timing, communication endpoints, etc.

    1. The sender constructs a JSON message, and provides this to its agent, alongside specifying the purpose, and likely some indication of the destination of the message.
    2. The agent determines the recipient agent - this could be by logic, for example, based on the purpose decorator, or a DID specified by the sender.
    3. The agent wraps the sender's message and purpose in a DIDComm message, and sends it to the recipient agent.
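    Step 3 above amounts to a simple wrapping operation on the sender's agent. A minimal sketch, reusing the envelope from the example protocol's message format (the function name `wrap_for_didcomm` is an illustrative invention):

```python
import uuid

def wrap_for_didcomm(payload: dict, purpose: list) -> dict:
    # Wrap a non-DIDComm sender's JSON payload and purpose in the
    # envelope of this RFC's example protocol; the @type URI is the
    # placeholder value used in the RFC's examples.
    return {
        "@id": str(uuid.uuid4()),
        "@type": "https://example.org/didcomm-message",
        "~purpose": purpose,
        "data": payload,
    }
```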
    "},{"location":"features/0351-purpose-decorator/#non-didcomm-recipient","title":"Non-DIDComm Recipient","text":"

    A non-DIDComm recipient relies on trusted agents to relay messages to it, and can register with any number of agents for any number of purposes.

    1. The recipient registers with a trusted agent on certain purpose values.
    2. The agent receives a DIDComm message, and sees it has a purpose decorator.
    3. The agent looks through its recipient registry for all recipients which registered on a matching purpose.
    4. The agent reverses the wrapping done by the sender agent and forwards the unwrapped message to all matching registered recipients.
    "},{"location":"features/0351-purpose-decorator/#message-format","title":"Message Format","text":"

    A DIDComm message, for a protocol implemented using this RFC, requires: - A means to wrap the payload message - A ~purpose decorator

    This example protocol wraps the message within the data field.

    {\n  \"@id\": \"123456789\",\n  \"@type\": \"https://example.org/didcomm-message\",\n  \"~purpose\": [],\n  \"data\" : {}\n}\n

    For example:

    {\n  \"@id\": \"123456789\",\n  \"@type\": \"https://example.org/didcomm-message\",\n  \"~purpose\": [\"metrics\", \"latency\"],\n  \"data\": {\"mean\": 346457, \"median\": 2344}\n}\n

    "},{"location":"features/0351-purpose-decorator/#reference","title":"Reference","text":"

    This section provides guidance for implementing protocols using this decorator.

    "},{"location":"features/0351-purpose-decorator/#threading-timing","title":"Threading & Timing","text":"

    If a protocol implemented using this RFC requires back and forth communication, use the ~thread decorator and transport return routing. This allows the recipient's agent to relay replies from the recipient to the sender.

    For senders and recipients that aren't Aries agents, their respective agents must maintain context to correlate the DIDComm message thread with the message thread of the communication protocol used with the non-DIDComm application.

    If a message is threaded, it can be useful to include a ~timing decorator for timing information. The sender's agent can construct this decorator from timing parameters (e.g., timeout) in the communication channel with the sender, or from preconfigured settings.

    "},{"location":"features/0351-purpose-decorator/#communication-with-non-didcomm-edge-applications","title":"Communication with Non-DIDComm Edge Applications","text":"

    An organization using agents to relay messages for non-DIDComm edge applications is expected to secure the connections between their relay agents and their non-DIDComm edge applications, for example by running the agent as a service in the same container. If it is necessary for the organization to have a separate endpoint or mediator agent, it is recommended to place a thin relay agent as close as possible to the edge application, so internal messages sent to the mediator are also secured by DIDComm.

    "},{"location":"features/0351-purpose-decorator/#drawbacks","title":"Drawbacks","text":"

    TODO

    "},{"location":"features/0351-purpose-decorator/#rationale-and-alternatives","title":"Rationale and alternatives","text":""},{"location":"features/0351-purpose-decorator/#prior-art","title":"Prior art","text":""},{"location":"features/0351-purpose-decorator/#unresolved-questions","title":"Unresolved questions","text":""},{"location":"features/0351-purpose-decorator/#implementations","title":"Implementations","text":"

    The following lists the implementations (if any) of this RFC. Please do a pull request to add your implementation. If the implementation is open source, include a link to the repo or to the implementation within the repo. Please be consistent in the \"Name\" field so that a mechanical processing of the RFCs can generate a list of all RFCs supported by an Aries implementation.

    Implementation Notes may need to include a link to test results.

    Name / Link Implementation Notes"},{"location":"features/0360-use-did-key/","title":"Aries RFC 0360: did:key Usage","text":""},{"location":"features/0360-use-did-key/#summary","title":"Summary","text":"

    A number of RFCs that have been defined reference what amounts to a \"naked\" public key, such that the sender relies on the receiver knowing what type the key is and how it can be used. The application of this RFC will result in the replacement of \"naked\" verkeys (public keys) in some DIDComm/Aries protocols with the did:key ledgerless DID method, a format that concisely conveys useful information about the use of the key, including the public key type. While did:key is less a DID method than a transformation from a public key and type to an opinionated DIDDoc, it provides a versioning mechanism for supporting new/different cryptographic formats and its use makes clear how a public key is intended to be used. The method also enables support for using standard DID resolution mechanisms that may simplify the use of the key. The use of a DID to represent a public key is seen as odd by some in the community. Should a representation be found that has better properties than a plain public key but is constrained to being \"just a key\", then we will consider changing from the did:key representation.

    To Do: Update link DID Key Method link (above) from Digital Bazaar to W3C repositories when they are created and populated.

    While it is well known in the Aries community that did:key is fundamentally different from the did:peer method that is the basis of Aries protocols, it must be re-emphasized here. This RFC does NOT imply any changes to the use of did:peer in Aries, nor does it change the content of a did:peer DIDDoc. This RFC only changes references to plain public keys in the JSON of some RFCs to use did:key in place of a plain text string.

    Should this RFC be ACCEPTED, a community coordinated update will be used to apply updates to the agent code bases and impacted RFCs.

    "},{"location":"features/0360-use-did-key/#motivation","title":"Motivation","text":"

    When one Aries agent inserts a public key into the JSON of an Aries message (for example, the ~service decorator), it assumes that the recipient agent will use the key in the intended way. At the time this RFC is being written, this is easy because only one key type is in use by all agents. However, in order to enable the use of different cryptography algorithms, the public key references must be extended to at least include the key type. The preferred and concise way to do that is the use of the multicodec mechanism, which provides a registry of encodings for known key types that are prefixed to the public key in a standard and concise way. did:key extends that mechanism by providing a templated way to transform the combination of public key and key type into a DID-standard DIDDoc.

    At the cost of adding/building a did:key resolver we get a DID standard way to access the key and key type, including specific information on how the key can be used. The resolver may be trivial or complex. In a trivial version, the key type is assumed, and the key can be easily extracted from the string. In a more complete implementation, the key type can be checked, and standard DID URL handling can be used to extract parts of the DIDDoc for specific purposes. For example, in the ed25519 did:key DIDDoc, the existence of the keyAgreement entry implies that the key can be used in a Diffie-Hellman exchange, without the developer guessing, or using the key incorrectly.

    Note that simply knowing the key type is not necessarily sufficient to be able to use the key. The cryptography for processing data with the key must also be available in the agent. However, the multicodec and did:key capabilities will simplify adding support for new key types in the future.

    "},{"location":"features/0360-use-did-key/#tutorial","title":"Tutorial","text":"

    An example of the use of the replacement of a verkey with did:key can be found in the ~service decorator RFC. Notably in the example at the beginning of the tutorial section, the verkeys in the recipientKeys and routingKeys items would be changed from native keys to use did:key as follows:

    {\n    \"@type\": \"somemessagetype\",\n    \"~service\": {\n        \"recipientKeys\": [\"did:key:z6MkmjY8GnV5i9YTDtPETC2uUAW6ejw3nk5mXF5yci5ab7th\"],\n        \"routingKeys\": [\"did:key:z6MkmjY8GnV5i9YTDtPETC2uUAW6ejw3nk5mXF5yci5ab7th\"],\n        \"serviceEndpoint\": \"https://example.com/endpoint\"\n    }\n}\n

    Thus, 8HH5gYEeNc3z7PYXmd54d4x6qAfCNrqQqEB3nS7Zfu7K becomes did:key:z6MkmjY8GnV5i9YTDtPETC2uUAW6ejw3nk5mXF5yci5ab7th using the following transformations:
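    That transformation can be sketched in a few lines of Python. This is illustrative only (the did:key specification remains the definitive source); it hand-rolls a base58btc codec to stay self-contained and assumes a 32-byte ed25519 verkey, whose multicodec prefix is the varint 0xed 0x01:

```python
B58 = "123456789ABCDEFGHJKLMNPQRSTUVWXYZabcdefghijkmnopqrstuvwxyz"

def b58decode(s: str) -> bytes:
    n = 0
    for c in s:
        n = n * 58 + B58.index(c)
    return n.to_bytes(32, "big")  # ed25519 verkeys are 32 bytes

def b58encode(b: bytes) -> str:
    n = int.from_bytes(b, "big")
    out = ""
    while n:
        n, r = divmod(n, 58)
        out = B58[r] + out
    return out

def verkey_to_did_key(verkey: str) -> str:
    # 1. base58-decode the raw verkey,
    # 2. prepend the multicodec varint for ed25519-pub (0xed 0x01),
    # 3. base58btc-encode and add the multibase prefix "z".
    return "did:key:z" + b58encode(b"\xed\x01" + b58decode(verkey))
```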

    The transformation above is only for illustration within this RFC. The did:key specification is the definitive source for the appropriate transformations.

    The did:key method uses the strings that are the DID, public key and key type to construct (\"resolve\") a DIDDoc based on a template defined by the did:key specification. Further, the did:key resolver generates, in the case of an ed25519 public signing key, a key that can be used as part of a Diffie-Hellman exchange appropriate for encryption in the keyAgreement section of the DIDDoc. Presumably, as the did:key method supports other key types, similar DIDDoc templates will become part of the specification. Key types that don't support a signing/key exchange transformation would not have a keyAgreement entry in the resolved DIDDoc.

    The following currently implemented RFCs would be affected by acceptance of this RFC. In these RFCs, the JSON items that currently contain naked public keys (mostly the items recipientKeys and routingKeys) would be changed to use did:key references where applicable. Note that in these items public DIDs could also be used if applicable for a given use case.

    Service entries in did:peer DIDDocs (such as in RFCs 0094-cross-domain-messaging and 0067-didcomm-diddoc-conventions) should NOT use a did:key public key representation. Instead, service entries in the DIDDoc should reference keys defined internally in the DIDDoc where appropriate.

    To Do: Discuss the use of did:key (or not) in the context of encryption envelopes. This will be part of the ongoing discussion about JWEs and the upcoming discussions about JWMs\u2014a soon-to-be-proposed specification. That conversation will likely go on in the DIF DIDComm Working Group.

    "},{"location":"features/0360-use-did-key/#reference","title":"Reference","text":"

    See the did:key specification. Note that the specification is still evolving.

    "},{"location":"features/0360-use-did-key/#drawbacks","title":"Drawbacks","text":"

    The did:key standard is not finalized.

    The DIDDoc \"resolved\" from a did:key probably has more entries in it than are needed for DIDComm. That said, the entries in the DIDDoc make it clear to a developer how they can use the public key.

    "},{"location":"features/0360-use-did-key/#rationale-and-alternatives","title":"Rationale and alternatives","text":"

    We should not stick with the status quo and assume that all agents will always know the type of keys being used and how to use them.

    We should at minimum move to a scheme like multicodecs such that the key is self-documenting and supports the versioning of cryptographic algorithms. However, even if we do that, we still have to document for developers how they should (and should not) use the public key.

    Another logical alternative is to use a JWK. However, that representation only adds the type of the key (same as multicodecs) at the cost of being significantly more verbose.

    "},{"location":"features/0360-use-did-key/#prior-art","title":"Prior art","text":"

    To do - there are other instances of this pattern being used. Insert those here.

    "},{"location":"features/0360-use-did-key/#unresolved-questions","title":"Unresolved questions","text":""},{"location":"features/0360-use-did-key/#implementations","title":"Implementations","text":"

    The following lists the implementations (if any) of this RFC. Please do a pull request to add your implementation. If the implementation is open source, include a link to the repo or to the implementation within the repo. Please be consistent in the \"Name\" field so that a mechanical processing of the RFCs can generate a list of all RFCs supported by an Aries implementation.

    Implementation Notes

    Name / Link Implementation Notes"},{"location":"features/0418-rich-schema-encoding/","title":"Aries RFC 0418: Aries Rich Schema Encoding Objects","text":""},{"location":"features/0418-rich-schema-encoding/#summary","title":"Summary","text":"

    The introduction of rich schemas and their associated greater range of possible attribute value data types require correspondingly rich transformation algorithms. The purpose of the new encoding object is to specify the algorithm used to perform transformations of each attribute value data type into a canonical data encoding in a deterministic way.

    The initial use for these will be the transformation of attribute value data into 256-bit integers so that they can be incorporated into the anonymous credential signature schemes we use. The transformation algorithms will also allow for extending the cryptographic schemes and various sizes of canonical data encodings (256-bit, 384-bit, etc.). The transformation algorithms will allow for broader use of predicate proofs, and avoid hashed values as much as possible, as they do not support predicate proofs.

    Encoding objects are processed in a generic way defined in Rich Schema Objects Common.

    "},{"location":"features/0418-rich-schema-encoding/#motivation","title":"Motivation","text":"

    All attribute values to be signed in anonymous credentials must be transformed into 256-bit integers in order to support the Camenisch-Lysyanskaya signature scheme.

    The current methods for creating a credential only accept attributes which are encoded as 256-bit integers. The current possible source attribute types are integers and strings. No configuration method exists at this time to specify which transformation method will be applied to a particular attribute. All encoded attribute values rely on an implicit understanding of how they were encoded.

    The current set of canonical encodings consists of integers and hashed strings. The introduction of encoding objects allows for a means of extending the current set of canonical encodings to include integer representations of dates, lengths, boolean values, and floating point numbers. All encoding objects describe how an input is transformed into an encoding of an attribute value according to the transformation algorithm selected by the issuer.

    "},{"location":"features/0418-rich-schema-encoding/#tutorial","title":"Tutorial","text":""},{"location":"features/0418-rich-schema-encoding/#intro-to-encoding-objects","title":"Intro to Encoding Objects","text":"

    Encoding objects are JSON objects that describe the input types, transformation algorithms, and output encodings. The encoding object is stored on the ledger.

    "},{"location":"features/0418-rich-schema-encoding/#properties","title":"Properties","text":"

    Encoding's properties follow the generic template defined in Rich Schema Common.

    Encoding's content field is a JSON-serialized string with the following fields:

    "},{"location":"features/0418-rich-schema-encoding/#example-encoding","title":"Example Encoding","text":"

    An example of the content field of an Encoding object:

    {\n    \"input\": {\n        \"id\": \"DateRFC3339\",\n        \"type\": \"string\"\n    },\n    \"output\": {\n        \"id\": \"UnixTime\",\n        \"type\": \"256-bit integer\"\n    },\n    \"algorithm\": {\n        \"description\": \"This encoding transforms an\n            RFC3339-formatted datetime object into the number\n            of seconds since January 1, 1970 (the Unix epoch).\",\n        \"documentation\": URL to specific github commit,\n        \"implementation\": URL to implementation\n    },\n    \"testVectors\": URL to specific github commit\n}\n

    "},{"location":"features/0418-rich-schema-encoding/#transformation-algorithms","title":"Transformation Algorithms","text":"

    The purpose of a transformation algorithm is to deterministically convert a value into a different encoding. For example, an attribute value may be a string representation of a date, but the CL-signature signing mechanism requires all inputs to be 256-bit integers. The transformation algorithm takes this string value as input, parses it, and encodes it as a 256-bit integer.
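    As a concrete illustration, the example Encoding above (DateRFC3339 input, UnixTime output) might be implemented as follows. This is a sketch only; the function name is an assumption, not part of the RFC:

    ```python
    from datetime import datetime

    def encode_datetime_rfc3339(value: str) -> int:
        """Transform an RFC3339 datetime string into the number of seconds
        since January 1, 1970 (the Unix epoch), as in the example Encoding."""
        # fromisoformat does not accept a literal 'Z' suffix before Python 3.11,
        # so normalize it to an explicit UTC offset first.
        dt = datetime.fromisoformat(value.replace("Z", "+00:00"))
        return int(dt.timestamp())
    ```

    Because the input carries an explicit offset, the result is deterministic regardless of the local time zone, which is what the verifier relies on when re-encoding revealed values.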

    It is anticipated that the encodings used for CL signatures and their associated transformation algorithms will be used primarily by two entities. First, the issuer will use the transformation algorithm to prepare credential values for signing. Second, the verifier will use the transformation algorithm to verify that revealed values were correctly encoded and signed, and to properly transform values against which predicates may be evaluated.

    "},{"location":"features/0418-rich-schema-encoding/#integer-representation","title":"Integer Representation","text":"

    In order to properly encode values as integers for use in predicate proofs, a common 256-bit integer representation is needed. Predicate proofs are kept simple by requiring all inputs to be represented as positive integers. To accomplish this, we introduce a zero-offset and map all integer results onto a range from 9 to 2^256 - 10. The zero point in this range is 2^255.

    Any transformation algorithm which outputs an integer value should use this representation.
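    A minimal sketch of this zero-offset mapping (constant and function names are assumptions, not part of the RFC):

    ```python
    ZERO_OFFSET = 1 << 255            # the zero point of the 256-bit range
    MIN_VALID = 9                     # smallest valid encoded integer
    MAX_VALID = (1 << 256) - 10       # largest valid encoded integer

    def encode_integer(value: int) -> int:
        """Map a signed integer onto the positive range [9, 2^256 - 10]
        so it can be used directly in predicate proofs."""
        encoded = ZERO_OFFSET + value
        if not MIN_VALID <= encoded <= MAX_VALID:
            raise ValueError("integer outside the representable range")
        return encoded
    ```

    With this mapping, ordinary integer comparison of encoded values preserves the ordering of the original signed values.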

    "},{"location":"features/0418-rich-schema-encoding/#floating-point-representation","title":"Floating Point Representation","text":"

    In order to retain the provided precision of floating point values, we use Q number format, a binary, fixed-point number format. We use 64 fractional bits.
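    A sketch of the fixed-point step; combining it with the 2^255 zero offset from the previous section is an assumption made here for illustration:

    ```python
    FRACTIONAL_BITS = 64              # Q format with 64 fractional bits

    def encode_float_q(value: float) -> int:
        """Encode a float in Q fixed-point format (64 fractional bits),
        offset onto the positive 256-bit range with zero point 2^255."""
        fixed = round(value * (1 << FRACTIONAL_BITS))
        return (1 << 255) + fixed
    ```

    Scaling by 2^64 before rounding retains 64 bits of fractional precision while keeping the result an integer suitable for the signature scheme.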

    "},{"location":"features/0418-rich-schema-encoding/#reserved-values","title":"Reserved Values","text":"

    For integer and floating point representations, there are some reserved numeric strings which have a special meaning.
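    These reserved values sit at fixed positions at the extremes of the 256-bit range; written as constants (the names are illustrative, not normative):

    ```python
    NULL_VALUE   = 7                  # field value not supplied; not valid in comparisons
    NEG_INFINITY = 8                  # below every valid encoded integer
    INFINITY     = (1 << 256) - 9     # above every valid encoded integer
    NAN_VALUE    = (1 << 256) - 8     # floating point NaN; not valid in comparisons

    MIN_VALID = 9                     # range of ordinary encoded integers
    MAX_VALID = (1 << 256) - 10
    ```

    Placing the infinities just outside the valid range means plain integer comparison orders them correctly against every ordinary encoded value.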

    Special Value Representation Description -\u221e 8 The largest negative number. Always less than any other valid integer. \u221e 2^256 - 9 The largest positive number. Always greater than any other valid integer. NULL 7 Indicates that the value of a field is not supplied. Not a valid value for comparisons. NaN 2^256 - 8 Floating point NaN. Not a valid value for comparisons. reserved 1 to 6 Reserved for future use. reserved 2^256 - 7 to 2^256 - 1 Reserved for future use."},{"location":"features/0418-rich-schema-encoding/#documentation","title":"Documentation","text":"

    The value of the documentation field is intended to be a URL which, when dereferenced, will provide specific information about the transformation algorithm such that it may be implemented. We recommend that the URL reference some immutable content, such as a specific github commit, an IPFS file, etc.

    "},{"location":"features/0418-rich-schema-encoding/#implementation","title":"Implementation","text":"

    The value of the implementation field is intended to be a URL which, when dereferenced, will provide a reference implementation of the transformation algorithm.

    "},{"location":"features/0418-rich-schema-encoding/#test-vectors","title":"Test Vectors","text":"

    Test vectors are very important. Although not comprehensive, a set of public test vectors allows for multiple implementations to verify adherence to the transformation algorithm for the set. Test vectors should consist of a set of comma-separated input/output pairs. The input values should be read from the file as strings. The output values should be byte strings encoded as hex values.

    The value of the test_vectors field is intended to be a URL which, when dereferenced, will provide the file of test vectors. We recommend that the URL reference some immutable content, such as a specific github commit, an IPFS file, etc.
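    A sketch of a checker for such a test-vector file, assuming one comma-separated input/expected-output pair per line and a 256-bit (32-byte) big-endian encoding:

    ```python
    import csv

    def verify_test_vectors(path: str, transform) -> bool:
        """Check each input/output pair: inputs are read from the file as
        strings, expected outputs are hex-encoded byte strings."""
        with open(path, newline="") as f:
            for row in csv.reader(f):
                raw_input, expected_hex = row[0], row[1]
                encoded = transform(raw_input)
                # Compare the 32-byte big-endian form against the expectation.
                if encoded.to_bytes(32, "big").hex() != expected_hex.strip().lower():
                    return False
        return True
    ```

    Running the same checker against multiple implementations is how the shared vectors establish cross-implementation agreement on a transformation algorithm.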

    "},{"location":"features/0418-rich-schema-encoding/#data-registry-storage","title":"Data Registry Storage","text":"

    Aries will provide a means for writing contexts to and reading contexts from a verifiable data registry (such as a distributed ledger).

    An Encoding object will be written to the ledger in a generic way defined in Rich Schema Objects Common.

    "},{"location":"features/0418-rich-schema-encoding/#aries-data-registry-interface","title":"Aries Data Registry Interface","text":"

    Aries Data Registry Interface methods for adding and retrieving an Encoding object from the ledger comply with the generic approach described in Rich Schema Objects Common.

    This means the following methods can be used: - write_rich_schema_object - read_rich_schema_object_by_id - read_rich_schema_object_by_metadata

    "},{"location":"features/0418-rich-schema-encoding/#reference","title":"Reference","text":"

    The following is a reference implementation of various transformation algorithms.

    Here is the paper that defines Camenisch-Lysyanskaya signatures.

    "},{"location":"features/0418-rich-schema-encoding/#drawbacks","title":"Drawbacks","text":"

    This increases the complexity of issuing verifiable credentials and verifying the accompanying verifiable presentations.

    "},{"location":"features/0418-rich-schema-encoding/#rationale-and-alternatives","title":"Rationale and alternatives","text":"

    Encoding attribute values as integers is already part of using anonymous credentials; however, the current method is implicit and relies on use of a common implementation library for uniformity. If we do not include encodings as part of the Rich Schema effort, we will be left with an incomplete set of possible predicates, a lack of explicit mechanisms for issuers to specify which encoding methods they used, and a corresponding lack of verifiability of signed attribute values.

    In another design that was considered, the encoding on the ledger was actually a function an end user could call, with the ledger nodes performing the transformation algorithm and returning the encoded value. The benefit of such a design would have been the guarantee of uniformity across encoded values. This design was rejected because of the unfeasibility of using the ledger nodes for such calculations and the privacy implications of submitting attribute values to a public ledger.

    "},{"location":"features/0418-rich-schema-encoding/#prior-art","title":"Prior art","text":"

    A description of a prior effort to add encodings to Indy may be found in this jira ticket and pull request.

    What the prior effort lacked was a corresponding enhancement of schema infrastructure which would have provided the necessary typing of attribute values.

    "},{"location":"features/0418-rich-schema-encoding/#unresolved-questions","title":"Unresolved questions","text":""},{"location":"features/0418-rich-schema-encoding/#implementations","title":"Implementations","text":"

    The following lists the implementations (if any) of this RFC. Please do a pull request to add your implementation. If the implementation is open source, include a link to the repo or to the implementation within the repo. Please be consistent in the \"Name\" field so that a mechanical processing of the RFCs can generate a list of all RFCs supported by an Aries implementation.

    Name / Link Implementation Notes"},{"location":"features/0428-prepare-issue-rich-credential/","title":"0428: Prerequisites to Issue Rich Credential","text":""},{"location":"features/0428-prepare-issue-rich-credential/#summary","title":"Summary","text":"

    Describes the prerequisites an issuer must ensure are in place before issuing a rich credential.

    "},{"location":"features/0428-prepare-issue-rich-credential/#motivation","title":"Motivation","text":"

    To inform issuers of the steps they should take in order to make sure they have the necessary rich schema objects in place before they use them to issue credentials.

    "},{"location":"features/0428-prepare-issue-rich-credential/#tutorial","title":"Tutorial","text":""},{"location":"features/0428-prepare-issue-rich-credential/#rich-schema-credential-workflow","title":"Rich Schema Credential Workflow","text":"
    1. The issuer checks the ledger to see if the credential definition he wants to use is already present.
    2. If not, the issuer checks the ledger to see if the mapping he wants to use is already present.
      1. If not, the issuer checks the ledger to see if the schemas he wants to use are already present.
        1. If not, anchor the context used by each schema to the ledger.
        2. Anchor the schemas on the ledger. Schema objects may refer to one or more context objects.
      2. Anchor to the ledger the mapping object that associates each claim with one or more encoding objects and a corresponding attribute. (The issuer selects schema properties and associated encodings to be included as claims in the credential. Encoding objects refer to transformation algorithms, documentation, and code which implements the transformation. The claim is the data; the attribute is the transformed data represented as a 256-bit integer that is signed. The mapping object refers to the schema objects and encoding objects.)
    3. Anchor a credential definition that refers to a single mapping object. The credential definition contains public keys for each attribute. The credential definition refers to the issuer DID.
    4. Using the credential definition, mapping, and schema(s) issue to the holder a credential based on the credential definition and the supplied claim data. The Issue Credential Protocol 1.0 will be the model for another RFC containing minor modifications to issue a credential using the new rich schema objects.

    Subsequent credentials may be issued by repeating only the last step.
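    The check-then-anchor steps above can be sketched as follows; the ledger class is a toy in-memory stand-in, not an Aries interface:

    ```python
    class InMemoryLedger:
        """Toy stand-in for a verifiable data registry (hypothetical API)."""
        def __init__(self):
            self.objects = {}
        def get(self, obj_id):
            return self.objects.get(obj_id)
        def anchor(self, obj_id, obj):
            self.objects[obj_id] = obj

    def prepare_to_issue(ledger, cred_def_id, mapping_id, schema_ids):
        """Ensure schemas, the mapping, and the credential definition are anchored."""
        if ledger.get(cred_def_id):
            return                                          # step 1: already present
        if not ledger.get(mapping_id):                      # step 2: check the mapping
            for schema_id in schema_ids:
                if not ledger.get(schema_id):               # step 2.1: check schemas
                    ledger.anchor(schema_id, {"schema": schema_id})
            ledger.anchor(mapping_id, {"schemas": schema_ids})   # step 2.2
        ledger.anchor(cred_def_id, {"mapping": mapping_id})      # step 3
    ```

    Once the credential definition is anchored, only the final issuance step needs repeating for subsequent credentials.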

    "},{"location":"features/0428-prepare-issue-rich-credential/#reference","title":"Reference","text":""},{"location":"features/0428-prepare-issue-rich-credential/#unresolved-questions","title":"Unresolved questions","text":"

    RFCs for Rich Schema Mappings and Rich Schema Credential Definitions are incomplete.

    "},{"location":"features/0428-prepare-issue-rich-credential/#implementations","title":"Implementations","text":"

    The following lists the implementations (if any) of this RFC. Please do a pull request to add your implementation. If the implementation is open source, include a link to the repo or to the implementation within the repo. Please be consistent in the \"Name\" field so that a mechanical processing of the RFCs can generate a list of all RFCs supported by an Aries implementation.

    Name / Link Implementation Notes"},{"location":"features/0429-prepare-req-rich-pres/","title":"0429: Prerequisites to Request Rich Presentation","text":""},{"location":"features/0429-prepare-req-rich-pres/#summary","title":"Summary","text":"

    Describes the prerequisites a verifier must ensure are in place before requesting a rich presentation.

    "},{"location":"features/0429-prepare-req-rich-pres/#motivation","title":"Motivation","text":"

    To inform verifiers of the steps they should take in order to make sure they have the necessary rich schema objects in place before they use them to request proofs.

    "},{"location":"features/0429-prepare-req-rich-pres/#tutorial","title":"Tutorial","text":""},{"location":"features/0429-prepare-req-rich-pres/#rich-schema-presentation-definition-workflow","title":"Rich Schema Presentation Definition Workflow","text":"
    1. The verifier checks his wallet or the ledger to see if the presentation definition already exists. (The verifier determines which attribute or predicates he needs a holder to present to satisfy the verifier's business rules. Presentation definitions specify desired attributes and predicates).
    2. If not, the verifier creates a new presentation definition and stores the presentation definition in his wallet locally and, optionally, anchors it to the verifiable data registry. (Anchoring the presentation definition to the verifiable data registry allows other verifiers to easily use it. It can be done by writing the full presentation definition's content to the ledger, or just writing a digital fingerprint/hash of the content.)
    3. Using the presentation definition, request a presentation from the holder. The Present Proof Protocol 1.0 will be the model for another RFC containing minor modifications for presenting a proof based on verifiable credentials using the new rich schema objects.
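    Steps 1 and 2 of this workflow can be sketched as follows, using plain dicts as stand-ins for the wallet and the verifiable data registry:

    ```python
    def ensure_presentation_definition(wallet: dict, ledger: dict, pd_id: str,
                                       build_pd, anchor: bool = False):
        """Find or create a presentation definition before requesting a presentation."""
        pd = wallet.get(pd_id) or ledger.get(pd_id)   # step 1: check wallet, then ledger
        if pd is None:
            pd = build_pd()                           # step 2: create it
            wallet[pd_id] = pd                        # store locally
            if anchor:
                ledger[pd_id] = pd                    # optionally anchor (or a hash of it)
        return pd                                     # step 3: use it to request a proof
    ```

    Anchoring the full content versus only a digital fingerprint of it is the trade-off the text above describes; the sketch anchors the full content for simplicity.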
    "},{"location":"features/0429-prepare-req-rich-pres/#reference","title":"Reference","text":""},{"location":"features/0429-prepare-req-rich-pres/#unresolved-questions","title":"Unresolved questions","text":"

    The RFC for Rich Schema Presentation Definitions is incomplete.

    "},{"location":"features/0429-prepare-req-rich-pres/#implementations","title":"Implementations","text":"

    The following lists the implementations (if any) of this RFC. Please do a pull request to add your implementation. If the implementation is open source, include a link to the repo or to the implementation within the repo. Please be consistent in the \"Name\" field so that a mechanical processing of the RFCs can generate a list of all RFCs supported by an Aries implementation.

    Name / Link Implementation Notes"},{"location":"features/0434-outofband/","title":"Aries RFC 0434: Out-of-Band Protocol 1.1","text":""},{"location":"features/0434-outofband/#summary","title":"Summary","text":"

    The Out-of-band protocol is used when you wish to engage with another agent and you don't have a DIDComm connection to use for the interaction.

    "},{"location":"features/0434-outofband/#motivation","title":"Motivation","text":"

    The use of the invitation in the Connection and DID Exchange protocols has been relatively successful, but has some shortcomings, as follows.

    "},{"location":"features/0434-outofband/#connection-reuse","title":"Connection Reuse","text":"

    A common pattern we have seen in the early days of Aries agents is a user with a browser getting to a point where a connection is needed between the website's (enterprise) agent and the user's mobile agent. A QR invitation is displayed, scanned and a protocol is executed to establish a connection. Life is good!

    However, with the current invitation processes, when the same user returns to the same page, the same process is executed (QR code, scan, etc.) and a new connection is created between the two agents. There is no way for the user's agent to say \"Hey, I've already got a connection with you. Let's use that one!\"

    We need the ability to reuse a connection.

    "},{"location":"features/0434-outofband/#connection-establishment-versioning","title":"Connection Establishment Versioning","text":"

    In the existing Connections and DID Exchange invitation handling, the inviter dictates what connection establishment protocol all invitees will use. A more sustainable approach is for the inviter to offer the invitee a list of supported protocols and allow the invitee to use one that it supports.

    "},{"location":"features/0434-outofband/#handling-of-all-out-of-band-messages","title":"Handling of all Out-of-Band Messages","text":"

    We currently have two sets of out-of-band messages that cannot be delivered via DIDComm because there is no channel. We'd like to align those messages into a single \"out-of-band\" protocol so that their handling can be harmonized inside an agent, and a common QR code handling mechanism can be used.

    "},{"location":"features/0434-outofband/#urls-and-qr-code-handling","title":"URLs and QR Code Handling","text":"

    We'd like to have the specification of QR handling harmonized into a single RFC (this one).

    "},{"location":"features/0434-outofband/#tutorial","title":"Tutorial","text":""},{"location":"features/0434-outofband/#key-concepts","title":"Key Concepts","text":"

    The Out-of-band protocol is used when an agent doesn't know if it has a connection with another agent. This could be because you are trying to establish a new connection with that agent, you have connections but don't know who the other party is, or if you want to have a connection-less interaction. Since there is no DIDComm connection to use for the messages of this protocol, the messages are plaintext and sent out-of-band, such as via a QR code, in an email message or any other available channel. Since the delivery of out-of-band messages will often be via QR codes, this RFC also covers the use of QR codes.

    Two well known use cases for using an out-of-band protocol are:

    In both cases, there is only a single out-of-band protocol message sent. The message responding to the out-of-band message is a DIDComm message from an appropriate protocol.

    Note that the website-to-agent model is not the only such interaction enabled by the out-of-band protocol, and a QR code is not the only delivery mechanism for out-of-band messages. However, they are useful as examples of the purpose of the protocol.

    "},{"location":"features/0434-outofband/#roles","title":"Roles","text":"

    The out-of-band protocol has two roles: sender and receiver.

    "},{"location":"features/0434-outofband/#sender","title":"sender","text":"

    The agent that generates the out-of-band message and makes it available to the other party.

    "},{"location":"features/0434-outofband/#receiver","title":"receiver","text":"

    The agent that receives the out-of-band message and decides how to respond. There is no out-of-band protocol message with which the receiver will respond. Rather, if they respond, they will use a message from another protocol that the sender understands.

    "},{"location":"features/0434-outofband/#states","title":"States","text":"

    The state machines for the sender and receiver are a bit odd for the out-of-band protocol because it consists of a single message that kicks off a co-protocol and ends when evidence of the co-protocol's launch is received, in the form of some response. In the following state machine diagrams we generically describe the response message from the receiver as being a DIDComm message.

    The sender state machine is as follows:

    Note the \"optional\" reference under the second event in the await-response state. That is to indicate that an out-of-band message might be a single use message with a transition to done, or reusable message (received by many receivers) with a transition back to await-response.

    The receiver state machine is as follows:

    Worth noting is the first event of the done state, where the receiver may receive the message multiple times. This represents, for example, an agent returning to the same website and being greeted with instances of the same QR code each time.

    "},{"location":"features/0434-outofband/#messages","title":"Messages","text":"

    The out-of-band protocol has a single message that is sent by the sender.

    "},{"location":"features/0434-outofband/#invitation-httpsdidcommorgout-of-bandverinvitation","title":"Invitation: https://didcomm.org/out-of-band/%VER/invitation","text":"
    {\n  \"@type\": \"https://didcomm.org/out-of-band/%VER/invitation\",\n  \"@id\": \"<id used for context as pthid>\",\n  \"label\": \"Faber College\",\n  \"goal_code\": \"issue-vc\",\n  \"goal\": \"To issue a Faber College Graduate credential\",\n  \"accept\": [\n    \"didcomm/aip2;env=rfc587\",\n    \"didcomm/aip2;env=rfc19\"\n  ],\n  \"handshake_protocols\": [\n    \"https://didcomm.org/didexchange/1.0\",\n    \"https://didcomm.org/connections/1.0\"\n  ],\n  \"requests~attach\": [\n    {\n      \"@id\": \"request-0\",\n      \"mime-type\": \"application/json\",\n      \"data\": {\n        \"json\": \"<json of protocol message>\"\n      }\n    }\n  ],\n  \"services\": [\"did:sov:LjgpST2rjsoxYegQDRm7EL\"]\n}\n

    The items in the message are:

    If only the handshake_protocols item is included, the initial interaction will complete with the establishment (or reuse) of the connection. Either side may then use that connection for any purpose. A common use case (but not required) would be for the sender to initiate another protocol after the connection is established to accomplish some shared goal.

    If only the requests~attach item is included, no new connection is expected to be created, although one could be used if the receiver knows such a connection already exists. The receiver responds to one of the messages in the requests~attach array. The requests~attach item might include the first message of a protocol from the sender, or might be a please-play-the-role message requesting the receiver initiate a protocol. If the protocol requires a further response from the sender to the receiver, the receiver must include a ~service decorator for the sender to use in responding.

    If both the handshake_protocols and requests~attach items are included in the message, the receiver should first establish a connection and then respond (using that connection) to one of the messages in the requests~attach message. If a connection already exists between the parties, the receiver may respond immediately to the request-attach message using the established connection.
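    The handling rules for these item combinations can be sketched as receiver-side decision logic; the returned strings are descriptive only, and the function name is an assumption:

    ```python
    def receiver_action(invitation: dict, have_connection: bool) -> str:
        """Decide how to respond to an out-of-band invitation based on which
        items it carries and whether a connection already exists."""
        handshake = bool(invitation.get("handshake_protocols"))
        requests = bool(invitation.get("requests~attach"))
        if not handshake and not requests:
            raise ValueError("invitation carries neither handshake protocols nor requests")
        if have_connection:
            # An existing connection may be reused, either for the attached
            # request or via a handshake-reuse message.
            return ("respond to attached request over existing connection"
                    if requests else "send handshake-reuse")
        if handshake and requests:
            return "establish a connection, then respond to the attached request"
        if handshake:
            return "establish a connection"
        return "respond to the attached request, including a ~service decorator"
    ```

    This mirrors the summary table in the Responses section of this RFC.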

    "},{"location":"features/0434-outofband/#reuse-messages","title":"Reuse Messages","text":"

    While the receiver is expected to respond with an initiating message from a handshake_protocols or requests~attach item using an offered service, the receiver may be able to respond by reusing an existing connection. Specifically, if a connection they have was created from an out-of-band invitation with the same services DID as a new invitation message, the connection MAY be reused. The receiver may choose not to reuse the existing connection for privacy purposes and repeat a handshake protocol to receive a redundant connection.

    If a message has a service block instead of a DID in the services list, you may enable reuse by encoding the key and endpoint of the service block in a Peer DID numalgo 2 and using that DID instead of a service block.

    If the receiver desires to reuse the existing connection and a requests~attach item is included in the message, the receiver SHOULD respond to one of the attached messages using the existing connection.

    If the receiver desires to reuse the existing connection and no requests~attach item is included in the message, the receiver SHOULD attempt to do so with the handshake-reuse and handshake-reuse-accepted messages. This will notify the inviter that the existing connection should be used, along with the context that can be used for follow-on interactions.

    While the invitation message is passed unencrypted and out-of-band, both the handshake-reuse and handshake-reuse-accepted messages MUST be encrypted and transmitted as normal DIDComm messages.

    "},{"location":"features/0434-outofband/#reuse-httpsdidcommorgout-of-bandverhandshake-reuse","title":"Reuse: https://didcomm.org/out-of-band/%VER/handshake-reuse","text":"
    {\n  \"@type\": \"https://didcomm.org/out-of-band/%VER/handshake-reuse\",\n  \"@id\": \"<id>\",\n  \"~thread\": {\n    \"thid\": \"<same as @id>\",\n    \"pthid\": \"<The @id of the Out-of-Band invitation>\"\n  }\n}\n

    The items in the message are:

    Sending or receiving this message does not change the state of the existing connection.

    When the inviter receives the handshake-reuse message, they MUST respond with a handshake-reuse-accepted message to notify the invitee that the request to reuse the existing connection is successful.

    "},{"location":"features/0434-outofband/#reuse-accepted-httpsdidcommorgout-of-bandverhandshake-reuse-accepted","title":"Reuse Accepted: https://didcomm.org/out-of-band/%VER/handshake-reuse-accepted","text":"
    {\n  \"@type\": \"https://didcomm.org/out-of-band/%VER/handshake-reuse-accepted\",\n  \"@id\": \"<id>\",\n  \"~thread\": {\n    \"thid\": \"<The Message @id of the reuse message>\",\n    \"pthid\": \"<The @id of the Out-of-Band invitation>\"\n  }\n}\n

    The items in the message are:

    If this message is not received by the invitee, they should use the regular process. This message is a mechanism by which the invitee can detect a situation where the inviter no longer has a record of the connection and is unable to decrypt and process the handshake-reuse message.

    After sending this message, the inviter may continue any desired protocol interactions based on the context matched by the pthid present in the handshake-reuse message.

    "},{"location":"features/0434-outofband/#responses","title":"Responses","text":"

    The following table summarizes the different forms of the out-of-band invitation message depending on the presence (or not) of the handshake_protocols item, the requests~attach item and whether or not a connection between the agents already exists.

    handshake_protocols Present? requests~attach Present? Existing connection? Receiver action(s) No No No Impossible Yes No No Uses the first supported protocol from handshake_protocols to make a new connection using the first supported services entry. No Yes No Send a response to the first supported request message using the first supported services entry. Include a ~service decorator if the sender is expected to respond. No No Yes Impossible Yes Yes No Use the first supported protocol from handshake_protocols to make a new connection using the first supported services entry, and then send a response message to the first supported attachment message using the new connection. Yes No Yes Send a handshake-reuse message. No Yes Yes Send a response message to the first supported request message using the existing connection. Yes Yes Yes Send a response message to the first supported request message using the existing connection.

    Both the goal_code and goal fields SHOULD be used with the localization service decorator. The two fields are to enable both human and machine handling of the out-of-band message. goal_code is to specify a generic, protocol level outcome for sending the out-of-band message (e.g. issue verifiable credential, request proof, etc.) that is suitable for machine handling and possibly human display, while goal provides context specific guidance, targeting mainly a person controlling the receiver's agent. The list of goal_code values is provided in the Message Catalog section of this RFC.

    "},{"location":"features/0434-outofband/#the-services-item","title":"The services Item","text":"

    As mentioned in the description above, the services item array is intended to be analogous to the service block of a DIDDoc. When not reusing an existing connection, the receiver scans the array and selects (according to the rules described below) a service entry to use for the response to the out-of-band message.

    There are two forms of entries in the services item array:

    The following is an example of a two entry array, one of each form:

    {\n  \"@type\": \"https://didcomm.org/out-of-band/%VER/invitation\",\n  \"@id\": \"<id used for context as pthid>\",\n  \"label\": \"Faber College\",\n  \"handshake_protocols\": [\"https://didcomm.org/didexchange/1.0\"],\n  \"services\": [\n    {\n      \"id\": \"#inline\",\n      \"type\": \"did-communication\",\n      \"recipientKeys\": [\"did:key:z6MkpTHR8VNsBxYAAWHut2Geadd9jSwuBV8xRoAnwWsdvktH\"],\n      \"routingKeys\": [],\n      \"serviceEndpoint\": \"https://example.com:5000\"\n    },\n    \"did:sov:LjgpST2rjsoxYegQDRm7EL\"\n  ]\n}\n

    The processing rules for the services block are:

    The attributes in the inline form parallel the attributes of a DID Document for increased meaning. The recipientKeys and routingKeys within the inline block decorator MUST be did:key references.

    As defined in the DIDComm Cross Domain Messaging RFC, if routingKeys is present and non-empty, additional forwarding wrapping is necessary in the response message.

    When considering routing and options for out-of-band messages, keep in mind that the more detail in the message, the longer the URL will be and (if used) the more dense (and harder to scan) the QR code will be.

    "},{"location":"features/0434-outofband/#service-endpoint","title":"Service Endpoint","text":"

    The service endpoint used to transmit the response is either present in the out-of-band message or available in the DID Document of a presented DID. If the endpoint is itself a DID, the serviceEndpoint in the DIDDoc of the resolved DID MUST be a URI, and the recipientKeys MUST contain a single key. That key is appended to the end of the list of routingKeys for processing. For more information about message forwarding and routing, see RFC 0094 Cross Domain Messaging.
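    A sketch of this resolution rule; resolve_did is a hypothetical resolver returning a service-like dict, and the function name is an assumption:

    ```python
    def resolve_service_endpoint(service: dict, resolve_did) -> tuple:
        """If serviceEndpoint is itself a DID, resolve it: the resolved DIDDoc's
        serviceEndpoint must be a URI, its recipientKeys must hold exactly one
        key, and that key is appended to the end of the routing keys."""
        endpoint = service["serviceEndpoint"]
        routing_keys = list(service.get("routingKeys", []))
        if endpoint.startswith("did:"):
            resolved = resolve_did(endpoint)
            (mediator_key,) = resolved["recipientKeys"]   # exactly one key expected
            routing_keys.append(mediator_key)
            endpoint = resolved["serviceEndpoint"]
        return endpoint, routing_keys
    ```

    The appended key becomes the outermost forwarding target, consistent with the routing described in RFC 0094 Cross Domain Messaging.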

    "},{"location":"features/0434-outofband/#adoption-messages","title":"Adoption Messages","text":"

    The problem_report message MAY be adopted by the out-of-band protocol if the agent wants to respond with problem reports to invalid messages, such as attempting to reuse a single-use invitation.

    "},{"location":"features/0434-outofband/#constraints","title":"Constraints","text":"

    An existing connection can only be reused based on a DID in the services list in an out-of-band message.

    "},{"location":"features/0434-outofband/#reference","title":"Reference","text":""},{"location":"features/0434-outofband/#messages-reference","title":"Messages Reference","text":"

    The full description of the message in this protocol can be found in the Tutorial section of this RFC.

    "},{"location":"features/0434-outofband/#localization","title":"Localization","text":"

    The goal_code and goal fields SHOULD have localization applied. See the purpose of those fields in the message type definitions section and the message catalog section (immediately below).

    "},{"location":"features/0434-outofband/#message-catalog","title":"Message Catalog","text":""},{"location":"features/0434-outofband/#goal_code","title":"goal_code","text":"

    The following values are defined for the goal_code field:

    Code (cd) English (en) issue-vc To issue a credential request-proof To request a proof create-account To create an account with a service p2p-messaging To establish a peer-to-peer messaging relationship"},{"location":"features/0434-outofband/#goal","title":"goal","text":"

    The goal localization values are use case specific, and their localization is left to the agent implementor, using the techniques defined in the ~l10n RFC.

    "},{"location":"features/0434-outofband/#roles-reference","title":"Roles Reference","text":"

    The roles are defined in the Tutorial section of this RFC.

    "},{"location":"features/0434-outofband/#states-reference","title":"States Reference","text":""},{"location":"features/0434-outofband/#initial","title":"initial","text":"

    No out-of-band messages have been sent.

    "},{"location":"features/0434-outofband/#await-response","title":"await-response","text":"

    The sender has shared an out-of-band message with the intended receiver(s), and the sender has not yet received all of the responses. For a single-use out-of-band message, there will be only one response; for a multi-use out-of-band message, there is no defined limit on the number of responses.

    "},{"location":"features/0434-outofband/#prepare-response","title":"prepare-response","text":"

    The receiver has received the out-of-band message and is preparing a response. The response will not be an out-of-band protocol message, but a message from another protocol chosen based on the contents of the out-of-band message.

    "},{"location":"features/0434-outofband/#done","title":"done","text":"

    The out-of-band protocol has been completed. Note that if the out-of-band message was intended to be available to many receivers (a multiple use message), the sender returns to the await-response state rather than going to the done state.

    "},{"location":"features/0434-outofband/#errors","title":"Errors","text":"

    There is an optional courtesy error message stemming from an out-of-band message that the sender could provide if they have sufficient recipient information. If the out-of-band message is a single use message and the sender receives multiple responses and each receiver's response includes a way for the sender to respond with a DIDComm message, all but the first MAY be answered with a problem_report.

    "},{"location":"features/0434-outofband/#error-message-example","title":"Error Message Example","text":"
    {\n  \"@type\": \"https://didcomm.org/out-of-band/%VER/problem_report\",\n  \"@id\": \"5678876542345\",\n  \"~thread\": { \"pthid\": \"<@id of the OutofBand message>\" },\n  \"description\": {\n    \"en\": \"The invitation has expired.\",\n    \"code\": \"expired-invitation\"\n  },\n  \"impact\": \"thread\"\n}\n

    See the problem-report protocol for details on the items in the example.

    "},{"location":"features/0434-outofband/#flow-overview","title":"Flow Overview","text":"

    In an out-of-band message the sender gives information to the receiver about the kind of DIDComm protocol response messages it can handle and how to deliver the response. The receiver uses that information to determine what DIDComm protocol/message to use in responding to the sender, and (from the service item or an existing connection) how to deliver the response to the sender.

    The handling of the response is specified by the protocol used.

    To Do: Make sure that the following remains in the DID Exchange/Connections RFCs

    Any Published DID that expresses support for DIDComm by defining a service that follows the DIDComm conventions serves as an implicit invitation. If an invitee wishes to connect to any Published DID, they need not wait for an out-of-band invitation message. Rather, they can designate their own label and initiate the appropriate protocol (e.g. 0160-Connections or 0023-DID-Exchange) for establishing a connection.

    "},{"location":"features/0434-outofband/#standard-out-of-band-message-encoding","title":"Standard Out-of-Band Message Encoding","text":"

    Using a standard out-of-band message encoding allows for easier interoperability between multiple projects and software platforms. Using a URL for that standard encoding provides a built-in fallback flow for users who are unable to automatically process the message. Those new users will load the URL in a browser as a default behavior, and may be presented with instructions on how to install software capable of processing the message. Already onboarded users will be able to process the message without loading it in a browser, via mobile app URL capture or via capability detection after being loaded in a browser.

    The standard out-of-band message format is a URL with a Base64Url encoded json object as a query parameter.

    Please note the difference between Base64Url and Base64 encoding.
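    To make the distinction concrete: the two alphabets differ only in characters 62 and 63 ('+' and '/' for Base64, '-' and '_' for Base64Url). The following Python illustration (input bytes chosen for demonstration, not from the RFC) shows the divergence:

```python
import base64

# Bytes chosen so the encoding exercises character 62 of each alphabet.
raw = bytes([0xFB, 0xEF])

standard = base64.b64encode(raw).decode("ascii")          # Base64: uses '+' and '/'
url_safe = base64.urlsafe_b64encode(raw).decode("ascii")  # Base64Url: uses '-' and '_'
# standard == "++8=", url_safe == "--8="
```

Using the standard alphabet in a URL would require percent-encoding of '+' and '/', which is why Base64Url is specified here.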

    The URL format is as follows, with some elements described below:

    https://<domain>/<path>?oob=<outofbandMessage>\n

    <domain> and <path> should be kept as short as possible, and the full URL SHOULD return human readable instructions when loaded in a browser. This is intended to aid new users. The oob query parameter is required and is reserved to contain the out-of-band message string. Additional path elements or query parameters are allowed, and can be leveraged to provide coupons or other promise of payment for new users.

    To do: We need to rationalize this https:// approach with the use of a special protocol scheme (e.g. didcomm://) that will enable handling of the URL on mobile devices to automatically invoke an installed app on both Android and iOS. A user must be able to process the out-of-band message on the device of the agent (e.g. when the mobile device can't scan the QR code because it is displayed on a web page on that same device).

    The <outofbandMessage> is an agent plaintext message (not a DIDComm message) that has been Base64Url encoded such that the resulting string can be safely used in a URL.

    outofband_message = base64UrlEncode(<outofbandMessage>)\n

    During Base64Url encoding, whitespace from the JSON string SHOULD be eliminated to keep the resulting out-of-band message string as short as possible.
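    The encoding steps described above (compact whitespace-free JSON serialization, Base64Url encoding, URL assembly) can be sketched in Python as follows; the function name and base URL are illustrative, not part of the RFC:

```python
import base64
import json

def encode_oob_url(message: dict, base_url: str) -> str:
    """Build an out-of-band URL: serialize the message as compact
    (whitespace-free) JSON, Base64Url encode it, and strip '=' padding."""
    compact = json.dumps(message, separators=(",", ":"))
    encoded = base64.urlsafe_b64encode(compact.encode("utf-8")).decode("ascii")
    return f"{base_url}?oob={encoded.rstrip('=')}"

invitation = {
    "@type": "https://didcomm.org/out-of-band/1.0/invitation",
    "@id": "69212a3a-d068-4f9d-a2dd-4741bca89af3",
    "label": "Faber College",
}
url = encode_oob_url(invitation, "https://example.com/ssi")
```

Stripping the trailing '=' padding is optional but keeps the URL shorter; receivers must tolerate both forms, as noted below.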

    "},{"location":"features/0434-outofband/#example-out-of-band-message-encoding","title":"Example Out-of-Band Message Encoding","text":"

    Invitation:

    {\n  \"@type\": \"https://didcomm.org/out-of-band/1.0/invitation\",\n  \"@id\": \"69212a3a-d068-4f9d-a2dd-4741bca89af3\",\n  \"label\": \"Faber College\",\n  \"goal_code\": \"issue-vc\",\n  \"goal\": \"To issue a Faber College Graduate credential\",\n  \"handshake_protocols\": [\"https://didcomm.org/didexchange/1.0\", \"https://didcomm.org/connections/1.0\"],\n  \"services\": [\"did:sov:LjgpST2rjsoxYegQDRm7EL\"]\n}\n

    Whitespace removed:

    {\"@type\":\"https://didcomm.org/out-of-band/1.0/invitation\",\"@id\":\"69212a3a-d068-4f9d-a2dd-4741bca89af3\",\"label\":\"Faber College\",\"goal_code\":\"issue-vc\",\"goal\":\"To issue a Faber College Graduate credential\",\"handshake_protocols\":[\"https://didcomm.org/didexchange/1.0\",\"https://didcomm.org/connections/1.0\"],\"services\":[\"did:sov:LjgpST2rjsoxYegQDRm7EL\"]}\n

    Base64Url encoded:

    eyJAdHlwZSI6Imh0dHBzOi8vZGlkY29tbS5vcmcvb3V0LW9mLWJhbmQvMS4wL2ludml0YXRpb24iLCJAaWQiOiI2OTIxMmEzYS1kMDY4LTRmOWQtYTJkZC00NzQxYmNhODlhZjMiLCJsYWJlbCI6IkZhYmVyIENvbGxlZ2UiLCJnb2FsX2NvZGUiOiJpc3N1ZS12YyIsImdvYWwiOiJUbyBpc3N1ZSBhIEZhYmVyIENvbGxlZ2UgR3JhZHVhdGUgY3JlZGVudGlhbCIsImhhbmRzaGFrZV9wcm90b2NvbHMiOlsiaHR0cHM6Ly9kaWRjb21tLm9yZy9kaWRleGNoYW5nZS8xLjAiLCJodHRwczovL2RpZGNvbW0ub3JnL2Nvbm5lY3Rpb25zLzEuMCJdLCJzZXJ2aWNlcyI6WyJkaWQ6c292OkxqZ3BTVDJyanNveFllZ1FEUm03RUwiXX0=\n

    Example URL with Base64Url encoded message:

    http://example.com/ssi?oob=eyJAdHlwZSI6Imh0dHBzOi8vZGlkY29tbS5vcmcvb3V0LW9mLWJhbmQvMS4wL2ludml0YXRpb24iLCJAaWQiOiI2OTIxMmEzYS1kMDY4LTRmOWQtYTJkZC00NzQxYmNhODlhZjMiLCJsYWJlbCI6IkZhYmVyIENvbGxlZ2UiLCJnb2FsX2NvZGUiOiJpc3N1ZS12YyIsImdvYWwiOiJUbyBpc3N1ZSBhIEZhYmVyIENvbGxlZ2UgR3JhZHVhdGUgY3JlZGVudGlhbCIsImhhbmRzaGFrZV9wcm90b2NvbHMiOlsiaHR0cHM6Ly9kaWRjb21tLm9yZy9kaWRleGNoYW5nZS8xLjAiLCJodHRwczovL2RpZGNvbW0ub3JnL2Nvbm5lY3Rpb25zLzEuMCJdLCJzZXJ2aWNlcyI6WyJkaWQ6c292OkxqZ3BTVDJyanNveFllZ1FEUm03RUwiXX0=\n

    Out-of-band message URLs can be transferred via any method that can send text, including an email, SMS, posting on a website, or QR Code.

    Example URL encoded as a QR Code:

    Example Email Message:

    To: alice@alum.faber.edu\nFrom: studentrecords@faber.edu\nSubject: Your request to connect and receive your graduate verifiable credential\n\nDear Alice,\n\nTo receive your Faber College graduation certificate, click here to [connect](http://example.com/ssi?oob=eyJAdHlwZSI6Imh0dHBzOi8vZGlkY29tbS5vcmcvb3V0LW9mLWJhbmQvMS4wL2ludml0YXRpb24iLCJAaWQiOiI2OTIxMmEzYS1kMDY4LTRmOWQtYTJkZC00NzQxYmNhODlhZjMiLCJsYWJlbCI6IkZhYmVyIENvbGxlZ2UiLCJnb2FsX2NvZGUiOiJpc3N1ZS12YyIsImdvYWwiOiJUbyBpc3N1ZSBhIEZhYmVyIENvbGxlZ2UgR3JhZHVhdGUgY3JlZGVudGlhbCIsImhhbmRzaGFrZV9wcm90b2NvbHMiOlsiaHR0cHM6Ly9kaWRjb21tLm9yZy9kaWRleGNoYW5nZS8xLjAiLCJodHRwczovL2RpZGNvbW0ub3JnL2Nvbm5lY3Rpb25zLzEuMCJdLCJzZXJ2aWNlcyI6WyJkaWQ6c292OkxqZ3BTVDJyanNveFllZ1FEUm03RUwiXX0=) with us, or paste the following into your browser:\n\nhttp://example.com/ssi?oob=eyJAdHlwZSI6Imh0dHBzOi8vZGlkY29tbS5vcmcvb3V0LW9mLWJhbmQvMS4wL2ludml0YXRpb24iLCJAaWQiOiI2OTIxMmEzYS1kMDY4LTRmOWQtYTJkZC00NzQxYmNhODlhZjMiLCJsYWJlbCI6IkZhYmVyIENvbGxlZ2UiLCJnb2FsX2NvZGUiOiJpc3N1ZS12YyIsImdvYWwiOiJUbyBpc3N1ZSBhIEZhYmVyIENvbGxlZ2UgR3JhZHVhdGUgY3JlZGVudGlhbCIsImhhbmRzaGFrZV9wcm90b2NvbHMiOlsiaHR0cHM6Ly9kaWRjb21tLm9yZy9kaWRleGNoYW5nZS8xLjAiLCJodHRwczovL2RpZGNvbW0ub3JnL2Nvbm5lY3Rpb25zLzEuMCJdLCJzZXJ2aWNlcyI6WyJkaWQ6c292OkxqZ3BTVDJyanNveFllZ1FEUm03RUwiXX0=\n\nIf you don't have an identity agent for holding credentials, you will be given instructions on how you can get one.\n\nThanks,\n\nFaber College\nKnowledge is Good\n
    "},{"location":"features/0434-outofband/#url-shortening","title":"URL Shortening","text":"

    It seems inevitable that some out-of-band messages will be too long to produce a usable QR code. Techniques to avoid unusable QR codes have been presented above, including using attachment links for requests, minimizing the routing of the response, and eliminating unnecessary whitespace in the JSON. However, at some point a sender may need to generate a very long URL. In that case, a DIDComm-specific URL shortener redirection should be implemented by the sender as follows:

    It will always be possible to generate a usable QR code from the shortened form of the URL.

    "},{"location":"features/0434-outofband/#url-shortening-caveats","title":"URL Shortening Caveats","text":"

    Some HTTP libraries don't support preventing redirects from occurring on receipt of a 301 or 302. In this instance, the redirect is automatically followed and will result in a response that MAY have a status of 200 and MAY contain a URL that can be processed as a normal out-of-band message.

    If the agent performs an HTTP GET with an Accept header requesting the application/json MIME type, the response can either contain the message in JSON or result in a redirect; processing of the response should determine which response type was received and handle the message accordingly.

    "},{"location":"features/0434-outofband/#out-of-band-message-publishing","title":"Out-of-Band Message Publishing","text":"

    The sender will publish or transmit the out-of-band message URL in a manner available to the intended receiver. After publishing, the sender is in the await-response state, while the receiver is in the prepare-response state.

    "},{"location":"features/0434-outofband/#out-of-band-message-processing","title":"Out-of-Band Message Processing","text":"

    If the receiver receives an out-of-band message in the form of a QR code, the receiver should attempt to decode the QR code to an out-of-band message URL for processing.

    When the receiver receives the out-of-band message URL, there are two possible user flows, depending on whether the individual has an Aries agent. If the individual is new to Aries, they will likely load the URL in a browser. The resulting page SHOULD contain instructions on how to get started by installing an Aries agent. That install flow will transfer the out-of-band message to the newly installed software.

    A user that has already accomplished those steps will have the URL received directly by their software. That software will attempt to Base64Url decode the string and can read the out-of-band message directly from the oob query parameter, without loading the URL. If this process fails, the software should attempt the steps for processing a shortened URL.

    NOTE: In receiving the out-of-band message, the base64url decode implementation used MUST correctly decode padded and unpadded base64URL encoded data.
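    A minimal Python sketch of a decoder that satisfies this requirement by tolerating both padded and unpadded Base64Url input (the helper name is illustrative):

```python
import base64
import json
from urllib.parse import parse_qs, urlparse

def decode_oob_param(url: str) -> dict:
    """Decode the out-of-band message carried in a URL's oob query parameter.

    Re-adding any stripped '=' padding before decoding means both padded
    and unpadded Base64Url input are handled correctly.
    """
    oob = parse_qs(urlparse(url).query)["oob"][0]
    padded = oob + "=" * (-len(oob) % 4)
    return json.loads(base64.urlsafe_b64decode(padded))

# Both the padded and unpadded forms decode to the same message:
msg = decode_oob_param("https://example.com/ssi?oob=eyJhIjoxfQ")
```

The padding expression `"=" * (-len(oob) % 4)` adds zero, one, or two '=' characters as needed, so the same code path serves both encodings.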

    If the receiver wants to respond to the out-of-band message, they will use the information in the message to prepare the request, including:

    "},{"location":"features/0434-outofband/#correlating-responses-to-out-of-band-messages","title":"Correlating responses to Out-of-Band messages","text":"

    The response to an out-of-band message MUST set its ~thread.pthid equal to the @id property of the out-of-band message.

    Example referencing an explicit invitation:

    {\n  \"@id\": \"a46cdd0f-a2ca-4d12-afbf-2e78a6f1f3ef\",\n  \"@type\": \"https://didcomm.org/didexchange/1.0/request\",\n  \"~thread\": { \"pthid\": \"032fbd19-f6fd-48c5-9197-ba9a47040470\" },\n  \"label\": \"Bob\",\n  \"did\": \"B.did@B:A\",\n  \"did_doc~attach\": {\n    \"base64\": \"eyJ0eXAiOiJKV1Qi... (bytes omitted)\",\n    \"jws\": {\n      \"header\": {\n        \"kid\": \"did:key:z6MkmjY8GnV5i9YTDtPETC2uUAW6ejw3nk5mXF5yci5ab7th\"\n      },\n      \"protected\": \"eyJhbGciOiJFZERTQSIsImlhdCI6MTU4Mzg4... (bytes omitted)\",\n      \"signature\": \"3dZWsuru7QAVFUCtTd0s7uc1peYEijx4eyt5... (bytes omitted)\"\n    }\n  }\n}\n
    "},{"location":"features/0434-outofband/#response-transmission","title":"Response Transmission","text":"

    The response message from the receiver is encoded according to the standards of the DIDComm encryption envelope, using the service block present in (or resolved from) the out-of-band invitation.

    "},{"location":"features/0434-outofband/#reusing-connections","title":"Reusing Connections","text":"

    If an out-of-band invitation has a DID in the services block, and the receiver determines it has previously established a connection with that DID, the receiver MAY send its response on the established connection. See Reuse Messages for details.

    "},{"location":"features/0434-outofband/#receiver-error-handling","title":"Receiver Error Handling","text":"

    If the receiver is unable to process the out-of-band message, the receiver may respond with a Problem Report identifying the problem using a DIDComm message. As with any response, the pthid of the ~thread decorator MUST be the @id of the out-of-band message. The problem report MUST be in the protocol of an expected response. An example of an error that might come up is that the receiver is not able to handle any of the proposed protocols in the out-of-band message. The receiver MAY include in the problem report a ~service decorator that allows the sender to respond to the out-of-band message with a DIDComm message.

    "},{"location":"features/0434-outofband/#response-processing","title":"Response processing","text":"

    The sender MAY look up the corresponding out-of-band message identified in the response's ~thread.pthid to determine whether it should accept the response. Information about the related out-of-band message protocol may be required to provide the sender with context about processing the response and what to do after the protocol completes.

    "},{"location":"features/0434-outofband/#sender-error-handling","title":"Sender Error Handling","text":"

    If the sender receives a Problem Report message from the receiver, the sender has several options for responding. The sender will receive the message as part of an offered protocol in the out-of-band message.

    If the receiver did not include a ~service decorator in the response, the sender can only respond if it is still in session with the receiver. For example, if the sender is a website that displayed a QR code for the receiver to scan, the sender could create a new, presumably adjusted, out-of-band message, encode it and present it to the user in the same way as before.

    If the receiver included a ~service decorator in the response, the sender can provide a new message to the receiver, even a new version of the original out-of-band message, and send it to the receiver. The new message MUST include a ~thread decorator with the thid set to the @id from the problem report message.

    "},{"location":"features/0434-outofband/#drawbacks","title":"Drawbacks","text":""},{"location":"features/0434-outofband/#prior-art","title":"Prior art","text":""},{"location":"features/0434-outofband/#unresolved-questions","title":"Unresolved questions","text":""},{"location":"features/0434-outofband/#implementations","title":"Implementations","text":"

    The following lists the implementations (if any) of this RFC. Please do a pull request to add your implementation. If the implementation is open source, include a link to the repo or to the implementation within the repo. Please be consistent in the \"Name\" field so that a mechanical processing of the RFCs can generate a list of all RFCs supported by an Aries implementation.

    Name / Link Implementation Notes"},{"location":"features/0445-rich-schema-mapping/","title":"Aries RFC 0445: Aries Rich Schema Mapping","text":""},{"location":"features/0445-rich-schema-mapping/#summary","title":"Summary","text":"

    Mappings serve as a bridge between rich schemas and the flat array of signed integers. A mapping specifies the order in which attributes are transformed and signed. It consists of a set of graph paths and the encoding used for the attribute values specified by those graph paths. Each claim in a mapping has a reference to an encoding, and those encodings are defined in encoding objects.

    Mapping objects are processed in a generic way defined in Rich Schema Objects Common.

    "},{"location":"features/0445-rich-schema-mapping/#motivation","title":"Motivation","text":"

    Rich schemas are complex, hierarchical, and possibly nested objects. The Camenisch-Lysyanskaya signature scheme used by Indy requires the attributes to be represented by an array of 256-bit integers. Converting data specified by a rich schema into a flat array of integers requires a mapping object.

    "},{"location":"features/0445-rich-schema-mapping/#tutorial","title":"Tutorial","text":""},{"location":"features/0445-rich-schema-mapping/#intro-to-mappings","title":"Intro to Mappings","text":"

    Mappings are written to the ledger so they can be shared by multiple credential definitions. A Credential Definition may only reference a single Mapping.

    One or more Mappings can be referenced by a Presentation Definition. The mappings serve as a vital part of the verification process. The verifier, upon receipt of a presentation, must not only check that the array of integers signed by the issuer is valid, but also that the attribute values were transformed and ordered according to the mapping referenced in the credential definition.

    A Mapping references one and only one Rich Schema object. If there is no Schema Object a Mapping can reference, a new Schema must be created on the ledger. If a Mapping needs to map attributes from multiple Schemas, then a new Schema embedding the multiple Schemas must be created and stored on the ledger.

    Mappings need to be discoverable.

    A Mapping is a JSON-LD object following the same structure (attributes and graph paths) as the corresponding Rich Schema. A Mapping may contain only a subset of the original Rich Schema's attributes.

    Every Mapping must have two default attributes required by any W3C compatible credential (see W3C verifiable credential specification): issuer and issuanceDate. Additionally, any other attributes that are considered optional by the W3C verifiable credential specification that will be included in the issued credential must be included in the Mapping. For example, credentialStatus or expirationDate. This allows the holder to selectively disclose these attributes in the same way as other attributes from the schema.

    The value of every schema attribute in a Mapping object is an array of the following pairs: - encoding object (referenced by its id) to be used for representation of the attribute as an integer - rank of the attribute to define the order in which the attribute is signed by the Issuer.

    The value is an array because the same attribute may be used in a Credential Definition multiple times with different encodings.

    Note: The anonymous credential signature scheme currently used by Indy is Camenisch-Lysyanskaya signatures. It is the use of this signature scheme in combination with rich schema objects that necessitates a mapping object. If another signature scheme is used which does not have the same requirements, a mapping object may not be necessary or a different mapping object may need to be defined.

    "},{"location":"features/0445-rich-schema-mapping/#properties","title":"Properties","text":"

    Mapping's properties follow the generic template defined in Rich Schema Common.

    Mapping's content field is a JSON-LD-serialized string with the following fields:

    "},{"location":"features/0445-rich-schema-mapping/#id","title":"@id","text":"

    A Mapping must have an @id property. The value of this property must be equal to the id field which is a DID (see Identification of Rich Schema Objects).

    "},{"location":"features/0445-rich-schema-mapping/#type","title":"@type","text":"

    A Mapping must have a @type property. The value of this property must be (or map to, via a context object) a URI.

    "},{"location":"features/0445-rich-schema-mapping/#context","title":"@context","text":"

    A Mapping may have a @context property. If present, the value of this property must be a context object or a URI which can be dereferenced to obtain a context object.

    "},{"location":"features/0445-rich-schema-mapping/#schema","title":"schema","text":"

    An id of the corresponding Rich Schema

    "},{"location":"features/0445-rich-schema-mapping/#attributes","title":"attributes","text":"

    A dict of all the schema attributes the Mapping object is going to map to encodings and use in credentials. An attribute may have nested attributes matching the schema structure.

    It must also contain the following default attributes required by any W3C compatible verifiable credential (plus any additional attributes that may have been included from the W3C verifiable credentials data model): - issuer - issuanceDate - any additional attributes

    Every leaf attribute's value (including the default issuer and issuanceDate ones) is an array of pairs, each consisting of an encoding object (referenced by its id) and a rank.

    "},{"location":"features/0445-rich-schema-mapping/#example-mapping","title":"Example Mapping","text":"

    Let's consider a Rich Schema object with the following content:

        '@id': \"did:sov:4e9F8ZmxuvDqRiqqY29x6dx9oU4qwFTkPbDpWtwGbdUsrCD\",\n    '@context': \"did:sov:2f9F8ZmxuvDqRiqqY29x6dx9oU4qwFTkPbDpWtwGbdUsrCD\",\n    '@type': \"rdfs:Class\",\n    \"rdfs:comment\": \"ISO18013 International Driver License\",\n    \"rdfs:label\": \"Driver License\",\n    \"rdfs:subClassOf\": {\n        \"@id\": \"sch:Thing\"\n    },\n    \"driver\": \"Driver\",\n    \"dateOfIssue\": \"Date\",\n    \"dateOfExpiry\": \"Date\",\n    \"issuingAuthority\": \"Text\",\n    \"licenseNumber\": \"Text\",\n    \"categoriesOfVehicles\": {\n        \"vehicleType\": \"Text\",\n        \"dateOfIssue\": \"Date\",\n        \"dateOfExpiry\": \"Date\",\n        \"restrictions\": \"Text\",\n    },\n    \"administrativeNumber\": \"Text\"\n

    Then the corresponding Mapping object may have the following content. Please note that we used all attributes from the original Schema except dateOfExpiry, categoriesOfVehicles/dateOfExpiry and categoriesOfVehicles/restrictions. Also, the licenseNumber attribute is used twice, but with different encodings. It is important that no two rank values are identical.

        '@id': \"did:sov:5e9F8ZmxuvDqRiqqY29x6dx9oU4qwFTkPbDpWtwGbdUsrCD\",\n    '@context': \"did:sov:2f9F8ZmxuvDqRiqqY29x6dx9oU4qwFTkPbDpWtwGbdUsrCD\",\n    '@type': \"rdfs:Class\",\n    \"schema\": \"did:sov:4e9F8ZmxuvDqRiqqY29x6dx9oU4qwFTkPbDpWtwGbdUsrCD\",\n    \"attributes\" : {\n        \"issuer\": [{\n            \"enc\": \"did:sov:9x9F8ZmxuvDqRiqqY29x6dx9oU4qwFTkPbDpWtwGbdUsrCD\",\n            \"rank\": 1\n        }],\n        \"issuanceDate\": [{\n            \"enc\": \"did:sov:119F8ZmxuvDqRiqqY29x6dx9oU4qwFTkPbDpWtwGbdUsrCD\",\n            \"rank\": 2\n        }],\n        \"expirationDate\": [{\n            \"enc\": \"did:sov:119F8ZmxuvDqRiqqY29x6dx9oU4qwFTkPbDpWtwGbdUsrCD\",\n            \"rank\": 11\n        }],        \n        \"driver\": [{\n            \"enc\": \"did:sov:1x9F8ZmxuvDqRiqqY29x6dx9oU4qwFTkPbDpWtwGbdUsrCD\",\n            \"rank\": 5\n        }],\n        \"dateOfIssue\": [{\n            \"enc\": \"did:sov:2x9F8ZmxuvDqRiqqY29x6dx9oU4qwFTkPbDpWtwGbdUsrCD\",\n            \"rank\": 4\n        }],\n        \"issuingAuthority\": [{\n            \"enc\": \"did:sov:3x9F8ZmxuvDqRiqqY29x6dx9oU4qwFTkPbDpWtwGbdUsrCD\",\n            \"rank\": 3\n        }],\n        \"licenseNumber\": [\n            {\n                \"enc\": \"did:sov:4x9F8ZmxuvDqRiqqY29x6dx9oU4qwFTkPbDpWtwGbdUsrCD\",\n                \"rank\": 9\n            },\n            {\n                \"enc\": \"did:sov:5x9F8ZmxuvDqRiqqY29x6dx9oU4qwFTkPbDpWtwGbdUsrCD\",\n                \"rank\": 10\n            },\n        ],\n        \"categoriesOfVehicles\": {\n            \"vehicleType\": [{\n                \"enc\": \"did:sov:6x9F8ZmxuvDqRiqqY29x6dx9oU4qwFTkPbDpWtwGbdUsrCD\",\n                \"rank\": 6\n            }],\n            \"dateOfIssue\": [{\n             \"enc\": \"did:sov:7x9F8ZmxuvDqRiqqY29x6dx9oU4qwFTkPbDpWtwGbdUsrCD\",\n                \"rank\": 7\n            }],\n        },\n        \"administrativeNumber\": [{\n            \"enc\": 
\"did:sov:8x9F8ZmxuvDqRiqqY29x6dx9oU4qwFTkPbDpWtwGbdUsrCD\",\n            \"rank\": 8\n        }]\n    }\n

    "},{"location":"features/0445-rich-schema-mapping/#data-registry-storage","title":"Data Registry Storage","text":"

    Aries will provide a means for writing mappings to and reading mappings from a verifiable data registry (such as a distributed ledger).

    A Mapping object will be written to the ledger in a generic way defined in Rich Schema Objects Common.

    "},{"location":"features/0445-rich-schema-mapping/#aries-data-registry-interface","title":"Aries Data Registry Interface","text":"

    Aries Data Registry Interface methods for adding and retrieving a Mapping object from the ledger comply with the generic approach described in Rich Schema Objects Common.

    This means the following methods can be used: - write_rich_schema_object - read_rich_schema_object_by_id - read_rich_schema_object_by_metadata

    "},{"location":"features/0445-rich-schema-mapping/#reference","title":"Reference","text":"

    The following is a reference implementation of various transformation algorithms.

    Here is the paper that defines Camenisch-Lysyanskaya signatures.

    "},{"location":"features/0445-rich-schema-mapping/#drawbacks","title":"Drawbacks","text":"

    This increases the complexity of issuing verifiable credentials and verifying the accompanying verifiable presentations.

    "},{"location":"features/0445-rich-schema-mapping/#rationale-and-alternatives","title":"Rationale and alternatives","text":""},{"location":"features/0445-rich-schema-mapping/#unresolved-questions","title":"Unresolved questions","text":""},{"location":"features/0445-rich-schema-mapping/#implementations","title":"Implementations","text":"

    The following lists the implementations (if any) of this RFC. Please do a pull request to add your implementation. If the implementation is open source, include a link to the repo or to the implementation within the repo. Please be consistent in the \"Name\" field so that a mechanical processing of the RFCs can generate a list of all RFCs supported by an Aries implementation.

    Name / Link Implementation Notes"},{"location":"features/0446-rich-schema-cred-def/","title":"Aries RFC 0446: Aries Rich Schema Credential Definition","text":""},{"location":"features/0446-rich-schema-cred-def/#summary","title":"Summary","text":"

    Credential Definition can be used by the Issuer to set public keys for a particular Rich Schema and Mapping. The public keys can be used for signing the credentials by the Issuer according to the order and encoding of attributes defined by the referenced Mapping.

    Credential Definition objects are processed in a generic way defined in Rich Schema Objects Common.

    "},{"location":"features/0446-rich-schema-cred-def/#motivation","title":"Motivation","text":"

    The current format for Indy credential definitions provides a method for issuers to specify a schema and provide public key data for credentials they issue. This ties the schema and public key data values to the issuer's DID. The verifier uses the credential definition to check the validity of each signed credential attribute presented to the verifier.

    The new credential definition object that uses rich schemas is a minor modification of the current Indy credential definition. The new format has the same public key data. In addition to referencing a schema, the new credential definition can also reference a mapping object.

    "},{"location":"features/0446-rich-schema-cred-def/#tutorial","title":"Tutorial","text":""},{"location":"features/0446-rich-schema-cred-def/#intro-to-credential-definition","title":"Intro to Credential Definition","text":"

    Credential definitions are written to the ledger so they can be used by holders and verifiers in the presentation protocol.

    A Credential Definition can reference a single Mapping and a single Rich Schema only.

    Credential Definition is a JSON object.

    A Credential Definition should be immutable in most cases. Some applications may treat it as a mutable object, since the Issuer may rotate the keys present there. However, rotation of the Issuer's keys should be done carefully, as it will invalidate all credentials issued under those keys.

    "},{"location":"features/0446-rich-schema-cred-def/#properties","title":"Properties","text":"

    Credential definition's properties follow the generic template defined in Rich Schema Common.

    Credential Definition's content field is a JSON-serialized string with the following fields:

    "},{"location":"features/0446-rich-schema-cred-def/#signaturetype","title":"signatureType","text":"

    Type of the signature. ZKP scheme CL (Camenisch-Lysyanskaya) is the only type currently supported in Indy. Other signature types, even those that do not support ZKPs, may still make use of the credential definition to link the issuer's public keys with the rich schema against which the verifiable credential was constructed.

    "},{"location":"features/0446-rich-schema-cred-def/#mapping","title":"mapping","text":"

    An id of the corresponding Mapping

    "},{"location":"features/0446-rich-schema-cred-def/#schema","title":"schema","text":"

    An id of the corresponding Rich Schema. The mapping must reference the same Schema.

    "},{"location":"features/0446-rich-schema-cred-def/#publickey","title":"publicKey","text":"

    Issuer's public keys. Consists of primary and revocation keys.

    "},{"location":"features/0446-rich-schema-cred-def/#example-credential-definition","title":"Example Credential Definition","text":"

    An example of the content field of a Credential Definition object:

    \"signatureType\": \"CL\",\n\"mapping\": \"did:sov:UVj5w8DRzcmPVDpUMr4AZhJ\",\n\"schema\": \"did:sov:U5x5w8DRzcmPVDpUMr4AZhJ\",\n\"publicKey\": {\n    \"primary\": \"...\",\n    \"revocation\": \"...\"\n}\n
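The content field above can be checked programmatically before use. The following is a minimal Python sketch, not part of the RFC; the helper name and the specific checks are illustrative only:

```python
import json

# Field names follow the example content above; this validator is hypothetical.
REQUIRED_FIELDS = {"signatureType", "mapping", "schema", "publicKey"}

def validate_cred_def_content(content_json: str) -> dict:
    """Parse the JSON-serialized content field and check its required fields."""
    content = json.loads(content_json)
    missing = REQUIRED_FIELDS - content.keys()
    if missing:
        raise ValueError(f"missing fields: {sorted(missing)}")
    if content["signatureType"] != "CL":
        # CL (Camenisch-Lysyanskaya) is the only type currently supported in Indy.
        raise ValueError("unsupported signature type")
    return content

content = json.dumps({
    "signatureType": "CL",
    "mapping": "did:sov:UVj5w8DRzcmPVDpUMr4AZhJ",
    "schema": "did:sov:U5x5w8DRzcmPVDpUMr4AZhJ",
    "publicKey": {"primary": "...", "revocation": "..."},
})
parsed = validate_cred_def_content(content)
```

A real implementation would additionally confirm that the referenced mapping itself references the same schema, per the rule above.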

    "},{"location":"features/0446-rich-schema-cred-def/#use-in-verifiable-credentials","title":"Use in Verifiable Credentials","text":"

    A ZKP credential created according to the CL signature scheme must reference a Credential Definition used for signing. A Credential Definition is referenced in the credentialSchema property. A Credential Definition is referenced by its id.

    "},{"location":"features/0446-rich-schema-cred-def/#data-registry-storage","title":"Data Registry Storage","text":"

Aries will provide a means for writing Credential Definition objects to and reading them from a verifiable data registry (such as a distributed ledger).

    A Credential Definition object will be written to the ledger in a generic way defined in Rich Schema Objects Common.

    "},{"location":"features/0446-rich-schema-cred-def/#aries-data-registry-interface","title":"Aries Data Registry Interface","text":"

    Aries Data Registry Interface methods for adding and retrieving a Credential Definition object from the ledger comply with the generic approach described in Rich Schema Objects Common.

This means the following methods can be used:

- write_rich_schema_object
- read_rich_schema_object_by_id
- read_rich_schema_object_by_metadata
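Only the three method names come from the generic approach described above; everything else in the following Python sketch (the in-memory class, its storage scheme, and the metadata fields) is an illustrative stand-in, not the real interface:

```python
# A toy in-memory stand-in for the Aries Data Registry Interface methods.
class MockDataRegistry:
    def __init__(self):
        self._by_id = {}

    def write_rich_schema_object(self, obj):
        """Store a rich schema object, keyed by its id."""
        self._by_id[obj["id"]] = obj

    def read_rich_schema_object_by_id(self, obj_id):
        """Return the object with the given id, or None."""
        return self._by_id.get(obj_id)

    def read_rich_schema_object_by_metadata(self, name, version):
        """Return the first object whose metadata matches name and version."""
        for obj in self._by_id.values():
            md = obj.get("metadata", {})
            if md.get("name") == name and md.get("version") == version:
                return obj
        return None

registry = MockDataRegistry()
registry.write_rich_schema_object({
    "id": "did:sov:UVj5w8DRzcmPVDpUMr4AZhJ",
    "metadata": {"name": "DriverLicenseCredDef", "version": "1.0"},
})
found = registry.read_rich_schema_object_by_id("did:sov:UVj5w8DRzcmPVDpUMr4AZhJ")
```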

    "},{"location":"features/0446-rich-schema-cred-def/#reference","title":"Reference","text":"

    The following is a reference implementation of various transformation algorithms.

    Here is the paper that defines Camenisch-Lysyanskaya signatures.

    "},{"location":"features/0446-rich-schema-cred-def/#drawbacks","title":"Drawbacks","text":"

This increases the complexity of issuing verifiable credentials and verifying the accompanying verifiable presentations.

    "},{"location":"features/0446-rich-schema-cred-def/#rationale-and-alternatives","title":"Rationale and alternatives","text":""},{"location":"features/0446-rich-schema-cred-def/#prior-art","title":"Prior art","text":"

    Indy already has a Credential Definition support.

    What the prior effort lacked was a corresponding enhancement of schema infrastructure which would have provided the necessary typing of attribute values.

    "},{"location":"features/0446-rich-schema-cred-def/#unresolved-questions","title":"Unresolved questions","text":""},{"location":"features/0446-rich-schema-cred-def/#implementations","title":"Implementations","text":"

    The following lists the implementations (if any) of this RFC. Please do a pull request to add your implementation. If the implementation is open source, include a link to the repo or to the implementation within the repo. Please be consistent in the \"Name\" field so that a mechanical processing of the RFCs can generate a list of all RFCs supported by an Aries implementation.

    Name / Link Implementation Notes"},{"location":"features/0453-issue-credential-v2/","title":"Aries RFC 0453: Issue Credential Protocol 2.0","text":""},{"location":"features/0453-issue-credential-v2/#change-log","title":"Change Log","text":"

For a period of time, versions 2.1 and 2.2 were defined in this RFC. Those definitions were added prior to any implementations, and to date, there are no known implementations available or planned. An attempt at implementing version 2.1 was not merged into the main branch of Aries Cloud Agent Python, as it was deemed overly complicated and not worth the effort for what amounts to an edge case (issuing multiple credentials of the same type in a single protocol instance). Further, version 3.0 of this protocol has been specified and implemented without these capabilities. Thus, a decision was made to remove versions 2.1 and 2.2 as not accepted by the community and overly complicated to both implement and migrate from. Those interested in seeing how those capabilities were specified can look at this protocol before they were removed.

    "},{"location":"features/0453-issue-credential-v2/#20propose-credential-and-identifiers","title":"2.0/propose-credential and identifiers","text":"

Version 2.0 of the protocol is introduced because of a breaking change in the propose-credential message: the (Indy-specific) filtration criteria are replaced with a generalized filter attachment to align with the rest of the messages in the protocol. The previous version is 1.1/propose-credential. Version 2.0 also explicitly uses <angle brackets> to mark all values that may vary between instances, such as identifiers and comments.

The \"formats\" field is added to all the messages to enable linking the specific attachment IDs with the format (credential format and version) of the attachment.

The details about the different attachment formats included in each message type serve as a registry of the known formats and versions.

    "},{"location":"features/0453-issue-credential-v2/#summary","title":"Summary","text":"

Formalizes messages used to issue a credential--whether the credential is JWT-oriented, JSON-LD-oriented, or ZKP-oriented. The general flow is similar, and this protocol intends to handle all of them. If you are using a credential type that doesn't fit this protocol, please raise a GitHub issue.

    "},{"location":"features/0453-issue-credential-v2/#motivation","title":"Motivation","text":"

    We need a standard protocol for issuing credentials. This is the basis of interoperability between Issuers and Holders.

    "},{"location":"features/0453-issue-credential-v2/#tutorial","title":"Tutorial","text":""},{"location":"features/0453-issue-credential-v2/#name-and-version","title":"Name and Version","text":"

    issue-credential, version 2.0

    "},{"location":"features/0453-issue-credential-v2/#roles","title":"Roles","text":"

    There are two roles in this protocol: Issuer and Holder. Technically, the latter role is only potential until the protocol completes; that is, the second party becomes a Holder of a credential by completing the protocol. However, we will use the term Holder throughout, to keep things simple.

    Note: When a holder of credentials turns around and uses those credentials to prove something, they become a Prover. In the sister RFC to this one, 0454: Present Proof Protocol 2.0, the Holder is therefore renamed to Prover. Sometimes in casual conversation, the Holder role here might be called \"Prover\" as well, but more formally, \"Holder\" is the right term at this phase of the credential lifecycle.

    "},{"location":"features/0453-issue-credential-v2/#goals","title":"Goals","text":"

When the goals of each role are not clear from context, goal codes may be specifically included in protocol messages. This is particularly helpful to differentiate between credentials passed between the same parties for several different reasons. A goal code, once included, should be considered to apply to the entire thread and need not be repeated on each message. Changing the goal code may be done by including the new code in a message. All goal codes are optional, and without default.

    "},{"location":"features/0453-issue-credential-v2/#states","title":"States","text":"

    The choreography diagram below details how state evolves in this protocol, in a \"happy path.\" The states include

    "},{"location":"features/0453-issue-credential-v2/#issuer-states","title":"Issuer States","text":""},{"location":"features/0453-issue-credential-v2/#holder-states","title":"Holder States","text":"

Errors might occur in various places. For example, an Issuer might offer a credential for a price that the Holder is unwilling to pay. All errors are modeled with a problem-report message. Easy-to-anticipate errors reset the flow as shown in the diagrams, and use the code issuance-abandoned; more exotic errors (e.g., server crashed at Issuer headquarters in the middle of a workflow) may have different codes but still cause the flow to be abandoned in the same way. That is, in this version of the protocol, all errors cause the state of both parties (the sender and the receiver of the problem-report) to revert to null (meaning neither is engaged in the protocol at all). Future versions of the protocol may allow more granular choices (e.g., requesting and receiving a (re-)send of the issue-credential message if the Holder times out while waiting in the request-sent state).

    The state table outlines the protocol states and transitions.

    "},{"location":"features/0453-issue-credential-v2/#messages","title":"Messages","text":"

    The Issue Credential protocol consists of these messages:

    In addition, the ack and problem-report messages are adopted into the protocol for confirmation and error handling.

    "},{"location":"features/0453-issue-credential-v2/#message-attachments","title":"Message Attachments","text":"

    This protocol is about the messages that must be exchanged to issue verifiable credentials, NOT about the specifics of particular verifiable credential schemes. DIDComm attachments are deliberately used in messages to isolate the protocol flow/semantics from the credential artifacts themselves as separate constructs. Attachments allow credential formats and this protocol to evolve through versioning milestones independently instead of in lockstep. Links are provided in the message descriptions below, to describe how the protocol adapts to specific verifiable credential implementations.

    The attachment items in the messages are arrays. The arrays are provided to support the issuing of different credential formats (e.g. ZKP, JSON-LD JWT, or other) containing the same data (claims). The arrays are not to be used for issuing credentials with different claims. The formats field of each message associates each attachment with the format (and version) of the attachment.

A registry of attachment formats is provided in this RFC within the message type sections. A sub-section should be added for each attachment format type (and optionally, each version). Updates to the attachment type formats do NOT impact the versioning of the Issue Credential protocol. Formats are flexibly defined. For example, the first definitions are for hlindy/cred-abstract@v2.0 et al., assuming that all Hyperledger Indy implementations and ledgers will use a common format. However, if a specific instance of Indy uses a different format, another format value can be documented as a new registry entry.

Any of the embedded inline attachment forms defined in the 0017-attachments RFC can be used. In the examples below, base64 is used in most cases, but implementations MUST expect any of the formats.
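A receiver therefore needs to branch on the attachment data form rather than assume base64. The following sketch handles the two inline forms (base64 and embedded JSON) only; the helper name is hypothetical:

```python
import base64
import json

# Illustrative handling of inline attachment data per RFC 0017: the payload
# may arrive as base64 or as embedded JSON, and receivers should accept both.
def attachment_payload(data: dict) -> dict:
    if "base64" in data:
        return json.loads(base64.b64decode(data["base64"]))
    if "json" in data:
        return data["json"]
    raise ValueError("unsupported attachment data form")

b64 = base64.b64encode(json.dumps({"attr": "value"}).encode()).decode()
decoded = attachment_payload({"base64": b64})
```

RFC 0017 also defines appended and link-based forms (with a sha256 integrity hash), which this sketch omits.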

    "},{"location":"features/0453-issue-credential-v2/#choreography-diagram","title":"Choreography Diagram","text":"

    Note: This diagram was made in draw.io. To make changes:

    The protocol has 3 alternative beginnings:

    1. The Issuer can begin with an offer.
    2. The Holder can begin with a proposal.
3. The Holder can begin with a request.

    The offer and proposal messages are part of an optional negotiation phase and may trigger back-and-forth counters. A request is not subject to negotiation; it can only be accepted or rejected.

    "},{"location":"features/0453-issue-credential-v2/#propose-credential","title":"Propose Credential","text":"

    An optional message sent by the potential Holder to the Issuer to initiate the protocol or in response to an offer-credential message when the Holder wants some adjustments made to the credential data offered by Issuer.

    Note: In Hyperledger Indy, where the `request-credential` message can **only** be sent in response to an `offer-credential` message, the `propose-credential` message is the only way for a potential Holder to initiate the workflow.

    Message format:

    {\n    \"@type\": \"https://didcomm.org/issue-credential/%VER/propose-credential\",\n    \"@id\": \"<uuid of propose-message>\",\n    \"goal_code\": \"<goal-code>\",\n    \"comment\": \"<some comment>\",\n    \"credential_preview\": <json-ld object>,\n    \"formats\" : [\n        {\n            \"attach_id\" : \"<attach@id value>\",\n            \"format\" : \"<format-and-version>\"\n        }\n    ],\n    \"filters~attach\": [\n        {\n            \"@id\": \"<attachment identifier>\",\n            \"mime-type\": \"application/json\",\n            \"data\": {\n                \"base64\": \"<bytes for base64>\"\n            }\n        }\n    ]\n}\n

    Description of attributes:
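A propose-credential message matching the format above can be assembled so that each formats entry links to its attachment by id. This is an illustrative sketch (the builder function and the filter content are hypothetical; %VER is resolved to 2.0):

```python
import base64
import json
import uuid

def make_propose_credential(filter_obj: dict, format_value: str) -> dict:
    """Build a propose-credential message with one filter attachment."""
    attach_id = str(uuid.uuid4())
    return {
        "@type": "https://didcomm.org/issue-credential/2.0/propose-credential",
        "@id": str(uuid.uuid4()),
        # Each formats entry names an attachment and its credential format.
        "formats": [{"attach_id": attach_id, "format": format_value}],
        "filters~attach": [{
            "@id": attach_id,
            "mime-type": "application/json",
            "data": {
                "base64": base64.b64encode(json.dumps(filter_obj).encode()).decode()
            },
        }],
    }

msg = make_propose_credential({"cred_def_id": "<cred-def-id>"},
                              "hlindy/cred-filter@v2.0")
```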

"},{"location":"features/0453-issue-credential-v2/#propose-attachment-registry","title":"Propose Attachment Registry","text":"
| Credential Format | Format Value | Link to Attachment Format | Comment |
| --- | --- | --- | --- |
| DIF Credential Manifest | dif/credential-manifest@v1.0 | propose-credential attachment format | |
| Linked Data Proof VC Detail | aries/ld-proof-vc-detail@v1.0 | ld-proof-vc-detail attachment format | |
| Hyperledger Indy Credential Filter | hlindy/cred-filter@v2.0 | cred filter format | |
| Hyperledger AnonCreds Credential Filter | anoncreds/credential-filter@v1.0 | Credential Filter format | |
"},{"location":"features/0453-issue-credential-v2/#offer-credential","title":"Offer Credential","text":"

    A message sent by the Issuer to the potential Holder, describing the credential they intend to offer and possibly the price they expect to be paid. In Hyperledger Indy, this message is required, because it forces the Issuer to make a cryptographic commitment to the set of fields in the final credential and thus prevents Issuers from inserting spurious data. In credential implementations where this message is optional, an Issuer can use the message to negotiate the issuing following receipt of a request-credential message.

    Message Format:

    {\n    \"@type\": \"https://didcomm.org/issue-credential/%VER/offer-credential\",\n    \"@id\": \"<uuid of offer message>\",\n    \"goal_code\": \"<goal-code>\",\n    \"replacement_id\": \"<issuer unique id>\",\n    \"comment\": \"<some comment>\",\n    \"credential_preview\": <json-ld object>,\n    \"formats\" : [\n        {\n            \"attach_id\" : \"<attach@id value>\",\n            \"format\" : \"<format-and-version>\",\n        }\n    ],\n    \"offers~attach\": [\n        {\n            \"@id\": \"<attach@id value>\",\n            \"mime-type\": \"application/json\",\n            \"data\": {\n                \"base64\": \"<bytes for base64>\"\n            }\n        }\n    ]\n}\n

    Description of fields:

    It is possible for an Issuer to add a ~timing.expires_time decorator to this message to convey the idea that the offer will expire at a particular point in the future. Such behavior is not a special part of this protocol, and support for it is not a requirement of conforming implementations; the ~timing decorator is simply a general possibility for any DIDComm message. We mention it here just to note that the protocol can be enriched in composable ways.

"},{"location":"features/0453-issue-credential-v2/#offer-attachment-registry","title":"Offer Attachment Registry","text":"
| Credential Format | Format Value | Link to Attachment Format | Comment |
| --- | --- | --- | --- |
| DIF Credential Manifest | dif/credential-manifest@v1.0 | offer-credential attachment format | |
| Hyperledger Indy Credential Abstract | hlindy/cred-abstract@v2.0 | cred abstract format | |
| Linked Data Proof VC Detail | aries/ld-proof-vc-detail@v1.0 | ld-proof-vc-detail attachment format | |
| Hyperledger AnonCreds Credential Offer | anoncreds/credential-offer@v1.0 | Credential Offer format | |
| W3C VC - Data Integrity Proof Credential Offer | didcomm/w3c-di-vc-offer@v0.1 | Credential Offer format | |
"},{"location":"features/0453-issue-credential-v2/#request-credential","title":"Request Credential","text":"

    This is a message sent by the potential Holder to the Issuer, to request the issuance of a credential. Where circumstances do not require a preceding Offer Credential message (e.g., there is no cost to issuance that the Issuer needs to explain in advance, and there is no need for cryptographic negotiation), this message initiates the protocol. When using the Hyperledger Indy AnonCreds verifiable credential format, this message can only be sent in response to an offer-credential message.

    Message Format:

    {\n    \"@type\": \"https://didcomm.org/issue-credential/%VER/request-credential\",\n    \"@id\": \"<uuid of request message>\",\n    \"goal_code\": \"<goal-code>\",\n    \"comment\": \"<some comment>\",\n    \"formats\" : [\n        {\n            \"attach_id\" : \"<attach@id value>\",\n            \"format\" : \"<format-and-version>\",\n        }\n    ],\n    \"requests~attach\": [\n        {\n            \"@id\": \"<attachment identifier>\",\n            \"mime-type\": \"application/json\",\n            \"data\": {\n                \"base64\": \"<bytes for base64>\"\n            }\n        },\n    ]\n}\n

    Description of Fields:

"},{"location":"features/0453-issue-credential-v2/#request-attachment-registry","title":"Request Attachment Registry","text":"
| Credential Format | Format Value | Link to Attachment Format | Comment |
| --- | --- | --- | --- |
| DIF Credential Manifest | dif/credential-manifest@v1.0 | request-credential attachment format | |
| Hyperledger Indy Credential Request | hlindy/cred-req@v2.0 | cred request format | |
| Linked Data Proof VC Detail | aries/ld-proof-vc-detail@v1.0 | ld-proof-vc-detail attachment format | |
| Hyperledger AnonCreds Credential Request | anoncreds/credential-request@v1.0 | Credential Request format | |
| W3C VC - Data Integrity Proof Credential Request | didcomm/w3c-di-vc-request@v0.1 | Credential Request format | |
"},{"location":"features/0453-issue-credential-v2/#issue-credential","title":"Issue Credential","text":"

    This message contains a verifiable credential being issued as an attached payload. It is sent in response to a valid Request Credential message.

    Message Format:

    {\n    \"@type\": \"https://didcomm.org/issue-credential/%VER/issue-credential\",\n    \"@id\": \"<uuid of issue message>\",\n    \"goal_code\": \"<goal-code>\",\n    \"replacement_id\": \"<issuer unique id>\",\n    \"comment\": \"<some comment>\",\n    \"formats\" : [\n        {\n            \"attach_id\" : \"<attach@id value>\",\n            \"format\" : \"<format-and-version>\",\n        }\n    ],\n    \"credentials~attach\": [\n        {\n            \"@id\": \"<attachment-id>\",\n            \"mime-type\": \"application/json\",\n            \"data\": {\n                \"base64\": \"<bytes for base64>\"\n            }\n        }\n    ]\n}\n

    Description of fields:

"},{"location":"features/0453-issue-credential-v2/#credentials-attachment-registry","title":"Credentials Attachment Registry","text":"
| Credential Format | Format Value | Link to Attachment Format | Comment |
| --- | --- | --- | --- |
| Linked Data Proof VC | aries/ld-proof-vc@v1.0 | ld-proof-vc attachment format | |
| Hyperledger Indy Credential | hlindy/cred@v2.0 | credential format | |
| Hyperledger AnonCreds Credential | anoncreds/credential@v1.0 | Credential format | |
| W3C VC - Data Integrity Proof Credential | didcomm/w3c-di-vc@v0.1 | Credential format | |
"},{"location":"features/0453-issue-credential-v2/#adopted-problem-report","title":"Adopted Problem Report","text":"

    The problem-report message is adopted by this protocol. problem-report messages can be used by either party to indicate an error in the protocol.

    "},{"location":"features/0453-issue-credential-v2/#preview-credential","title":"Preview Credential","text":"

This is not a message but an inner object for other messages in this protocol. It is used to construct a preview of the data for the credential that is to be issued. Its schema follows:

    {\n    \"@type\": \"https://didcomm.org/issue-credential/%VER/credential-preview\",\n    \"attributes\": [\n        {\n            \"name\": \"<attribute name>\",\n            \"mime-type\": \"<type>\",\n            \"value\": \"<value>\"\n        },\n        // more attributes\n    ]\n}\n

    The main element is attributes. It is an array of (object) attribute specifications; the subsections below outline their semantics.
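Assembling the preview is straightforward. The sketch below is illustrative (the helper is hypothetical); it applies the mime-type rules described in the subsections that follow, normalizing the optional mime-type to lowercase and omitting it when absent:

```python
# Hypothetical helper assembling the credential-preview inner object.
def make_preview(attrs) -> dict:
    """attrs: list of (name, value) or (name, value, mime_type) tuples."""
    attributes = []
    for name, value, *rest in attrs:
        entry = {"name": name, "value": value}
        if rest and rest[0] is not None:
            # MIME types parse case-insensitively (RFC 2045); store lowercase.
            entry["mime-type"] = rest[0].lower()
        attributes.append(entry)
    return {
        "@type": "https://didcomm.org/issue-credential/2.0/credential-preview",
        "attributes": attributes,
    }

preview = make_preview([
    ("favourite_drink", "martini"),
    ("photo", "iVBOR...", "image/PNG"),
])
```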

    "},{"location":"features/0453-issue-credential-v2/#attribute-name","title":"Attribute Name","text":"

    The mandatory \"name\" key maps to the attribute name as a string.

    "},{"location":"features/0453-issue-credential-v2/#mime-type-and-value","title":"MIME Type and Value","text":"

    The optional mime-type advises the issuer how to render a binary attribute, to judge its content for applicability before issuing a credential containing it. Its value parses case-insensitively in keeping with MIME type semantics of RFC 2045. If mime-type is missing, its value is null.

    The mandatory value holds the attribute value:

    "},{"location":"features/0453-issue-credential-v2/#threading","title":"Threading","text":"

    Threading can be used to initiate a sub-protocol during an issue credential protocol instance. For example, during credential issuance, the Issuer may initiate a child message thread to execute the Present Proof sub-protocol to have the potential Holder (now acting as a Prover) prove attributes about themselves before issuing the credential. Depending on circumstances, this might be a best practice for preventing credential fraud at issuance time.

    If threading were added to all of the above messages, a ~thread decorator would be present, and later messages in the flow would reference the @id of earlier messages to stitch the flow into a single coherent sequence. Details about threading can be found in the 0008: Message ID and Threading RFC.
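The stitching described above can be sketched as follows. Only the ~thread decorator and its thid field come from RFC 0008; the helper and message shapes are illustrative:

```python
import uuid

# Sketch of ~thread stitching per RFC 0008: later messages carry a ~thread
# decorator whose thid is the @id of the first message in the thread.
def reply_in_thread(parent: dict, body: dict) -> dict:
    """Build a reply that continues the thread started by (or containing) parent."""
    thid = parent.get("~thread", {}).get("thid", parent["@id"])
    return {**body, "@id": str(uuid.uuid4()), "~thread": {"thid": thid}}

first = {"@id": str(uuid.uuid4()), "@type": ".../offer-credential"}
second = reply_in_thread(first, {"@type": ".../request-credential"})
third = reply_in_thread(second, {"@type": ".../issue-credential"})
```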

    "},{"location":"features/0453-issue-credential-v2/#limitations","title":"Limitations","text":"

Smart contracts may be absent from the ecosystem, so the operation \"issue credential after payment received\" is not atomic. A malicious issuer could take payment first and then fail to issue the credential. Such behavior is easily detected, however, and an appropriate penalty should be applied in networks of this type.

    "},{"location":"features/0453-issue-credential-v2/#negotiation-and-preview","title":"Negotiation and Preview","text":"

    Negotiation prior to issuing the credential can be done using the offer-credential and propose-credential messages. A common negotiation use case would be about the data to go into the credential. For that, the credential_preview element is used.

    "},{"location":"features/0453-issue-credential-v2/#drawbacks","title":"Drawbacks","text":"

    None documented

    "},{"location":"features/0453-issue-credential-v2/#rationale-and-alternatives","title":"Rationale and alternatives","text":""},{"location":"features/0453-issue-credential-v2/#prior-art","title":"Prior art","text":"

    See RFC 0036 Issue Credential, v1.x.

    "},{"location":"features/0453-issue-credential-v2/#unresolved-questions","title":"Unresolved questions","text":""},{"location":"features/0453-issue-credential-v2/#implementations","title":"Implementations","text":"

    The following lists the implementations (if any) of this RFC. Please do a pull request to add your implementation. If the implementation is open source, include a link to the repo or to the implementation within the repo. Please be consistent in the \"Name\" field so that a mechanical processing of the RFCs can generate a list of all RFCs supported by an Aries implementation.

    Name / Link Implementation Notes"},{"location":"features/0454-present-proof-v2/","title":"Aries RFC 0454: Present Proof Protocol 2.0","text":""},{"location":"features/0454-present-proof-v2/#change-log","title":"Change Log","text":"

For a period of time, versions 2.1 and 2.2 were defined in this RFC. Those definitions were added prior to any implementations, and to date, there are no known implementations available or planned. An attempt at implementing version 2.1 was not merged into the main branch of Aries Cloud Agent Python, as it was deemed overly complicated and not worth the effort for what amounts to an edge case (presenting multiple presentations of the same type in a single protocol instance). Further, version 3.0 of this protocol has been specified and implemented without these capabilities. Thus, a decision was made to remove versions 2.1 and 2.2 as not accepted by the community and overly complicated to both implement and migrate from. Those interested in seeing how those capabilities were specified can look at this protocol before they were removed.

    "},{"location":"features/0454-present-proof-v2/#20-alignment-with-rfc-0453-issue-credential","title":"2.0 - Alignment with RFC 0453 Issue Credential","text":""},{"location":"features/0454-present-proof-v2/#summary","title":"Summary","text":"

    A protocol supporting a general purpose verifiable presentation exchange regardless of the specifics of the underlying verifiable presentation request and verifiable presentation format.

    "},{"location":"features/0454-present-proof-v2/#motivation","title":"Motivation","text":"

    We need a standard protocol for a verifier to request a presentation from a prover, and for the prover to respond by presenting a proof to the verifier. When doing that exchange, we want to provide a mechanism for the participants to negotiate the underlying format and content of the proof.

    "},{"location":"features/0454-present-proof-v2/#tutorial","title":"Tutorial","text":""},{"location":"features/0454-present-proof-v2/#name-and-version","title":"Name and Version","text":"

    present-proof, version 2.0

    "},{"location":"features/0454-present-proof-v2/#key-concepts","title":"Key Concepts","text":"

    This protocol is about the messages to support the presentation of verifiable claims, not about the specifics of particular verifiable presentation formats. DIDComm attachments are deliberately used in messages to make the protocol agnostic to specific verifiable presentation format payloads. Links are provided in the message data element descriptions to details of specific verifiable presentation implementation data structures.

    Diagrams in this protocol were made in draw.io. To make changes:

    "},{"location":"features/0454-present-proof-v2/#roles","title":"Roles","text":"

    The roles are verifier and prover. The verifier requests the presentation of a proof and verifies the presentation, while the prover prepares the proof and presents it to the verifier. Optionally, although unlikely from a business sense, the prover may initiate an instance of the protocol using the propose-presentation message.

    "},{"location":"features/0454-present-proof-v2/#goals","title":"Goals","text":"

When the goals of each role are not clear from context, goal codes may be specifically included in protocol messages. This is particularly helpful to differentiate between credentials passed between the same parties for several different reasons. A goal code, once included, should be considered to apply to the entire thread and need not be repeated on each message. Changing the goal code may be done by including the new code in a message. All goal codes are optional, and without default.

    "},{"location":"features/0454-present-proof-v2/#states","title":"States","text":"

    The following states are defined and included in the state transition table below.

    "},{"location":"features/0454-present-proof-v2/#states-for-verifier","title":"States for Verifier","text":""},{"location":"features/0454-present-proof-v2/#states-for-prover","title":"States for Prover","text":"

    For the most part, these states map onto the transitions shown in both the state transition table above, and in the choreography diagram (below) in obvious ways. However, a few subtleties are worth highlighting:

    "},{"location":"features/0454-present-proof-v2/#choreography-diagram","title":"Choreography Diagram","text":""},{"location":"features/0454-present-proof-v2/#messages","title":"Messages","text":"

    The present proof protocol consists of these messages:

    In addition, the ack and problem-report messages are adopted into the protocol for confirmation and error handling.

    The messages that include ~attach attachments may use any form of the embedded attachment. In the examples below, the forms of the attachment are arbitrary.

    The ~attach array is to be used to enable a single presentation to be requested/delivered in different verifiable presentation formats. The ability to have multiple attachments must not be used to request/deliver multiple different presentations in a single instance of the protocol.

    "},{"location":"features/0454-present-proof-v2/#propose-presentation","title":"Propose Presentation","text":"

    An optional message sent by the prover to the verifier to initiate a proof presentation process, or in response to a request-presentation message when the prover wants to propose using a different presentation format or request. Schema:

    {\n    \"@type\": \"https://didcomm.org/present-proof/%VER/propose-presentation\",\n    \"@id\": \"<uuid-propose-presentation>\",\n    \"goal_code\": \"<goal-code>\",\n    \"comment\": \"some comment\",\n    \"formats\" : [\n        {\n            \"attach_id\" : \"<attach@id value>\",\n            \"format\" : \"<format-and-version>\",\n        }\n    ],\n    \"proposals~attach\": [\n        {\n            \"@id\": \"<attachment identifier>\",\n            \"mime-type\": \"application/json\",\n            \"data\": {\n                \"json\": \"<json>\"\n            }\n        }\n    ]\n}\n

    Description of fields:

    If the proposals~attach is not provided, the attach_id item in the formats array should not be provided. That form of the propose-presentation message is to indicate the presentation formats supported by the prover, independent of the verifiable presentation request content.
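The rule above can be enforced with a simple consistency check: every formats entry with an attach_id must match an attachment @id, and attach_id must be absent when no proposals~attach is provided. The validator below is an illustrative sketch, not part of the RFC:

```python
# Hypothetical consistency check for the formats/attach_id rule above.
def formats_consistent(msg: dict) -> bool:
    attach_ids = {a["@id"] for a in msg.get("proposals~attach", [])}
    for fmt in msg.get("formats", []):
        if attach_ids:
            # With attachments present, each attach_id must resolve.
            if fmt.get("attach_id") not in attach_ids:
                return False
        elif "attach_id" in fmt:
            # Without attachments, attach_id should not be provided.
            return False
    return True

ok = formats_consistent({
    "formats": [{"attach_id": "a1", "format": "hlindy/proof-req@v2.0"}],
    "proposals~attach": [{"@id": "a1", "data": {"json": {}}}],
})
bad = formats_consistent({
    "formats": [{"attach_id": "a1", "format": "hlindy/proof-req@v2.0"}],
})
```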

    "},{"location":"features/0454-present-proof-v2/#negotiation-and-preview","title":"Negotiation and Preview","text":"

    Negotiation prior to the delivery of the presentation can be done using the propose-presentation and request-presentation messages. The common negotiation use cases would be about the claims to go into the presentation and the format of the verifiable presentation.

"},{"location":"features/0454-present-proof-v2/#propose-attachment-registry","title":"Propose Attachment Registry","text":"
| Presentation Format | Format Value | Link to Attachment Format | Comment |
| --- | --- | --- | --- |
| Hyperledger Indy Proof Req | hlindy/proof-req@v2.0 | proof request format | Used to propose as well as request proofs. |
| DIF Presentation Exchange | dif/presentation-exchange/definitions@v1.0 | propose-presentation attachment format | |
| Hyperledger AnonCreds Proof Request | anoncreds/proof-request@v1.0 | Proof Request format | Used to propose as well as request proofs. |
"},{"location":"features/0454-present-proof-v2/#request-presentation","title":"Request Presentation","text":"

    From a verifier to a prover, the request-presentation message describes values that need to be revealed and predicates that need to be fulfilled. Schema:

    {\n    \"@type\": \"https://didcomm.org/present-proof/%VER/request-presentation\",\n    \"@id\": \"<uuid-request>\",\n    \"goal_code\": \"<goal-code>\",\n    \"comment\": \"some comment\",\n    \"will_confirm\": true,\n    \"formats\" : [\n        {\n            \"attach_id\" : \"<attach@id value>\",\n            \"format\" : \"<format-and-version>\",\n        }\n    ],\n    \"request_presentations~attach\": [\n        {\n            \"@id\": \"<attachment identifier>\",\n            \"mime-type\": \"application/json\",\n            \"data\":  {\n                \"base64\": \"<base64 data>\"\n            }\n        }\n    ]\n}\n

    Description of fields:

    "},{"location":"features/0454-present-proof-v2/#presentation-request-attachment-registry","title":"Presentation Request Attachment Registry","text":"Presentation Format Format Value Link to Attachment Format Comment Hyperledger Indy Proof Req hlindy/proof-req@v2.0 proof request format Used to propose as well as request proofs. DIF Presentation Exchange dif/presentation-exchange/definitions@v1.0 propose-presentation attachment format Hyperledger AnonCreds Proof Request anoncreds/proof-request@v1.0 Proof Request format Used to propose as well as request proofs."},{"location":"features/0454-present-proof-v2/#presentation","title":"Presentation","text":"

    This message is a response to a Presentation Request message and contains signed presentations. Schema:

    {\n    \"@type\": \"https://didcomm.org/present-proof/%VER/presentation\",\n    \"@id\": \"<uuid-presentation>\",\n    \"goal_code\": \"<goal-code>\",\n    \"comment\": \"some comment\",\n    \"formats\" : [\n        {\n            \"attach_id\" : \"<attach@id value>\",\n            \"format\" : \"<format-and-version>\",\n        }\n    ],\n    \"presentations~attach\": [\n        {\n            \"@id\": \"<attachment identifier>\",\n            \"mime-type\": \"application/json\",\n            \"data\": {\n                \"sha256\": \"f8dca1d901d18c802e6a8ce1956d4b0d17f03d9dc5e4e1f618b6a022153ef373\",\n                \"links\": [\"https://ibb.co/TtgKkZY\"]\n            }\n        }\n    ]\n}\n

    Description of fields:

    If the prover wants an acknowledgement that the presentation was accepted, this message may be decorated with the ~please-ack decorator using the OUTCOME acknowledgement request. This is not necessary if the verifier has indicated it will send an ack-presentation using the will_confirm property. Outcome in the context of this protocol is the definition of \"successful\" as described in Ack Presentation. Note that this is different from the default behavior as described in 0317: Please ACK Decorator. It is then best practice for the new Verifier to respond with an explicit ack message as described in the please ack decorator RFC.
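The will_confirm interplay described above can be sketched in Python. This is an illustrative helper (the function name is hypothetical), assuming the ~please-ack shape from RFC 0317:

```python
def decorate_presentation(presentation: dict, verifier_will_confirm: bool) -> dict:
    """Request an OUTCOME ack on a presentation message, but only when the
    verifier did not already promise an ack-presentation via will_confirm.
    Sketch only; the ~please-ack shape follows RFC 0317."""
    msg = dict(presentation)
    if not verifier_will_confirm:
        msg["~please-ack"] = {"on": ["OUTCOME"]}
    return msg

presentation = {
    "@type": "https://didcomm.org/present-proof/2.0/presentation",
    "@id": "uuid-presentation",
}
with_ack = decorate_presentation(presentation, verifier_will_confirm=False)
without_ack = decorate_presentation(presentation, verifier_will_confirm=True)
```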

    "},{"location":"features/0454-present-proof-v2/#presentations-attachment-registry","title":"Presentations Attachment Registry","text":"Presentation Format Format Value Link to Attachment Format Comment Hyperledger Indy Proof hlindy/proof@v2.0 proof format DIF Presentation Exchange dif/presentation-exchange/submission@v1.0 propose-presentation attachment format Hyperledger AnonCreds Proof anoncreds/proof@v1.0 Proof format"},{"location":"features/0454-present-proof-v2/#ack-presentation","title":"Ack Presentation","text":"

    A message from the verifier to the prover that the Present Proof protocol was completed successfully and is now in the done state. The message is an adopted ack from the RFC 0015 acks protocol. The definition of \"successful\" in this protocol means the acceptance of the presentation in whole, i.e. the proof is verified and the contents of the proof are acknowledged.

    "},{"location":"features/0454-present-proof-v2/#problem-report","title":"Problem Report","text":"

    A message from the verifier to the prover that follows the presentation message to indicate that the Present Proof protocol was completed unsuccessfully and is now in the abandoned state. The message is an adopted problem-report from the RFC 0015 report-problem protocol. The definition of \"unsuccessful\" from a business sense is up to the verifier. The elements of the problem-report message can provide information to the prover about why the protocol instance was unsuccessful.

    Either party may send a problem-report message earlier in the flow to terminate the protocol before its normal conclusion.

    "},{"location":"features/0454-present-proof-v2/#reference","title":"Reference","text":"

    Details are covered in the Tutorial section.

    "},{"location":"features/0454-present-proof-v2/#drawbacks","title":"Drawbacks","text":"

    The Indy format of the proposal attachment as proposed above does not allow nesting of logic along the lines of \"A and either B or C if D, otherwise A and B\", nor cross-credential options such as proposing a legal name issued by either (for example) a specific financial institution or government entity.

    The verifiable presentation standardization work being conducted in parallel to this in DIF and the W3C Credentials Community Group (CCG) should be included in at least the Registry tables of this document, and ideally used to eliminate the need for presentation format-specific options.

    "},{"location":"features/0454-present-proof-v2/#rationale-and-alternatives","title":"Rationale and alternatives","text":""},{"location":"features/0454-present-proof-v2/#prior-art","title":"Prior art","text":"

    The previous major version of this protocol is RFC 0037 Present Proof protocol and implementations.

    "},{"location":"features/0454-present-proof-v2/#unresolved-questions","title":"Unresolved questions","text":""},{"location":"features/0454-present-proof-v2/#implementations","title":"Implementations","text":"

    The following lists the implementations (if any) of this RFC. Please do a pull request to add your implementation. If the implementation is open source, include a link to the repo or to the implementation within the repo. Please be consistent in the \"Name\" field so that a mechanical processing of the RFCs can generate a list of all RFCs supported by an Aries implementation.

    Name / Link Implementation Notes"},{"location":"features/0482-coprotocol-protocol/","title":"Aries RFC 0482: Coprotocol Protocol 0.5","text":""},{"location":"features/0482-coprotocol-protocol/#summary","title":"Summary","text":"

    Allows coprotocols to interact with one another.

    "},{"location":"features/0482-coprotocol-protocol/#motivation","title":"Motivation","text":"

    We need a standard way for one protocol to invoke another, giving it input, getting its output, detaching, and debugging.

    "},{"location":"features/0482-coprotocol-protocol/#tutorial","title":"Tutorial","text":""},{"location":"features/0482-coprotocol-protocol/#name-and-version","title":"Name and Version","text":"

The name of this protocol is \"Coprotocol Protocol 0.5\". It is identified by the PIURI \"https://didcomm.org/coprotocol/0.5\".

    "},{"location":"features/0482-coprotocol-protocol/#key-concepts","title":"Key Concepts","text":"

Please make sure you are familiar with the general concept of coprotocols, as set forth in Aries RFC 0478. A working knowledge of the terminology and mental model explained there is foundational.

    "},{"location":"features/0482-coprotocol-protocol/#roles","title":"Roles","text":"

    The caller role is played by the entity giving input and getting output. The called is the entity getting input and giving output.

    "},{"location":"features/0482-coprotocol-protocol/#states","title":"States","text":"

    The caller's normal state progression is null -> detached -> attached -> done. It is also possible to return to a detached state without ever reaching done.

The coprotocol's normal state progression is null -> attached -> done.

    "},{"location":"features/0482-coprotocol-protocol/#messages","title":"Messages","text":"

Note: the discussion below is about how to launch and interact with any coprotocol. However, for concreteness we frame the walkthrough in terms of a coprotocol that makes a payment. You can see an example definition of such a coprotocol in RFC 0478.

The protocol consists of five messages (bind, attach, input, output, detach) plus the adopted problem-report (for propagating errors).

    The protocol begins with a bind message sent from caller to called. This message basically says, \"I would like to interact with a new coprotocol instance having the following characteristics and the following mapping of identifiers to roles.\" It might look like this:

    {\n    \"@id\": \"4d116a88-1314-4af5-9b3c-848456b8b3dd\",\n    \"@type\": \"https://didcomm.org/coprotocol/1.0/bind\",\n    \"goal_code\": \"aries.buy.make-payment\",\n    \"co_binding_id\": null,\n    \"cast\": [\n        // Recipient of the bind message (id = null) should be payee.\n        {\"role\": \"payee\", \"id\": null},\n        // The payer will be did:peer:abc123.\n        {\"role\": \"payer\", \"id\": \"did:peer:abc123\" }\n    ]\n}\n

    When a called agent receives this message, it should discover what protocol implementations are available that match the criteria, and sort the candidates by preference. (Note that additional criteria can be added besides those shown here; see the Reference section.) This could involve enumerating not-yet-loaded plugins. It could also involve negotiating a protocol with the remote party (e.g., the DID playing the role of payer in the example above) by querying its capabilities using the Discover Features Protocol. Of course, the capabilities of remote parties could also be cached to avoid this delay, or they could be predicted without confirmation, if circumstances suggest that's the best tradeoff. Once the candidates are sorted by preference, the best match should be selected. The coprotocol is NOT launched, but it is awaiting launch. The called agent should now generate an attach message that acknowledges the request to bind and tells the caller how to interact:

    {\n    \"@id\": \"b3dd4d11-6a88-9b3c-4af5-848456b81314\",\n    \"@type\": \"https://didcomm.org/coprotocol/1.0/attach\",\n    \"~thread\": { \"pthid\": \"4d116a88-1314-4af5-9b3c-848456b8b3dd\"},\n    // This is the best match.\n    \"piuri\": \"https://didcomm.org/pay-with-venmo/1.3\"\n}\n

    The @id of the bind message (also the ~thread.pthid of the attach response) becomes a permanent identifier for the coprotocol binding. Both the caller and the coprotocol instance code can use it to lookup state as needed. The caller can now kick off/invoke the protocol with an input message:

{\n    \"@id\": \"56b81314-6a88-9b3c-4af5-b3dd4d118484\",\n    \"@type\": \"https://didcomm.org/coprotocol/1.0/input\",\n    \"~thread\": { \"pthid\": \"4d116a88-1314-4af5-9b3c-848456b8b3dd\"},\n    \"interaction_point\": \"invoke\",\n    \"data\": {\n        \"amount\": 1.23,\n        \"currency\": \"INR\",\n        \"bill_of_sale\": {\n            // describes what's being purchased\n        }\n    }\n}\n

    This allows the caller to invoke the bound coprotocol instance, and to pass it any number of named inputs.

    Later, when the coprotocol instance wants to emit an output from called to caller, it uses an output message (in this case, one matching the preauth interaction point declared in the sample coprotocol definition in RFC 0478):

{\n    \"@id\": \"9b3c56b8-6a88-f513-4a14-4d118484b3dd\",\n    \"@type\": \"https://didcomm.org/coprotocol/1.0/output\",\n    \"~thread\": { \"pthid\": \"4d116a88-1314-4af5-9b3c-848456b8b3dd\"},\n    \"interaction_point\": \"preauth\",\n    \"data\": {\n        \"code\": \"6a884d11-13149b3c\"\n    }\n}\n

If a caller wants to detach, it uses a detach message. This leaves the coprotocol running on called; all outputs that it emits are discarded, and it advances on its normal state trajectory as if it were a wholly independent protocol:

    {\n    \"@id\": \"7a3c56b8-5b88-d413-4a14-ca118484b3ee\",\n    \"@type\": \"https://didcomm.org/coprotocol/1.0/detach\",\n    \"~thread\": { \"pthid\": \"4d116a88-1314-4af5-9b3c-848456b8b3dd\"}\n}\n

A caller can re-attach by sending a new bind message; this time, the co_binding_id field should have the coprotocol binding id from the original attach message. Other fields in the message are optional; if present, they constitute a check that the binding in question has the properties the caller expects. The reattachment is confirmed by a new attach message.

    "},{"location":"features/0482-coprotocol-protocol/#reference","title":"Reference","text":""},{"location":"features/0482-coprotocol-protocol/#bind","title":"bind","text":"
{\n    \"@id\": \"4d116a88-1314-4af5-9b3c-848456b8b3dd\",\n    \"@type\": \"https://didcomm.org/coprotocol/1.0/bind\",\n    // I'd like to be bound to a coprotocol that achieves this goal.\n    \"goal_code\": \"aries.buy.make-payment\",\n    \"co_binding_id\": null,\n    // What is the intent about who plays which roles?\n    \"cast\": [\n        // Recipient of the bind message (id = null) should be payee.\n        {\"role\": \"payee\", \"id\": null},\n        // The payer will be did:peer:abc123.\n        {\"role\": \"payer\", \"id\": \"did:peer:abc123\" }\n    ],\n    // Optional and preferably omitted as it creates tight coupling;\n    // constrains bound coprotocol to just those that have a PIURI\n    // matching this wildcarded expression. \n    \"piuri_pat\": \"*/pay*\",\n    // If multiple matches are found, tells how to sort them to pick\n    // best match. \n    \"prefer\": [\n        // First prefer to bind a protocol that's often successful.\n        { \"attribute\": \"success_ratio\", \"direction\": \"d\" },\n        // Tie break by binding a protocol that's been run recently.\n        { \"attribute\": \"last_run_date\", \"direction\": \"d\" },\n        // Tie break by binding a protocol that's newer.\n        { \"attribute\": \"release_date\", \"direction\": \"d\" },\n        // Tie break by selecting protocols already running (false\n        // sorts before true).\n        { \"attribute\": \"running\", \"direction\": \"d\" }\n    ]\n}\n
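The prefer list can be applied as a stable multi-key sort: later tie-breakers are sorted first so that the first entry dominates. A sketch in Python (the candidate records and the rank_candidates helper are illustrative, not part of the protocol):

```python
def rank_candidates(candidates, prefer):
    """Sort candidate coprotocol implementations using a bind message's
    'prefer' list: earlier entries dominate; direction 'd' means descending.
    Relies on Python's sort being stable."""
    ranked = list(candidates)
    # Apply sort keys in reverse so the first 'prefer' entry wins overall.
    for rule in reversed(prefer):
        ranked.sort(
            key=lambda c: c[rule["attribute"]],
            reverse=(rule["direction"] == "d"),
        )
    return ranked

prefer = [
    {"attribute": "success_ratio", "direction": "d"},
    {"attribute": "release_date", "direction": "d"},
]
candidates = [
    {"piuri": "pay-a/1.0", "success_ratio": 0.9, "release_date": "2019-01-01"},
    {"piuri": "pay-b/1.0", "success_ratio": 0.9, "release_date": "2020-01-01"},
    {"piuri": "pay-c/1.0", "success_ratio": 0.5, "release_date": "2021-01-01"},
]
best = rank_candidates(candidates, prefer)[0]  # pay-b: tied ratio, newer release
```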
    "},{"location":"features/0482-coprotocol-protocol/#attach","title":"attach","text":"
{\n    \"@id\": \"b3dd4d11-6a88-9b3c-4af5-848456b81314\",\n    \"@type\": \"https://didcomm.org/coprotocol/1.0/attach\",\n    \"~thread\": { \"pthid\": \"4d116a88-1314-4af5-9b3c-848456b8b3dd\"},\n    // This is the best match.\n    \"piuri\": \"https://didcomm.org/pay-with-venmo/1.3\",\n    // Optional. Tells how long the caller has to take the next\n    // step; the binding will be held in an\n    // inactive state before being abandoned.\n    \"~timing.expires_time\": \"2020-06-23T18:42:07.124\"\n}\n
    "},{"location":"features/0482-coprotocol-protocol/#collateral","title":"Collateral","text":"

    This section is optional. It could be used to reference files, code, relevant standards, oracles, test suites, or other artifacts that would be useful to an implementer. In general, collateral should be checked in with the RFC.

    "},{"location":"features/0482-coprotocol-protocol/#drawbacks","title":"Drawbacks","text":"

    Why should we not do this?

    "},{"location":"features/0482-coprotocol-protocol/#rationale-and-alternatives","title":"Rationale and alternatives","text":""},{"location":"features/0482-coprotocol-protocol/#prior-art","title":"Prior art","text":"

    Discuss prior art, both the good and the bad, in relation to this proposal. A few examples of what this can include are:

This section is intended to encourage you as an author to think about the lessons from other implementers, and to provide readers of your proposal with a fuller picture. If there is no prior art, that is fine - your ideas are interesting to us whether they are brand new or if they are an adaptation from other communities.

Note that while precedent set by other communities is some motivation, it does not on its own motivate an enhancement proposal here. Please also take into consideration that Aries sometimes intentionally diverges from common identity features.

    "},{"location":"features/0482-coprotocol-protocol/#unresolved-questions","title":"Unresolved questions","text":""},{"location":"features/0482-coprotocol-protocol/#implementations","title":"Implementations","text":"

NOTE: This section should remain in the RFC as is on first release. Remove this note and leave the rest of the text as is. Template text in all other sections should be replaced.

    The following lists the implementations (if any) of this RFC. Please do a pull request to add your implementation. If the implementation is open source, include a link to the repo or to the implementation within the repo. Please be consistent in the \"Name\" field so that a mechanical processing of the RFCs can generate a list of all RFCs supported by an Aries implementation.

    Implementation Notes may need to include a link to test results.

    Name / Link Implementation Notes"},{"location":"features/0496-transition-to-oob-and-did-exchange/","title":"Aries RFC 0496: Transition to the Out of Band and DID Exchange Protocols","text":""},{"location":"features/0496-transition-to-oob-and-did-exchange/#summary","title":"Summary","text":"

    The Aries community has agreed to transition from using the invitation messages in RFC 0160 Connections and RFC 0023 DID Exchange to using the plaintext invitation message in RFC 0434 Out of Band and from using RFC 0160 to RFC 0023 for establishing agent-to-agent connections. As well, the community has agreed to transition from using RFC 0056 Service Decorator to execute connection-less instances of the RFC 0037 Present Proof protocol to using the out-of-band invitation message.

    This RFC follows the guidance in RFC 0345 about community-coordinated updates to (try to) ensure that independently deployed, interoperable agents remain interoperable throughout this transition.

    The transition from the old to new messages will occur in four steps:

    "},{"location":"features/0496-transition-to-oob-and-did-exchange/#step-1-out-of-band-messages","title":"Step 1 Out-of-Band Messages","text":"

    The definition of Step 1 has been deliberately defined to limit the impact of the changes on existing code bases. An implementation may be able to do as little as convert an incoming out-of-band protocol message into its \"current format\" equivalent and process the message, thus deferring larger changes to the message handling code. The following examples show the equivalence between out-of-band and current messages and the constraints on the out-of-band invitations used in Step 2.

    "},{"location":"features/0496-transition-to-oob-and-did-exchange/#connection-invitationinline-diddoc-service-entry","title":"Connection Invitation\u2014Inline DIDDoc Service Entry","text":"

    The following is the out-of-band invitation message equivalent to an RFC 0160 Connections invitation message that may be used in Step 2.

{\n  \"@type\": \"https://didcomm.org/out-of-band/1.0/invitation\",\n  \"@id\": \"1234-1234-1234-1234\",\n  \"label\": \"Faber College\",\n  \"goal_code\": \"establish-connection\",\n  \"goal\": \"To establish a connection\",\n  \"handshake_protocols\": [\"did:sov:BzCbsNYhMrjHiqZDTUASHg;spec/connections/1.0/invitation\"],\n  \"service\": [\n      {\n        \"id\": \"#inline\",\n        \"type\": \"did-communication\",\n        \"recipientKeys\": [\"did:key:z6MkpTHR8VNsBxYAAWHut2Geadd9jSwuBV8xRoAnwWsdvktH\"],\n        \"routingKeys\": [],\n        \"serviceEndpoint\": \"https://example.com:5000\"\n      }\n  ]\n}\n

    The constraints on this form of the out-of-band invitation sent during Step 2 are:

    This out-of-band message can be transformed to the following RFC 0160 Connection invitation message.

    {\n  \"@type\": \"did:sov:BzCbsNYhMrjHiqZDTUASHg;spec/connections/1.0/invitation\",\n  \"@id\": \"1234-1234-1234-1234\",\n  \"label\": \"Faber College\",\n  \"recipientKeys\": [\"6MkpTHR8VNsBxYAAWHut2Geadd9jSwuBV8xRoAnwWsdvktH\"],\n  \"serviceEndpoint\": \"https://example.com:5000\",\n  \"routingKeys\": []\n}\n

    Note the use of did:key in the out-of-band message and the \"naked\" public key in the connection message. Ideally, full support for did:key will be added during Step 1. However, if there is not time for an agent builder to add full support, the transformation can be accomplished using simple text transformations between the did:key format and the (only) public key format used in current Aries agents.
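As the two example messages show, the "simple text transformation" can be as small as stripping the did:key prefix. The sketch below mirrors those examples (a complete did:key implementation would instead base58-decode the multibase value and remove the multicodec header; the helper names are illustrative):

```python
def did_key_to_naked(did_key: str) -> str:
    # Minimal transformation matching the examples above: drop "did:key:z".
    # A full did:key implementation would base58-decode the value and strip
    # the multicodec prefix rather than treat it as plain text.
    prefix = "did:key:z"
    if not did_key.startswith(prefix):
        raise ValueError("unsupported key format: " + did_key)
    return did_key[len(prefix):]

def naked_to_did_key(naked_key: str) -> str:
    # Inverse transformation for outgoing messages.
    return "did:key:z" + naked_key

oob_key = "did:key:z6MkpTHR8VNsBxYAAWHut2Geadd9jSwuBV8xRoAnwWsdvktH"
naked = did_key_to_naked(oob_key)
```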

    "},{"location":"features/0496-transition-to-oob-and-did-exchange/#connection-invitationdid-service-entry","title":"Connection Invitation\u2014DID Service Entry","text":"

If the out-of-band message service item is a single DID, the transformed message differs accordingly. For example, consider this out-of-band invitation message:

    {\n  \"@type\": \"https://didcomm.org/out-of-band/%VER/invitation\",\n  \"@id\": \"<id used for context as pthid>\",\n  \"label\": \"Faber College\",\n  \"goal_code\": \"issue-vc\",\n  \"goal\": \"To issue a Faber College Graduate credential\",\n  \"handshake_protocols\": [\"https://didcomm.org/connections/1.0\"],\n  \"service\": [\"did:sov:LjgpST2rjsoxYegQDRm7EL\"]\n}\n

    The did form of the connection invitation is implied, as shown here:

{\n  \"@type\": \"did:sov:BzCbsNYhMrjHiqZDTUASHg;spec/connections/1.0/invitation\",\n  \"@id\": \"1234-1234-1234-1234\",\n  \"label\": \"Faber College\",\n  \"did\": \"did:sov:LjgpST2rjsoxYegQDRm7EL\"\n}\n
    "},{"location":"features/0496-transition-to-oob-and-did-exchange/#connection-less-present-proof-request","title":"Connection-less Present Proof Request","text":"

    The most common connection-less form being used in production is the request-presentation message from the RFC 0037 Present Proof protocol. The out-of-band invitation for that request looks like this, using the inline form of the service entry.

    {\n  \"@type\": \"https://didcomm.org/out-of-band/%VER/invitation\",\n  \"@id\": \"1234-1234-1234-1234\",\n  \"label\": \"Faber College\",\n  \"goal_code\": \"present-proof\",\n  \"goal\": \"Request proof of some claims from verified credentials\",\n  \"request~attach\": [\n    {\n        \"@id\": \"request-0\",\n        \"mime-type\": \"application/json\",\n        \"data\": {\n            \"json\": {\n                \"@type\": \"did:sov:BzCbsNYhMrjHiqZDTUASHg;spec/present-proof/1.0/request-presentation\",\n                \"@id\": \"<uuid-request>\",\n                \"comment\": \"some comment\",\n                \"request_presentations~attach\": [\n                    {\n                        \"@id\": \"libindy-request-presentation-0\",\n                        \"mime-type\": \"application/json\",\n                        \"data\":  {\n                            \"base64\": \"<bytes for base64>\"\n                        }\n                    }\n                ]\n            }\n        }\n    }\n  ],\n  \"service\": [\n      {\n        \"id\": \"#inline\",\n        \"type\": \"did-communication\",\n        \"recipientKeys\": [\"did:key:z6MkpTHR8VNsBxYAAWHut2Geadd9jSwuBV8xRoAnwWsdvktH\"],\n        \"routingKeys\": [],\n        \"serviceEndpoint\": \"https://example.com:5000\"\n      }\n  ]\n}\n

    The constraints on this form of the out-of-band invitation sent during Step 2 are:

    This out-of-band message can be transformed to the following RFC 0037 Present Proof request-presentation message with an RFC 0056 Service Decorator item.

    {\n    \"@type\": \"did:sov:BzCbsNYhMrjHiqZDTUASHg;spec/present-proof/1.0/request-presentation\",\n    \"@id\": \"1234-1234-1234-1234\",\n    \"comment\": \"Request proof of some claims from verified credentials\",\n    \"~service\": {\n        \"recipientKeys\": [\"6MkpTHR8VNsBxYAAWHut2Geadd9jSwuBV8xRoAnwWsdvktH\"],\n        \"routingKeys\": [],\n        \"serviceEndpoint\": \"https://example.com:5000\"\n    },\n    \"request_presentations~attach\": [\n        {\n            \"@id\": \"libindy-request-presentation-0\",\n            \"mime-type\": \"application/json\",\n            \"data\":  {\n                \"base64\": \"<bytes for base64>\"\n            }\n        }\n    ]\n}\n

    If the DID form of the out-of-band invitation message service entry was used, the ~service item would be comparably altered.
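The transformation above can be sketched as a small Python function (the helper name is hypothetical; it assumes a single json-form attachment and a single inline service entry, and uses the prefix-strip form of the did:key conversion shown earlier):

```python
def oob_to_connectionless(oob: dict) -> dict:
    """Lift the attached request out of an out-of-band invitation and move
    its inline service entry into an RFC 0056 ~service decorator.
    Sketch only; assumes one json attachment and one inline service entry."""
    request = dict(oob["request~attach"][0]["data"]["json"])
    service = oob["service"][0]
    request["~service"] = {
        # Prefix-strip conversion, mirroring the example messages.
        "recipientKeys": [k.replace("did:key:z", "", 1) for k in service["recipientKeys"]],
        "routingKeys": service.get("routingKeys", []),
        "serviceEndpoint": service["serviceEndpoint"],
    }
    return request

oob = {
    "@type": "https://didcomm.org/out-of-band/1.0/invitation",
    "request~attach": [{"@id": "request-0", "mime-type": "application/json",
                        "data": {"json": {"@id": "uuid-request"}}}],
    "service": [{"id": "#inline", "type": "did-communication",
                 "recipientKeys": ["did:key:z6MkpTHR8VNsBxYAAWHut2Geadd9jSwuBV8xRoAnwWsdvktH"],
                 "routingKeys": [], "serviceEndpoint": "https://example.com:5000"}],
}
request = oob_to_connectionless(oob)
```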

    "},{"location":"features/0496-transition-to-oob-and-did-exchange/#url-shortener-handling","title":"URL Shortener Handling","text":"

During Step 2, URL shortening, as defined in RFC 0434 Out of Band, must be supported.

    "},{"location":"features/0496-transition-to-oob-and-did-exchange/#between-step-triggers","title":"Between Step Triggers","text":"

    The community coordination triggers between the steps above will be as follows:

    "},{"location":"features/0496-transition-to-oob-and-did-exchange/#motivation","title":"Motivation","text":"

    To enable agent builders to independently update their code bases and deployed agents to support the out-of-band protocol while maintaining interoperability.

    "},{"location":"features/0496-transition-to-oob-and-did-exchange/#tutorial","title":"Tutorial","text":"

    The general mechanism for this type of transition is documented in RFC 0345 about community-coordinated updates.

    The specific sequence of events to make this particular transition is outlined in the summary section of this RFC.

    "},{"location":"features/0496-transition-to-oob-and-did-exchange/#reference","title":"Reference","text":"

    See the summary section of this RFC for the details of this transition.

    "},{"location":"features/0496-transition-to-oob-and-did-exchange/#drawbacks","title":"Drawbacks","text":"

    None identified.

    "},{"location":"features/0496-transition-to-oob-and-did-exchange/#rationale-and-alternatives","title":"Rationale and alternatives","text":"

    This approach balances the speed of adoption with the need for independent deployment and ongoing interoperability.

    "},{"location":"features/0496-transition-to-oob-and-did-exchange/#prior-art","title":"Prior art","text":"

    The approach outlined in RFC 0345 about community-coordinated updates is a well-known pattern for using deprecation to make breaking changes in an ecosystem. That said, this is the first attempt to use this approach in Aries. Adjustments to the transition plan will be made as needed, and RFC 0345 will be updated based on lessons learned in executing this plan.

    "},{"location":"features/0496-transition-to-oob-and-did-exchange/#unresolved-questions","title":"Unresolved questions","text":""},{"location":"features/0496-transition-to-oob-and-did-exchange/#implementations","title":"Implementations","text":"

    The following table lists the status of various agent code bases and deployments with respect to Step 1 of this transition. Agent builders MUST update this table as they complete steps of the transition.

    Name / Link Implementation Notes"},{"location":"features/0509-action-menu/","title":"Aries RFC 0509: Action Menu Protocol","text":""},{"location":"features/0509-action-menu/#summary","title":"Summary","text":"

The action-menu protocol allows one Agent to present a set of hierarchical menus and actions to another user-facing Agent in a human-friendly way. The protocol allows limited service discovery as well as simple data entry. While less flexible than HTML forms or a chat bot, it should be relatively easy to implement and provides a user interface which can be adapted for various platforms, including mobile agents.

    "},{"location":"features/0509-action-menu/#motivation","title":"Motivation","text":"

Discovery of a peer Agent's capabilities or service offerings is currently reliant on knowledge obtained out-of-band. There is no in-band DIDComm supported protocol for querying a peer to obtain a human-friendly menu of their capabilities or service offerings. Whilst this protocol doesn't offer ledger-wide discovery capabilities, it will allow one User Agent, connected to another, to present a navigable menu and request offered services. The protocol also provides an interface definition language to define action menu display, selection and request submission.

    "},{"location":"features/0509-action-menu/#tutorial","title":"Tutorial","text":""},{"location":"features/0509-action-menu/#name-and-version","title":"Name and Version","text":"

    action-menu, version 1.0

    "},{"location":"features/0509-action-menu/#key-concepts","title":"Key Concepts","text":"

    The action-menu protocol requires an active DIDComm connection before it can proceed. One Agent behaves as a requester in the protocol whilst the other Agent represents a responder. Conceptually the responder presents a list of actions which can be initiated by the requester. Actions are contained within a menu structure. Individual Actions may result in traversal to another menu or initiation of other Aries protocols such as a presentation request, an introduction proposal, a credential offer, an acknowledgement, or a problem report.

    The protocol can be initiated by either the requester asking for the root menu or the responder sending an unsolicited root menu. The protocol ends when the requester issues a perform operation or an internal timeout on the responder causes it to discard menu context. At any time a requester can reset the protocol by requesting the root menu from a responder.

Whilst the protocol is defined here as uni-directional (i.e. requester to responder), both Agents may support both requester and responder roles simultaneously. Such cases would result in two instances of the action-menu protocol operating in parallel.

    "},{"location":"features/0509-action-menu/#roles","title":"Roles","text":"

    There are two roles in the action-menu protocol: requester and responder.

    The requester asks the responder for menu definitions, presents them to a user, and initiates subsequent action items from the menu through further requests to the responder.

The responder presents an initial menu definition containing actionable elements to a requester and then responds to subsequent action requests from the menu.

    "},{"location":"features/0509-action-menu/#states","title":"States","text":""},{"location":"features/0509-action-menu/#states-for-requester","title":"States for Requester","text":"State\u2003\u2003\u2003\u2003\u2003\u2003\u2003\u2003 Description null No menu has been requested or received awaiting-root-menu menu-request message has been sent and awaiting root menu response preparing-selection menu message has been received and a user selection is pending done perform message has been sent and protocol has finished. Perform actions can include requesting a new menu which will re-enter the state machine with the receive-menu event from the null state."},{"location":"features/0509-action-menu/#states-for-responder","title":"States for Responder","text":"State\u2003\u2003\u2003\u2003\u2003\u2003\u2003\u2003 Description null No menu has been requested or sent preparing-root-menu menu-request message has been received and preparing menu response for root menu awaiting-selection menu message has been sent and are awaiting a perform request done perform message has been received and protocol has finished. Perform actions can include requesting a new menu which will re-enter the state machine with the send-menu event from the null state."},{"location":"features/0509-action-menu/#messages","title":"Messages","text":""},{"location":"features/0509-action-menu/#menu","title":"menu","text":"

A requester is expected to display only one active menu per connection when action menus are employed by the responder. A newly received menu is not expected to interrupt a user, but rather be made available for the user to inspect possible actions related to the responder.

    {\n  \"@type\": \"https://didcomm.org/action-menu/%VER/menu\",\n  \"@id\": \"5678876542344\",\n  \"title\": \"Welcome to IIWBook\",\n  \"description\": \"IIWBook facilitates connections between attendees by verifying attendance and distributing connection invitations.\",\n  \"errormsg\": \"No IIWBook names were found.\",\n  \"options\": [\n    {\n      \"name\": \"obtain-email-cred\",\n      \"title\": \"Obtain a verified email credential\",\n      \"description\": \"Connect with the BC email verification service to obtain a verified email credential\"\n    },\n    {\n      \"name\": \"verify-email-cred\",\n      \"title\": \"Verify your participation\",\n      \"description\": \"Present a verified email credential to identify yourself\"\n    },\n    {\n      \"name\": \"search-introductions\",\n      \"title\": \"Search introductions\",\n      \"description\": \"Your email address must be verified to perform a search\",\n      \"disabled\": true\n    }\n  ]\n}\n
    "},{"location":"features/0509-action-menu/#description-of-attributes","title":"Description of attributes","text":""},{"location":"features/0509-action-menu/#quick-forms","title":"Quick forms","text":"

    Menu options may define a form property, which would direct the requester user to a client-generated form when the menu option is selected. The menu title should be shown at the top of the form, followed by the form description text if defined, followed by the list of form params in sequence. The form should also include a Cancel button to return to the menu, a Submit button (with an optional custom label defined by submit-label), and optionally a Clear button to reset the parameters to their default values.

    {\n  \"@type\": \"https://didcomm.org/action-menu/%VER/menu\",\n  \"@id\": \"5678876542347\",\n  \"~thread\": {\n    \"thid\": \"5678876542344\"\n  },\n  \"title\": \"Attendance Verified\",\n  \"description\": \"\",\n  \"options\": [\n    {\n      \"name\": \"submit-invitation\",\n      \"title\": \"Submit an invitation\",\n      \"description\": \"Send an invitation for IIWBook to share with another participant\"\n    },\n    {\n      \"name\": \"search-introductions\",\n      \"title\": \"Search introductions\",\n      \"form\": {\n        \"description\": \"Enter a participant name below to perform a search.\",\n        \"params\": [\n          {\n            \"name\": \"query\",\n            \"title\": \"Participant name\",\n            \"default\": \"\",\n            \"description\": \"\",\n            \"required\": true,\n            \"type\": \"text\"\n          }\n        ],\n        \"submit-label\": \"Search\"\n      }\n    }\n  ]\n}\n

    When the form is submitted, a perform message is generated containing values entered in the form. The form block may have an empty or missing params property in which case it acts as a simple confirmation dialog.

Each entry in the params list must define a name and title. The description is optional (it should be displayed as help text below the field) and the type defaults to \u2018text\u2019 if not provided (only the \u2018text\u2019 type is supported at this time). Parameters should default to required true if not specified. Parameters may also define a default value (used when rendering or clearing the form).
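Assembling the perform message from a submitted quick form can be sketched as follows. This is a minimal illustration, not a normative implementation: build_perform_message is a hypothetical helper, and the %VER placeholder is pinned to 1.0 purely for the example.

```python
import json
import uuid

def build_perform_message(menu_thid, option_name, form_values):
    """Build a perform message for a selected menu option.

    menu_thid: @id of the menu message that offered the form.
    option_name: name of the selected menu option.
    form_values: dict of form param names to the values entered
                 (empty dict for options without a form).
    """
    return {
        # Hypothetical: %VER fixed to 1.0 for illustration.
        "@type": "https://didcomm.org/action-menu/1.0/perform",
        "@id": str(uuid.uuid4()),
        # Attach to the same thread as the menu, per the protocol.
        "~thread": {"thid": menu_thid},
        "name": option_name,
        "params": form_values,
    }

msg = build_perform_message("5678876542344", "search-introductions", {"query": "Alice"})
print(json.dumps(msg, indent=2))
```

The params dict carries the form values entered by the user; for a simple confirmation dialog (empty or missing params in the form block) it is just an empty object.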

    "},{"location":"features/0509-action-menu/#menu-request","title":"menu-request","text":"

In addition to menus being pushed by the responder, the root menu can be re-requested at any time by the requester sending a menu-request.

    {\n  \"@type\": \"https://didcomm.org/action-menu/%VER/menu-request\",\n  \"@id\": \"5678876542345\"\n}\n
    "},{"location":"features/0509-action-menu/#perform","title":"perform","text":"

When the requester selects a menu option, a perform message is generated. It should be attached to the same thread as the menu. The active menu should close when an option is selected.

    The response to a perform message can be any type of agent message, including another menu message, a presentation request, an introduction proposal, a credential offer, an acknowledgement, or a problem report. Whatever the message type, it should normally reference the same message thread as the perform message.

    {\n  \"@type\": \"https://didcomm.org/action-menu/%VER/perform\",\n  \"@id\": \"5678876542346\",\n  \"~thread\": {\n    \"thid\": \"5678876542344\"\n  },\n  \"name\": \"obtain-email-cred\",\n  \"params\": {}\n}\n
    "},{"location":"features/0509-action-menu/#description-of-attributes_1","title":"Description of attributes","text":""},{"location":"features/0509-action-menu/#drawbacks","title":"Drawbacks","text":"

    N/A

    "},{"location":"features/0509-action-menu/#rationale-and-alternatives","title":"Rationale and alternatives","text":"

    N/A

    "},{"location":"features/0509-action-menu/#prior-art","title":"Prior art","text":"

    There are several existing RFCs that relate to the general problem of \"Discovery\"

    "},{"location":"features/0509-action-menu/#unresolved-questions","title":"Unresolved questions","text":""},{"location":"features/0509-action-menu/#implementations","title":"Implementations","text":"

    The following lists the implementations (if any) of this RFC. Please do a pull request to add your implementation. If the implementation is open source, include a link to the repo or to the implementation within the repo. Please be consistent in the \"Name\" field so that a mechanical processing of the RFCs can generate a list of all RFCs supported by an Aries implementation.

    Name / Link Implementation Notes Aries Cloud Agent - Python MISSING test results"},{"location":"features/0510-dif-pres-exch-attach/","title":"Aries RFC 0510: Presentation-Exchange Attachment format for requesting and presenting proofs","text":""},{"location":"features/0510-dif-pres-exch-attach/#summary","title":"Summary","text":"

    This RFC registers three attachment formats for use in the present-proof V2 protocol based on the Decentralized Identity Foundation's (DIF) Presentation Exchange specification (P-E). Two of these formats define containers for a presentation-exchange request object and another options object carrying additional parameters, while the third format is just a vessel for the final presentation_submission verifiable presentation transferred from the Prover to the Verifier.

    Presentation Exchange defines a data format capable of articulating a rich set of proof requirements from Verifiers, and also provides a means of describing the formats in which Provers must submit those proofs.

A Verifier defines their requirements in a presentation_definition containing input_descriptors that describe the credential(s) the proof(s) must be derived from, as well as a rich set of operators that place constraints on those proofs (e.g. \"must be issued by issuer X\" or \"age over X\").
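The way an input descriptor's field constraints apply to a credential can be sketched as below. This is a simplified sketch only: real implementations evaluate full JSONPath expressions and JSON Schema filters, whereas resolve_path and field_matches here are hypothetical helpers that handle just the dotted paths and the minimum keyword from the examples in this RFC.

```python
def resolve_path(doc, path):
    """Resolve a simplified JSONPath like '$.credentialSubject.birth_date'."""
    node = doc
    for key in path.lstrip("$.").split("."):
        if not isinstance(node, dict) or key not in node:
            return None
        node = node[key]
    return node

def field_matches(credential, field):
    """Check one constraints.fields entry: the first resolvable path is used."""
    for path in field["path"]:
        value = resolve_path(credential, path)
        if value is not None:
            flt = field.get("filter", {})
            # ISO-formatted date strings compare correctly lexicographically.
            if "minimum" in flt and not value >= flt["minimum"]:
                return False
            return True
    return False

credential = {"credentialSubject": {"birth_date": "2001-01-01"}}
field = {
    "path": ["$.credentialSubject.birth_date", "$.birth_date"],
    "filter": {"type": "date", "minimum": "1999-05-16"},
}
field_matches(credential, field)  # True: birth_date is on or after the minimum
```

A credential whose birth_date falls before the minimum would fail the filter, and a credential with none of the listed paths present would not satisfy the descriptor at all.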

    The Verifiable Presentation format of Presentation Submissions is used as opposed to OIDC tokens or CHAPI objects. For an alternative on how to tunnel OIDC messages over DIDComm, see HTTP-Over-DIDComm. CHAPI is an alternative transport to DIDComm.

    "},{"location":"features/0510-dif-pres-exch-attach/#motivation","title":"Motivation","text":"

    The Presentation Exchange specification (P-E) possesses a rich language for expressing a Verifier's criteria.

    P-E lends itself well to several transport mediums due to its limited scope as a data format, and is easily transported over DIDComm.

    It is furthermore desirable to make use of specifications developed in an open standards body.

    "},{"location":"features/0510-dif-pres-exch-attach/#tutorial","title":"Tutorial","text":"

    Complete examples of messages are provided in the reference section.

    The Verifier sends a request-presentation to the Prover containing a presentation_definition, along with a domain and challenge the Prover must sign over in the proof.

    The Prover can optionally respond to the Verifier's request-presentation with a propose-presentation message containing \"Input Descriptors\" that describe the proofs they can provide. The contents of the attachment is just the input_descriptors attribute of the presentation_definition object.

    The Prover responds with a presentation message containing a presentation_submission.

    "},{"location":"features/0510-dif-pres-exch-attach/#reference","title":"Reference","text":""},{"location":"features/0510-dif-pres-exch-attach/#propose-presentation-attachment-format","title":"propose-presentation attachment format","text":"

    Format identifier: dif/presentation-exchange/definitions@v1.0

    "},{"location":"features/0510-dif-pres-exch-attach/#examples-propose-presentation","title":"Examples: propose-presentation","text":"Complete message example
    {\n    \"@type\": \"https://didcomm.org/present-proof/%VER/propose-presentation\",\n    \"@id\": \"fce30ed1-96f8-44c9-95cf-b274288009dc\",\n    \"comment\": \"some comment\",\n    \"formats\" : [{\n        \"attach_id\" : \"143c458d-1b1c-40c7-ab85-4d16808ddf0a\",\n        \"format\" : \"dif/presentation-exchange/definitions@v1.0\"\n    }],\n    \"proposal~attach\": [{\n        \"@id\": \"143c458d-1b1c-40c7-ab85-4d16808ddf0a\",\n        \"mime-type\": \"application/json\",\n        \"data\": {\n            \"json\": {\n                \"input_descriptors\": [{\n                    \"id\": \"citizenship_input\",\n                    \"name\": \"US Passport\",\n                    \"group\": [\"A\"],\n                    \"schema\": [{\n                        \"uri\": \"hub://did:foo:123/Collections/schema.us.gov/passport.json\"\n                    }],\n                    \"constraints\": {\n                        \"fields\": [{\n                            \"path\": [\"$.credentialSubject.birth_date\", \"$.vc.credentialSubject.birth_date\", \"$.birth_date\"],\n                            \"filter\": {\n                                \"type\": \"date\",\n                                \"minimum\": \"1999-5-16\"\n                            }\n                        }]\n                    }\n                }]\n            }\n        }\n    }]\n}\n
    "},{"location":"features/0510-dif-pres-exch-attach/#request-presentation-attachment-format","title":"request-presentation attachment format","text":"

    Format identifier: dif/presentation-exchange/definitions@v1.0

    Since the format identifier defined above is the same as the one used in the propose-presentation message, it's recommended to consider both the message @type and the format to accurately understand the contents of the attachment.

    The contents of the attachment is a JSON object containing the Verifier's presentation definition and an options object with proof options:

    {\n    \"options\": {\n        \"challenge\": \"...\",\n        \"domain\": \"...\",\n    },\n    \"presentation_definition\": {\n        // presentation definition object\n    }\n}\n
    "},{"location":"features/0510-dif-pres-exch-attach/#the-options-object","title":"The options object","text":"

    options is a container of additional parameters required for the Prover to fulfill the Verifier's request.

    Available options are:

    Name Status Description challenge RECOMMENDED (for LD proofs) Random seed provided by the Verifier for LD Proofs. domain RECOMMENDED (for LD proofs) The operational domain of the requested LD proof."},{"location":"features/0510-dif-pres-exch-attach/#examples-request-presentation","title":"Examples: request-presentation","text":"Complete message example requesting a verifiable presentation with proof type Ed25519Signature2018
    {\n    \"@type\": \"https://didcomm.org/present-proof/%VER/request-presentation\",\n    \"@id\": \"0ac534c8-98ed-4fe3-8a41-3600775e1e92\",\n    \"comment\": \"some comment\",\n    \"formats\" : [{\n        \"attach_id\" : \"ed7d9b1f-9eed-4bde-b81c-3aa7485cf947\",\n        \"format\" : \"dif/presentation-exchange/definitions@v1.0\"\n    }],\n    \"request_presentations~attach\": [{\n        \"@id\": \"ed7d9b1f-9eed-4bde-b81c-3aa7485cf947\",\n        \"mime-type\": \"application/json\",\n        \"data\":  {\n            \"json\": {\n                \"options\": {\n                    \"challenge\": \"23516943-1d79-4ebd-8981-623f036365ef\",\n                    \"domain\": \"us.gov/DriversLicense\"\n                },\n                \"presentation_definition\": {\n                    \"input_descriptors\": [{\n                        \"id\": \"citizenship_input\",\n                        \"name\": \"US Passport\",\n                        \"group\": [\"A\"],\n                        \"schema\": [{\n                            \"uri\": \"hub://did:foo:123/Collections/schema.us.gov/passport.json\"\n                        }],\n                        \"constraints\": {\n                            \"fields\": [{\n                                \"path\": [\"$.credentialSubject.birth_date\", \"$.birth_date\"],\n                                \"filter\": {\n                                    \"type\": \"date\",\n                                    \"minimum\": \"1999-5-16\"\n                                }\n                            }]\n                        }\n                    }],\n                    \"format\": {\n                        \"ldp_vp\": {\n                            \"proof_type\": [\"Ed25519Signature2018\"]\n                        }\n                    }\n                }\n            }\n        }\n    }]\n}\n
    The same example but requesting the verifiable presentation with proof type BbsBlsSignatureProof2020 instead
    {\n    \"@type\": \"https://didcomm.org/present-proof/%VER/request-presentation\",\n    \"@id\": \"0ac534c8-98ed-4fe3-8a41-3600775e1e92\",\n    \"comment\": \"some comment\",\n    \"formats\" : [{\n        \"attach_id\" : \"ed7d9b1f-9eed-4bde-b81c-3aa7485cf947\",\n        \"format\" : \"dif/presentation-exchange/definitions@v1.0\"\n    }],\n    \"request_presentations~attach\": [{\n        \"@id\": \"ed7d9b1f-9eed-4bde-b81c-3aa7485cf947\",\n        \"mime-type\": \"application/json\",\n        \"data\":  {\n            \"json\": {\n                \"options\": {\n                    \"challenge\": \"23516943-1d79-4ebd-8981-623f036365ef\",\n                    \"domain\": \"us.gov/DriversLicense\"\n                },\n                \"presentation_definition\": {\n                    \"input_descriptors\": [{\n                        \"id\": \"citizenship_input\",\n                        \"name\": \"US Passport\",\n                        \"group\": [\"A\"],\n                        \"schema\": [{\n                            \"uri\": \"hub://did:foo:123/Collections/schema.us.gov/passport.json\"\n                        }],\n                        \"constraints\": {\n                            \"fields\": [{\n                                \"path\": [\"$.credentialSubject.birth_date\", \"$.vc.credentialSubject.birth_date\", \"$.birth_date\"],\n                                \"filter\": {\n                                    \"type\": \"date\",\n                                    \"minimum\": \"1999-5-16\"\n                                }\n                            }],\n                            \"limit_disclosure\": \"required\"\n                        }\n                    }],\n                    \"format\": {\n                        \"ldp_vc\": {\n                            \"proof_type\": [\"BbsBlsSignatureProof2020\"]\n                        }\n                    }\n                }\n            }\n        }\n    }]\n}\n
    "},{"location":"features/0510-dif-pres-exch-attach/#presentation-attachment-format","title":"presentation attachment format","text":"

    Format identifier: dif/presentation-exchange/submission@v1.0

    The contents of the attachment is a Presentation Submission in a standard Verifiable Presentation format containing the proofs requested.

    "},{"location":"features/0510-dif-pres-exch-attach/#examples-presentation","title":"Examples: presentation","text":"Complete message example
    {\n    \"@type\": \"https://didcomm.org/present-proof/%VER/presentation\",\n    \"@id\": \"f1ca8245-ab2d-4d9c-8d7d-94bf310314ef\",\n    \"comment\": \"some comment\",\n    \"formats\" : [{\n        \"attach_id\" : \"2a3f1c4c-623c-44e6-b159-179048c51260\",\n        \"format\" : \"dif/presentation-exchange/submission@v1.0\"\n    }],\n    \"presentations~attach\": [{\n        \"@id\": \"2a3f1c4c-623c-44e6-b159-179048c51260\",\n        \"mime-type\": \"application/ld+json\",\n        \"data\": {\n            \"json\": {\n                \"@context\": [\n                    \"https://www.w3.org/2018/credentials/v1\",\n                    \"https://identity.foundation/presentation-exchange/submission/v1\"\n                ],\n                \"type\": [\n                    \"VerifiablePresentation\",\n                    \"PresentationSubmission\"\n                ],\n                \"presentation_submission\": {\n                    \"descriptor_map\": [{\n                        \"id\": \"citizenship_input\",\n                        \"path\": \"$.verifiableCredential.[0]\"\n                    }]\n                },\n                \"verifiableCredential\": [{\n                    \"@context\": \"https://www.w3.org/2018/credentials/v1\",\n                    \"id\": \"https://eu.com/claims/DriversLicense\",\n                    \"type\": [\"EUDriversLicense\"],\n                    \"issuer\": \"did:foo:123\",\n                    \"issuanceDate\": \"2010-01-01T19:73:24Z\",\n                    \"credentialSubject\": {\n                        \"id\": \"did:example:ebfeb1f712ebc6f1c276e12ec21\",\n                        \"license\": {\n                            \"number\": \"34DGE352\",\n                            \"dob\": \"07/13/80\"\n                        }\n                    },\n                    \"proof\": {\n                        \"type\": \"RsaSignature2018\",\n                        \"created\": \"2017-06-18T21:19:10Z\",\n                    
    \"proofPurpose\": \"assertionMethod\",\n                        \"verificationMethod\": \"https://example.edu/issuers/keys/1\",\n                        \"jws\": \"...\"\n                    }\n                }],\n                \"proof\": {\n                    \"type\": \"RsaSignature2018\",\n                    \"created\": \"2018-09-14T21:19:10Z\",\n                    \"proofPurpose\": \"authentication\",\n                    \"verificationMethod\": \"did:example:ebfeb1f712ebc6f1c276e12ec21#keys-1\",\n                    \"challenge\": \"1f44d55f-f161-4938-a659-f8026467f126\",\n                    \"domain\": \"4jt78h47fh47\",\n                    \"jws\": \"...\"\n                }\n            }\n        }\n    }]\n}\n
    "},{"location":"features/0510-dif-pres-exch-attach/#supported-features-of-presentation-exchange","title":"Supported Features of Presentation-Exchange","text":"

    Level of support for Presentation-Exchange features:

    Feature Notes presentation_definition.input_descriptors.id presentation_definition.input_descriptors.name presentation_definition.input_descriptors.purpose presentation_definition.input_descriptors.schema.uri URI for the credential's schema. presentation_definition.input_descriptors.constraints.fields.path Array of JSONPath string expressions as defined in section 8. REQUIRED as per the spec. presentation_definition.input_descriptors.constraints.fields.filter JSONSchema descriptor. presentation_definition.input_descriptors.constraints.limit_disclosure preferred or required as defined in the spec and as supported by the Holder and Verifier proof mechanisms.Note that the Holder MUST have credentials with cryptographic proof suites that are capable of selective disclosure in order to respond to a request with limit_disclosure: \"required\".See RFC0593 for appropriate crypto suites. presentation_definition.input_descriptors.constraints.is_holder preferred or required as defined in the spec.Note that this feature allows the Holder to present credentials with a different subject identifier than the DID used to establish the DIDComm connection with the Verifier. presentation_definition.format For JSONLD-based credentials: ldp_vc and ldp_vp. presentation_definition.format.proof_type For JSONLD-based credentials: Ed25519Signature2018, BbsBlsSignature2020, and JsonWebSignature2020. When specifying ldp_vc, BbsBlsSignatureProof2020 may also be used."},{"location":"features/0510-dif-pres-exch-attach/#proof-formats","title":"Proof Formats","text":""},{"location":"features/0510-dif-pres-exch-attach/#constraints","title":"Constraints","text":"

    Verifiable Presentations MUST be produced and consumed using the JSON-LD syntax.

    The proof types defined below MUST be registered in the Linked Data Cryptographic Suite Registry.

    The value of any credentialSubject.id in a credential MUST be a Decentralized Identifier (DID) conforming to the DID Syntax if present. This allows the Holder to authenticate as the credential's subject if required by the Verifier (see the is_holder property above). The Holder authenticates as the credential's subject by attaching an LD Proof on the enclosing Verifiable Presentation.
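A minimal check of this constraint can be sketched with a regular expression. This is an assumption-laden simplification: the authoritative DID syntax is the ABNF in the DID Core specification, and the DID_PATTERN and is_did names below are hypothetical.

```python
import re

# Simplified approximation of the DID syntax (did:<method-name>:<method-specific-id>).
# The normative grammar is the ABNF in W3C DID Core; this pattern only covers
# the common shape used in the examples in this RFC.
DID_PATTERN = re.compile(r"^did:[a-z0-9]+:[A-Za-z0-9.\-_:%]+$")

def is_did(value):
    """Return True if value looks like a DID under the simplified pattern."""
    return bool(DID_PATTERN.match(value))

is_did("did:example:ebfeb1f712ebc6f1c276e12ec21")  # True
is_did("https://example.com/subjects/1")           # False: not a DID
```

A Verifier enforcing is_holder would apply such a check to each credentialSubject.id before accepting the Holder's authentication proof on the enclosing presentation.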

    "},{"location":"features/0510-dif-pres-exch-attach/#proof-formats-on-credentials","title":"Proof Formats on Credentials","text":"

    Aries agents implementing this RFC MUST support the formats outlined in RFC0593 for proofs on Verifiable Credentials.

    "},{"location":"features/0510-dif-pres-exch-attach/#proof-formats-on-presentations","title":"Proof Formats on Presentations","text":"

    Aries agents implementing this RFC MUST support the formats outlined below for proofs on Verifiable Presentations.

    "},{"location":"features/0510-dif-pres-exch-attach/#ed25519signature2018","title":"Ed25519Signature2018","text":"

    Specification.

    Request Parameters:

    Result:

    A Verifiable Presentation of type Presentation Submission containing the credentials requested under the verifiableCredential property and a proof property of type Ed25519Signature2018.

    Example
    {\n    \"@context\": [\n        \"https://www.w3.org/2018/credentials/v1\",\n        \"https://identity.foundation/presentation-exchange/submission/v1\"\n    ],\n    \"type\": [\n        \"VerifiablePresentation\",\n        \"PresentationSubmission\"\n    ],\n    \"presentation_submission\": {\n        \"descriptor_map\": [{\n           \"id\": \"citizenship_input\",\n           \"path\": \"$.verifiableCredential.[0]\"\n        }]\n    },\n    \"verifiableCredential\": [{\n        \"@context\": \"https://www.w3.org/2018/credentials/v1\",\n        \"id\": \"https://eu.com/claims/DriversLicense\",\n        \"type\": [\n            \"EUDriversLicense\"\n        ],\n        \"issuer\": \"did:foo:123\",\n        \"issuanceDate\": \"2010-01-01T19:73:24Z\",\n        \"credentialSubject\": {\n            \"id\": \"did:example:ebfeb1f712ebc6f1c276e12ec21\",\n            \"license\": {\n            \"number\": \"34DGE352\",\n            \"dob\": \"07/13/80\"\n          }\n        },\n        \"proof\": {\n            \"type\": \"RsaSignature2018\",\n            \"created\": \"2017-06-18T21:19:10Z\",\n            \"proofPurpose\": \"assertionMethod\",\n            \"verificationMethod\": \"https://example.edu/issuers/keys/1\",\n            \"jws\": \"...\"\n        }\n    }],\n    \"proof\": {\n      \"type\": \"Ed25519Signature2018\",\n      \"proofPurpose\": \"authentication\",\n      \"created\": \"2017-09-23T20:21:34Z\",\n      \"verificationMethod\": \"did:example:123456#key1\",\n      \"challenge\": \"2bbgh3dgjg2302d-d2b3gi423d42\",\n      \"domain\": \"example.org\",\n      \"jws\": \"eyJ0eXAiOiJK...gFWFOEjXk\"\n  }\n}\n
    "},{"location":"features/0510-dif-pres-exch-attach/#bbsblssignature2020","title":"BbsBlsSignature2020","text":"

    Specification.

    Associated RFC: RFC0646.

    Request Parameters: * presentation_definition.format: ldp_vp * presentation_definition.format.proof_type: BbsBlsSignature2020 * options.challenge: (Optional) a random string value generated by the Verifier * options.domain: (Optional) a string value set by the Verifier

    Result:

    A Verifiable Presentation of type Presentation Submission containing the credentials requested under the verifiableCredential property and a proof property of type BbsBlsSignature2020.

    Example
    {\n    \"@context\": [\n        \"https://www.w3.org/2018/credentials/v1\",\n        \"https://w3id.org/security/v2\",\n        \"https://w3id.org/security/bbs/v1\",\n        \"https://identity.foundation/presentation-exchange/submission/v1\"\n    ],\n    \"type\": [\n        \"VerifiablePresentation\",\n        \"PresentationSubmission\"\n    ],\n    \"presentation_submission\": {\n        \"descriptor_map\": [{\n            \"id\": \"citizenship_input\",\n            \"path\": \"$.verifiableCredential.[0]\"\n        }]\n    },\n    \"verifiableCredential\": [{\n        \"@context\": \"https://www.w3.org/2018/credentials/v1\",\n        \"id\": \"https://eu.com/claims/DriversLicense\",\n        \"type\": [\"EUDriversLicense\"],\n        \"issuer\": \"did:foo:123\",\n        \"issuanceDate\": \"2010-01-01T19:73:24Z\",\n        \"credentialSubject\": {\n            \"id\": \"did:example:ebfeb1f712ebc6f1c276e12ec21\",\n            \"license\": {\n                \"number\": \"34DGE352\",\n                \"dob\": \"07/13/80\"\n            }\n       },\n       \"proof\": {\n           \"type\": \"BbsBlsSignatureProof2020\",\n           \"created\": \"2020-04-25\",\n           \"verificationMethod\": \"did:example:489398593#test\",\n           \"proofPurpose\": \"assertionMethod\",\n           \"signature\": \"F9uMuJzNBqj4j+HPTvWjUN/MNoe6KRH0818WkvDn2Sf7kg1P17YpNyzSB+CH57AWDFunU13tL8oTBDpBhODckelTxHIaEfG0rNmqmjK6DOs0/ObksTZh7W3OTbqfD2h4C/wqqMQHSWdXXnojwyFDEg==\"\n       }\n    }],\n    \"proof\": {\n        \"type\": \"BbsBlsSignature2020\",\n        \"created\": \"2020-04-25\",\n        \"verificationMethod\": \"did:example:489398593#test\",\n        \"proofPurpose\": \"authentication\",\n        \"proofValue\": \"F9uMuJzNBqj4j+HPTvWjUN/MNoe6KRH0818WkvDn2Sf7kg1P17YpNyzSB+CH57AWDFunU13tL8oTBDpBhODckelTxHIaEfG0rNmqmjK6DOs0/ObksTZh7W3OTbqfD2h4C/wqqMQHSWdXXnojwyFDEg==\",\n        \"requiredRevealStatements\": [ 4, 5 ]\n    }\n}\n

    Note: The above example is for illustrative purposes. In particular, note that whether a Verifier requests a proof_type of BbsBlsSignature2020 has no bearing on whether the Holder is required to present credentials with proofs of type BbsBlsSignatureProof2020. The choice of proof types on the credentials is constrained by a) the available types registered in RFC0593 and b) additional constraints placed on them due to other aspects of the proof requested by the Verifier, such as requiring limited disclosure with the limit_disclosure property. In such a case, a proof type of Ed25519Signature2018 in the credentials is not appropriate whereas BbsBlsSignatureProof2020 is capable of selective disclosure.

    "},{"location":"features/0510-dif-pres-exch-attach/#jsonwebsignature2020","title":"JsonWebSignature2020","text":"

    Specification.

    Request Parameters:

    Result:

    A Verifiable Presentation of type Presentation Submission containing the credentials requested under the verifiableCredential property and a proof property of type JsonWebSignature2020.

    Example
    {\n    \"@context\": [\n        \"https://www.w3.org/2018/credentials/v1\",\n        \"https://identity.foundation/presentation-exchange/submission/v1\"\n    ],\n    \"type\": [\n        \"VerifiablePresentation\",\n        \"PresentationSubmission\"\n    ],\n    \"presentation_submission\": {\n        \"descriptor_map\": [{\n           \"id\": \"citizenship_input\",\n           \"path\": \"$.verifiableCredential.[0]\"\n        }]\n    },\n    \"verifiableCredential\": [{\n        \"@context\": \"https://www.w3.org/2018/credentials/v1\",\n        \"id\": \"https://eu.com/claims/DriversLicense\",\n        \"type\": [\n            \"EUDriversLicense\"\n        ],\n        \"issuer\": \"did:foo:123\",\n        \"issuanceDate\": \"2010-01-01T19:73:24Z\",\n        \"credentialSubject\": {\n            \"id\": \"did:example:ebfeb1f712ebc6f1c276e12ec21\",\n            \"license\": {\n            \"number\": \"34DGE352\",\n            \"dob\": \"07/13/80\"\n          }\n        },\n        \"proof\": {\n            \"type\": \"RsaSignature2018\",\n            \"created\": \"2017-06-18T21:19:10Z\",\n            \"proofPurpose\": \"assertionMethod\",\n            \"verificationMethod\": \"https://example.edu/issuers/keys/1\",\n            \"jws\": \"...\"\n        }\n    }],\n    \"proof\": {\n      \"type\": \"JsonWebSignature2020\",\n      \"proofPurpose\": \"authentication\",\n      \"created\": \"2017-09-23T20:21:34Z\",\n      \"verificationMethod\": \"did:example:123456#key1\",\n      \"challenge\": \"2bbgh3dgjg2302d-d2b3gi423d42\",\n      \"domain\": \"example.org\",\n      \"jws\": \"eyJ0eXAiOiJK...gFWFOEjXk\"\n  }\n}\n

    Available JOSE key types are:

    kty crv signature EC P-256 ES256 EC P-384 ES384"},{"location":"features/0510-dif-pres-exch-attach/#drawbacks","title":"Drawbacks","text":"

    N/A

    "},{"location":"features/0510-dif-pres-exch-attach/#rationale-and-alternatives","title":"Rationale and alternatives","text":""},{"location":"features/0510-dif-pres-exch-attach/#prior-art","title":"Prior art","text":""},{"location":"features/0510-dif-pres-exch-attach/#unresolved-questions","title":"Unresolved questions","text":"

    TODO it is assumed the Verifier will initiate the protocol if they can transmit their presentation definition via an out-of-band channel (eg. it is published on their website) with a request-presentation message, possibly delivered via an Out-of-Band invitation (see RFC0434). For now, the Prover sends propose-presentation as a response to request-presentation.

    "},{"location":"features/0510-dif-pres-exch-attach/#implementations","title":"Implementations","text":"

    The following lists the implementations (if any) of this RFC. Please do a pull request to add your implementation. If the implementation is open source, include a link to the repo or to the implementation within the repo. Please be consistent in the \"Name\" field so that a mechanical processing of the RFCs can generate a list of all RFCs supported by an Aries implementation.

    Name / Link Implementation Notes"},{"location":"features/0511-dif-cred-manifest-attach/","title":"Aries RFC 0511: Credential-Manifest Attachment format for requesting and presenting credentials","text":""},{"location":"features/0511-dif-cred-manifest-attach/#summary","title":"Summary","text":"

    This RFC registers an attachment format for use in the issue-credential V2 based on the Decentralized Identity Foundation's (DIF) Credential Manifest specification. Credential Manifest describes a data format that specifies the inputs an Issuer requires for issuance of a credential. It relies on the closely-related Presentation Exchange specification to describe the required inputs and the format in which the Holder submits those inputs (a verifiable presentation).

    "},{"location":"features/0511-dif-cred-manifest-attach/#motivation","title":"Motivation","text":"

    The Credential Manifest specification lends itself well to several transport mediums due to its limited scope as a data format, and is easily transported over DIDComm.

    It is furthermore desirable to make use of specifications developed in an open standards body.

    "},{"location":"features/0511-dif-cred-manifest-attach/#tutorial","title":"Tutorial","text":"

    Complete examples of messages are provided in the reference section.

    Credential Manifests MAY be acquired by the Holder via out-of-band means, such as from a well-known location on the Issuer's website. This allows the Holder to initiate the issue-credential protocol with a request-credential message, provided they also possess the requisite challenge and domain values. If they do not possess these values then the Issuer MAY respond with an offer-credential message.

    Otherwise the Holder MAY initiate the protocol with propose-credential in order to discover the Issuer's requirements.

    "},{"location":"features/0511-dif-cred-manifest-attach/#reference","title":"Reference","text":""},{"location":"features/0511-dif-cred-manifest-attach/#propose-credential-attachment-format","title":"propose-credential attachment format","text":"

    Format identifier: dif/credential-manifest@v1.0

    The contents of the attachment is the minimal form of the Issuer's credential manifest describing the credential the Holder desires. It SHOULD contain the issuer and credential properties and no more.

    Complete message example:

    {\n    \"@id\": \"8639505e-4ec5-41b9-bb31-ac6a7b800fe7\",\n    \"@type\": \"https://didcomm.org/issue-credential/%VER/propose-credential\",\n    \"comment\": \"<some comment>\",\n    \"formats\" : [{\n        \"attach_id\": \"b45ca1bc-5b3c-4672-a300-84ddf6fbbaea\",\n        \"format\": \"dif/credential-manifest@v1.0\"\n    }],\n    \"filters~attach\": [{\n        \"@id\": \"b45ca1bc-5b3c-4672-a300-84ddf6fbbaea\",\n        \"mime-type\": \"application/json\",\n        \"data\": {\n            \"json\": {\n                \"issuer\": \"did:example:123\",\n                \"credential\": {\n                    \"name\": \"Washington State Class A Commercial Driver License\",\n                    \"schema\": \"ipfs:QmPXME1oRtoT627YKaDPDQ3PwA8tdP9rWuAAweLzqSwAWT\"\n                }\n            }\n        }\n    }]\n}\n
    "},{"location":"features/0511-dif-cred-manifest-attach/#offer-credential-attachment-format","title":"offer-credential attachment format","text":"

    Format identifier: dif/credential-manifest@v1.0

    The contents of the attachment is a JSON object containing the Issuer's credential manifest, a challenge and domain. All three attributes are REQUIRED.

    Example:

    {\n    \"@id\": \"dfedaad3-bd7a-4c33-8337-fa94a547c0e2\",\n    \"@type\": \"https://didcomm.org/issue-credential/%VER/offer-credential\",\n    \"comment\": \"<some comment>\",\n    \"formats\" : [{\n        \"attach_id\" : \"76cd0d94-8eb6-4ef3-a094-af45d81e9528\",\n        \"format\" : \"dif/credential-manifest@v1.0\"\n    }],\n    \"offers~attach\": [{\n        \"@id\": \"76cd0d94-8eb6-4ef3-a094-af45d81e9528\",\n        \"mime-type\": \"application/json\",\n        \"data\": {\n            \"json\": {\n                \"challenge\": \"1f44d55f-f161-4938-a659-f8026467f126\",\n                \"domain\": \"us.gov/DriverLicense\",\n                \"credential_manifest\": {\n                    // credential manifest object\n                }\n            }\n        }\n    }]\n}\n
    "},{"location":"features/0511-dif-cred-manifest-attach/#request-credential-attachment-format","title":"request-credential attachment format","text":"

    Format identifier: dif/credential-manifest@v1.0

    The contents of the attachment is a JSON object that describes the credential requested and provides the inputs the Issuer requires from the Holder before proceeding with issuance:

    {\n    \"credential-manifest\": {\n        \"issuer\": \"did:example:123\",\n        \"credential\": {\n            \"name\": \"Washington State Class A Commercial Driver License\",\n            \"schema\": \"ipfs:QmPXME1oRtoT627YKaDPDQ3PwA8tdP9rWuAAweLzqSwAWT\"\n        }\n    },\n    \"presentation-submission\": {\n        // presentation submission object\n    }\n}\n

    If the Issuer's credential manifest does not include the presentation_definition attribute, and the Holder has initiated the protocol with propose-credential, then this attachment MAY be omitted entirely as the message thread provides sufficient context for this request.

    Implementors are STRONGLY discouraged from allowing BOTH credential-manifest and presentation-submission in the same attachment. The latter requires the Holder to know the necessary challenge and domain, both of which SHOULD provide the Issuer with sufficient context as to which credential is being requested.
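
An Issuer can enforce this rule with a simple attachment check. A minimal sketch in Python (the helper name is illustrative, not part of the specification):

```python
def validate_request_attachment(attachment_json: dict) -> None:
    """Reject a request-credential attachment carrying both a
    credential-manifest and a presentation-submission, per the
    recommendation above. Raises ValueError on violation."""
    has_manifest = "credential-manifest" in attachment_json
    has_submission = "presentation-submission" in attachment_json
    if has_manifest and has_submission:
        raise ValueError(
            "attachment must not contain both credential-manifest "
            "and presentation-submission"
        )
```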

    The following example shows a request-credential with a presentation submission. Notice the presentation's proof includes the challenge and domain acquired either through out-of-band means or via an offer-credential message:

    {\n    \"@id\": \"cf3a9301-6d4a-430f-ae02-b4a79ddc9706\",\n    \"@type\": \"https://didcomm.org/issue-credential/%VER/request-credential\",\n    \"comment\": \"<some comment>\",\n    \"formats\": [{\n        \"attach_id\": \"7cd11894-838a-45c0-a9ec-13e2d9d125a1\",\n        \"format\": \"dif/credential-manifest@v1.0\"\n    }],\n    \"requests~attach\": [{\n        \"@id\": \"7cd11894-838a-45c0-a9ec-13e2d9d125a1\",\n        \"mime-type\": \"application/json\",\n        \"data\": {\n            \"json\": {\n                \"presentation-submission\": {\n                    \"@context\": [\n                        \"https://www.w3.org/2018/credentials/v1\",\n                        \"https://identity.foundation/presentation-exchange/submission/v1\"\n                    ],\n                    \"type\": [\n                        \"VerifiablePresentation\",\n                        \"PresentationSubmission\"\n                    ],\n                    \"presentation_submission\": {\n                        \"descriptor_map\": [{\n                            \"id\": \"citizenship_input\",\n                            \"path\": \"$.verifiableCredential.[0]\"\n                        }]\n                    },\n                    \"verifiableCredential\": [{\n                        \"@context\": \"https://www.w3.org/2018/credentials/v1\",\n                        \"id\": \"https://us.gov/claims/Passport/723c62ab-f2f0-4976-9ec1-39992e20c9b1\",\n                        \"type\": [\"USPassport\"],\n                        \"issuer\": \"did:foo:123\",\n                        \"issuanceDate\": \"2010-01-01T19:73:24Z\",\n                        \"credentialSubject\": {\n                            \"id\": \"did:example:ebfeb1f712ebc6f1c276e12ec21\",\n                            \"birth_date\": \"2000-08-14\"\n                        },\n                        \"proof\": {\n                            \"type\": \"EcdsaSecp256k1VerificationKey2019\",\n                    
        \"created\": \"2017-06-18T21:19:10Z\",\n                            \"proofPurpose\": \"assertionMethod\",\n                            \"verificationMethod\": \"https://example.edu/issuers/keys/1\",\n                            \"jws\": \"...\"\n                        }\n                    }],\n                    \"proof\": {\n                        \"type\": \"RsaSignature2018\",\n                        \"created\": \"2018-09-14T21:19:10Z\",\n                        \"proofPurpose\": \"authentication\",\n                        \"verificationMethod\": \"did:example:ebfeb1f712ebc6f1c276e12ec21#keys-1\",\n                        \"challenge\": \"1f44d55f-f161-4938-a659-f8026467f126\",\n                        \"domain\": \"us.gov/DriverLicense\",\n                        \"jws\": \"...\"\n                    }\n                }\n            }\n        }\n    }]\n}\n
    "},{"location":"features/0511-dif-cred-manifest-attach/#issue-credential-attachment-format","title":"issue-credential attachment format","text":"

    This specification does not register any format identifier for the issue-credential message. The Issuer SHOULD set the format to the value that corresponds to the format the credentials are issued in.

    "},{"location":"features/0511-dif-cred-manifest-attach/#drawbacks","title":"Drawbacks","text":"

    N/A

    "},{"location":"features/0511-dif-cred-manifest-attach/#rationale-and-alternatives","title":"Rationale and alternatives","text":""},{"location":"features/0511-dif-cred-manifest-attach/#prior-art","title":"Prior art","text":"

    N/A

    "},{"location":"features/0511-dif-cred-manifest-attach/#unresolved-questions","title":"Unresolved questions","text":"

    N/A

    "},{"location":"features/0511-dif-cred-manifest-attach/#implementations","title":"Implementations","text":"

    The following lists the implementations (if any) of this RFC. Please do a pull request to add your implementation. If the implementation is open source, include a link to the repo or to the implementation within the repo. Please be consistent in the \"Name\" field so that a mechanical processing of the RFCs can generate a list of all RFCs supported by an Aries implementation.

    Name / Link Implementation Notes"},{"location":"features/0557-discover-features-v2/","title":"Aries RFC 0557: Discover Features Protocol v2.x","text":""},{"location":"features/0557-discover-features-v2/#summary","title":"Summary","text":"

    Describes how one agent can query another to discover which features it supports, and to what extent.

    "},{"location":"features/0557-discover-features-v2/#motivation","title":"Motivation","text":"

    Though some agents will support just one feature and will be statically configured to interact with just one other party, many exciting uses of agents are more dynamic and unpredictable. When Alice and Bob meet, they won't know in advance which features are supported by one another's agents. They need a way to find out.

    "},{"location":"features/0557-discover-features-v2/#tutorial","title":"Tutorial","text":"

    This is version 2.0 of the Discover Features protocol. Its fully qualified PIURI is:

    https://didcomm.org/discover-features/2.0\n

    This version is conceptually similar to version 1.0 of this protocol. It differs in its ability to ask about multiple feature types, and to ask multiple questions and receive multiple answers in a single round trip.

    "},{"location":"features/0557-discover-features-v2/#roles","title":"Roles","text":"

    There are two roles in the discover-features protocol: requester and responder. Normally, the requester asks the responder about the features it supports, and the responder answers. Each role uses a single message type.

    It is also possible to proactively disclose features; in this case a requester receives a response without asking for it. This may eliminate some chattiness in certain use cases (e.g., where two-way connectivity is limited).

    "},{"location":"features/0557-discover-features-v2/#states","title":"States","text":"

    The state progression is very simple. In the normal case, it is simple request-response; in a proactive disclosure, it's a simple one-way notification.

    "},{"location":"features/0557-discover-features-v2/#requester","title":"Requester","text":""},{"location":"features/0557-discover-features-v2/#responder","title":"Responder","text":""},{"location":"features/0557-discover-features-v2/#messages","title":"Messages","text":""},{"location":"features/0557-discover-features-v2/#queries-message-type","title":"queries Message Type","text":"

    A discover-features/queries message looks like this:

    {\n  \"@type\": \"https://didcomm.org/discover-features/2.0/queries\",\n  \"@id\": \"yWd8wfYzhmuXX3hmLNaV5bVbAjbWaU\",\n  \"queries\": [\n    { \"feature-type\": \"protocol\", \"match\": \"https://didcomm.org/tictactoe/1.*\" },\n    { \"feature-type\": \"goal-code\", \"match\": \"aries.*\" }\n  ]\n}\n

    Queries messages contain one or more query objects in the queries array. Each query essentially says, \"Please tell me what features of type X you support, where the feature identifiers match this (potentially wildcarded) string.\" This particular example asks an agent if it supports any 1.x versions of the tictactoe protocol, and if it supports any goal codes that begin with \"aries.\".

    Implementations of this protocol must recognize the following values for feature-type: protocol, goal-code, gov-fw, didcomm-version, and decorator/header. (The concept known as decorator in DIDComm v1 approximately maps to the concept known as header in DIDComm v2. The two values should be considered synonyms and must both be recognized.) Additional values of feature-type may be standardized by raising a PR against this RFC that defines the new type and increments the minor protocol version number; non-standardized values are also valid, but there is no guarantee that their semantics will be recognized.

    Identifiers for feature types vary. For protocols, identifiers are PIURIs. For goal codes, identifiers are goal code values. For governance frameworks, identifiers are URIs where the framework is published (typically the data_uri field, if machine-readable). For DIDComm versions, identifiers are the URIs where DIDComm versions are developed (https://github.com/hyperledger/aries-rfcs for V1 and https://github.com/decentralized-identity/didcomm-messaging for V2; see \"Detecting DIDComm Versions\" in RFC 0044 for more details).

    The match field of a query descriptor may use the * wildcard. By itself, a match with just the wildcard says, \"I'm interested in anything you want to share with me.\" But usually, this wildcard will be used to match a prefix that's a little more specific, as in the example that matches any 1.x version.
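
In practice, responders commonly implement the match string with simple glob-style matching. A minimal sketch (assuming Python's fnmatch; the helper name is illustrative, not prescribed by the protocol):

```python
import fnmatch

def matching_features(supported: list[str], match: str) -> list[str]:
    """Return the supported feature identifiers satisfying a query's
    (potentially wildcarded) match string."""
    return [f for f in supported if fnmatch.fnmatch(f, match)]

supported_protocols = [
    "https://didcomm.org/tictactoe/1.0",
    "https://didcomm.org/tictactoe/2.0",
    "https://didcomm.org/discover-features/2.0",
]
# Matches any 1.x version of tictactoe, as in the example above.
print(matching_features(supported_protocols, "https://didcomm.org/tictactoe/1.*"))
# → ['https://didcomm.org/tictactoe/1.0']
```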

    Any agent may send another agent this message type at any time. Implementers of agents that intend to support dynamic relationships and rich features are strongly encouraged to implement support for this message, as it is likely to be among the first messages exchanged with a stranger.

    "},{"location":"features/0557-discover-features-v2/#disclosures-message-type","title":"disclosures Message Type","text":"

    A discover-features/disclosures message looks like this:

    {\n  \"@type\": \"https://didcomm.org/discover-features/2.0/disclosures\",\n  \"~thread\": { \"thid\": \"yWd8wfYzhmuXX3hmLNaV5bVbAjbWaU\" },\n  \"disclosures\": [\n    {\n      \"feature-type\": \"protocol\",\n      \"id\": \"https://didcomm.org/tictactoe/1.0\",\n      \"roles\": [\"player\"]\n    },\n    {\n      \"feature-type\": \"goal-code\",\n      \"id\": \"aries.sell.goods.consumer\"\n    }\n  ]\n}\n

    The disclosures field is a JSON array of zero or more disclosure objects that describe a feature. Each descriptor has a feature-type field that contains data corresponding to feature-type in a query object, and an id field that unambiguously identifies a single item of that feature type. When the item is a protocol, the disclosure object may also contain a roles array that enumerates the roles the responding agent can play in the associated protocol. Future feature types may add additional optional fields, though no other fields are being standardized with this version of the RFC.

    Disclosures messages say, \"Here are some features I support (that matched your queries).\"
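
A responder might assemble its reply like this (a sketch; the helper is illustrative, and generating an @id with uuid is an assumption, since the example message above shows only ~thread):

```python
import uuid

def build_disclosures(thid: str, disclosures: list[dict]) -> dict:
    """Assemble a discover-features/2.0 disclosures message answering
    the queries message identified by thid."""
    return {
        "@type": "https://didcomm.org/discover-features/2.0/disclosures",
        "@id": str(uuid.uuid4()),
        "~thread": {"thid": thid},
        "disclosures": disclosures,
    }
```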

    "},{"location":"features/0557-discover-features-v2/#sparse-disclosures","title":"Sparse Disclosures","text":"

    Disclosures do not have to contain exhaustive detail. For example, the following response omits the optional roles field but may be just as useful as one that includes it:

    {\n  \"@type\": \"https://didcomm.org/discover-features/2.0/disclosures\",\n  \"~thread\": { \"thid\": \"yWd8wfYzhmuXX3hmLNaV5bVbAjbWaU\" },\n  \"disclosures\": [\n    {\"feature-type\": \"protocol\", \"id\": \"https://didcomm.org/tictactoe/1.0\"}\n  ]\n}\n

    Less detail probably suffices because agents do not need to know everything about one another's implementations in order to start an interaction--usually the flow will organically reveal what's needed. For example, the outcome message in the tictactoe protocol isn't needed until the end, and is optional anyway. Alice can start a tictactoe game with Bob and will eventually see whether he has the right idea about outcome messages.

    The missing roles field in this disclosure does not say, \"I support no roles in this protocol.\" It says, \"I support the protocol but I'm providing no detail about specific roles.\" Similar logic applies to any other omitted fields.

    An empty disclosures array does not say, \"I support no features that match your query.\" It says, \"I'm not disclosing to you that I support any features (that match your query).\" An agent might not tell another that it supports a feature for various reasons, including: the trust that it imputes to the other party based on cumulative interactions so far, whether it's in the middle of upgrading a plugin, whether it's currently under high load, and so forth. And responses to a discover-features query are not guaranteed to be true forever; agents can be upgraded or downgraded, although they probably won't churn in their feature profiles from moment to moment.

    "},{"location":"features/0557-discover-features-v2/#privacy-considerations","title":"Privacy Considerations","text":"

    Because the wildcards in a queries message can be very inclusive, the discover-features protocol could be used to mine information suitable for agent fingerprinting, in much the same way that browser fingerprinting works. This is antithetical to the ethos of our ecosystem, and represents bad behavior. Agents should use discover-features to answer legitimate questions, and not to build detailed profiles of one another. However, fingerprinting may be attempted anyway.

    For agents that want to maintain privacy, several best practices are recommended:

    "},{"location":"features/0557-discover-features-v2/#follow-selective-disclosure","title":"Follow selective disclosure.","text":"

    Only reveal supported features based on trust in the relationship. Even if you support a protocol, you may not wish to use it in every relationship. Don't tell others about features you do not plan to use with them.

    Patterns are easier to see in larger data samples. However, a pattern of ultra-minimal data is also a problem, so use good judgment about how forthcoming to be.

    "},{"location":"features/0557-discover-features-v2/#vary-the-format-of-responses","title":"Vary the format of responses.","text":"

    Sometimes, you might prettify your agent plaintext message one way, sometimes another.

    "},{"location":"features/0557-discover-features-v2/#vary-the-order-of-items-in-the-disclosures-array","title":"Vary the order of items in the disclosures array.","text":"

    If more than one key matches a query, do not always return them in alphabetical order or version order. If you do return them in order, do not always return them in ascending order.

    "},{"location":"features/0557-discover-features-v2/#consider-adding-some-spurious-details","title":"Consider adding some spurious details.","text":"

    If a query could match multiple features, then occasionally you might add some made-up features as matches. If a wildcard allows multiple versions of a protocol, then sometimes you might use some made-up versions. And sometimes not. (Doing this too aggressively might reveal your agent implementation, so use sparingly.)

    "},{"location":"features/0557-discover-features-v2/#vary-how-you-query-too","title":"Vary how you query, too.","text":"

    How you ask questions may also be fingerprintable.

    "},{"location":"features/0557-discover-features-v2/#implementations","title":"Implementations","text":"

    The following lists the implementations (if any) of this RFC. Please do a pull request to add your implementation. If the implementation is open source, include a link to the repo or to the implementation within the repo. Please be consistent in the \"Name\" field so that a mechanical processing of the RFCs can generate a list of all RFCs supported by an Aries implementation.

    Name / Link Implementation Notes"},{"location":"features/0587-encryption-envelope-v2/","title":"Aries RFC 0587: Encryption Envelope v2","text":""},{"location":"features/0587-encryption-envelope-v2/#summary","title":"Summary","text":"

    This RFC proposes that we support the definition of envelopes from DIDComm Messaging.

    "},{"location":"features/0587-encryption-envelope-v2/#motivation","title":"Motivation","text":"

    This RFC defines ciphersuites for envelopes such that we can achieve better compatibility with DIDComm Messaging being specified at DIF. The ciphersuites defined in this RFC are a subset of the definitions in Aries RFC 0334-jwe-envelope.

    "},{"location":"features/0587-encryption-envelope-v2/#encryption-algorithms","title":"Encryption Algorithms","text":"

    DIDComm defines both the concept of authenticated sender encryption (aka Authcrypt) and anonymous sender encryption (aka Anoncrypt). In general, Aries RFCs and protocols use Authcrypt to exchange messages. In some limited scenarios (e.g., mediator and relays), an Aries RFC or protocol may define usage of Anoncrypt.

    ECDH-1PU draft 04 defines the JWE structure for Authcrypt. ECDH-ES from RFC 7518 defines the JWE structure for Anoncrypt. The following sections summarize the supported algorithms.

    "},{"location":"features/0587-encryption-envelope-v2/#curves","title":"Curves","text":"

    DIDComm Messaging (and this RFC) requires support for X25519, P-256, and P-384.

    "},{"location":"features/0587-encryption-envelope-v2/#content-encryption-algorithms","title":"Content Encryption Algorithms","text":"

    DIDComm Messaging (and this RFC) requires support for both XC20P and A256GCM for Anoncrypt only and A256CBC-HS512 for both Authcrypt and Anoncrypt.

    "},{"location":"features/0587-encryption-envelope-v2/#key-wrapping-algorithms","title":"Key Wrapping Algorithms","text":"

    DIDComm Messaging (and this RFC) requires support for ECDH-1PU+A256KW and ECDH-ES+A256KW.

    "},{"location":"features/0587-encryption-envelope-v2/#key-ids-kid-and-skid-headers-references-in-the-did-document","title":"Key IDs kid and skid headers references in the DID document","text":"

    Keys used by DIDComm envelopes MUST be sourced from the DIDs exchanged between two agents. Specifically, both sender and recipient keys MUST be retrieved from the DID document's KeyAgreement verification section as per the DID Document Keys definition.

    When Alice is preparing an envelope intended for Bob, the packing process should use a key from both her and Bob's DID documents' KeyAgreement sections.

    Assuming Alice has a DID Doc with the following KeyAgreement definition (source: DID V1 Example 17):

    {\n  \"@context\": \"https://www.w3.org/ns/did/v1\",\n  \"id\": \"did:example:123456789abcdefghi\",\n  ...\n  \"keyAgreement\": [\n    // this method can be used to perform key agreement as did:...fghi\n    \"did:example:123456789abcdefghi#keys-1\",\n    // this method is *only* approved for key agreement usage, it will not\n    // be used for any other verification relationship, so its full description is\n    // embedded here rather than using only a reference\n    {\n      \"id\": \"did:example:123#zC9ByQ8aJs8vrNXyDhPHHNNMSHPcaSgNpjjsBYpMMjsTdS\",\n      \"type\": \"X25519KeyAgreementKey2019\", // external (property value)\n      \"controller\": \"did:example:123\",\n      \"publicKeyBase58\": \"9hFgmPVfmBZwRvFEyniQDBkz9LmV7gDEqytWyGZLmDXE\"\n    }\n  ],\n  ...\n}\n

    The envelope packing process should set the skid header with value did:example:123456789abcdefghi#keys-1 in the envelope's protected headers and fetch the underlying key to execute ECDH-1PU key derivation for content key wrapping.

    Assuming she also has Bob's DID document which happens to include the following KeyAgreement section:

    {\n  \"@context\": \"https://www.w3.org/ns/did/v1\",\n  \"id\": \"did:example:jklmnopqrstuvwxyz1\",\n  ...\n  \"keyAgreement\": [\n    {\n      \"id\": \"did:example:jklmnopqrstuvwxyz1#key-1\",\n      \"type\": \"X25519KeyAgreementKey2019\", // external (property value)\n      \"controller\": \"did:example:jklmnopqrstuvwxyz1\",\n      \"publicKeyBase58\": \"9hFgmPVfmBZwRvFEyniQDBkz9LmV7gDEqytWyGZLmDXE\"\n    }\n  ],\n  ...\n}\n

    There should be only one entry in the recipients list of the envelope, representing Bob. The corresponding kid header for this recipient MUST have did:example:jklmnopqrstuvwxyz1#key-1 as its value. The packing process MUST extract the public key bytes found in publicKeyBase58 of Bob's DID Doc KeyAgreement[0] to execute the ECDH-1PU key derivation for content key wrapping.

    When Bob receives the envelope, the unpacking process on his end MUST resolve the skid protected header value using Alice's DID doc's KeyAgreement[0] in order to extract her public key. In Alice's DID Doc example above, KeyAgreement[0] is a reference id; it MUST be resolved from the main VerificationMethod[] of Alice's DID document (not shown in the example).

    Once resolved, the unpacker will then execute ECDH-1PU key derivation using this key and Bob's own recipient key found in the envelope's recipients[0] to unwrap the content encryption key.

    "},{"location":"features/0587-encryption-envelope-v2/#protecting-the-skid-header","title":"Protecting the skid header","text":"

    When the skid cannot be revealed in a plain-text JWE header (to avoid potentially leaking sender's key id), the skid MAY be encrypted for each recipient. In this case, instead of having a skid protected header in the envelope, each recipient MAY include an encrypted_skid header with a value based on the encryption of skid using ECDH-ES Z computation of the epk and the recipient's key as the encryption key.

    For applications that don't require this protection, they MAY use skid protected header directly without any additional recipient headers.

    Applications MUST use either the skid protected header or the encrypted_skid recipients header, but not both in the same envelope.

    "},{"location":"features/0587-encryption-envelope-v2/#ecdh-1pu-key-wrapping-and-common-protected-headers","title":"ECDH-1PU key wrapping and common protected headers","text":"

    When using authcrypt, the 1PU draft mandates the use of the AES_CBC_HMAC_SHA family of content encryption algorithms. To meet this requirement, JWE messages MUST use common epk, apu, apv and alg headers for all recipients. They MUST be set in the JWE protected headers section.

    As per this requirement, the JWE builder must first encrypt the payload and then use the resulting tag as part of the key derivation process when wrapping the cek.

    To meet this requirement, the above headers must be defined as follows:

    * epk: generated once for all recipients. It MUST be of the same type and curve as all recipient keys, since key derivation with the sender key must be on the same curve. Example: \"epk\": {\"kty\": \"EC\",\"crv\": \"P-256\",\"x\": \"BVDo69QfyXAdl6fbK6-QBYIsxv0CsNMtuDDVpMKgDYs\",\"y\": \"G6bdoO2xblPHrKsAhef1dumrc0sChwyg7yTtTcfygHA\"}
    * apu: similar to skid, this is the producer (sender) identifier. It MUST contain the skid value base64URL (no padding) encoded. Note: this is base64URL(skid value). Example for the skid mentioned in an earlier section above: ZGlkOmV4YW1wbGU6MTIzNDU2Nzg5YWJjZGVmZ2hpI2tleXMtMQ
    * apv: this represents the recipients' kid list. The list must be alphanumerically sorted; the kid values are then concatenated with a . and the final result MUST be the base64URL (no padding) encoding of the SHA256 hash of the concatenated list.
    * alg: this is the key wrapping algorithm, i.e. ECDH-1PU+A256KW.
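
The apu and apv computations above can be sketched with the standard library alone (the kid value comes from the examples in this section):

```python
import base64
import hashlib

def b64url(data: bytes) -> str:
    """Base64URL-encode without padding, as required for apu/apv."""
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode("ascii")

def compute_apu(skid: str) -> str:
    """apu is the base64URL (no padding) encoding of the skid value."""
    return b64url(skid.encode("utf-8"))

def compute_apv(recipient_kids: list[str]) -> str:
    """apv is the base64URL (no padding) encoding of the SHA256 hash of
    the alphanumerically sorted, dot-concatenated recipient kid list."""
    concatenated = ".".join(sorted(recipient_kids)).encode("utf-8")
    return b64url(hashlib.sha256(concatenated).digest())

print(compute_apu("did:example:123456789abcdefghi#keys-1"))
# → ZGlkOmV4YW1wbGU6MTIzNDU2Nzg5YWJjZGVmZ2hpI2tleXMtMQ
```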

    A final note about the skid header: since the 1PU draft does not require this header, authcrypt implementations MUST be able to resolve the sender kid from the apu header if skid is not set.

    "},{"location":"features/0587-encryption-envelope-v2/#media-type","title":"Media Type","text":"

    The media type associated to this envelope is application/didcomm-encrypted+json. RFC 0044 provides a general discussion of media (aka mime) types.

    The media type of the envelope MUST be set in the typ property of the JWE and the media type of the payload MUST be set in the cty property of the JWE.

    For example, following the guidelines of RFC 0044, an encrypted envelope with a plaintext DIDComm v1 payload contains the typ property with the value application/didcomm-encrypted+json and cty property with the value application/json;flavor=didcomm-msg.

    As specified in IETF RFC 7515 and referenced in IETF RFC 7516, implementations MUST also support media types that omit application/. For example, didcomm-encrypted+json and application/didcomm-encrypted+json are treated as equivalent media types.
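
That equivalence can be implemented as a one-line normalization (a sketch following the RFC 7515 convention; the function name is illustrative):

```python
def normalize_media_type(media_type: str) -> str:
    """Media types that omit the application/ prefix are treated as
    equivalent to their prefixed form."""
    return media_type if "/" in media_type else "application/" + media_type
```

Both didcomm-encrypted+json and application/didcomm-encrypted+json then normalize to the same value.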

    As discussed in RFC 0434 and RFC 0067, the accept property is used to advertise supported media types. The accept property may contain an envelope media type or a combination of the envelope media type and the content media type. In cases where the content media type is not present, the expectation is that the appropriate content media type can be inferred. For example, application/didcomm-envelope-enc indicates both Envelope v1 and DIDComm v1 and application/didcomm-encrypted+json indicates both Envelope v2 and DIDComm v2. However, some agents may choose to support Envelope v2 with a DIDComm v1 message payload.

    If the accept property is set in both the DID service block and the out-of-band message, the out-of-band value takes precedence.

    "},{"location":"features/0587-encryption-envelope-v2/#didcomm-v2-transition","title":"DIDComm v2 Transition","text":"

    As this RFC specifies the same envelope format as will be used in DIDComm v2, an implementor should detect whether the payload contains DIDComm v1 content or a DIDComm v2 JWM. These payloads can be distinguished based on the cty property of the JWE.

    As discussed in RFC 0044, the content type for the plaintext DIDComm v1 message is application/json;flavor=didcomm-msg. When the cty property contains application/json;flavor=didcomm-msg, the payload is treated as DIDComm v1. DIDComm Messaging will specify appropriate media types for DIDComm v2. To advertise the combination of Envelope v2 with a DIDComm v1 message, the media type is application/didcomm-encrypted+json;cty=application/json.
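
A receiving agent can branch on cty accordingly. A minimal sketch (treating any other value as DIDComm v2 is an assumption, pending the media types DIDComm Messaging specifies):

```python
DIDCOMM_V1_CTY = "application/json;flavor=didcomm-msg"

def didcomm_version_from_cty(cty: str) -> int:
    """Distinguish a DIDComm v1 payload from a v2 JWM by the JWE cty
    property, as described above."""
    return 1 if cty == DIDCOMM_V1_CTY else 2
```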

    "},{"location":"features/0587-encryption-envelope-v2/#additional-aip-impacts","title":"Additional AIP impacts","text":"

    Implementors supporting an AIP sub-target that contains this RFC (e.g., DIDCOMMV2PREP) MAY choose to only support Envelope v2 without support for the original envelope declared in RFC 0019. In these cases, the accept property will not contain didcomm/aip2;env=rfc19 media type.

    "},{"location":"features/0587-encryption-envelope-v2/#drawbacks","title":"Drawbacks","text":"

    The DIDComm v2 specification is a draft. However, the aries-framework-go project has already implemented the new envelope format.

    "},{"location":"features/0587-encryption-envelope-v2/#rationale-and-alternatives","title":"Rationale and alternatives","text":"

    Our approach for Authcrypt compliance is to use the NIST approved One-Pass Unified Model for ECDH scheme described in SP 800-56A Rev. 3. The JOSE version is defined as ECDH-1PU in this IETF draft.

    Aries agents currently use the envelope described in RFC0019. This envelope uses libsodium (NaCl) encryption/decryption, which is based on Salsa20Poly1305 algorithm.

    "},{"location":"features/0587-encryption-envelope-v2/#prior-art","title":"Prior art","text":""},{"location":"features/0587-encryption-envelope-v2/#unresolved-questions","title":"Unresolved questions","text":""},{"location":"features/0587-encryption-envelope-v2/#implementations","title":"Implementations","text":"

    The following lists the implementations (if any) of this RFC. Please do a pull request to add your implementation. If the implementation is open source, include a link to the repo or to the implementation within the repo. Please be consistent in the \"Name\" field so that a mechanical processing of the RFCs can generate a list of all RFCs supported by an Aries implementation.

    Name / Link Implementation Notes"},{"location":"features/0592-indy-attachments/","title":"Aries RFC 0592: Indy Attachment Formats for Requesting and Presenting Credentials","text":""},{"location":"features/0592-indy-attachments/#summary","title":"Summary","text":"

    This RFC registers attachment formats used with Hyperledger Indy-style ZKP-oriented credentials in Issue Credential Protocol 2.0 and Present Proof Protocol 2.0. These formats are generally considered v2 formats, as they align with the \"anoncreds2\" work in Hyperledger Ursa and are a second generation implementation. They began to be used in production in 2018 and are in active deployment in 2021.

    "},{"location":"features/0592-indy-attachments/#motivation","title":"Motivation","text":"

    Allows Indy-style credentials to be used with credential-related protocols that take pluggable formats as payloads.

    "},{"location":"features/0592-indy-attachments/#reference","title":"Reference","text":""},{"location":"features/0592-indy-attachments/#cred-filter-format","title":"cred filter format","text":"

    The potential holder uses this format to propose criteria for a potential credential for the issuer to offer.

    The identifier for this format is hlindy/cred-filter@v2.0. It is a base64-encoded version of the data structure specifying zero or more criteria from the following (non-base64-encoded) structure:

    {\n    \"schema_issuer_did\": \"<schema_issuer_did>\",\n    \"schema_name\": \"<schema_name>\",\n    \"schema_version\": \"<schema_version>\",\n    \"schema_id\": \"<schema_identifier>\",\n    \"issuer_did\": \"<issuer_did>\",\n    \"cred_def_id\": \"<credential_definition_identifier>\"\n}\n

    The potential holder may not know, and need not specify, all of these criteria. For example, the holder might only know the schema name and the (credential) issuer DID. Recall that the potential holder may specify target attribute values and MIME types in the credential preview.

    For example, the JSON (non-base64-encoded) structure might look like this:

    {\n    \"schema_issuer_did\": \"did:sov:4RW6QK2HZhHxa2tg7t1jqt\",\n    \"schema_name\": \"bcgov-mines-act-permit.bcgov-mines-permitting\",\n    \"issuer_did\": \"did:sov:4RW6QK2HZhHxa2tg7t1jqt\"\n}\n

    A complete propose-credential message from the Issue Credential protocol 2.0 embeds this format at /filters~attach/data/base64:

    {\n    \"@id\": \"<uuid of propose message>\",\n    \"@type\": \"https://didcomm.org/issue-credential/%VER/propose-credential\",\n    \"comment\": \"<some comment>\",\n    \"formats\" : [{\n        \"attach_id\": \"<attach@id value>\",\n        \"format\": \"hlindy/cred-filter@v2.0\"\n    }],\n    \"filters~attach\": [{\n        \"@id\": \"<attach@id value>\",\n        \"mime-type\": \"application/json\",\n        \"data\": {\n            \"base64\": \"ewogICAgInNjaGVtYV9pc3N1ZXJfZGlkIjogImRpZDpzb3Y... (clipped)... LMkhaaEh4YTJ0Zzd0MWpxdCIKfQ==\"\n        }\n    }]\n}\n
    "},{"location":"features/0592-indy-attachments/#cred-abstract-format","title":"cred abstract format","text":"

    This format is used to clarify the structure and semantics (but not the concrete data values) of a potential credential, in offers sent from issuer to potential holder.

    The identifier for this format is hlindy/cred-abstract@v2.0. It is a base64-encoded version of the data returned from indy_issuer_create_credential_offer().

    The JSON (non-base64-encoded) structure might look like this:

    {\n    \"schema_id\": \"4RW6QK2HZhHxa2tg7t1jqt:2:bcgov-mines-act-permit.bcgov-mines-permitting:0.2.0\",\n    \"cred_def_id\": \"4RW6QK2HZhHxa2tg7t1jqt:3:CL:58160:default\",\n    \"nonce\": \"57a62300-fbe2-4f08-ace0-6c329c5210e1\",\n    \"key_correctness_proof\" : <key_correctness_proof>\n}\n

    A complete offer-credential message from the Issue Credential protocol 2.0 embeds this format at /offers~attach/data/base64:

    {\n    \"@type\": \"https://didcomm.org/issue-credential/%VER/offer-credential\",\n    \"@id\": \"<uuid of offer message>\",\n    \"replacement_id\": \"<issuer unique id>\",\n    \"comment\": \"<some comment>\",\n    \"credential_preview\": <json-ld object>,\n    \"formats\" : [\n        {\n            \"attach_id\" : \"<attach@id value>\",\n            \"format\": \"hlindy/cred-abstract@v2.0\"\n        }\n    ],\n    \"offers~attach\": [\n        {\n            \"@id\": \"<attach@id value>\",\n            \"mime-type\": \"application/json\",\n            \"data\": {\n                \"base64\": \"ewogICAgInNjaGVtYV9pZCI6ICI0Ulc2UUsySFpoS... (clipped)... jb3JyZWN0bmVzc19wcm9vZj4KfQ==\"\n            }\n        }\n    ]\n}\n

    The same structure can be embedded at /offers~attach/data/base64 in an offer-credential message.

    "},{"location":"features/0592-indy-attachments/#cred-request-format","title":"cred request format","text":"

    This format is used to formally request a credential. It differs from the credential abstract above in that it contains a cryptographic commitment to a link secret; an issuer can therefore use it to bind a concrete instance of an issued credential to the appropriate holder. (In contrast, the credential abstract describes the schema and cred def, but not enough information to actually issue to a specific holder.)

    The identifier for this format is hlindy/cred-req@v2.0. It is a base64-encoded version of the data returned from indy_prover_create_credential_req().

    The JSON (non-base64-encoded) structure might look like this:

    {\n    \"prover_did\" : \"did:sov:abcxyz123\",\n    \"cred_def_id\" : \"4RW6QK2HZhHxa2tg7t1jqt:3:CL:58160:default\",\n    // Fields below can depend on Cred Def type\n    \"blinded_ms\" : <blinded_master_secret>,\n    \"blinded_ms_correctness_proof\" : <blinded_ms_correctness_proof>,\n    \"nonce\": \"fbe22300-57a6-4f08-ace0-9c5210e16c32\"\n}\n

    A complete request-credential message from the Issue Credential protocol 2.0 embeds this format at /requests~attach/data/base64:

    {\n    \"@id\": \"cf3a9301-6d4a-430f-ae02-b4a79ddc9706\",\n    \"@type\": \"https://didcomm.org/issue-credential/%VER/request-credential\",\n    \"comment\": \"<some comment>\",\n    \"formats\": [{\n        \"attach_id\": \"7cd11894-838a-45c0-a9ec-13e2d9d125a1\",\n        \"format\": \"hlindy/cred-req@v2.0\"\n    }],\n    \"requests~attach\": [{\n        \"@id\": \"7cd11894-838a-45c0-a9ec-13e2d9d125a1\",\n        \"mime-type\": \"application/json\",\n        \"data\": {\n            \"base64\": \"ewogICAgInByb3Zlcl9kaWQiIDogImRpZDpzb3Y6YWJjeHl.. (clipped)... DAtNTdhNi00ZjA4LWFjZTAtOWM1MjEwZTE2YzMyIgp9\"\n        }\n    }]\n}\n
    "},{"location":"features/0592-indy-attachments/#credential-format","title":"credential format","text":"

    A concrete, issued Indy credential may be transmitted over many protocols, but is specifically expected as the final message in Issuance Protocol 2.0. The identifier for its format is hlindy/cred@v2.0.

    This is a credential that's designed to be held but not shared directly. It is stored in the holder's wallet and used to derive a novel ZKP or W3C-compatible verifiable presentation just in time for each sharing of credential material.

    The encoded values of the credential MUST follow the encoding algorithm as described in Encoding Claims.

    This is the format emitted by libindy's indy_issuer_create_credential() function. It is JSON-based and might look like this:

    {\n    \"schema_id\": \"4RW6QK2HZhHxa2tg7t1jqt:2:bcgov-mines-act-permit.bcgov-mines-permitting:0.2.0\",\n    \"cred_def_id\": \"4RW6QK2HZhHxa2tg7t1jqt:3:CL:58160:default\",\n    \"rev_reg_id\", \"EyN78DDGHyok8qw6W96UBY:4:EyN78DDGHyok8qw6W96UBY:3:CL:56389:CardossierOrgPerson:CL_ACCUM:1-1000\",\n    \"values\": {\n        \"attr1\" : {\"raw\": \"value1\", \"encoded\": \"value1_as_int\" },\n        \"attr2\" : {\"raw\": \"value2\", \"encoded\": \"value2_as_int\" }\n    },\n    // Fields below can depend on Cred Def type\n    \"signature\": <signature>,\n    \"signature_correctness_proof\": <signature_correctness_proof>\n    \"rev_reg\": <revocation registry state>\n    \"witness\": <witness>\n}\n

    An exhaustive description of the format is out of scope here; it is more completely documented in white papers, source code, and other Indy materials.

    "},{"location":"features/0592-indy-attachments/#proof-request-format","title":"proof request format","text":"

    This format is used to formally request a verifiable presentation (proof) derived from an Indy-style ZKP-oriented credential. It can also be used by a holder to propose a presentation.

    The identifier for this format is hlindy/proof-req@v2.0. It is a base64-encoded version of the data returned from indy_prover_search_credentials_for_proof_req().

    Here is a sample proof request that embodies the following: \"Using a government-issued ID, disclose the credential holder\u2019s name and height, hide the credential holder\u2019s sex, get them to self-attest their phone number, and prove that their age is at least 18\":

    {\n    \"nonce\": \u201c2934823091873049823740198370q23984710239847\u201d, \n    \"name\":\"proof_req_1\",\n    \"version\":\"0.1\",\n    \"requested_attributes\":{\n        \"attr1_referent\": {\"name\":\"sex\"},\n        \"attr2_referent\": {\"name\":\"phone\"},\n        \"attr3_referent\": {\"names\": [\"name\", \"height\"], \"restrictions\": <restrictions specifying government-issued ID>}\n    },\n    \"requested_predicates\":{\n        \"predicate1_referent\":{\"name\":\"age\",\"p_type\":\">=\",\"p_value\":18}\n    }\n}\n
    "},{"location":"features/0592-indy-attachments/#proof-format","title":"proof format","text":"

    This is the format of an Indy-style ZKP. It plays the same role as a W3C-style verifiable presentation (VP) and can be mapped to one.

    The raw values encoded in the presentation SHOULD be verified against the encoded values using the encoding algorithm as described below in Encoding Claims.

    The identifier for this format is hlindy/proof@v2.0. It is a version of the (JSON-based) data emitted by libindy's indy_prover_create_proof() function. A proof that responds to the previous proof request sample looks like this:

    {\n  \"proof\":{\n    \"proofs\":[\n      {\n        \"primary_proof\":{\n          \"eq_proof\":{\n            \"revealed_attrs\":{\n              \"height\":\"175\",\n              \"name\":\"1139481716457488690172217916278103335\"\n            },\n            \"a_prime\":\"5817705...096889\",\n            \"e\":\"1270938...756380\",\n            \"v\":\"1138...39984052\",\n            \"m\":{\n              \"master_secret\":\"375275...0939395\",\n              \"sex\":\"3511483...897083518\",\n              \"age\":\"13430...63372249\"\n            },\n            \"m2\":\"1444497...2278453\"\n          },\n          \"ge_proofs\":[\n            {\n              \"u\":{\n                \"1\":\"152500...3999140\",\n                \"2\":\"147748...2005753\",\n                \"0\":\"8806...77968\",\n                \"3\":\"10403...8538260\"\n              },\n              \"r\":{\n                \"2\":\"15706...781609\",\n                \"3\":\"343...4378642\",\n                \"0\":\"59003...702140\",\n                \"DELTA\":\"9607...28201020\",\n                \"1\":\"180097...96766\"\n              },\n              \"mj\":\"134300...249\",\n              \"alpha\":\"827896...52261\",\n              \"t\":{\n                \"2\":\"7132...47794\",\n                \"3\":\"38051...27372\",\n                \"DELTA\":\"68025...508719\",\n                \"1\":\"32924...41082\",\n                \"0\":\"74906...07857\"\n              },\n              \"predicate\":{\n                \"attr_name\":\"age\",\n                \"p_type\":\"GE\",\n                \"value\":18\n              }\n            }\n          ]\n        },\n        \"non_revoc_proof\":null\n      }\n    ],\n    \"aggregated_proof\":{\n      \"c_hash\":\"108743...92564\",\n      \"c_list\":[ 6 arrays of 257 numbers between 0 and 255]\n    }\n  },\n  \"requested_proof\":{\n    \"revealed_attrs\":{\n      \"attr1_referent\":{\n        \"sub_proof_index\":0,\n        
\"raw\":\"Alex\",\n        \"encoded\":\"1139481716457488690172217916278103335\"\n      }\n    },\n    \"revealed_attr_groups\":{\n      \"attr4_referent\":{\n        \"sub_proof_index\":0,\n        \"values\":{\n          \"name\":{\n            \"raw\":\"Alex\",\n            \"encoded\":\"1139481716457488690172217916278103335\"\n          },\n          \"height\":{\n            \"raw\":\"175\",\n            \"encoded\":\"175\"\n          }\n        }\n      }\n    },\n    \"self_attested_attrs\":{\n      \"attr3_referent\":\"8-800-300\"\n    },\n    \"unrevealed_attrs\":{\n      \"attr2_referent\":{\n        \"sub_proof_index\":0\n      }\n    },\n    \"predicates\":{\n      \"predicate1_referent\":{\n        \"sub_proof_index\":0\n      }\n    }\n  },\n  \"identifiers\":[\n    {\n      \"schema_id\":\"NcYxiDXkpYi6ov5FcYDi1e:2:gvt:1.0\",\n      \"cred_def_id\":\"NcYxi...cYDi1e:2:gvt:1.0:TAG_1\",\n      \"rev_reg_id\":null,\n      \"timestamp\":null\n    }\n  ]\n}\n
    "},{"location":"features/0592-indy-attachments/#unrevealed-attributes","title":"Unrevealed Attributes","text":"

    AnonCreds supports a holder responding to a proof request with some of the requested claims included in an unrevealed_attrs array, as seen in the example above, with attr2_referent. Assuming the rest of the proof is valid, AnonCreds will indicate that a proof with unrevealed attributes has been successfully verified. It is the responsibility of the verifier to determine if the purpose of the verification has been met if some of the attributes are not revealed.

    There are at least a few valid use cases for this approach:

    "},{"location":"features/0592-indy-attachments/#encoding-claims","title":"Encoding Claims","text":"

    Claims in AnonCreds-based verifiable credentials are put into the credential in two forms, raw and encoded. raw is the actual data value, and encoded is the (possibly derived) integer value that is used in presentations. At this time, AnonCreds does not take an opinion on the method used for encoding the raw value.

    AnonCreds issuers and verifiers must agree on the encoding method so that the verifier can check that the raw value returned in a presentation corresponds to the proven encoded value. The following is the encoding algorithm that MUST be used by Issuers when creating credentials and SHOULD be verified by Verifiers receiving presentations:

    An example implementation in Python can be found here.

    A gist of test value pairs can be found here.
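    In short, the algorithm (whose step list appears in the full RFC) passes 32-bit integers through unchanged and otherwise stringifies the value, hashes it with SHA-256, and reads the digest as a big-endian integer. A minimal, non-normative Python sketch:

    ```python
    import hashlib

    def encode(raw) -> str:
        """Encode a raw claim value for use in AnonCreds presentations.

        - A value that is (the canonical string of) a 32-bit integer is used as-is.
        - Any other value (including None) is stringified, SHA-256 hashed, and the
          digest is interpreted as a big-endian integer.
        """
        s = str(raw)
        try:
            i = int(s)
            if -(2**31) <= i < 2**31 and str(i) == s:
                return s
        except ValueError:
            pass
        digest = hashlib.sha256(s.encode("utf-8")).digest()
        return str(int.from_bytes(digest, "big"))
    ```

    Note that a string such as "007" is hashed rather than passed through, because it is not the canonical string form of the integer 7.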

    "},{"location":"features/0592-indy-attachments/#notes-on-encoding-claims","title":"Notes on Encoding Claims","text":""},{"location":"features/0592-indy-attachments/#implementations","title":"Implementations","text":"

    The following lists the implementations (if any) of this RFC. Please do a pull request to add your implementation. If the implementation is open source, include a link to the repo or to the implementation within the repo. Please be consistent in the \"Name\" field so that a mechanical processing of the RFCs can generate a list of all RFCs supported by an Aries implementation.

    Name / Link Implementation Notes"},{"location":"features/0593-json-ld-cred-attach/","title":"Aries RFC 0593: JSON-LD Credential Attachment format for requesting and issuing credentials","text":""},{"location":"features/0593-json-ld-cred-attach/#summary","title":"Summary","text":"

    This RFC registers an attachment format for use in the issue-credential V2 protocol based on JSON-LD credentials with Linked Data Proofs from the VC Data Model.

    It defines a minimal set of parameters needed to create a common understanding of the verifiable credential to issue. It is based on version 1.0 of the Verifiable Credentials Data Model which is a W3C recommendation since 19 November 2019.

    "},{"location":"features/0593-json-ld-cred-attach/#motivation","title":"Motivation","text":"

    The Issue Credential protocol needs an attachment format to be able to exchange JSON-LD credentials with Linked Data Proofs. It is desirable to make use of specifications developed in an open standards body, such as the Credential Manifest for which the attachment format is described in RFC 0511: Credential-Manifest Attachment format. However, the Credential Manifest is not finished and ready yet, and therefore there is a need to bridge the gap between standards.

    "},{"location":"features/0593-json-ld-cred-attach/#tutorial","title":"Tutorial","text":"

    Complete examples of messages are provided in the reference section.

    "},{"location":"features/0593-json-ld-cred-attach/#reference","title":"Reference","text":""},{"location":"features/0593-json-ld-cred-attach/#ld-proof-vc-detail-attachment-format","title":"ld-proof-vc-detail attachment format","text":"

    Format identifier: aries/ld-proof-vc-detail@v1.0

    This format is used to formally propose, offer, or request a credential. The credential property should contain the credential as it is going to be issued, without the proof and credentialStatus properties. Options for these properties are specified in the options object.

    The JSON structure might look like this:

    {\n  \"credential\": {\n    \"@context\": [\n      \"https://www.w3.org/2018/credentials/v1\",\n      \"https://www.w3.org/2018/credentials/examples/v1\"\n    ],\n    \"id\": \"urn:uuid:3978344f-8596-4c3a-a978-8fcaba3903c5\",\n    \"type\": [\"VerifiableCredential\", \"UniversityDegreeCredential\"],\n    \"issuer\": \"did:key:z6MkodKV3mnjQQMB9jhMZtKD9Sm75ajiYq51JDLuRSPZTXrr\",\n    \"issuanceDate\": \"2020-01-01T19:23:24Z\",\n    \"expirationDate\": \"2021-01-01T19:23:24Z\",\n    \"credentialSubject\": {\n      \"id\": \"did:key:z6MkpTHR8VNsBxYAAWHut2Geadd9jSwuBV8xRoAnwWsdvktH\",\n      \"degree\": {\n        \"type\": \"BachelorDegree\",\n        \"name\": \"Bachelor of Science and Arts\"\n      }\n    }\n  },\n  \"options\": {\n    \"proofPurpose\": \"assertionMethod\",\n    \"created\": \"2020-04-02T18:48:36Z\",\n    \"domain\": \"example.com\",\n    \"challenge\": \"9450a9c1-4db5-4ab9-bc0c-b7a9b2edac38\",\n    \"credentialStatus\": {\n      \"type\": \"CredentialStatusList2017\"\n    },\n    \"proofType\": \"Ed25519Signature2018\"\n  }\n}\n

    A complete request-credential message from the Issue Credential protocol 2.0 might look like this:

    {\n  \"@id\": \"7293daf0-ed47-4295-8cc4-5beb513e500f\",\n  \"@type\": \"https://didcomm.org/issue-credential/%VER/request-credential\",\n  \"comment\": \"<some comment>\",\n  \"formats\": [\n    {\n      \"attach_id\": \"13a3f100-38ce-4e96-96b4-ea8f30250df9\",\n      \"format\": \"aries/ld-proof-vc-detail@v1.0\"\n    }\n  ],\n  \"requests~attach\": [\n    {\n      \"@id\": \"13a3f100-38ce-4e96-96b4-ea8f30250df9\",\n      \"mime-type\": \"application/json\",\n      \"data\": {\n        \"base64\": \"ewogICJjcmVkZW50aWFsIjogewogICAgIkBjb250...(clipped)...IkVkMjU1MTlTaWduYXR1cmUyMDE4IgogIH0KfQ==\"\n      }\n    }\n  ]\n}\n

    The format is closely related to the Verifiable Credentials HTTP API, but diverges in some places. The main differences are:

    "},{"location":"features/0593-json-ld-cred-attach/#ld-proof-vc-attachment-format","title":"ld-proof-vc attachment format","text":"

    Format identifier: aries/ld-proof-vc@v1.0

    This format is used to transmit a verifiable credential with a linked data proof. The content of the attachment is a standard JSON-LD Verifiable Credential object with a linked data proof as defined by the Verifiable Credentials Data Model and the Linked Data Proofs specification.

    The JSON structure might look like this:

    {\n  \"@context\": [\n    \"https://www.w3.org/2018/credentials/v1\",\n    \"https://www.w3.org/2018/credentials/examples/v1\"\n  ],\n  \"id\": \"http://example.gov/credentials/3732\",\n  \"type\": [\"VerifiableCredential\", \"UniversityDegreeCredential\"],\n  \"issuer\": {\n    \"id\": \"did:web:vc.transmute.world\"\n  },\n  \"issuanceDate\": \"2020-03-10T04:24:12.164Z\",\n  \"credentialSubject\": {\n    \"id\": \"did:example:ebfeb1f712ebc6f1c276e12ec21\",\n    \"degree\": {\n      \"type\": \"BachelorDegree\",\n      \"name\": \"Bachelor of Science and Arts\"\n    }\n  },\n  \"proof\": {\n    \"type\": \"JsonWebSignature2020\",\n    \"created\": \"2020-03-21T17:51:48Z\",\n    \"verificationMethod\": \"did:web:vc.transmute.world#_Qq0UL2Fq651Q0Fjd6TvnYE-faHiOpRlPVQcY_-tA4A\",\n    \"proofPurpose\": \"assertionMethod\",\n    \"jws\": \"eyJiNjQiOmZhbHNlLCJjcml0IjpbImI2NCJdLCJhbGciOiJFZERTQSJ9..OPxskX37SK0FhmYygDk-S4csY_gNhCUgSOAaXFXDTZx86CmI5nU9xkqtLWg-f4cqkigKDdMVdtIqWAvaYx2JBA\"\n  }\n}\n

    A complete issue-credential message from the Issue Credential protocol 2.0 might look like this:

    {\n  \"@id\": \"284d3996-ba85-45d9-964b-9fd5805517b6\",\n  \"@type\": \"https://didcomm.org/issue-credential/%VER/issue-credential\",\n  \"comment\": \"<some comment>\",\n  \"formats\": [\n    {\n      \"attach_id\": \"5b38af88-d36f-4f77-bb7a-2f04ab806eb8\",\n      \"format\": \"aries/ld-proof-vc@v1.0\"\n    }\n  ],\n  \"credentials~attach\": [\n    {\n      \"@id\": \"5b38af88-d36f-4f77-bb7a-2f04ab806eb8\",\n      \"mime-type\": \"application/ld+json\",\n      \"data\": {\n        \"base64\": \"ewogICAgICAgICAgIkBjb250ZXogWwogICAgICAg...(clipped)...RNVmR0SXFXZhWXgySkJBIgAgfQogICAgICAgIH0=\"\n      }\n    }\n  ]\n}\n
    "},{"location":"features/0593-json-ld-cred-attach/#supported-proof-types","title":"Supported Proof Types","text":"

    Following are the Linked Data proof types on Verifiable Credentials that MUST be supported for compliance with this RFC. All suites listed in the following table MUST be registered in the Linked Data Cryptographic Suite Registry:

    Suite | Spec | Enables Selective disclosure? | Enables Zero-knowledge proofs? | Optional
    Ed25519Signature2018 | Link | No | No | No
    BbsBlsSignature2020** | Link | Yes | No | No
    JsonWebSignature2020*** | Link | No | No | Yes

    ** Note: see RFC0646 for details on how BBS+ signatures are to be produced and consumed by Aries agents.

    *** Note: P-256 and P-384 curves are supported.

    "},{"location":"features/0593-json-ld-cred-attach/#drawbacks","title":"Drawbacks","text":"

    N/A

    "},{"location":"features/0593-json-ld-cred-attach/#rationale-and-alternatives","title":"Rationale and alternatives","text":""},{"location":"features/0593-json-ld-cred-attach/#prior-art","title":"Prior art","text":"

    N/A

    "},{"location":"features/0593-json-ld-cred-attach/#unresolved-questions","title":"Unresolved questions","text":"

    N/A

    "},{"location":"features/0593-json-ld-cred-attach/#implementations","title":"Implementations","text":"

    The following lists the implementations (if any) of this RFC. Please do a pull request to add your implementation. If the implementation is open source, include a link to the repo or to the implementation within the repo. Please be consistent in the \"Name\" field so that a mechanical processing of the RFCs can generate a list of all RFCs supported by an Aries implementation.

    Name / Link Implementation Notes"},{"location":"features/0627-static-peer-dids/","title":"Aries RFC 0627: Static Peer DIDs","text":""},{"location":"features/0627-static-peer-dids/#summary","title":"Summary","text":"

    Formally documents a very crisp profile of peer DID functionality that can be referenced in Aries Interop Profiles.

    "},{"location":"features/0627-static-peer-dids/#motivation","title":"Motivation","text":"

    The Peer DID Spec includes a number of advanced features that are still evolving. However, a subset of its functionality is easy to implement and would be helpful to freeze for the purpose of Aries interop.

    "},{"location":"features/0627-static-peer-dids/#tutorial","title":"Tutorial","text":""},{"location":"features/0627-static-peer-dids/#spec-version","title":"Spec version","text":"

    The Peer DID method spec is still undergoing minor evolution. However, it is relatively stable, particularly in the simpler features.

    This Aries RFC targets the version of the spec that is dated April 2, 2021 in its rendered form, or github commit 202a913 in its source form. Note that the rendered form of the spec may update without warning, so the github commit is the better reference.

    "},{"location":"features/0627-static-peer-dids/#targeted-layers","title":"Targeted layers","text":"

    Support for peer DIDs is imagined to target configurable \"layers\" of interoperability:

    For a careful definition of what these layers entail, please see https://identity.foundation/peer-did-method-spec/#layers-of-support.

    This Aries RFC targets Layers 1 and 2. That is, code that complies with this RFC would satisfy the required behaviors for Layer 1 and for Layer 2. Note, however, that Layer 2 is broken into accepting and giving static peer DIDs. An RFC-compliant implementation may choose to implement either side, or both.

    Support for Layer 3 (dynamic peer DIDs that have updatable state and that synchronize that state using Sync Connection Protocol as documented in Aries RFC 0030) is NOT required by this RFC. However, if there is an intent to support dynamic updates in the future, use of numalgo Method 1 is encouraged, as this allows static peer DIDs to acquire new state when dynamic support is added. (See next section.)

    "},{"location":"features/0627-static-peer-dids/#targeted-generation-methods-numalgo","title":"Targeted Generation Methods (numalgo)","text":"

    Peer DIDs can use several different algorithms to generate the entropy that constitutes their numeric basis. See https://identity.foundation/peer-did-method-spec/#generation-method for details.

    This RFC targets Method 0 (inception key without doc), Method 1 (genesis doc), and Method 2 (multiple inception keys). Code that complies with this RFC, and that intends to accept static DIDs at Layer 2a, MUST accept peer DIDs that use any of these methods. Code that intends to give peer DIDs (Layer 2b) MUST give peer DIDs that use at least one of these three methods.

    "},{"location":"features/0627-static-peer-dids/#implementations","title":"Implementations","text":"

    The following lists the implementations (if any) of this RFC. Please do a pull request to add your implementation. If the implementation is open source, include a link to the repo or to the implementation within the repo. Please be consistent in the \"Name\" field so that a mechanical processing of the RFCs can generate a list of all RFCs supported by an Aries implementation.

    Implementation Notes may need to include a link to test results.

    Name / Link Implementation Notes"},{"location":"features/0641-linking-binary-objects-to-credentials/","title":"0641: Linking binary objects to credentials using hash based references","text":""},{"location":"features/0641-linking-binary-objects-to-credentials/#summary","title":"Summary","text":"

    This RFC provides a solution for issuing and presenting credentials with external binary objects, hereafter referred to as attachments. It is compatible with 0036: Issue Credential Protocol V1, 0453: Issue Credential Protocol V2, 0037: Present Proof V1 protocol and 0454: Present Proof V2 Protocol. These external attachments could consist of images, PDFs, zip files, movies, etc. Through the use of DIDComm attachments, 0017: Attachments, the data can be embedded directly into the attachment or externally hosted. In order to maintain integrity over these attachments, hashlinks are used as the checksum.

    "},{"location":"features/0641-linking-binary-objects-to-credentials/#motivation","title":"Motivation","text":"

    Many use cases, such as a rental agreement or medical data in a verifiable credential, rely on attachments, small or large. At this moment, it is possible to issue credentials with accompanying attachments. When the attachment is rather small, this will work fine. However, larger attachments cause inconsistent timing issues and are resource intensive.

    "},{"location":"features/0641-linking-binary-objects-to-credentials/#tutorial","title":"Tutorial","text":"

    It is already possible to issue and verify base64-encoded attachments in credentials. As a credential gets larger and larger, this becomes more and more impractical, because the whole credential has to be signed, which is time consuming and resource intensive. A solution for this is to use the attachments decorator. This decorator creates a way to externalize the attachment from the credential attributes. By allowing this, the signing will be faster and more consistent. However, DIDComm messages SHOULD stay small (e.g., over transports such as SMTP or Bluetooth), as specified in 0017: Attachments. In the attachments decorator it is also possible to specify a list of URLs where the attachment might be located for download. This list of URLs is accompanied by a sha256 tag that is a checksum over the file to maintain integrity. This sha256 tag can only contain a sha256 hash; if another algorithm is preferred, the hashlink MUST be used as the checksum.

    When issuing and verifying a credential, messages have to be sent between the holder, issuer and verifier. In order to circumvent additional complexity, such as looking at previously sent credentials for the attachment, the attachments decorator, when containing an attachment, MUST be sent at all of the following steps:

    Issue Credential V1 & V2

    1. Credential Proposal
    2. Credential Offer
    3. Credential Request
    4. Credential

    Present Proof V1 & V2

    1. Presentation Proposal
    2. Presentation Request
    3. Presentation
    "},{"location":"features/0641-linking-binary-objects-to-credentials/#linking","title":"Linking","text":"

    When a credential is issued with an attachment in the attachments decorator, be it a base64-encoded file or a hosted file, the link has to be made between the credential and the attachment. The link MUST be made with the attribute.value of the credential and the @id tag of the attachment in the attachments decorator.

    "},{"location":"features/0641-linking-binary-objects-to-credentials/#hashlink","title":"Hashlink","text":"

    A hashlink, as specified in IETF: Cryptographic Hyperlinks, is a formatted hash that has a prefix of hl: and an optional suffix of metadata. The hash in the hashlink is a multihash, which means that the prefix of the hash indicates which hashing algorithm and encoding have been chosen. An example of a hashlink would be:

    hl:zQmcWyBPyedDzHFytTX6CAjjpvqQAyhzURziwiBKDKgqx6R

    This example shows the prefix of hl: indicating that it is a hashlink and the hash after the prefix is a multihash.

    The hashlink also allows for optional metadata, such as a list of URLs where the attachment is hosted and a MIME type. These metadata values are encoded in the CBOR data format using the algorithm specified in section 3.1.2 of IETF: Cryptographic Hyperlinks.

    When a holder receives a credential with hosted attachments, the holder MAY rehost these attachments. A holder would do this in order to prevent the phone-home problem. Whether this matters is use case specific; a holder that does not care about this issue can skip rehosting, but it should be considered.
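    A non-normative sketch of computing the bare hashlink shown above (a sha2-256 multihash, multibase-encoded as base58btc with the z prefix; the helper names are assumptions):

    ```python
    import hashlib

    B58_ALPHABET = "123456789ABCDEFGHJKLMNPQRSTUVWXYZabcdefghijkmnopqrstuvwxyz"

    def base58btc(data: bytes) -> str:
        """Base58btc-encode a byte string (leading zero bytes become '1')."""
        n = int.from_bytes(data, "big")
        out = ""
        while n:
            n, r = divmod(n, 58)
            out = B58_ALPHABET[r] + out
        pad = len(data) - len(data.lstrip(b"\x00"))
        return "1" * pad + out

    def hashlink(content: bytes) -> str:
        # 0x12 = multihash code for sha2-256, 0x20 = digest length (32 bytes)
        multihash = b"\x12\x20" + hashlib.sha256(content).digest()
        # 'z' is the multibase prefix for base58btc
        return "hl:z" + base58btc(multihash)
    ```

    Because the multihash always starts with the bytes 0x12 0x20, the base58btc form always begins with "Qm", matching the hl:zQm... shape of the example above.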

    "},{"location":"features/0641-linking-binary-objects-to-credentials/#inlined-attachments-as-a-credential-attribute","title":"Inlined Attachments as a Credential Attribute","text":"

    Attachments can be inlined in the credential attribute as a base64-encoded string. With this, there is no need for the attachment decorator. Below is an example of embedding a base64-encoded file as a string in a credential attribute.

    {\n  \"name\": \"Picture of a cat\",\n  \"mime-type\": \"image/png\",\n  \"value\": \"VGhpcyBpc ... (many bytes omitted) ... C4gSG93IG5pY2U=\"\n}\n
    "},{"location":"features/0641-linking-binary-objects-to-credentials/#attachments-inlined-in-the-attachment-decorator","title":"Attachments inlined in the Attachment Decorator","text":"

    When the attachments decorator is used to issue a credential with a binary object, a link has to be made between the credential value and the corresponding attachment. This link MUST be a hash, specifically a hashlink based on the checksum of the attachment.

    As stated in 0008: message id and threading, the @id tag of the attachment MUST NOT contain a colon and MUST NOT be longer than 64 characters. Because of this, the @id cannot contain a hashlink and MUST contain the multihash, with a maximum length of 64 characters. When a hash is longer than 64 characters, use the first 64 characters.

    {\n  \"@type\": \"https://didcomm.org/issue-credential/%VER/issue-credential\",\n  \"@id\": \"<uuid of issue message>\",\n  \"goal_code\": \"<goal-code>\",\n  \"replacement_id\": \"<issuer unique id>\",\n  \"comment\": \"<some comment>\",\n  \"formats\": [\n    {\n      \"attach_id\": \"<attach@id value>\",\n      \"format\": \"hlindy/cred@v2.0\"\n    }\n  ],\n  \"credentials~attach\": [\n    {\n      \"@id\": \"<attachment-id>\",\n      \"mime-type\": \"application/json\",\n      \"data\": {\n        \"json\": {\n          \"schema_id\": \"4RW6QK2HZhHxa2tg7t1jqt:2:catSchema:0.3.0\",\n          \"cred_def_id\": \"4RW6QK2HZhHxa2tg7t1jqt:3:CL:58161:default\",\n          \"values\": {\n            \"pictureOfACat\": {\n              \"raw\": \"hl:zQmcWyBPyedDzHFytTX6CAjjpvqQAyhzURziwiBKDKgqx6R\",\n              \"encoded\": \"hl:zQmcWyBPyedDzHFytTX6CAjjpvqQAyhzURziwiBKDKgqx6R\"\n            }\n          },\n          \"signature\": \"<signature>\",\n          \"signature_correctness_proof\": \"<signature_correctness_proof>\"\n        }\n      }\n    }\n  ],\n  \"~attach\": [\n    {\n      \"@id\": \"zQmcWyBPyedDzHFytTX6CAjjpvqQAyhzURziwiBKDKgqx6R\",\n      \"mime-type\": \"image/png\",\n      \"filename\": \"cat.png\",\n      \"byte_count\": 2181,\n      \"lastmod_time\": \"2021-04-20 19:38:07Z\",\n      \"description\": \"Cute picture of a cat\",\n      \"data\": {\n        \"base64\": \"VGhpcyBpcyBhIGNv ... (many bytes omitted) ... R0ZXIgU0hJQkEgSU5VLg==\"\n      }\n    }\n  ]\n}\n
    "},{"location":"features/0641-linking-binary-objects-to-credentials/#hosted-attachments","title":"Hosted attachments","text":"

    The last method of adding a binary object in a credential is by using the attachments decorator in combination with external hosting. In the example below the attachment is hosted at two locations. These two URLs MUST point to the same file and match the integrity check with the sha256 value. It is important to note that when an issuer hosts an attachment and issues a credential with this attachment, the holder should rehost the attachment to prevent the phone-home association.

    {\n  \"@type\": \"https://didcomm.org/issue-credential/%VER/issue-credential\",\n  \"@id\": \"<uuid of issue message>\",\n  \"goal_code\": \"<goal-code>\",\n  \"replacement_id\": \"<issuer unique id>\",\n  \"comment\": \"<some comment>\",\n  \"formats\": [\n    {\n      \"attach_id\": \"<attach@id value>\",\n      \"format\": \"hlindy/cred@v2.0\"\n    }\n  ],\n  \"credentials~attach\": [\n    {\n      \"@id\": \"<attachment-id>\",\n      \"mime-type\": \"application/json\",\n      \"data\": {\n        \"json\": {\n          \"schema_id\": \"4RW6QK2HZhHxa2tg7t1jqt:2:catSchema:0.3.0\",\n          \"cred_def_id\": \"4RW6QK2HZhHxa2tg7t1jqt:3:CL:58161:default\",\n          \"values\": {\n            \"pictureOfACat\": {\n              \"raw\": \"hl:zQmcWyBPyedDzHFytTX6CAjjpvqQAyhzURziwiBKDKgqx6R\",\n              \"encoded\": \"hl:zQmcWyBPyedDzHFytTX6CAjjpvqQAyhzURziwiBKDKgqx6R\"\n            }\n          },\n          \"signature\": \"<signature>\",\n          \"signature_correctness_proof\": \"<signature_correctness_proof>\"\n        }\n      }\n    }\n  ],\n  \"~attach\": [\n    {\n      \"@id\": \"zQmcWyBPyedDzHFytTX6CAjjpvqQAyhzURziwiBKDKgqx6R\",\n      \"mime-type\": \"application/zip\",\n      \"filename\": \"cat.zip\",\n      \"byte_count\": 218187322,\n      \"lastmod_time\": \"2021-04-20 19:38:07Z\",\n      \"description\": \"Cute pictures of multiple cats\",\n      \"data\": {\n        \"links\": [\n          \"https://drive.google.com/kitty/cats.zip\",\n          \"s3://bucket/cats.zip\"\n        ]\n      }\n    }\n  ]\n}\n
    "},{"location":"features/0641-linking-binary-objects-to-credentials/#matching","title":"Matching","text":"

    Now that a link has been made between the attachment in the attachments decorator, it is possible to match the two together. When a credential is received and a value of an attribute starts with hl: it means that there is a linked attachment. To find the attachment linked to a credential attribute, the following steps SHOULD be done:

    1. Extract the multihash from the credential attribute value
    2. Extract the first 64 characters of this multihash
    3. Loop over the @id tag of all the attachments in the attachment decorator
    4. Compare the value of the @id tag with the multihash
    5. If the @id tag matches with the multihash, then there is a link
    6. An integrity check can be done with the original, complete hashlink
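A minimal sketch of these matching steps (hypothetical Python; it assumes the credential attribute value and the parsed ~attach entries are available as plain values):

```python
def find_linked_attachment(attr_value, attachments):
    """Locate the ~attach entry linked to a credential attribute value."""
    if not attr_value.startswith("hl:"):
        return None                          # not a linked attachment
    multihash = attr_value[len("hl:"):]      # 1. extract the multihash
    needle = multihash[:64]                  # 2. take the first 64 characters
    for attachment in attachments:           # 3. loop over the @id tags
        if attachment.get("@id") == needle:  # 4./5. a match means there is a link
            return attachment
    return None                              # 6. caller checks integrity with the full hashlink

attachments = [{"@id": "zQmcWyBPyedDzHFytTX6CAjjpvqQAyhzURziwiBKDKgqx6R", "data": {}}]
match = find_linked_attachment(
    "hl:zQmcWyBPyedDzHFytTX6CAjjpvqQAyhzURziwiBKDKgqx6R", attachments
)
```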
    "},{"location":"features/0641-linking-binary-objects-to-credentials/#reference","title":"Reference","text":"

    When an issuer creates a credential attribute value with a prefix of hl:, but there is no corresponding attachment, a warning SHOULD be raised.

    When DIDComm V2 is implemented, the attachment decorator will no longer contain the sha256 tag; it will be replaced by hash to allow for any algorithm. See DIDComm Messaging Attachments.

    "},{"location":"features/0641-linking-binary-objects-to-credentials/#drawbacks","title":"Drawbacks","text":""},{"location":"features/0641-linking-binary-objects-to-credentials/#rationale-and-alternatives","title":"Rationale and alternatives","text":"

    The findings that large credentials are inconsistent and resource intensive are derived from issuing and verifying credentials of 100 kilobytes to 50 megabytes in Aries Framework JavaScript and Aries Cloudagent Python.

    The Identity Foundation is currently working on confidential storage, a way to allow access to your files based on DIDs. This storage would be a sleek fix for the last drawback.

    "},{"location":"features/0641-linking-binary-objects-to-credentials/#prior-art","title":"Prior art","text":""},{"location":"features/0641-linking-binary-objects-to-credentials/#unresolved-questions","title":"Unresolved questions","text":""},{"location":"features/0641-linking-binary-objects-to-credentials/#implementations","title":"Implementations","text":"Name / Link Implementation Notes"},{"location":"features/0646-bbs-credentials/","title":"0646: W3C Credential Exchange using BBS+ Signatures","text":""},{"location":"features/0646-bbs-credentials/#summary","title":"Summary","text":"

    This RFC describes how the Hyperledger Aries community should use BBS+ Signatures that conform with the Linked-Data Proofs Specification to perform exchange of credentials that comply with the W3C Verifiable Credential specification.

    Key features include:

    This RFC sets guidelines for their safe usage and describes privacy-enabling features that should be incorporated.

    The usage of zero-knowledge proofs, selective disclosure and signature blinding are already supported using the specifications as described in this document. Support for private holder binding and privacy preserving revocation will be added in the future.

    "},{"location":"features/0646-bbs-credentials/#motivation","title":"Motivation","text":"

    Aries currently supports credential formats used by Indy (Anoncreds based on JSON) and Aries-Framework-Go. BBS+ signatures with JSON-LD Proofs provide a unified credential format that includes strong privacy-protecting anti-correlation features and wide interoperability with verifiable credentials outside the Aries ecosystem.

    "},{"location":"features/0646-bbs-credentials/#tutorial","title":"Tutorial","text":""},{"location":"features/0646-bbs-credentials/#issuing-credentials","title":"Issuing Credentials","text":"

    This section covers the process of issuing credentials with BBS+ signatures. The first section (Creating BBS+ Credentials) highlights the process of creating credentials with BBS+ signatures, while the next section focuses on the process of exchanging credentials with BBS+ signatures (Exchanging BBS+ Credentials).

    "},{"location":"features/0646-bbs-credentials/#creating-bbs-credentials","title":"Creating BBS+ Credentials","text":"

    The process to create verifiable credentials with BBS+ signatures is mostly covered by the VC Data Model and BBS+ LD-Proofs specifications. At the date of writing this RFC, the BBS+ LD-Proofs specification still has some unresolved issues. The issues are documented in the Issues with the BBS+ LD-Proofs specification section below.

    Aries implementations MUST use the BBS+ Signature Suite 2020 to create verifiable credentials with BBS+ signatures, identified by the BbsBlsSignature2020 proof type.

    NOTE: Once the signature suites for bound signatures (private holder binding) are defined in the BBS+ LD-Proofs spec, the use of the BbsBlsSignature2020 suite will be deprecated and superseded by the BbsBlsBoundSignature2020 signature suite. See Private Holder Binding below for more information.

    "},{"location":"features/0646-bbs-credentials/#identifiers-in-issued-credentials","title":"Identifiers in Issued Credentials","text":"

    It is important to note that due to limitations of the underlying RDF canonicalization scheme, which is used by BBS+ LD-Proofs, issued credentials SHOULD NOT have any id properties, as the value of these properties will be revealed during the RDF canonicalization process, regardless of whether or not the holder chooses to disclose them.

    Credentials can make use of other identifier properties to create selectively disclosable identifiers. An example of this is the identifier property from the Citizenship Vocabulary

    "},{"location":"features/0646-bbs-credentials/#private-holder-binding","title":"Private Holder Binding","text":"

    A private holder binding allows the holder of a credential to authenticate itself without disclosing a correlating identifier (such as a DID) to the verifier. The current BBS+ LD-Proofs specification does not yet describe a mechanism for private holder binding, but it is expected this will be done using two new signature suites: BbsBlsBoundSignature2020 and BbsBlsBoundSignatureProof2020. Both suites feature a commitment to a private key held by the credential holder, of which the holder proves knowledge when deriving proofs, without ever directly revealing the private key or a unique identifier linked to it (e.g. its complementary public key).

    "},{"location":"features/0646-bbs-credentials/#usage-of-credential-schema","title":"Usage of Credential Schema","text":"

    The zero-knowledge proof section of the VC Data Model requires verifiable credentials used in zero-knowledge proof systems to include a credential definition using the credentialSchema property. Due to the nature of how BBS+ LD proofs work, it is NOT required to include the credentialSchema property. See Issue 726 in the VC Data Model.

    "},{"location":"features/0646-bbs-credentials/#example-bbs-credential","title":"Example BBS+ Credential","text":"

    Below is a complete example of a Verifiable Credential with BBS+ linked data proof.

    {\n  \"@context\": [\n    \"https://www.w3.org/2018/credentials/v1\",\n    \"https://w3id.org/citizenship/v1\",\n    \"https://w3id.org/security/bbs/v1\" // <-- BBS+ context\n  ],\n  \"id\": \"https://issuer.oidp.uscis.gov/credentials/83627465\",\n  \"type\": [\"VerifiableCredential\", \"PermanentResidentCard\"],\n  \"issuer\": \"did:example:489398593\",\n  \"identifier\": \"83627465\", // <-- `identifier` property allows for seletively disclosable id property\n  \"name\": \"Permanent Resident Card\",\n  \"description\": \"Government of Example Permanent Resident Card.\",\n  \"issuanceDate\": \"2019-12-03T12:19:52Z\",\n  \"expirationDate\": \"2029-12-03T12:19:52Z\",\n  \"credentialSubject\": {\n    \"id\": \"did:example:b34ca6cd37bbf23\",\n    \"type\": [\"PermanentResident\", \"Person\"],\n    \"givenName\": \"JOHN\",\n    \"familyName\": \"SMITH\",\n    \"gender\": \"Male\",\n    \"image\": \"data:image/png;base64,iVBORw0KGgokJggg==\",\n    \"residentSince\": \"2015-01-01\",\n    \"lprCategory\": \"C09\",\n    \"lprNumber\": \"999-999-999\",\n    \"commuterClassification\": \"C1\",\n    \"birthCountry\": \"Bahamas\",\n    \"birthDate\": \"1958-07-17\"\n  },\n  \"proof\": {\n    \"type\": \"BbsBlsSignature2020\", // <-- type must be `BbsBlsSignature2020`\n    \"created\": \"2020-10-16T23:59:31Z\",\n    \"proofPurpose\": \"assertionMethod\",\n    \"proofValue\": \"kAkloZSlK79ARnlx54tPqmQyy6G7/36xU/LZgrdVmCqqI9M0muKLxkaHNsgVDBBvYp85VT3uouLFSXPMr7Stjgq62+OCunba7bNdGfhM/FUsx9zpfRtw7jeE182CN1cZakOoSVsQz61c16zQikXM3w==\",\n    \"verificationMethod\": \"did:example:489398593#test\"\n  }\n}\n
    "},{"location":"features/0646-bbs-credentials/#exchanging-bbs-credentials","title":"Exchanging BBS+ Credentials","text":"

    While the process of creating credentials with BBS+ signatures is defined in specifications outside of Aries, the process of exchanging credentials with BBS+ signatures is defined within Aries.

    Credentials with BBS+ signatures can be exchanged by following RFC 0453: Issue Credential Protocol 2.0. The Issue Credential 2.0 provides a registry of attachment formats that can be used for credential exchange. Currently, agents are expected to use the format as described in RFC 0593 (see below).

    NOTE: Once Credential Manifest v1.0 is released, RFC 0593 is expected to be deprecated and replaced by an updated version of RFC 0511: Credential-Manifest Attachment format

    "},{"location":"features/0646-bbs-credentials/#0593-json-ld-credential-attachment-format","title":"0593: JSON-LD Credential Attachment format","text":"

    RFC 0593: JSON-LD Credential Attachment format for requesting and issuing credentials defines a very simple, feature-poor attachment format for issuing JSON-LD credentials.

    The only requirement for exchanging BBS+ credentials, in addition to the requirements as specified in Creating BBS+ Credentials and RFC 0593, is that the options.proofType in the ld-proof-vc-detail MUST be BbsBlsSignature2020.

    "},{"location":"features/0646-bbs-credentials/#presenting-derived-credentials","title":"Presenting Derived Credentials","text":"

    This section highlights the process of creating and presenting derived BBS+ credentials containing a BBS+ proof of knowledge.

    "},{"location":"features/0646-bbs-credentials/#deriving-credentials","title":"Deriving Credentials","text":"

    Deriving credentials should be done according to the BBS+ Signature Proof Suite 2020

    "},{"location":"features/0646-bbs-credentials/#disclosing-required-properties","title":"Disclosing Required Properties","text":"

    A verifiable presentation MUST NOT leak information that would enable the verifier to correlate the holder across multiple verifiable presentations.

    The above section from the VC Data Model may give the impression that it is allowed to omit required properties from a derived credential if this prevents correlation. However, things the holder chooses to reveal are in a different category from things the holder MUST reveal. Derived credentials MUST disclose required properties, even if those properties could be used to correlate the holder.

    For example, a credential with an issuanceDate of 2017-12-05T14:27:42Z could create a correlating factor; however, omitting the property would violate the VC Data Model. Take this into account when issuing credentials.

    "},{"location":"features/0646-bbs-credentials/#transforming-blank-node-identifiers","title":"Transforming Blank Node Identifiers","text":"

    This section will be removed once Issue 10 in the LD Proof BBS+ spec is resolved.

    For the verifier to be able to verify the signature of a derived credential it should be able to deterministically normalize the credentials statements for verification. RDF Dataset Canonicalization defines a way in which to allocate identifiers for blank nodes deterministically for normalization. However, the algorithm does not guarantee that the same blank node identifiers will be allocated in the event of modifications to the graph. Because selective disclosure of signed statements modifies the graph as presented to the verifier, the blank node identifiers must be transformed into actual node identifiers when presented to the verifier.

    The BBS+ LD-Proofs specification does not define a mechanism to transform blank node identifiers into actual identifiers. Current implementations use the mechanism as described in this Issue Comment. Some reference implementations:

    "},{"location":"features/0646-bbs-credentials/#verifying-presented-derived-credentials","title":"Verifying Presented Derived Credentials","text":""},{"location":"features/0646-bbs-credentials/#transforming-back-into-blank-node-identifiers","title":"Transforming Back into Blank Node Identifiers","text":"

    This section will be removed once Issue 10 in the LD Proof BBS+ spec is resolved.

    Transforming the blank node identifiers into actual node identifiers in the derived credential means the verification data will be different from the verification data at issuance, invalidating the signature. Therefore the blank node identifier placeholders should be transformed back into blank node identifiers before verification.

    Same as with Transforming Blank Node Identifiers, current implementations use the mechanism as described in this Issue Comment. Some reference implementations:

    "},{"location":"features/0646-bbs-credentials/#exchanging-derived-credentials","title":"Exchanging Derived Credentials","text":"

    The presentation of credentials with BBS+ signatures can be exchanged by following RFC 0454: Present Proof Protocol 2.0. The Present Proof Protocol 2.0 provides a registry of attachment formats that can be used for presentation exchange. Although agents can use any attachment format they want, agents are expected to use the format as described in RFC 0510 (see below).

    "},{"location":"features/0646-bbs-credentials/#0510-presentation-exchange-attachment-format","title":"0510: Presentation-Exchange Attachment format","text":"

    RFC 0510: Presentation-Exchange Attachment format for requesting and presenting proofs defines an attachment format based on the DIF Presentation Exchange specification.

    The following part of this section describes the requirements of exchanging derived credentials using the Presentation Exchange Attachment format, in addition to the requirements as specified above and in RFC 0510.

    The Presentation Exchange MUST include the ldp_vp Claim Format Designation. In turn the proof_type property of the ldp_vp claim format designation MUST include the BbsBlsSignatureProof2020 proof type.

    "},{"location":"features/0646-bbs-credentials/#example-bbs-derived-credential","title":"Example BBS+ Derived Credential","text":"
    {\n  \"@context\": [\n    \"https://www.w3.org/2018/credentials/v1\",\n    \"https://w3id.org/citizenship/v1\",\n    \"https://w3id.org/security/bbs/v1\" // BBS + Context\n  ],\n  \"id\": \"https://issuer.oidp.uscis.gov/credentials/83627465\",\n  \"type\": [\"PermanentResidentCard\", \"VerifiableCredential\"],\n  \"description\": \"Government of Example Permanent Resident Card.\",\n  \"identifier\": \"83627465\",\n  \"name\": \"Permanent Resident Card\",\n  \"credentialSubject\": {\n    \"id\": \"did:example:b34ca6cd37bbf23\",\n    \"type\": [\"Person\", \"PermanentResident\"],\n    \"familyName\": \"SMITH\",\n    \"gender\": \"Male\",\n    \"givenName\": \"JOHN\"\n  },\n  \"expirationDate\": \"2029-12-03T12:19:52Z\",\n  \"issuanceDate\": \"2019-12-03T12:19:52Z\",\n  \"issuer\": \"did:example:489398593\",\n  \"proof\": {\n    \"type\": \"BbsBlsSignatureProof2020\", // <-- type must be `BbsBlsSignatureProof2020`\n    \"nonce\": \"wrmPiSRm+iBqnGBXz+/37LLYRZWirGgIORKHIkrgWVnHtb4fDe/4ZPZaZ+/RwGVJYYY=\",\n    \"proofValue\": \"ABkB/wbvt6213E9eJ+aRGbdG1IIQtx+IdAXALLNg2a5ENSGOIBxRGSoArKXwD/diieDWG6+0q8CWh7CViUqOOdEhYp/DonzmjoWbWECalE6x/qtyBeE7W9TJTXyK/yW6JKSKPz2ht4J0XLV84DZrxMF4HMrY7rFHvdE4xV7ULeC9vNmAmwYAqJfNwY94FG2erg2K2cg0AAAAdLfutjMuBO0JnrlRW6O6TheATv0xZZHP9kf1AYqPaxsYg0bq2XYzkp+tzMBq1rH3tgAAAAIDTzuPazvFHijdzuAgYg+Sg0ziF+Gw5Bz8r2cuvuSg1yKWqW1dM5GhGn6SZUpczTXuZuKGlo4cZrwbIg9wf4lBs3kQwWULRtQUXki9izmznt4Go98X/ElOguLLum4S78Gehe1ql6CXD1zS5PiDXjDzAAAACWz/sbigWpPmUqNA8YUczOuzBUvzmkpjVyL9aqf1e7rSZmN8CNa6dTGOzgKYgDGoIbSQR8EN8Ld7kpTIAdi4YvNZwEYlda/BR6oSrFCquafz7s/jeXyOYMsiVC53Zls9KEg64tG7n90XuZOyMk9RAdcxYRGligbFuG2Ap+rQ+rrELJaW7DWwFEI6cRnitZo6aS0hHmiOKKtJyA7KFbx27nBGd2y3JCvgYO6VUROQ//t3F4aRVI1U53e5N3MU+lt9GmFeL+Kv+2zV1WssScO0ZImDGDOvjDs1shnNSjIJ0RBNAo2YzhFKh3ExWd9WbiZ2/USSyomaSK4EzdTDqi2JCGdqS7IpooKSX/1Dp4K+d8HhPLGNLX4yfMoG9SnRfRQZZQ==\",\n    \"verificationMethod\": \"did:example:489398593#test\",\n    \"proofPurpose\": \"assertionMethod\",\n    \"created\": 
\"2020-10-16T23:59:31Z\"\n  }\n}\n
    "},{"location":"features/0646-bbs-credentials/#privacy-considerations","title":"Privacy Considerations","text":"

    Private Holder Binding is an evolution of CL Signatures Linked Secrets.

    "},{"location":"features/0646-bbs-credentials/#reference","title":"Reference","text":""},{"location":"features/0646-bbs-credentials/#interoperability-with-existing-credential-formats","title":"Interoperability with Existing Credential Formats","text":"

    We expect that many issuers will choose to shift exclusively to BBS+ credentials for the benefits described here. Accessing these benefits will require reissuing credentials that were previously in a different format.

    An issuer can issue duplicate credentials with both signature formats.

    A holder can hold both types of credentials. The holder wallet could display the two credentials as a single entry in their credential list if the data is the same (it\u2019s \u201cenhanced\u201d with both credential formats).

    A verifier can send a proof request for the formats that they choose to support.

    "},{"location":"features/0646-bbs-credentials/#issues-with-the-bbs-ld-proofs-specification","title":"Issues with the BBS+ LD-Proofs specification","text":""},{"location":"features/0646-bbs-credentials/#drawbacks","title":"Drawbacks","text":"

    Existing implementations of BBS+ Signatures do not support ZKP proof predicates, but it is theoretically possible to support numeric date predicates. ZKP proof predicates are considered a key feature of CL signatures, and a migration to BBS+ LD-Proofs will lose this capability. The Indy maintainers consider this a reasonable trade-off to get the other benefits of BBS+ LD-Proofs. A mechanism to support predicates can hopefully be added in future work.

    As mentioned in the Private Holder Binding section, the BBS+ LD-Proofs specification does not define a mechanism for private holder binding yet. This means implementing this RFC does not provide all privacy-enabling features that should be incorporated until the BbsBlsBoundSignature2020 and BbsBlsBoundSignatureProof2020 signature suites are formally defined.

    "},{"location":"features/0646-bbs-credentials/#rationale-and-alternatives","title":"Rationale and alternatives","text":"

    BBS+ LD-Proofs is a reasonable evolution of CL Signatures, as it supports most of the same features (with the exception of ZKP Proof Predicates), while producing smaller credentials that require fewer computational resources to validate (a key requirement for mobile use cases).

    BBS+ LD-Proofs are receiving broad support across the verifiable credentials implementation community, so supporting this signature format will be strategic for interoperability and allow Aries to promote the privacy preserving capabilities such as zero knowledge proofs and private holder binding.

    "},{"location":"features/0646-bbs-credentials/#prior-art","title":"Prior art","text":"

    Indy Anoncreds used CL Signatures to meet many of the use cases currently envisioned for BBS+ LD-Proofs.

    BBS+ Signatures were originally proposed by Boneh, Boyen, and Shacham in 2004.

    The approach was improved by Au, Susilo, and Mu in 2006.

    It was then further refined by Camenisch, Drijvers, and Lehmann in section 4.3 of this paper from 2016.

    In 2019, Evernym and Sovrin proposed BBS+ Signatures as the foundation for Indy Anoncreds 2.0, which in conjunction with Rich Schemas addressed a similar set of goals and capabilities as those addressed here, but were ultimately too heavy a solution.

    In 2020, Mattr provided a draft specification for BBS+ LD-Proofs that comply with the Linked Data proof specification in the W3C Credentials Community Group. The authors acknowledged that their approach did not support two key Anoncreds features: proof predicates and link secrets.

    Aries RFC 593 describes the JSON-LD credential format.

    "},{"location":"features/0646-bbs-credentials/#unresolved-questions","title":"Unresolved questions","text":"

    See the above note in the Drawbacks Section about ZKP predicates.

    "},{"location":"features/0646-bbs-credentials/#implementations","title":"Implementations","text":"

    The following lists the implementations (if any) of this RFC. Please do a pull request to add your implementation. If the implementation is open source, include a link to the repo or to the implementation within the repo. Please be consistent in the \"Name\" field so that a mechanical processing of the RFCs can generate a list of all RFCs supported by an Aries implementation.

    Implementation Notes may need to include a link to test results.

    Name / Link Implementation Notes"},{"location":"features/0685-pickup-v2/","title":"0685: Pickup Protocol 2.0","text":""},{"location":"features/0685-pickup-v2/#summary","title":"Summary","text":"

    A protocol to facilitate an agent picking up messages held at a mediator.

    "},{"location":"features/0685-pickup-v2/#motivation","title":"Motivation","text":"

    Messages can be picked up simply by sending a message to the Mediator with a return_route decorator specified. This mechanism is implicit, and lacks some desired behavior made possible by more explicit messages.

    This protocol is the explicit companion to the implicit method of picking up messages.

    "},{"location":"features/0685-pickup-v2/#tutorial","title":"Tutorial","text":""},{"location":"features/0685-pickup-v2/#roles","title":"Roles","text":"

    Mediator - The agent that has messages waiting for pickup by the Recipient.

    Recipient - The agent who is picking up messages.

    "},{"location":"features/0685-pickup-v2/#flow","title":"Flow","text":"

    The status-request message is sent by the Recipient to the Mediator to query how many messages are pending.

    The status message is the response to status-request to communicate the state of the message queue.

    The delivery-request message is sent by the Recipient to request delivery of pending messages.

    The message-received message is sent by the Recipient to confirm receipt of delivered messages, prompting the Mediator to clear messages from the queue.

    The live-delivery-change message is used to set the state of live_delivery.

    "},{"location":"features/0685-pickup-v2/#reference","title":"Reference","text":"

    Each message sent MUST use the ~transport decorator as follows, which has been adopted from RFC 0092 transport return route protocol. This has been omitted from the examples for brevity.

    ```json= \"~transport\": { \"return_route\": \"all\" }

    ## Message Types\n\n### Status Request\n\nSent by the _Recipient_ to the _Mediator_ to request a `status` message.\n#### Example:\n\n```json=\n{\n    \"@id\": \"123456781\",\n    \"@type\": \"https://didcomm.org/messagepickup/2.0/status-request\",\n    \"recipient_key\": \"<key for messages>\"\n}\n

    recipient_key is optional. When specified, the Mediator MUST only return status related to that recipient key. This allows the Recipient to discover if any messages are in the queue that were sent to a specific key. You can find more details about recipient_key and how it's managed in 0211-route-coordination.

    "},{"location":"features/0685-pickup-v2/#status","title":"Status","text":"

    Status details about waiting messages.

    "},{"location":"features/0685-pickup-v2/#example","title":"Example:","text":"

    ```json= { \"@id\": \"123456781\", \"@type\": \"https://didcomm.org/messagepickup/2.0/status\", \"recipient_key\": \"\", \"message_count\": 7, \"longest_waited_seconds\": 3600, \"newest_received_time\": \"2019-05-01 12:00:00Z\", \"oldest_received_time\": \"2019-05-01 12:00:01Z\", \"total_bytes\": 8096, \"live_delivery\": false }

    `message_count` is the only REQUIRED attribute. The others MAY be present if offered by the _Mediator_.\n\n`longest_waited_seconds` is in seconds, and is the longest delay of any message in the queue.\n\n`total_bytes` represents the total size of all messages.\n\nIf a `recipient_key` was specified in the `status-request` message, the matching value MUST be specified \nin the `recipient_key` attribute of the status message.\n\n`live_delivery` state is also indicated in the status message. \n\n> Note: due to the potential for confusing what the actual state of the message queue\n> is, a status message MUST NOT be put on the pending message queue and MUST only\n> be sent when the _Recipient_ is actively connected (HTTP request awaiting\n> response, WebSocket, etc.).\n\n### Delivery Request\n\nA request from the _Recipient_ to the _Mediator_ to have pending messages delivered. \n\n#### Examples:\n\n```json=\n{\n    \"@id\": \"123456781\",\n    \"@type\": \"https://didcomm.org/messagepickup/2.0/delivery-request\",\n    \"limit\": 10,\n    \"recipient_key\": \"<key for messages>\"\n}\n

    ```json= { \"@type\": \"https://didcomm.org/messagepickup/2.0/delivery-request\", \"limit\": 1 }

    `limit` is a REQUIRED attribute, and after receipt of this message, the _Mediator_ SHOULD deliver up to the `limit` indicated. \n\n`recipient_key` is optional. When [specified](), the _Mediator_ MUST only return messages sent to that recipient key.\n\nIf no messages are available to be sent, a `status` message MUST be sent immediately.\n\nDelivered messages MUST NOT be deleted until delivery is acknowledged by a `messages-received` message.\n\n### Message Delivery\n\nMessages delivered from the queue must be delivered in a batch `delivery` message as attachments. The ID of each attachment is used to confirm receipt. The ID is an opaque value, and the _Recipient_ should not infer anything from the value.\n\nThe ONLY valid type of attachment for this message is a DIDComm Message in encrypted form.\n\nThe `recipient_key` attribute is only included when responding to a `delivery-request` message that indicates a `recipient_key`.\n\n```json=\n{\n    \"@id\": \"123456781\",\n    \"~thread\": {\n        \"thid\": \"<message id of delivery-request message>\"\n      },\n    \"@type\": \"https://didcomm.org/messagepickup/2.0/delivery\",\n    \"recipient_key\": \"<key for messages>\",\n    \"~attach\": [{\n        \"@id\": \"<messageid>\",\n        \"data\": {\n            \"base64\": \"\"\n        }\n    }]\n}\n

    This method of delivery does incur an encoding cost, but is much simpler to implement and results in a more robust interaction.
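Processing a delivery message and acknowledging it can be sketched as follows (hypothetical Python; the messages-received ack is described in the next section, and unpacking of each encrypted DIDComm message is elided):

```python
import base64

def handle_delivery(delivery):
    """Decode the attachments of a 'delivery' message and build the
    'messages-received' ack that lets the mediator clear its queue."""
    received_ids = []
    for attachment in delivery.get("~attach", []):
        raw = base64.b64decode(attachment["data"]["base64"])
        # ... hand `raw` (an encrypted DIDComm message) to the unpack pipeline ...
        received_ids.append(attachment["@id"])  # the @id is opaque; echo it back unchanged
    return {
        "@type": "https://didcomm.org/messagepickup/2.0/messages-received",
        "message_id_list": received_ids,
    }
```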

    "},{"location":"features/0685-pickup-v2/#messages-received","title":"Messages Received","text":"

    After receiving messages, the Recipient sends an ack message indicating which messages are safe to clear from the queue.

    "},{"location":"features/0685-pickup-v2/#example_1","title":"Example:","text":"

    ```json= { \"@type\": \"https://didcomm.org/messagepickup/2.0/messages-received\", \"message_id_list\": [\"123\",\"456\"] }

    `message_id_list` is a list of ids of each message received. The id of each message is present in the attachment descriptor of each attached message of a `delivery` message.\n\nUpon receipt of this message, the _Mediator_ knows which messages have been received, and can remove them from the collection of queued messages with confidence. The mediator SHOULD send an updated `status` message reflecting the changes to the queue.\n\n### Multiple Recipients\n\nIf a message arrives at a _Mediator_ addressed to multiple _Recipients_, the message MUST be queued for each _Recipient_ independently. If one of the addressed _Recipients_ retrieves a message and indicates it has been received, that message MUST still be held and then removed by the other addressed _Recipients_.\n\n## Live Mode\nLive mode is the practice of delivering newly arriving messages directly to a connected _Recipient_. It is disabled by default and only activated by the _Recipient_. Messages that arrive when Live Mode is off MUST be stored in the queue for retrieval as described above. If Live Mode is active, and the connection is broken, a new inbound connection starts with Live Mode disabled.\n\nMessages already in the queue are not affected by Live Mode - they must still be requested with `delivery-request` messages.\n\nLive mode MUST only be enabled when a persistent transport is used, such as WebSockets.\n\n_Recipients_ have three modes of possible operation for message delivery with various abilities and level of development complexity:\n\n1. Never activate live mode. Poll for new messages with a `status_request` message, and retrieve them when available.\n2. Retrieve all messages from queue, and then activate Live Mode. This simplifies message processing logic in the _Recipient_.\n3. Activate Live Mode immediately upon connecting to the _Mediator_. Retrieve messages from the queue as possible. 
When receiving a message delivered live, the queue may be queried for any waiting messages delivered to the same key for processing.\n\n### Live Mode Change\nLive Mode is changed with a `live-delivery-change` message.\n\n#### Example:\n\n```json=\n{\n    \"@type\": \"https://didcomm.org/messagepickup/2.0/live-delivery-change\",\n    \"live_delivery\": true\n}\n

    Upon receiving the live_delivery_change message, the Mediator MUST respond with a status message.
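    As a sketch only (the `status` message is defined earlier in this protocol and may carry further optional fields, such as a recipient key), such a response might look like:

```json
{
    "@type": "https://didcomm.org/messagepickup/2.0/status",
    "message_count": 7
}
```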

    If sent with live_delivery set to true on a connection incapable of live delivery, a problem_report SHOULD be sent as follows:

    ```json= { \"@type\": \"https://didcomm.org/notification/1.0/problem-report\", \"~thread\": { \"pthid\": \"<message id of offending live_delivery_change>\" }, \"description\": \"Connection does not support Live Delivery\" }

    "},{"location":"features/0685-pickup-v2/#prior-art","title":"Prior art","text":"

    Version 1.0 of this protocol served as the main inspiration for this version. Version 1.0 suffered from not being very explicit and from an incomplete model of message delivery signaling.

    "},{"location":"features/0685-pickup-v2/#alternatives","title":"Alternatives","text":""},{"location":"features/0685-pickup-v2/#implementations","title":"Implementations","text":"

    The following lists the implementations (if any) of this RFC. Please do a pull request to add your implementation. If the implementation is open source, include a link to the repo or to the implementation within the repo. Please be consistent in the \"Name\" field so that a mechanical processing of the RFCs can generate a list of all RFCs supported by an Aries implementation.

    Name / Link Implementation Notes"},{"location":"features/0693-credential-representation/","title":"0693: Cross-Platform Credential Representation","text":""},{"location":"features/0693-credential-representation/#summary","title":"Summary","text":"

    Aries Agent developers currently build end user products without a standard method of rendering credentials. This RFC proposes how the Aries community can reuse available open technologies to build such a rendering method.

    Key results include: - Feasibility of cross-platform rendering. - Enabling branding of credentials.

    This RFC also enumerates the specific challenges that could be tackled next by using this method.

    "},{"location":"features/0693-credential-representation/#motivation","title":"Motivation","text":"

    The human-computer interaction between agents and their users will always gravitate around credentials. This interaction is more useful for users when the representation of credentials resembles that of their conventional (physical) counterparts.

    Achieving effortless semiotic parity with analog credentials doesn't come easy or cheap. In fact, when reviewing new Aries-based projects, it is always the case that rendering credentials with any form of branding is a demanding portion of the roadmap.

    Since the work required here is never declarative, it never stops feeling Sisyphean. Indeed, the cost of writing code to represent a credential remains constant over time, no matter how many times we do it.

    Imagine if we could achieve declarative rendering while also empowering branding.

    "},{"location":"features/0693-credential-representation/#entering-svg","title":"Entering SVG","text":"

    The solution we propose is to adopt SVG as the default format for describing how to represent SSI credentials, and to introduce a convention that ensures credential values can be embedded in the final user interface. The following images illustrate how this can work:

    "},{"location":"features/0693-credential-representation/#svg-credential-values","title":"SVG + Credential Values","text":"

    We propose a notation of the form {{credential.values.[AttributeName]}} and {{credential.names.[AttributeName]}}. This way, both the values and the names of attributes can be used in branding activities.
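As an illustrative sketch of how this notation could be applied (the helper function, regex, and attribute name below are hypothetical, not part of this RFC), a renderer might substitute the tokens like this:

```python
import re

def render_credential_svg(svg_template, attributes):
    """Replace {{credential.values.X}} and {{credential.names.X}} tokens.

    `attributes` maps attribute names to their values; for `names` tokens the
    attribute name itself is rendered (a simplification: real credentials may
    carry localized display names).
    """
    def substitute(match):
        kind, attr = match.group(1), match.group(2)
        if kind == "values":
            # Substitute the credential's value for this attribute.
            return str(attributes.get(attr, ""))
        # kind == "names": substitute the attribute's name.
        return attr

    return re.sub(r"\{\{credential\.(values|names)\.(\w+)\}\}", substitute, svg_template)

# Hypothetical template and credential data.
template = '<svg xmlns="http://www.w3.org/2000/svg"><text>{{credential.names.givenName}}: {{credential.values.givenName}}</text></svg>'
print(render_credential_svg(template, {"givenName": "Alice"}))
```

Because the template stays plain SVG, a designer can produce it in any standard vector tool and the wallet only needs a text substitution pass before handing it to its native SVG renderer.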

    "},{"location":"features/0693-credential-representation/#cross-platform","title":"Cross Platform","text":"

    Since SVG is a web standard based on XML, there is no shortage of existing tools to power brand and engineering needs right away. Indeed, any implementation can be powered by a native SVG renderer and XML parser.

    "},{"location":"features/0693-credential-representation/#future-work","title":"Future work","text":""},{"location":"features/0699-push-notifications-apns/","title":"Aries RFC 0699: Push Notifications apns Protocol 1.0","text":"

    Note: This protocol is currently written to support native push notifications for iOS via Apple Push Notification Service. For the implementation for Android (using fcm), please refer to 0734: Push Notifications fcm

    "},{"location":"features/0699-push-notifications-apns/#summary","title":"Summary","text":"

    A protocol to coordinate a push notification configuration between two agents.

    "},{"location":"features/0699-push-notifications-apns/#motivation","title":"Motivation","text":"

    This protocol would give an agent enough information to send push notifications about specific events to an iOS device. This would be of great benefit for mobile wallets, as a holder can be notified when new messages are pending at the mediator. Mobile applications, such as wallets, are often killed and can no longer receive messages from the mediator. Push notifications would resolve this problem.

    "},{"location":"features/0699-push-notifications-apns/#tutorial","title":"Tutorial","text":""},{"location":"features/0699-push-notifications-apns/#name-and-version","title":"Name and Version","text":"

    URI: https://didcomm.org/push-notifications-apns/1.0

    Protocol Identifier: push-notifications-apns

    Version: 1.0

    Since apns only supports iOS, no -ios or -android is required as it is implicit.

    "},{"location":"features/0699-push-notifications-apns/#key-concepts","title":"Key Concepts","text":"

    When an agent would like to receive push notifications at record event changes, e.g. incoming credential offer, incoming connection request, etc., the agent could initiate the protocol by sending a message to the other agent.

    This protocol only defines how an agent would get the token which is necessary for push notifications.

    Each platform has its own protocol so that we can easily use 0031: Discover Features 1.0 and 0557: Discover Features 2.X to see which specific services are supported by the other agent.

    "},{"location":"features/0699-push-notifications-apns/#roles","title":"Roles","text":"

    notification-sender

    notification-receiver

    The notification-sender is an agent who will send the notification-receiver notifications. The notification-receiver can get and set their push notification configuration at the notification-sender.

    "},{"location":"features/0699-push-notifications-apns/#services","title":"Services","text":"

    This RFC focuses on configuring the data necessary for pushing notifications to iOS, via apns.

    In order to implement this protocol, the set-device-info and get-device-info messages MUST be implemented by the notification-sender, and the device-info message MUST be implemented by the notification-receiver.

    "},{"location":"features/0699-push-notifications-apns/#supported-services","title":"Supported Services","text":"

    The protocol currently supports the following push notification services:

    "},{"location":"features/0699-push-notifications-apns/#messages","title":"Messages","text":"

    When a notification-receiver wants to receive push notifications from the notification-sender, the notification-receiver has to send the following message:

    "},{"location":"features/0699-push-notifications-apns/#set-device-info","title":"Set Device Info","text":"

    Message to set the device info using the native iOS device token for push notifications.

    {\n  \"@type\": \"https://didcomm.org/push-notifications-apns/1.0/set-device-info\",\n  \"@id\": \"<UUID>\",\n  \"device_token\": \"<DEVICE_TOKEN>\"\n}\n

    Description of the fields:

    It is important to note that the set device info message can be used to set, update, and remove the device info. To set and update these values, the normal message as stated above can be used. To remove yourself from receiving push notifications, you can send the same message where all values MUST be null. If either value is null, a problem-report MAY be sent back with missing-value.
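    Following the removal rule above, a de-registration message could look like this sketch, with the value explicitly set to null:

```json
{
  "@type": "https://didcomm.org/push-notifications-apns/1.0/set-device-info",
  "@id": "<UUID>",
  "device_token": null
}
```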

    "},{"location":"features/0699-push-notifications-apns/#get-device-info","title":"Get Device Info","text":"

    When a notification-receiver wants to get their push-notification configuration, they can send the following message:

    {\n  \"@type\": \"https://didcomm.org/push-notifications-apns/1.0/get-device-info\",\n  \"@id\": \"<UUID>\"\n}\n
    "},{"location":"features/0699-push-notifications-apns/#device-info","title":"Device Info","text":"

    Response to the get device info:

    {\n  \"@type\": \"https://didcomm.org/push-notifications-apns/1.0/device-info\",\n  \"device_token\": \"<DEVICE_TOKEN>\",\n  \"~thread\": {\n    \"thid\": \"<GET_DEVICE_INFO_UUID>\"\n  }\n}\n

    This message can be used by the notification-receiver to receive their device info, e.g. device_token. If the notification-sender does not have this field for that connection, a problem-report MAY be used as a response with not-registered-for-push-notifications.

    "},{"location":"features/0699-push-notifications-apns/#adopted-messages","title":"Adopted messages","text":"

    In addition, the ack message is adopted into the protocol for confirmation by the notification-sender. The ack message SHOULD be sent in response to any of the set-device-info messages.

    "},{"location":"features/0699-push-notifications-apns/#sending-push-notifications","title":"Sending Push Notifications","text":"

    When an agent wants to send a push notification to another agent, the payload of the push notifications MUST include the @type property, and COULD include the message_tag property, to indicate the message is sent by the notification-sender. Guidelines on notification messages are not defined.

    {\n  \"@type\": \"https://didcomm.org/push-notifications-apns\",\n  \"message_tag\": \"<MESSAGE_TAG>\",\n  \"message_id\": \"<MESSAGE_ID>\",\n  ...\n}\n

    Description of the fields:

    "},{"location":"features/0699-push-notifications-apns/#drawbacks","title":"Drawbacks","text":"

    Each service requires a considerable amount of domain knowledge. The RFC can be extended with new services over time.

    The @type property in the push notification payload currently doesn't indicate which agent the push notification came from. When multiple mediators are used, for example, this means the notification-receiver does not know which mediator to retrieve the message from.

    "},{"location":"features/0699-push-notifications-apns/#prior-art","title":"Prior art","text":""},{"location":"features/0699-push-notifications-apns/#unresolved-questions","title":"Unresolved questions","text":"

    None

    "},{"location":"features/0699-push-notifications-apns/#implementations","title":"Implementations","text":"

    The following lists the implementations (if any) of this RFC. Please do a pull request to add your implementation. If the implementation is open source, include a link to the repo or to the implementation within the repo. Please be consistent in the \"Name\" field so that a mechanical processing of the RFCs can generate a list of all RFCs supported by an Aries implementation.

    Name / Link Implementation Notes"},{"location":"features/0721-revocation-notification-v2/","title":"Aries RFC 0721: Revocation Notification 2.0","text":""},{"location":"features/0721-revocation-notification-v2/#summary","title":"Summary","text":"

    This RFC defines the message format which an issuer uses to notify a holder that a previously issued credential has been revoked.

    "},{"location":"features/0721-revocation-notification-v2/#change-log","title":"Change Log","text":""},{"location":"features/0721-revocation-notification-v2/#motivation","title":"Motivation","text":"

    We need a standard protocol for an issuer to notify a holder that a previously issued credential has been revoked.

    For example, suppose a passport agency revokes Alice's passport. The passport agency (an issuer) may want to notify Alice (a holder) that her passport has been revoked so that she knows that she will be unable to use her passport to travel.

    "},{"location":"features/0721-revocation-notification-v2/#tutorial","title":"Tutorial","text":"

    The Revocation Notification protocol is a very simple protocol consisting of a single message:

    This simple protocol allows an issuer to choose to notify a holder that a previously issued credential has been revoked.

    It is the issuer's prerogative whether or not to notify the holder that a credential has been revoked. It is not a security risk if the issuer does not notify the holder that the credential has been revoked, nor if the message is lost. The holder will still be unable to use a revoked credential without this notification.

    "},{"location":"features/0721-revocation-notification-v2/#roles","title":"Roles","text":"

    There are two parties involved in a Revocation Notification: issuer and holder. The issuer sends the revoke message to the holder.

    "},{"location":"features/0721-revocation-notification-v2/#messages","title":"Messages","text":"

    The revoke message is sent by the issuer to the holder. The holder should verify that the revoke message came from the connection that was originally used to issue the credential.

    Message format:

    {\n  \"@type\": \"https://didcomm.org/revocation_notification/2.0/revoke\",\n  \"@id\": \"<uuid-revocation-notification>\",\n  \"revocation_format\": \"<revocation_format>\",\n  \"credential_id\": \"<credential_id>\",\n  \"comment\": \"Some comment\"\n}\n

    Description of fields:

    "},{"location":"features/0721-revocation-notification-v2/#revocation-credential-identification-formats","title":"Revocation Credential Identification Formats","text":"

    In order to support multiple credential revocation formats, the following table dictates the revocation formats and the formats of their credential ids. As additional credential revocation formats are defined, their credential id formats should be added.

    Revocation Format Credential Identifier Format Example indy-anoncreds <revocation-registry-id>::<credential-revocation-id> AsB27X6KRrJFsqZ3unNAH6:4:AsB27X6KRrJFsqZ3unNAH6:3:cl:48187:default:CL_ACCUM:3b24a9b0-a979-41e0-9964-2292f2b1b7e9::1 anoncreds <revocation-registry-id>::<credential-revocation-id> did:indy:sovrin:5nDyJVP1NrcPAttP3xwMB9/anoncreds/v0/REV_REG_DEF/56495/npdb/TAG1::1"},{"location":"features/0721-revocation-notification-v2/#reference","title":"Reference","text":""},{"location":"features/0721-revocation-notification-v2/#drawbacks","title":"Drawbacks","text":"

    If we later added support for more general event subscription and notification message flows, this would be redundant.

    "},{"location":"features/0721-revocation-notification-v2/#rationale-and-alternatives","title":"Rationale and alternatives","text":""},{"location":"features/0721-revocation-notification-v2/#prior-art","title":"Prior art","text":""},{"location":"features/0721-revocation-notification-v2/#unresolved-questions","title":"Unresolved questions","text":""},{"location":"features/0721-revocation-notification-v2/#implementations","title":"Implementations","text":"

    The following lists the implementations (if any) of this RFC. Please do a pull request to add your implementation. If the implementation is open source, include a link to the repo or to the implementation within the repo. Please be consistent in the \"Name\" field so that a mechanical processing of the RFCs can generate a list of all RFCs supported by an Aries implementation.

    Name / Link Implementation Notes"},{"location":"features/0728-device-binding-attachments/","title":"Aries RFC 0728 : Device Binding Attachments","text":""},{"location":"features/0728-device-binding-attachments/#summary","title":"Summary","text":"

    Extends existing present-proof protocols to allow proving control of a hardware-bound key embedded within a verifiable credential.

    "},{"location":"features/0728-device-binding-attachments/#motivation","title":"Motivation","text":"

    To enable use cases which require a high level of assurance, a verifier must reach a high degree of confidence that a verifiable credential (VC) can only be used by the person it was issued to. One way to enforce this requirement is for the issuer to additionally bind the VC to a hardware-bound public key, thereby binding the credential to the device, as discussed in the DIF Wallet Security WG. The issuance process, including the attestation of the wallet and the hardware-bound key, is out of scope for this Aries RFC. A valid presentation of the VC then requires an additional challenge which proves that the presenter is in control of the corresponding private key. Since the proof of control must be part of a legitimate presentation, it makes sense to extend all current present-proof protocols.

    Note: The focus so far has been on AnonCreds, we will also look into device binding of W3C VC, however this is currently lacking in the examples.

    Warning: This concept is primarily meant for regulated, high-security use cases. Please review the drawbacks before considering using this.

    "},{"location":"features/0728-device-binding-attachments/#tutorial","title":"Tutorial","text":"

    To prove control of a hardware-bound key, the holder must answer a challenge for one or more public keys embedded within verifiable credentials.

    "},{"location":"features/0728-device-binding-attachments/#challenge","title":"Challenge","text":"

    The following challenge object must be provided by the verifier.

    "},{"location":"features/0728-device-binding-attachments/#device-binding-challenge","title":"device-binding-challenge","text":"

    ```json= { \"@type\": \"https://didcomm.org/device-binding/%ver/device-binding-challenge\", \"@id\": \"\", \"nonce\": \"\", // recommend at least 128-bit unsigned integer \"requests\": [ { \"id\": \"libindy-request-presentation-0\", \"path\": \"$.requested_attributes.attr2_referent.names.hardwareDid\", } ] }

Description of attributes:\n\n- `nonce` -- a nonce which has to be signed by the holder to prove control\n- `requests` -- an array of referenced presentation requests\n    - `id` -- reference to an attached presentation request of the `request-presentation` message (e.g. libindy request) \n    - `path` -- JsonPath to a requested attribute which represents a public key of a hardware-bound key pair - represented as did:key\n\n\nThe `device-binding-challenge` must be attached to the `request-presentations~attach` array of the `request-presentation` message defined by [RFC-0037](https://github.com/hyperledger/aries-rfcs/blob/main/features/0037-present-proof/README.md#request-presentation) and [RFC-0454](https://github.com/hyperledger/aries-rfcs/tree/main/features/0454-present-proof-v2#request-presentation).\n\n#### Example request-presentation messages\n\nThe following represents a request-presentation message with an attached libindy presentation request and a corresponding device-binding-challenge.\n\n**Present Proof v1**\n```json=\n{\n    \"@type\": \"https://didcomm.org/present-proof/1.0/request-presentation\",\n    \"@id\": \"<uuid-request>\",\n    \"comment\": \"some comment\",\n    \"request_presentations~attach\": [\n        {\n            \"@id\": \"libindy-request-presentation-0\",\n            \"mime-type\": \"application/json\",\n            \"data\":  {\n                \"base64\": \"<bytes for base64>\"\n            }\n        }\n    ],\n    \"device_binding~attach\": [\n        {\n            \"@id\": \"device-binding-challenge-0\",\n            \"mime-type\": \"application/json\",\n            \"data\":  {\n                \"base64\": \"<device-binding-challenge>\"\n            }\n        }\n    ]\n}\n

    Present Proof v2

    ```json= { \"@type\": \"https://didcomm.org/present-proof/2.0/request-presentation\", \"@id\": \"\", \"goal_code\": \"\", \"comment\": \"some comment\", \"will_confirm\": true, \"present_multiple\": false, \"formats\" : [ { \"attach_id\" : \"libindy-request-presentation-0\", \"format\" : \"hlindy/proof-req@v2.0\", } ], \"request_presentations~attach\": [ { \"@id\": \"libindy-request-presentation-0\", \"mime-type\": \"application/json\", \"data\": { \"base64\": \"\" } } ], \"device_binding~attach\": [ { \"@id\": \"device-binding-challenge-0\" \"mime-type\": \"application/json\", \"data\": { \"base64\": \"\" // inner object } } ] }

    ### Response\n\nThe following response must be generated by the holder of the VC.\n\n#### device-binding-response\n```json=\n{\n    \"@type\": \"https://didcomm.org/device-binding/%ver/device-binding-response\",\n    \"@id\": \"<uuid-challenge-response>\",\n    \"proofs\" : [\n        {\n            \"id\": \"libindy-presentation-0\",\n            \"path\": \"$.requested_proof.revealed_attrs.attr1_referent.raw\"\n        }\n    ]\n}\n

    Description of attributes:

    The device-binding-response must be attached to the device_binding~attach array of a presentation message defined by RFC-0037 or RFC-0454.

    "},{"location":"features/0728-device-binding-attachments/#example-presentation-messages","title":"Example presentation messages","text":"

    The following represents a presentation message with an attached libindy presentation and a corresponding device-binding-response.

    Present Proof v1

    ```json= { \"@type\": \"https://didcomm.org/present-proof/1.0/presentation\", \"@id\": \"\", \"comment\": \"some comment\", \"presentations~attach\": [ { \"@id\": \"libindy-presentation-0\", \"mime-type\": \"application/json\", \"data\": { \"base64\": \"\" } } ], \"device_binding~attach\": [ { \"@id\": \"device-binding-response-0\", \"mime-type\": \"application/json\", \"data\": { \"base64\": \"\", \"jws\": { \"header\": { \"kid\": \"didz6MkmjY8GnV5i9YTDtPETC2uUAW6ejw3nk5mXF5yci5ab7th\" }, \"protected\": \"eyJhbGciOiJFZERTQSIsImlhdCI6MTU4Mzg4... (bytes omitted)\", \"signature\": \"3dZWsuru7QAVFUCtTd0s7uc1peYEijx4eyt5... (bytes omitted)\" } } } ] }

    **Present Proof v2**\n```json=\n{\n    \"@type\": \"https://didcomm.org/present-proof/%VER/presentation\",\n    \"@id\": \"<uuid-presentation>\",\n    \"goal_code\": \"<goal-code>\",\n    \"comment\": \"some comment\",\n    \"last_presentation\": true,\n    \"formats\" : [\n        {\n            \"attach_id\" : \"libindy-presentation-0\",\n            \"format\" : \"hlindy/proof@v2.0\"\n        }\n    ],\n    \"presentations~attach\": [\n        {\n            \"@id\": \"libindy-presentation-0\",\n            \"mime-type\": \"application/json\",\n            \"data\": {\n                \"base64\": \"<libindy presentation>\"\n            }\n        }\n    ],\n    \"device_binding~attach\": [\n        {\n            \"@id\": \"device-binding-response-0\",\n            \"mime-type\": \"application/json\",\n            \"data\":  {\n                \"base64\": \"<device-binding-response>\",\n                \"jws\": {\n                    \"header\": {\n                        \"kid\": \"did:key:z6MkmjY8GnV5i9YTDtPETC2uUAW6ejw3nk5mXF5yci5ab7th\"\n                    },\n                    \"protected\": \"eyJhbGciOiJFZERTQSIsImlhdCI6MTU4Mzg4... (bytes omitted)\",\n                    \"signature\": \"3dZWsuru7QAVFUCtTd0s7uc1peYEijx4eyt5... (bytes omitted)\"\n                }\n            }\n        }\n    ]\n}\n
    "},{"location":"features/0728-device-binding-attachments/#reference","title":"Reference","text":""},{"location":"features/0728-device-binding-attachments/#drawbacks","title":"Drawbacks","text":"

    Including a hardware-bound public key (as an attribute) in a Verifiable Credential/AnonCred is necessary for this concept, but it introduces a globally unique and therefore trackable identifier. As this public key is revealed to the verifier, there is a higher risk of correlation. The issuer must use a hardware-bound key for only a single credential, and the wallet should enforce that a key is never reused. Additionally, the holder should ideally be informed by the wallet UX about the increased correlation risk.

    "},{"location":"features/0728-device-binding-attachments/#rationale-and-alternatives","title":"Rationale and alternatives","text":"

    The rationale behind this proposal is to formalize the way a holder wallet can prove control of a (hardware-bound) key.

    This proposal tries to extend existing protocols to reduce the implementation effort for existing solutions. It might be reasonable to include this only in a new version of the present proof protocol (e.g. present-proof v3).

    "},{"location":"features/0728-device-binding-attachments/#prior-art","title":"Prior art","text":"

    None to our knowledge.

    "},{"location":"features/0728-device-binding-attachments/#unresolved-questions","title":"Unresolved questions","text":""},{"location":"features/0728-device-binding-attachments/#implementations","title":"Implementations","text":"

    The following lists the implementations (if any) of this RFC. Please do a pull request to add your implementation. If the implementation is open source, include a link to the repo or to the implementation within the repo. Please be consistent in the \"Name\" field so that a mechanical processing of the RFCs can generate a list of all RFCs supported by an Aries implementation.

    Implementation Notes may need to include a link to test results.

    Name / Link Implementation Notes"},{"location":"features/0734-push-notifications-fcm/","title":"Aries RFC 0734: Push Notifications fcm Protocol 1.0","text":"

    Note: This protocol is currently written to support native push notifications using fcm. For the implementation for iOS (via apns), please refer to 0699: Push Notifications apns

    "},{"location":"features/0734-push-notifications-fcm/#summary","title":"Summary","text":"

    A protocol to coordinate a push notification configuration between two agents.

    "},{"location":"features/0734-push-notifications-fcm/#motivation","title":"Motivation","text":"

    This protocol would give an agent enough information to send push notifications about specific events to a device that supports fcm. This would be of great benefit for mobile wallets, as a holder can be notified when new messages are pending at the mediator. Mobile applications, such as wallets, are often killed and can no longer receive messages from the mediator. Push notifications would resolve this problem.

    "},{"location":"features/0734-push-notifications-fcm/#tutorial","title":"Tutorial","text":""},{"location":"features/0734-push-notifications-fcm/#name-and-version","title":"Name and Version","text":"

    URI: https://didcomm.org/push-notifications-fcm/1.0

    Protocol Identifier: push-notifications-fcm

    Version: 1.0

    "},{"location":"features/0734-push-notifications-fcm/#key-concepts","title":"Key Concepts","text":"

    When an agent would like to receive push notifications at record event changes, e.g. incoming credential offer, incoming connection request, etc., the agent could initiate the protocol by sending a message to the other agent.

    This protocol only defines how an agent would get the token and platform that is necessary for push notifications.

    Each platform has its own protocol so that we can easily use 0031: Discover Features 1.0 and 0557: Discover Features 2.X to see which specific services are supported by the other agent.

    "},{"location":"features/0734-push-notifications-fcm/#roles","title":"Roles","text":"

    notification-sender

    notification-receiver

    The notification-sender is an agent who will send the notification-receiver notifications. The notification-receiver can get and set their push notification configuration at the notification-sender.

    "},{"location":"features/0734-push-notifications-fcm/#services","title":"Services","text":"

    This RFC focuses on configuring the data necessary for pushing notifications via Firebase Cloud Messaging.

    In order to implement this protocol, the set-device-info and get-device-info messages MUST be implemented by the notification-sender, and the device-info message MUST be implemented by the notification-receiver.

    "},{"location":"features/0734-push-notifications-fcm/#supported-services","title":"Supported Services","text":"

    The protocol currently supports the following push notification services:

    "},{"location":"features/0734-push-notifications-fcm/#messages","title":"Messages","text":"

    When a notification-receiver wants to receive push notifications from the notification-sender, the notification-receiver has to send the following message:

    "},{"location":"features/0734-push-notifications-fcm/#set-device-info","title":"Set Device Info","text":"

    Message to set the device info using the fcm device token and device platform for push notifications.

    {\n  \"@type\": \"https://didcomm.org/push-notifications-fcm/1.0/set-device-info\",\n  \"@id\": \"<UUID>\",\n  \"device_token\": \"<DEVICE_TOKEN>\",\n  \"device_platform\": \"<DEVICE_PLATFORM>\"\n}\n

    Description of the fields:

    It is important to note that the set device info message can be used to set, update, and remove the device info. To set and update these values, the normal messages as stated above can be used. To remove yourself from receiving push notifications, you can send the same message where all values MUST be null. If either value is null, a problem-report MAY be sent back with missing-value.
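    Following the removal rule above, a de-registration message could look like this sketch, with both values explicitly set to null:

```json
{
  "@type": "https://didcomm.org/push-notifications-fcm/1.0/set-device-info",
  "@id": "<UUID>",
  "device_token": null,
  "device_platform": null
}
```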

    "},{"location":"features/0734-push-notifications-fcm/#get-device-info","title":"Get Device Info","text":"

    When a notification-receiver wants to get their push-notification configuration, they can send the following message:

    {\n  \"@type\": \"https://didcomm.org/push-notifications-fcm/1.0/get-device-info\",\n  \"@id\": \"<UUID>\"\n}\n
    "},{"location":"features/0734-push-notifications-fcm/#device-info","title":"Device Info","text":"

    Response to the get device info:

    {\n  \"@type\": \"https://didcomm.org/push-notifications-fcm/1.0/device-info\",\n  \"device_token\": \"<DEVICE_TOKEN>\",\n  \"device_platform\": \"<DEVICE_PLATFORM>\",\n  \"~thread\": {\n    \"thid\": \"<GET_DEVICE_INFO_UUID>\"\n  }\n}\n

    This message can be used by the notification-receiver to receive their device info, e.g. device_token and device_platform. If the notification-sender does not have this field for that connection, a problem-report MAY be used as a response with not-registered-for-push-notifications.

    "},{"location":"features/0734-push-notifications-fcm/#adopted-messages","title":"Adopted messages","text":"

    In addition, the ack message is adopted into the protocol for confirmation by the notification-sender. The ack message SHOULD be sent in response to any of the set-device-info messages.

    "},{"location":"features/0734-push-notifications-fcm/#sending-push-notifications","title":"Sending Push Notifications","text":"

    When an agent wants to send a push notification to another agent, the payload of the push notifications MUST include the @type property, and COULD include the message_tags property, to indicate the message is sent by the notification-sender. Guidelines on notification messages are not defined.

    {\n  \"@type\": \"https://didcomm.org/push-notifications-fcm\",\n  \"message_tags\": [\"<MESSAGE_TAG>\"],\n  \"message_ids\": [\"<MESSAGE_ID>\"],\n  ...\n}\n

    Description of the fields:

    "},{"location":"features/0734-push-notifications-fcm/#drawbacks","title":"Drawbacks","text":"

    Each service requires a considerable amount of domain knowledge. The RFC can be extended with new services over time.

    The @type property in the push notification payload currently doesn't indicate which agent the push notification came from. When using multiple mediators, for example, this means the notification-receiver does not know which mediator to retrieve the message from.

    "},{"location":"features/0734-push-notifications-fcm/#prior-art","title":"Prior art","text":""},{"location":"features/0734-push-notifications-fcm/#unresolved-questions","title":"Unresolved questions","text":"

    None

    "},{"location":"features/0734-push-notifications-fcm/#implementations","title":"Implementations","text":"

    The following lists the implementations (if any) of this RFC. Please do a pull request to add your implementation. If the implementation is open source, include a link to the repo or to the implementation within the repo. Please be consistent in the \"Name\" field so that a mechanical processing of the RFCs can generate a list of all RFCs supported by an Aries implementation.

    Name / Link Implementation Notes"},{"location":"features/0748-n-wise-did-exchange/","title":"Aries RFC 0748: N-wise DID Exchange Protocol 1.0","text":""},{"location":"features/0748-n-wise-did-exchange/#summary","title":"Summary","text":"

    This RFC defines a protocol for creating and managing relationships within a group of SSI subjects. In a certain sense, this RFC is a generalization of the pairwise concept and protocols 0160-connection-protocol and 0023-did-exchange for an arbitrary number of parties (n-wise).

    "},{"location":"features/0748-n-wise-did-exchange/#motivation","title":"Motivation","text":"

    SSI subjects and agents representing them must have a way to establish relationships with each other in a trustful manner. In the simplest case, when only two participants are involved, this goal is achieved using the 0023-did-exchange protocol by creating and securely sharing their DID Documents directly between agents. However, it is often desirable to organize an interaction involving more than two parties. The number of parties in such an interaction may change over time, and most of the agents may be mobile ones. The simplest and most frequently used example of such an interaction is a group chat in an instant messenger. The trusted nature of SSI technology makes it possible to use group relationships for legally significant unions, such as a board of directors, a territorial community or a dissertation council.

    "},{"location":"features/0748-n-wise-did-exchange/#tutorial","title":"Tutorial","text":""},{"location":"features/0748-n-wise-did-exchange/#name-and-version","title":"Name and Version","text":"

    n-wise, version 1.0

    URI: https://didcomm.org/n-wise/1.0

    "},{"location":"features/0748-n-wise-did-exchange/#registry-of-n-wise-states","title":"Registry of n-wise states","text":"

    The current state of n-wise is an up-to-date list of the parties' DID Documents. In a pairwise relationship, the state is stored by the participants and updated by direct notification of the other party. When there are more than two participants, the problem of synchronizing the state of the n-wise (i.e. consensus) arises. It should be borne in mind that the state may change occasionally: users may be added or deleted, and DID Documents may be modified (when keys are rotated or endpoints are changed).

    In principle, any trusted repository can act as a registry of n-wise states. The following options for storing the n-wise state can be distinguished:

    The concept of pluggable consensus implies choosing the most appropriate way to maintain a registry of states, depending on the needs.

    N-wise state update is performed by committing the corresponding transaction to the registry of n-wise states. To get the current n-wise state, the agent receives the list of transactions from the registry of states, verifies them and applies them sequentially, starting with the genesisTx. Incorrect transactions (without a proper signature or missing the required fields) are ignored. Thus, an n-wise can be considered a replicated state machine, which is executed on each participant.
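The replicated-state-machine behavior can be sketched as follows. This is an illustrative sketch only: signature and field checks are abstracted behind an assumed `verify(tx, state)` callback, the state shape is an assumption, and only the transaction types defined later in this RFC are handled:

```python
def apply_transactions(transactions, verify):
    """Replay an ordered transaction list into an n-wise state.

    Transactions failing `verify` (signature/required-field checks,
    abstracted here) are ignored, per the RFC.
    """
    state = {"label": None, "owner": None, "participants": {}}
    for tx in transactions:
        if not verify(tx, state):
            continue  # ignore improperly signed or malformed transactions
        t = tx.get("type")
        if t == "genesisTx":
            state["label"] = tx["label"]
            state["owner"] = tx["creatorDid"]
            state["participants"][tx["creatorDid"]] = tx["creatorDidDoc"]
        elif t in ("addParticipantTx", "updateParticipantTx"):
            state["participants"][tx["did"]] = tx["didDoc"]
        elif t == "removeParticipantTx":
            state["participants"].pop(tx["did"], None)
        elif t == "newOwnerTx":
            state["owner"] = tx["did"]
        elif t == "updateMetadataTx":
            state["label"] = tx["label"]
    return state

txs = [
    {"type": "genesisTx", "label": "Council", "creatorDid": "did:alice",
     "creatorDidDoc": {}},
    {"type": "addParticipantTx", "did": "did:bob", "didDoc": {}},
    {"type": "newOwnerTx", "did": "did:bob"},
]
state = apply_transactions(txs, verify=lambda tx, s: True)
assert state["owner"] == "did:bob" and "did:alice" in state["participants"]
```

Because every participant replays the same transaction list with the same rules, all honest participants converge on the same state.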

    The specifics of recording and receiving transactions depend on the particular method of maintaining the n-wise registry and on a particular ledger. This RFC DOES NOT DEFINE specific n-wise registry implementations.

    "},{"location":"features/0748-n-wise-did-exchange/#directly-on-the-agents-side-edge-chain","title":"Directly on the agent's side (Edge chain)","text":""},{"location":"features/0748-n-wise-did-exchange/#public-or-private-distributed-ledger","title":"Public or private distributed ledger","text":""},{"location":"features/0748-n-wise-did-exchange/#centralized-storage","title":"Centralized storage","text":""},{"location":"features/0748-n-wise-did-exchange/#roles","title":"Roles","text":""},{"location":"features/0748-n-wise-did-exchange/#user","title":"User","text":""},{"location":"features/0748-n-wise-did-exchange/#owner","title":"Owner","text":""},{"location":"features/0748-n-wise-did-exchange/#creator","title":"Creator","text":""},{"location":"features/0748-n-wise-did-exchange/#inviter","title":"Inviter","text":""},{"location":"features/0748-n-wise-did-exchange/#invitee","title":"Invitee","text":""},{"location":"features/0748-n-wise-did-exchange/#actions","title":"Actions","text":""},{"location":"features/0748-n-wise-did-exchange/#n-wise-creation","title":"N-wise creation","text":"

    The creation begins with the initialization of the n-wise registry. This RFC DOES NOT SPECIFY the procedure for n-wise registry creation. After creating the registry, the creator commits the genesisTx transaction. The creator automatically obtains the role of owner. The creator MUST generate a unique DID and DID Document for n-wise.

    "},{"location":"features/0748-n-wise-did-exchange/#invitation-of-a-new-party","title":"Invitation of a new party","text":"

    Any n-wise party can create an invitation to join the n-wise. First, the inviter generates a pair of public and private invitation keys according to Ed25519. The public key of the invitation is pushed to the registry using the invitationTx transaction. Then the Invitation message with the invitation private key is sent out-of-band to the invitee. The invitation key pair is unique for each invitee and can be used only once.
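The two artifacts the inviter produces can be sketched as below. Key generation and base58 encoding are stubbed out (a real agent would generate an Ed25519 key pair, e.g. with PyNaCl, and base58-encode the keys), and the `ledger~attach` field is omitted; the helper name is hypothetical:

```python
import uuid

def build_invitation(key_id, private_key_b58, ledger_type):
    """Sketch the inviter's side: an invitationTx for the registry plus
    an out-of-band Invitation message carrying the private key."""
    invitation_tx = {
        "type": "invitationTx",
        "publicKey": [{
            "id": key_id,
            "type": "Ed25519VerificationKey2018",
            "publicKeyBase58": "<PUBLIC_KEY_BASE58>",  # placeholder
        }],
    }
    invitation_msg = {
        "@id": str(uuid.uuid4()),
        "@type": "https://didcomm.org/n-wise/1.0/invitation",
        "label": "Invitation to join n-wise",
        "invitationKeyId": key_id,
        "invitationPrivateKeyBase58": private_key_b58,
        "ledgerType": ledger_type,
    }
    return invitation_tx, invitation_msg

tx, msg = build_invitation("invitationVerkeyForBob",
                           "<PRIVATE_KEY_BASE58>", "iota@1.0")
assert msg["invitationKeyId"] == tx["publicKey"][0]["id"]
```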

    "},{"location":"features/0748-n-wise-did-exchange/#accepting-the-invitation","title":"Accepting the invitation","text":"

    Once the Invitation is received, the invitee generates a unique DID and DID Document for the n-wise and commits the AddParticipantTx transaction to the registry. It is NOT ALLOWED to reuse a DID from other relationships.

    The process of adding a new participant is shown in the figure below

    "},{"location":"features/0748-n-wise-did-exchange/#updating-did-document","title":"Updating DID Document","text":"

    Updating the user's DID Document is required for key rotation or endpoint updates. To update the associated DID Document, the user commits the updateParticipantTx transaction to the registry.

    "},{"location":"features/0748-n-wise-did-exchange/#removing-a-party-form-n-wise","title":"Removing a party form n-wise","text":"

    Removal is performed using the removeParticipantTx transaction. A user can remove themselves (the corresponding transaction is signed by the user's public key). The owner can remove any user (the corresponding transaction is signed by the owner's public key).

    "},{"location":"features/0748-n-wise-did-exchange/#updating-n-wise-meta-information","title":"Updating n-wise meta information","text":"

    Meta information can be updated by the owner using the updateMetadataTx transaction.

    "},{"location":"features/0748-n-wise-did-exchange/#transferring-the-owner-role-to-other-user","title":"Transferring the owner role to other user","text":"

    The owner can transfer control of the n-wise to other user. The old owner loses the corresponding privileges and becomes a regular user. The operation is performed using the NewOwnerTx transaction.

    "},{"location":"features/0748-n-wise-did-exchange/#notification-on-n-wise-state-update","title":"Notification on n-wise state update","text":"

    Just after committing the transaction to the n-wise registry, the participant MUST send the ledger-update-notify message to all other parties. The participant who received ledger-update-notify SHOULD fetch updates from the n-wise registry.
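A minimal sketch of the notification step, assuming a `send(did, message)` transport callback (e.g. a DIDComm sender) that is not defined by this RFC:

```python
import uuid

def notify_parties(parties, my_did, send):
    """After committing a transaction, notify every other n-wise party."""
    message = {
        "@id": str(uuid.uuid4()),
        "@type": "https://didcomm.org/n-wise/1.0/ledger-update-notify",
    }
    for did in parties:
        if did != my_did:  # the committer does not notify itself
            send(did, message)

sent = []
notify_parties(["did:alice", "did:bob", "did:carol"], "did:alice",
               send=lambda did, msg: sent.append(did))
assert sent == ["did:bob", "did:carol"]
```

On receiving the notification, each party would then re-fetch and replay the registry transactions to refresh its local state.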

    "},{"location":"features/0748-n-wise-did-exchange/#didcomm-messaging-within-n-wise","title":"DIDComm messaging within n-wise","text":"

    It is allowed to exchange DIDComm messages of any type within an n-wise. Whether the sender belongs to a given n-wise is determined by the sender's verkey.

    This RFC DOES NOT DEFINE a procedure of exchanging messages within n-wise. In the simplest case, this can be implemented as sending a message to each participant in turn. In case of a large number of parties, it is advisable to consider using a centralized coordinator who would be responsible for the ordering and guaranteed sending of messages from the sender to the rest of parties.

    "},{"location":"features/0748-n-wise-did-exchange/#reference","title":"Reference","text":""},{"location":"features/0748-n-wise-did-exchange/#n-wise-registry-transactions","title":"N-wise registry transactions","text":"

    N-wise state is modified using transactions in the following form

    {\n  \"type\": \"transaction type\",\n  ...\n  \"proof\" {\n    \"type\": \"JcsEd25519Signature2020\",\n    \"verificationMethod\": \"did:alice#key1\",\n    \"signatureValue\": \"...\"\n\n  }\n}\n
    "},{"location":"features/0748-n-wise-did-exchange/#attributes","title":"Attributes","text":""},{"location":"features/0748-n-wise-did-exchange/#genesistx","title":"GenesisTx","text":"

    GenesisTx is a mandatory initial transaction that defines the basic properties of the n-wise.

    {\n  \"type\": \"genesisTx\",\n  \"label\": \"Council\",\n  \"creatorNickname\": \"Alice\",\n  \"creatorDid\": \"did:alice\",\n  \"creatorDidDoc\": {\n   ..\n  },\n  \"ledgerType\": \"iota@1.0\",\n  \"metaInfo\" {\n    ...\n  }\n}\n
    "},{"location":"features/0748-n-wise-did-exchange/#attributes_1","title":"Attributes","text":"

    The genesisTx transaction MUST be signed by the creator's public key defined in their DID Document.

    "},{"location":"features/0748-n-wise-did-exchange/#invitationtx","title":"InvitationTx","text":"

    This transaction adds the invitation public keys to the n-wise registry.

    {\n  \"type\": \"invitationTx\",\n  \"publicKey\": [\n    {\n      \"id\": \"invitationVerkeyForBob\",\n      \"type\": \"Ed25519VerificationKey2018\",\n      \"publicKeyBase58\": \"arekhj893yh3489qh\"\n    }\n  ]\n}\n
    "},{"location":"features/0748-n-wise-did-exchange/#attributes_2","title":"Attributes","text":"

    The invitationTx transaction MUST be signed by the inviter's public key defined in their DID Document.

    "},{"location":"features/0748-n-wise-did-exchange/#invitation-message","title":"Invitation message","text":"

    The message is intended to invite a new participant. It is sent via an arbitrary communication channel (pairwise, QR code, e-mail, etc.).

    {\n  \"@id\": \"5678876542345\",\n  \"@type\": \"https://didcomm.org/n-wise/1.0/invitation\",\n  \"label\": \"Invitaion to join n-wise\",\n  \"invitationKeyId\": \"invitationVerkeyForBob\",\n  \"invitationPrivateKeyBase58\": \"qAue25rghuFRhrue....\",\n  \"ledgerType\": \"iota@1.0\",\n  \"ledger~attach\": [\n    {\n      \"@id\": \"attachment id\",\n      \"mime-type\": \"application/json\",\n      \"data\": {\n        \"base64\": \"<bytes for base64>\"\n      }\n    }  \n  ]\n}\n
    "},{"location":"features/0748-n-wise-did-exchange/#attributes_3","title":"Attributes","text":""},{"location":"features/0748-n-wise-did-exchange/#addparticipanttx","title":"AddParticipantTx","text":"

    The transaction is designed to add a new user to n-wise.

    {\n  \"id\": \"addParticipantTx\",\n  \"nickname\": \"Bob\",\n  \"did\": \"did:bob\",\n  \"didDoc\": {\n    ...\n  }\n\n}\n
    "},{"location":"features/0748-n-wise-did-exchange/#attributes_4","title":"Attributes","text":"

    The AddParticipantTx transaction MUST be signed by the invitation private key (invitationPrivateKeyBase58) received in the Invitation message. Once the AddParticipantTx transaction is committed, the corresponding invitation key pair is considered deactivated (other invitations cannot be signed by it).

    The transaction executor MUST verify that the invitation key was indeed previously added. Execution of the transaction entails the addition of a new party to the n-wise.
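The one-time-use check can be sketched as below. Signature verification is abstracted behind an assumed `verify_sig` callback, and the sketch assumes the transaction's proof references the invitation key by its id (the RFC's proof example uses a DID URL for user-signed transactions):

```python
def accept_add_participant(tx, active_invitation_keys, verify_sig):
    """Validate an addParticipantTx against the one-time invitation keys.

    `active_invitation_keys` maps key id -> base58 public key for keys
    added via invitationTx and not yet consumed. `verify_sig(tx, key)`
    is an assumed Ed25519 signature check.
    """
    key_id = tx["proof"]["verificationMethod"]
    public_key = active_invitation_keys.get(key_id)
    if public_key is None:
        return False  # key never added, or already consumed
    if not verify_sig(tx, public_key):
        return False
    del active_invitation_keys[key_id]  # one-time use: deactivate the key
    return True

keys = {"invitationVerkeyForBob": "arekhj893yh3489qh"}
tx = {"type": "addParticipantTx", "did": "did:bob", "didDoc": {},
      "proof": {"verificationMethod": "invitationVerkeyForBob"}}
assert accept_add_participant(tx, keys, lambda t, k: True)
assert not accept_add_participant(tx, keys, lambda t, k: True)  # replay fails
```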

    "},{"location":"features/0748-n-wise-did-exchange/#updateparticipanttx","title":"UpdateParticipantTx","text":"

    The transaction is intended to update information about the participant.

    {\n  \"type\": \"updateParticipantTx\",\n  \"did\": \"did:bob\",\n  \"nickname\": \"Updated Bob\",\n  \"didDoc\" {\n    ...\n  }\n}\n
    "},{"location":"features/0748-n-wise-did-exchange/#attributes_5","title":"Attributes","text":"

    The transaction MUST be signed by the public key of the user being updated. The specified public key MUST be defined in the previous version of the DID Document.

    Execution of the transaction entails updating information about the participant.

    "},{"location":"features/0748-n-wise-did-exchange/#removeparticipanttx","title":"RemoveParticipantTx","text":"

    The transaction is designed to remove a party from n-wise.

    {\n  \"type\": \"removeParticipantTx\",\n  \"did\": \"did:bob\"\n}\n
    "},{"location":"features/0748-n-wise-did-exchange/#attributes_6","title":"Attributes","text":"

    The execution of the transaction entails the removal of the user and their DID Document from the list of n-wise parties.

    The transaction MUST be signed by the public key of the user being removed from the n-wise, or by the public key of the owner.

    "},{"location":"features/0748-n-wise-did-exchange/#updatemetadatatx","title":"UpdateMetadataTx","text":"

    The transaction is intended to update the meta-information about n-wise.

    {\n    \"type\": \"updateMetadataTx\",\n    \"label\": \"Updated Council\"\n    \"metaInfo\": {\n      ...\n    }\n}\n
    "},{"location":"features/0748-n-wise-did-exchange/#attributes_7","title":"Attributes","text":"

    The transaction MUST be signed by the owner's public key.

    "},{"location":"features/0748-n-wise-did-exchange/#newownertx","title":"NewOwnerTx","text":"

    The transaction is intended to transfer the owner role to another user. The old owner simultaneously becomes a regular user.

    {\n    \"type\": \"newOwnerTx\",\n    \"did\": \"did:bob\"\n}\n
    "},{"location":"features/0748-n-wise-did-exchange/#attributes_8","title":"Attributes","text":"

    The transaction MUST be signed by the owner's public key.

    "},{"location":"features/0748-n-wise-did-exchange/#ledger-update-notify","title":"ledger-update-notify","text":"

    The message is intended to notify participants about the modifications of the n-wise state.

    {\n  \"@id\": \"4287428424\",\n  \"@type\": \"https://didcomm.org/n-wise/1.0/ledger-update-notify\"\n}\n
    "},{"location":"features/0748-n-wise-did-exchange/#drawbacks","title":"Drawbacks","text":""},{"location":"features/0748-n-wise-did-exchange/#rationale-and-alternatives","title":"Rationale and alternatives","text":"

    Public DID methods use blockchain networks or other public storage for their DID Documents. The Peer DID method rejects the use of external storage, which is absolutely justified for a pairwise relationship, since a DID Document can be stored by the other participant. If there are more than two participants, consensus on the list of DID Documents is required. N-wise is therefore a middle ground between a Peer DID (the DID Document is stored only by a partner) and a public DID (the DID Document is available to everyone on the internet). So, the concept of the n-wise state registry was introduced in this RFC, and its specific implementations (consensus between participants or a third-party trusted registry) remain at the discretion of the n-wise creator. The concept of a microledger is also worth considering for the n-wise state registry.

    One more promising high-level concept for building n-wise protocols is Gossyp.

    "},{"location":"features/0748-n-wise-did-exchange/#prior-art","title":"Prior art","text":"

    The term n-wise was proposed in the Peer DID specification, and previously discussed in document. However, no strict formalization of this process was proposed, nor was the need for consensus between the participants noted.

    "},{"location":"features/0748-n-wise-did-exchange/#unresolved-questions","title":"Unresolved questions","text":""},{"location":"features/0748-n-wise-did-exchange/#implementations","title":"Implementations","text":"

    The following lists the implementations (if any) of this RFC. Please do a pull request to add your implementation. If the implementation is open source, include a link to the repo or to the implementation within the repo. Please be consistent in the \"Name\" field so that a mechanical processing of the RFCs can generate a list of all RFCs supported by an Aries implementation.

    Name / Link Implementation Notes Sirius SDK Java IOTA Ledger based implementation (IOTA n-wise registry spec). See a detailed example in Jupyter notebook."},{"location":"features/0755-oca-for-aries/","title":"0755: Overlays Capture Architecture (OCA) For Aries","text":""},{"location":"features/0755-oca-for-aries/#summary","title":"Summary","text":"

    Overlays Capture Architecture (OCA) is, per the OCA specification, a \"standardized global solution for data capture and exchange.\" Given a data structure (such as a verifiable credential), OCA allows for the creation of purpose-specific overlays of information about that data structure. Each overlay provides some knowledge (human and machine-readable) about the overall data structure or the individual attributes within it. The information in the overlays makes it possible to create useful software for capturing data, displaying it and exchanging it. While the OCA website and OCA specification can be reviewed for a detailed background of OCA and its various purposes, in this RFC we'll focus on its purpose in Aries, which is quite constrained and pragmatic--a mechanism for an issuer to provide information about a verifiable credential to allow holder and verifier software to display the credential in a human-friendly way, including using the viewer's preferred language and the issuer's preferred branding. The image below shows an Aries mobile Wallet displaying the same credential without and with OCA overlays applied in two languages. All of the differences in the latter two screenshots from the first come from issuer-supplied OCA data.

    This RFC formalizes how Aries verifiable credential issuers can make a JSON OCA Bundle (a set of related OCA overlays about a base data structure) available to holders and verifiers that includes the following information for each type of credential they issue.

    The standard flow of data between participants is as follows:

    While the issuer providing the OCA Bundle for a credential type using the credential supplement mechanism is the typical flow (as detailed in this RFC), other flows, outside of the scope of this RFC are possible. See the rationale and alternatives section of this RFC for some examples.

    "},{"location":"features/0755-oca-for-aries/#motivation","title":"Motivation","text":"

    The core data models for verifiable credentials are more concerned with the correct cryptographic processing of the credentials than with the general processing of the attribute data and the user experience of those using credentials. An AnonCreds verifiable credential contains the bare minimum of metadata about a credential--basically, just the developer-style names for the type of credential and the attributes within it. JSON-LD-based verifiable credentials have the capacity to add more information about the attributes in a credential, but the data is not easily accessed and is provided to enable machine processing rather than to improve user experience.

    OCA allows credential issuers to declare information about the verifiable credential types it issues to improve the handling of those credentials by holder and verifier Aries agents, and to improve the on-screen display of the credentials, through the application of issuer-specified branding elements.

    "},{"location":"features/0755-oca-for-aries/#tutorial","title":"Tutorial","text":"

    The tutorial section of this RFC defines the coordination necessary for the creation, publishing, retrieval and use of an OCA Bundle for a given type of verifiable credential.

    In this overview, we assume the use of OCA specifically for verifiable\ncredentials, and further, specifically for AnonCreds verifiable credentials. OCA\ncan also be applied to any data structure, not just verifiable\ncredentials, and to other verifiable credential models, such as those based on\nthe JSON-LD- or JWT-style verifiable credentials. As the Aries\ncommunity applies OCA to other styles of verifiable credential, we\nwill extend this RFC.\n
    "},{"location":"features/0755-oca-for-aries/#issuer-activities","title":"Issuer Activities","text":"

    The use of OCA as defined in this RFC begins with an issuer preparing an OCA Bundle for each type of credential they issue. An OCA Bundle is a JSON data structure consisting of the Capture Base, and some additional overlays of different types (listed in the next section).

    While an OCA Bundle can be manually maintained in an OCA Bundle JSON file, a common method of maintaining OCA source data is to use a spreadsheet, and generating the OCA Bundle from the Excel source. See the section of this RFC called OCA Tooling for a link to an OCA Source spreadsheet, and information on tools available for managing the OCA Source data and generating a corresponding OCA Bundle.

    The creation of the OCA Bundle and the configuration of the issuer's Aries Framework to deliver the OCA Bundle during credential issuance should be all that a specific issuer needs to do in using OCA for Aries. An Aries Framework that supports OCA for Aries should handle the rest of the technical requirements.

    "},{"location":"features/0755-oca-for-aries/#oca-specification-overlays","title":"OCA Specification Overlays","text":"

    All OCA data is based on a Capture Base, which defines the data structure described in the overlays. For AnonCreds, the Capture Base attributes MUST be the list of attributes in the AnonCreds schema for the given credential type. The Capture Base also MUST contain:

    With the Capture Base defined, the following additional overlay types MAY be created by the Issuer and SHOULD be expected by holders and verifiers. Overlay types flagged \"multilingual\" may have multiple instances of the overlay, one for each issuer-supported language (e.g., en for English, fr for French, es for Spanish, etc.) or country-language combination (e.g., en-CA for Canadian English, fr-CA for Canadian French), as defined in the OCA Specification about languages.

    An OCA Bundle that contains overlay types that a holder or verifier does not expect MUST be processed, with the unexpected overlays ignored.

    "},{"location":"features/0755-oca-for-aries/#aries-specific-dates-in-the-oca-format-overlay","title":"Aries-Specific Dates in the OCA Format Overlay","text":"

    In AnonCreds, zero-knowledge proof (ZKP) predicates (used, for example, to prove older than a given age based on date of birth without sharing the actual date of birth) must be based on integers. In the AnonCreds/Aries community, common ways for representing dates and date/times as integers so that they can be used in ZKP predicates are the dateint and Unix Time formats, respectively.

    In an OCA for Aries OCA Bundle, dateint and Unix Time attributes MUST have the following values in the indicated overlays:

    A recipient of an OCA Bundle with the combination of overlay values referenced above for dateint and Unix Time SHOULD convert the integer attribute data into a date or date/time (respectively) and display the information as appropriate for the user. For example, a mobile app should display the data as a date or date/time based on the user's language/country setting and timezone, possibly combined with an app setting for showing the data in short, medium, long or full form.
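The two conversions a recipient performs can be sketched with the standard library; the helper names are illustrative (locale-aware formatting for display is left to the app):

```python
from datetime import date, datetime, timezone

def dateint_to_date(dateint: int) -> date:
    """Convert a dateint (the date as a YYYYMMDD integer) to a date."""
    s = str(dateint)
    return date(int(s[0:4]), int(s[4:6]), int(s[6:8]))

def unix_time_to_datetime(seconds: int) -> datetime:
    """Convert Unix Time (seconds since the epoch) to an aware UTC datetime."""
    return datetime.fromtimestamp(seconds, tz=timezone.utc)

assert dateint_to_date(20240702) == date(2024, 7, 2)
assert unix_time_to_datetime(0).year == 1970
```

The resulting `date`/`datetime` values can then be rendered in short, medium, long or full form according to the user's language/country setting and timezone.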

    "},{"location":"features/0755-oca-for-aries/#aries-specific-branding-overlay","title":"Aries Specific \"branding\" Overlay","text":"

    In addition to the core OCA Overlays listed earlier, Aries issuers MAY include an additional Aries-specific extension overlay, the \"branding\" overlay, that gives the issuer a way to provide a set of data elements about the branding that they would like to see applied to a given type of credential. The branding overlay is similar to the multilanguage Meta overlay (e.g. ones for English, French and Spanish), with a specified set of name/value pairs. Holders (and verifiers) use the branding values from the issuer when rendering a credential of that type according to the RFC0756 OCA for Aries Style Guide.

    An example of the use of the branding overlay is as follows, along with a definition of the name/value pair elements, and a sample image of how the elements are to be used. The sample is provided only to convey the concept of the branding overlay and how it is to be used. Issuers, holders and verifiers should refer to RFC0756 OCA for Aries Style Guide for details on how the elements are to be provided and used in displaying credentials.

    {\n    \"type\": \"aries/overlays/branding/1.0\"\n    \"digest\": \"EBQbQEV6qSEGDzGLj1CqT4e6yzESjPimF-Swmyltw5jU\",\n    \"capture_base\": \"EKpcSmz06sJs0b4g24e0Jc7OerbJrGN2iMVEnwLYKBS8\",\n    \"logo\": \"https://raw.githubusercontent.com/hyperledger/aries-rfcs/oca4aries../../features/0755-oca-for-aries/best-bc-logo.png\",\n    \"background_image\": \"https://raw.githubusercontent.com/hyperledger/aries-rfcs/oca4aries../../features/best-bc-background-image.png\",\n    \"background_image_slice\": \"https://raw.githubusercontent.com/hyperledger/aries-rfcs/oca4aries../../features/best-bc-background-image-slice.png\",\n    \"primary_background_color\": \"#003366\",\n    \"secondary_background_color\": \"#003366\",\n    \"secondary_attribute\": \"given_names\",\n    \"primary_attribute\": \"family_name\",\n    \"secondary_attribute\": \"given_names\",\n    \"issued_date_attribute\": \"\",\n    \"expiry_date_attribute\": \"expiry_date_dateint\",\n}\n

    It is deliberate that the credential branding defined in this RFC does not attempt to achieve pixel-perfect on screen rendering of the equivalent paper credential. There are two reasons for this:

    Instead, the guidance in this RFC and the RFC0756 OCA for Aries Style Guide gives the issuer a few ways to brand their credentials, and holder/verifier apps information on how to use those issuer-provided elements in a manner consistent for all issuers and all credentials.

    "},{"location":"features/0755-oca-for-aries/#oca-issuer-tools","title":"OCA Issuer Tools","text":"

    An Aries OCA Bundle can be managed as pure JSON as found in this sample OCA for Aries OCA Bundle. However, managing such multilingual content in JSON is not easy, particularly if the language translations come from team members not comfortable with working in JSON. An easier way to manage the data is to keep most of it in an OCA source spreadsheet, keep the rest in a source JSON file, and use a converter to create the OCA Bundle JSON from the two sources. We recommend that an issuer maintain the spreadsheet file and source JSON in version control and use a pipeline action to generate the OCA Bundle when the source files are updated.

    The OCA Source Spreadsheet, an example of which is attached to this RFC, contains the following:

    The JSON Source file contains the Aries-specific Branding Overlay. Attached to this RFC is an example Branding Overlay JSON file that issuers can use to start.

    The following is how to create an OCA Source spreadsheet and from that, generate an OCA Bundle. Over time, we expect that this part of the RFC will be clarified as the tooling evolves.

    NOTE: The capture_base and digest fields in the branding overlay of the resulting OCA Bundle JSON file will not be updated to be proper self-addressing identifiers (SAIDs) as required by the OCA Specification. We are looking into how to automate the updating of those data elements.

    Scripting the generation process should be relatively simple, and our expectation is that the community will evolve the Parser from the Human Colossus Foundation to simplify the process further.

    Over time, we expect to see other tooling become available--notably, a tool for issuers to see what credentials will look like when their OCA Bundle is applied.

    "},{"location":"features/0755-oca-for-aries/#issuing-a-credential","title":"Issuing A Credential","text":"

    This section of the specification remains under consideration. The use of the credential supplement as currently described here is somewhat problematic for a number of reasons.

    We are currently investigating if an OCA Bundle can be published to the same VDR as holds an AnonCreds Schema or Credential Definition. We think that would overcome each of those concerns and make it easier to both publish and retrieve OCA Bundles.

    The currently preferred mechanism for an issuer to provide an OCA Bundle to a holder is when issuing a credential using RFC0453 Issue Credential, version 2.2 or later, the issuer provides, in the credential offer message, an OCA Bundle as a credential supplement.

    The OCA Bundle attachment must be signed by the issuer so that if the holder passes the OCA Bundle on to the verifier, the verifier can be certain that the issuer provided the OCA Bundle, and that it was not created by a malicious holder.

    Issuers should be aware that to ensure that the signature on a linked OCA Bundle (using the attachment type link) remains verifiable, the content resolved by the link must not change over time. For example, an Issuer might publish their OCA Bundles in a public GitHub repository, and send a link to the OCA Bundle during issuance. In that case the Issuer is advised to send a commit-based GitHub URL, rather than a branch-based reference. The Issuer may update the OCA Bundle sent to different holders over time, but once issued, each OCA Bundle MUST remain accessible.
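A simple heuristic for this "commit-pinned, not branch-based" check can be sketched as below. This is an illustrative sketch, not part of the RFC: the assumed URL layout is `https://raw.githubusercontent.com/<org>/<repo>/<ref>/<path>`, and a ref is treated as immutable only when it is a 40-hex-character commit SHA:

```python
import re

def is_commit_pinned(url: str) -> bool:
    """Heuristic: is a raw.githubusercontent.com URL pinned to a commit SHA
    (immutable) rather than a branch name (mutable)?"""
    m = re.match(
        r"https://raw\.githubusercontent\.com/[^/]+/[^/]+/([^/]+)/", url)
    return bool(m) and re.fullmatch(r"[0-9a-f]{40}", m.group(1)) is not None

assert is_commit_pinned(
    "https://raw.githubusercontent.com/org/repo/"
    "0123456789abcdef0123456789abcdef01234567/bundle.json")
assert not is_commit_pinned(
    "https://raw.githubusercontent.com/org/repo/main/bundle.json")
```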

    "},{"location":"features/0755-oca-for-aries/#warning-external-attachments","title":"Warning: External Attachments","text":"

    The use of an attachment of type link for the OCA Bundle itself, or the use of external references to the images in the branding Overlay could provide malicious issuers with a mechanism for tracking the use of a holder's verifiable credential. Specifically, the issuer could:

    A holder MAY choose not to attach an OCA Bundle to a verifier if it contains any external references. Non-malicious issuers are encouraged to not use external references in their OCA Bundles and as such, to minimize the inlined images in the branding overlay.

    "},{"location":"features/0755-oca-for-aries/#holder-activities","title":"Holder Activities","text":"

    Before processing a credential and an associated OCA Bundle, the holder SHOULD determine if the issuer is known in an ecosystem and has a sufficiently positive reputation. For example, the holder might determine if the issuer is in a suitable Trust Registry or request a presentation from the issuer about their identity.

    On receipt of a credential with an OCA Bundle supplement, the holder SHOULD retrieve the OCA Bundle attachment, verify that the signature is from the issuer's public DID, verify the signature, and verify that the OCA Capture Base is for the credential being offered or issued to the holder. If verified, the holder should associate the OCA Bundle with the credential, including the signature.
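Those holder-side checks can be sketched as follows. This is illustrative only: `verify_signature` stands in for the wallet's DID-resolution and signature-verification machinery, and the `capture_base`/`schema_id` lookup is a hypothetical way of matching the bundle to the credential; the real matching rules depend on the OCA Bundle contents as defined in RFC0755.

```python
import base64
import json

def accept_oca_supplement(attachment, credential, issuer_did, verify_signature):
    """Sketch of the holder's checks on an OCA Bundle supplement."""
    raw = base64.b64decode(attachment["data"]["base64"])
    # 1. Verify the signature is from the issuer's public DID
    #    (verify_signature is a hypothetical wallet-provided callable).
    if not verify_signature(raw, attachment["signature"], issuer_did):
        return None
    bundle = json.loads(raw)
    # 2. Verify the capture base is for this credential (schema_id here is a
    #    hypothetical field used purely for illustration).
    if bundle.get("capture_base", {}).get("schema_id") != credential.get("schema_id"):
        return None
    # 3. Associate the bundle and the signature with the stored credential.
    return {"credential": credential, "oca_bundle": bundle,
            "signature": attachment["signature"]}
```

If any check fails, the bundle is simply discarded and the credential is displayed using fallbacks.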

    The holder SHOULD take appropriate security precautions in handling the remainder of the OCA data, especially the images as they could contain a malicious payload. The security risk is comparable to a browser receiving a web page containing images.

    Holder software should be implemented to use the OCA Bundle when processing and displaying the credential as noted in the list below. Developers of holder software should be familiar with the overlays the issuer is likely to provide (see list here) and how to use them according to RFC0756 OCA for Aries Style Guide.

    A recommended tactic when adding OCA support to a holder is, when a credential is issued without an associated OCA Bundle, to generate an OCA Bundle for the credential using the information available about the type of the credential, default images, and randomly generated colors. That allows for the creation of screens that assume an OCA Bundle is available. The RFC0756 OCA for Aries Style Guide contains guidelines for doing that.

    "},{"location":"features/0755-oca-for-aries/#adding-oca-bundles-to-present-proof-messages","title":"Adding OCA Bundles to Present Proof Messages","text":"

    Once a holder has an OCA Bundle that was issued with a credential, it MAY pass the OCA Bundle to a verifier when presenting a proof that includes claims from that credential. This can be done via the present proof credential supplements approach, similar to what was used when the credential was issued to the holder. When constructing the present_proof message containing a proof, the holder would iterate through the credentials in the proof and, if there is an issuer-supplied OCA Bundle for a credential, add the OCA Bundle as a supplement to the message. The signature from the Issuer MUST be included with the supplement.
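A minimal sketch of that iteration, assuming a supplements-and-attachments message shape in the style of RFC0453 credential supplements (the supplement `type` value, the attachment key, and all helper names here are illustrative, not normative):

```python
import base64
import json

def add_oca_supplements(message, credentials, bundles_by_cred_id):
    """Attach issuer-supplied OCA Bundles (with issuer signatures) to a
    present_proof message being built. Illustrative sketch only."""
    message.setdefault("supplements", [])
    message.setdefault("~attach", [])
    for cred in credentials:
        entry = bundles_by_cred_id.get(cred["cred_id"])
        if entry is None:
            continue  # no issuer-supplied OCA Bundle for this credential
        attach_id = f"oca-{cred['cred_id']}"
        payload = base64.b64encode(json.dumps(entry["bundle"]).encode()).decode()
        message["supplements"].append({
            "type": "issuer-credential",  # hypothetical supplement type
            "ref": attach_id,
            # The issuer's signature MUST travel with the supplement.
            "attrs": [{"key": "signature", "value": entry["signature"]}],
        })
        message["~attach"].append({
            "@id": attach_id,
            "mime-type": "application/json",
            "data": {"base64": payload},
        })
    return message
```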

    A holder SHOULD NOT send an OCA Bundle to a verifier if the OCA Bundle is a link, or if any of the data items in the OCA Bundle are links, as noted in the warning about external attachments in OCA Bundles.

    "},{"location":"features/0755-oca-for-aries/#verifier-activities","title":"Verifier Activities","text":"

    On receipt of a presentation with OCA Bundle supplements, the verifier SHOULD retrieve the OCA Bundle attachments, verify that the signatures are from the credential issuers' public DIDs, verify the signatures, and verify that the OCA Capture Base is for the credentials being presented to the verifier. If verified, the verifier should associate each OCA Bundle with the source credential from the presentation.

    On receipt of a presentation with OCA Bundle supplements, the verifier MAY process the OCA Bundle attachment and verify the issuer's signature. If it verifies, the verifier should associate the OCA Bundle with the source credential from the presentation. The verifier SHOULD take appropriate security precautions in handling the data, especially the images. The verifier software should be implemented to use the OCA Bundle when processing and displaying the credential as noted in the list below.

    Developers of verifier software should be familiar with the overlays the issuer is likely to provide (see list here) and how to use them according to RFC0756 OCA for Aries Style Guide. The list of how to use the OCA Bundle as a holder applies equally to verifiers.

    "},{"location":"features/0755-oca-for-aries/#reference","title":"Reference","text":""},{"location":"features/0755-oca-for-aries/#drawbacks","title":"Drawbacks","text":""},{"location":"features/0755-oca-for-aries/#rationale-and-alternatives","title":"Rationale and alternatives","text":""},{"location":"features/0755-oca-for-aries/#prior-art","title":"Prior art","text":"

    None, as far as we are aware.

    "},{"location":"features/0755-oca-for-aries/#unresolved-questions","title":"Unresolved questions","text":""},{"location":"features/0755-oca-for-aries/#implementations","title":"Implementations","text":"

    The following lists the implementations (if any) of this RFC. Please do a pull request to add your implementation. If the implementation is open source, include a link to the repo or to the implementation within the repo. Please be consistent in the \"Name\" field so that a mechanical processing of the RFCs can generate a list of all RFCs supported by an Aries implementation.

    Implementation Notes may need to include a link to test results.

    Name / Link Implementation Notes"},{"location":"features/0756-oca-for-aries-style-guide/","title":"0756: OCA for Aries Style Guide","text":""},{"location":"features/0756-oca-for-aries-style-guide/#summary","title":"Summary","text":"

    Support for credential branding in Aries agents is provided by information from the issuer of a given credential type, using Overlays Capture Architecture (OCA) overlays. Aries agents (software) use the issuer-provided OCA data when displaying (rendering) the issuer\u2019s credential on screens. This style guide tells issuers what information to include in the OCA overlays and how those elements will be used by holders and verifiers. It also tells makers of Aries holder and verifier software how to use the OCA data provided by issuers for a given credential type. It is up to the software makers to use the OCA data provided by issuers as outlined in this guide.

    For more information about the use of OCA in Aries, please see RFC0755 OCA for Aries.

    "},{"location":"features/0756-oca-for-aries-style-guide/#motivation","title":"Motivation","text":"

    OCA Bundles are intended to be used by ALL Aries issuers and ALL Aries holders. Some Aries verifiers might also use OCA Bundles. This Style Guide provides guidance for issuers about what overlays to populate and with what information, and guidance for holders (and verifiers) about how to use the OCA Bundle data provided by issuers when rendering credentials on screen.

    Issuers, holders and verifiers expect other issuers, holders and verifiers to follow this Style Guide. Issuers, holders and verifiers not following this Style Guide will likely cause end users to see unpredictable and potentially \"unfriendly\" results when credentials are displayed.

    It is in the best interest of the Aries community as a whole for those writing Aries agent software to use OCA Bundles and to follow this Style Guide in displaying credentials.

    "},{"location":"features/0756-oca-for-aries-style-guide/#tutorial","title":"Tutorial","text":"

    Before reviewing this Style Guide, please review and be familiar with RFC0755 OCA for Aries. It provides the technical details about OCA, the issuer's role in creating an OCA Bundle and delivering it to holders (and, optionally, its passing from holders to verifiers), and the holder's role in extracting information from the OCA Bundle about a held credential. This Style Guide provides the details about what each participant is expected to do in creating OCA Bundles and in using the data in OCA Bundles to render credentials on screen.

    "},{"location":"features/0756-oca-for-aries-style-guide/#oca-for-aries-style-guide","title":"OCA for Aries Style Guide","text":"

    A Credential User Interface (UI) pulls the following elements from an issuer-provided OCA Bundle:

    "},{"location":"features/0756-oca-for-aries-style-guide/#credential-layouts","title":"Credential Layouts","text":"

    This style guide defines three layouts for credentials: the credential list layout, the stacked list layout, and the single credential layout. Holders and verifiers SHOULD display credentials using only these layouts, in the context of a screen containing either a list of credentials or a single credential. Holders and verifiers MAY display other relevant information on the page along with one of the layouts.

    The stacked list layout is the same as the credential list layout, with the stacked credentials cut off between elements 6 and 7. Examples of the stacked layout can be seen in the Stacking section of this document. In the stacked layout, one of the credentials in the stack may be displayed using the full credential list layout.

    Credential List Layout Single Credential Layout

    Figure: Credential Layouts

    The numbered items in the layouts are as follows. In the list, the OCA data element(s) is given first and, where the needed data element(s) is not available through an OCA Bundle, a calculation for a fallback is defined. It is good practice to have code that populates a per-credential data structure with data from the credential\u2019s OCA Bundle if available, and with the fallbacks if not. That way, credentials are displayed in the same way with or without a per-credential OCA Bundle. Unless noted, all of the data elements come from the \u201cbranding\u201d overlay. Items 10 and 11 are not included in the layouts but are listed to document the fallbacks for those values.

    1. logo
      • Fallback: First letter of the alias of the DIDComm connection
    2. background_image_slice if present, else secondary_background_color
      • Fallback: Black overlay at 24% opacity
    3. primary_background_color
      • Fallback: Randomly generated color
    4. Credential Status derived from revocation status and expiry date (if available)
      • Fallback: Empty
    5. Meta overlay item issuer_name
      • Fallback: Alias of the DIDComm connection
    6. Meta overlay item name
      • Fallback: The AnonCreds Credential Definition tag, unless the value is either credential or default, otherwise the AnonCreds schema_name attribute from the AnonCreds schema
    7. primary_attribute
      • Fallback: Empty
    8. secondary_attribute
      • Fallback: Empty
    9. background_image if present, else secondary_background_color
      • Fallback: Black overlay at 24% opacity (default)
    10. issued_date_attribute
      • Fallback: If tracked, the date the credential was received by the Holder, else empty.
    11. expiry_date_attribute
      • Fallback: Empty
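The populate-with-fallbacks practice described above might look like the following sketch for items 1, 3, 5 and 6, where the `oca` dict keys mirror the overlay item names (the exact lookup API depends on the OCA library in use, so every name here is illustrative):

```python
import random
from dataclasses import dataclass

@dataclass
class CredentialDisplay:
    logo: str
    primary_background_color: str
    issuer_name: str
    name: str

def display_data(oca: dict, connection_alias: str,
                 cred_def_tag: str, schema_name: str) -> CredentialDisplay:
    """Build per-credential display data, applying the fallbacks listed above
    whenever an overlay value is missing."""
    branding = oca.get("branding", {})
    meta = oca.get("meta", {})
    # Item 6 fallback: cred def tag, unless it is "credential" or "default".
    fallback_name = (cred_def_tag if cred_def_tag not in ("credential", "default")
                     else schema_name)
    return CredentialDisplay(
        logo=branding.get("logo") or connection_alias[:1],        # item 1
        primary_background_color=branding.get("primary_background_color")
            or "#%06x" % random.randrange(0x1000000),             # item 3
        issuer_name=meta.get("issuer_name") or connection_alias,  # item 5
        name=meta.get("name") or fallback_name,                   # item 6
    )
```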

    Figure: Template layers

    The font color is either black or white, as determined by calculating contrast levels (following Web Content Accessibility Guidelines) against the background colors from either the OCA Bundle or the generated defaults.
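A sketch of that black-or-white choice using the WCAG 2.x relative luminance and contrast ratio formulas:

```python
def _relative_luminance(rgb):
    # WCAG 2.x relative luminance for an sRGB color given as 0-255 channels.
    def channel(c):
        c = c / 255
        return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4
    r, g, b = (channel(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(fg, bg):
    # WCAG contrast ratio: (lighter + 0.05) / (darker + 0.05), range 1..21.
    l1, l2 = sorted((_relative_luminance(fg), _relative_luminance(bg)), reverse=True)
    return (l1 + 0.05) / (l2 + 0.05)

def font_color(background):
    # Pick black or white text, whichever contrasts more with the background.
    black, white = (0, 0, 0), (255, 255, 255)
    if contrast_ratio(black, background) >= contrast_ratio(white, background):
        return "black"
    return "white"
```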

    Figure: example of a credential with no override specifications

    "},{"location":"features/0756-oca-for-aries-style-guide/#logo-image-specifications","title":"Logo Image Specifications","text":"

    The image in the top left corner is a space for the issuer logo and should not be used for anything else. The logo image may be masked to fit within a rounded square with varying corner radii; thus, the logo must be a square image (aspect ratio 1:1), as noted in the table below. The background defaults to white, so logo files with a transparent background will be overlaid on a white background.

    The following are the specifications for the credential logo for issuers.

    Images should be as small as possible while balancing quality and download speed. To ensure image quality on all devices, it is recommended to use vector-based file types such as SVG.

    Preferred file type: SVG, JPG, PNG with transparent background. Aspect ratio: 1:1. Recommended image size: 240x240 px. Color space: RGB"},{"location":"features/0756-oca-for-aries-style-guide/#background-image-slice-specifications","title":"Background Image Slice Specifications","text":"

    For issuers to better represent their brand, issuers may specify an image slice that will be used as outlined in the samples below. Note the use of the image in a long, narrow space and the dynamic height. The image slice will be top aligned, scaled (preserving aspect ratio) and cropped as needed to fill the space.

    Credential height depends on the content and can be unpredictable. Different languages (English, French, etc.) will lengthen names, and OS-level settings such as font changes or text enlargement will unpredictably change the height of the credential. The recommended image size below is suggested to accommodate most situations. Note that since the image is top aligned, the top area of the image is certain to be displayed, while the bottom section of the image may not always be visible.

    Figure: Examples of the image slice behavior

    Types of images best used in this area are abstract images or graphical art. Do not use images that are difficult to interpret when cropped.

    Do: Use an abstract image that can work even when cropped unexpectedly.

    Don\u2019t: Use images that are hard to interpret when cropped. Avoid words.

    Figure: Background image slice Do\u2019s and Don\u2019ts

    Preferred file type: SVG, PNG, JPG. Aspect ratio: 1:10. Recommended image size: 120x1200 px. Color space: RGB"},{"location":"features/0756-oca-for-aries-style-guide/#background-image-specifications","title":"Background Image Specifications","text":"

    The background image is to give issuers more opportunities to represent their brand and is used in some credential display screens. Avoid text in the background image.

    Do: Use an image that represents your brand.

    Don\u2019t: Use this image as a marketing platform. Avoid the use of text.

    Figure: Background image Do\u2019s and Don\u2019ts

    Preferred file type: SVG, PNG, JPG. Aspect ratio: 3:1. Recommended image size: 1080x360 px. Color space: RGB"},{"location":"features/0756-oca-for-aries-style-guide/#credential-status","title":"Credential Status","text":"

    To reduce visual clutter, the issued date (if present), expiry date (if present), and revocation status (if applicable) may be represented by an icon at the top right corner.

    Figure: An example demonstrating how the revocation date, expiry date or issued date may be represented.

    How the issued date, expiry date and revocation status are represented may depend on the holder software, such as a wallet. For example, the specific icons used may vary by wallet, or the full status data may be printed over the credential.

    "},{"location":"features/0756-oca-for-aries-style-guide/#credential-name-and-issuer-name-guidelines","title":"Credential name and Issuer name guidelines","text":"

    Issuers should be mindful of the length of text on the credential as lengthy text will dynamically change the height of the credential. Expansive credentials risk reducing the number of fully visible credentials in a list.

    Figure: An example demonstrating how lengthy credentials can limit the number of visible credentials.

    Be mindful of other factors that may increase the length of text and hence, the height of the credential such as translated languages or the font size configured at the OS level.

    Figure: Examples showing the treatment of lengthy names

    "},{"location":"features/0756-oca-for-aries-style-guide/#primary-and-secondary-attribute-guidelines","title":"Primary and Secondary Attribute Guidelines","text":"

    If issuers expect people to hold multiples of their credentials of the same type, they may want to specify a primary and secondary attribute to display on the card face.

    Note that wallet builders or holders may limit the visibility of the primary and secondary attributes on the card face to mitigate privacy concerns. Issuers can expect that these attributes may be fully visible, redacted, or hidden.

    To limit personal information from being displayed on a card face, only specify what is absolutely necessary for wallet holders to differentiate between credentials of the same type. Do not display private information such as medical related attributes.

    Do: Use attributes that help users identify their credentials. Always consider whether a primary and secondary attribute is absolutely necessary.

    Don\u2019t: Display attributes that contain private information.

    Figure: Primary/secondary attribute Do\u2019s and Don\u2019ts

    "},{"location":"features/0756-oca-for-aries-style-guide/#non-production-watermark","title":"Non-production watermark","text":"

    To identify non-production credentials, issuers can add a watermark to their credentials. The watermark is a simple line of text that can be customized depending on the issuer's needs. The line of text also appears as a prefix to the credential name and should be succinct to ensure legibility. The watermark is not intended to be used for any purpose other than marking non-production credentials. Ensure proper localization of the watermark in all languages.

    Example text includes:

    Do: Use succinct words to describe the type of issued credential. This ensures legibility and does not increase the size of the credential unnecessarily.

    Don\u2019t: Use long words or words that do not describe non-production credentials."},{"location":"features/0756-oca-for-aries-style-guide/#credential-resizing","title":"Credential resizing","text":"

    Credential size depends on the content of the credential and the size of the device. Text areas are resized according to the width.

    Figure: Treatment of the credential template on different devices

    Figure: An example of credential on different devices

    "},{"location":"features/0756-oca-for-aries-style-guide/#stacking","title":"Stacking","text":"

    Credentials may be stacked to overlap each other to increase the number of visible credentials in the viewport. The header remains unchanged. The issuer name, logo and credential name will always be visible but the primary and secondary attributes and the image slice will be obscured.

    Figure: An example of stacked credentials with default and enlarged text.

    "},{"location":"features/0756-oca-for-aries-style-guide/#accessibility","title":"Accessibility","text":"

    The alt-tags for the logo and background images come from the multilingual OCA Meta Overlay for the issuer name and credential type name.

    "},{"location":"features/0756-oca-for-aries-style-guide/#more-variations","title":"More Variations","text":"

    To view more credential variations using this template, view the Adobe XD file.

    "},{"location":"features/0756-oca-for-aries-style-guide/#drawbacks","title":"Drawbacks","text":"

    Defining and requesting adherence to a style guide is a lofty goal. With so many independent issuers, holders and verifiers using Aries, it is a challenge to get everyone to agree on a single way to display credentials for users. However, the alternative of everyone \"doing their own thing\", perhaps in small groups, will result in a poor experience for users, and be frustrating to both issuers trying to convey their brand, and holders (and verifiers) trying to create a beautiful experience for their users.

    "},{"location":"features/0756-oca-for-aries-style-guide/#rationale-and-alternatives","title":"Rationale and alternatives","text":"

    In coming up with this Style Guide, we considered how much control to give issuers, ultimately deciding that giving them too much control (e.g., pixel-precise layout of their credential) creates a usage/privacy risk (people using their credentials by showing them on screen, with all private data showing), is technically extremely difficult given the variations in holder devices, and is likely to result in a very poor user experience.

    A user experience group in Canada came up with the core design, and the Aries Working Group reviewed and approved of the Style Guide.

    "},{"location":"features/0756-oca-for-aries-style-guide/#prior-art","title":"Prior art","text":"

    The basic concept of giving issuers a small set of parameters they can control in branding their data is used in many applications and communities. Relevant to the credential use case is the application of this concept in the Apple Wallet and Google Wallet. Core to this is setting the expectations of all participants about how their data will be used and how to use the data provided. The Aries holder (and verifier) case differs from that of the Apple Wallet and Google Wallet in that there is not just one holder using the data from many issuers to render it on screen, but many holders that are all expected to adhere to this Style Guide.

    "},{"location":"features/0756-oca-for-aries-style-guide/#unresolved-questions","title":"Unresolved questions","text":""},{"location":"features/0756-oca-for-aries-style-guide/#implementations","title":"Implementations","text":"

    The following lists the implementations (if any) of this RFC. Please do a pull request to add your implementation. If the implementation is open source, include a link to the repo or to the implementation within the repo. Please be consistent in the \"Name\" field so that a mechanical processing of the RFCs can generate a list of all RFCs supported by an Aries implementation.

    Implementation Notes may need to include a link to test results.

    Name / Link Implementation Notes"},{"location":"features/0771-anoncreds-attachments/","title":"Aries RFC 0771: AnonCreds Attachment Formats for Requesting and Presenting Credentials","text":""},{"location":"features/0771-anoncreds-attachments/#summary","title":"Summary","text":"

    This RFC registers attachment formats used with Hyperledger AnonCreds ZKP-oriented credentials in the Issue Credential Protocol 2.0 and Present Proof Protocol 2.0. If not specified otherwise, this follows the rules as defined in the AnonCreds Specification.

    "},{"location":"features/0771-anoncreds-attachments/#motivation","title":"Motivation","text":"

    Allows AnonCreds credentials to be used with credential-related protocols that take pluggable formats as payloads.

    "},{"location":"features/0771-anoncreds-attachments/#reference","title":"Reference","text":""},{"location":"features/0771-anoncreds-attachments/#credential-filter-format","title":"Credential Filter format","text":"

    The potential holder uses this format to propose criteria for a potential credential for the issuer to offer. The format defined here is not part of the AnonCreds spec, but is a Hyperledger Aries-specific message.

    The identifier for this format is anoncreds/credential-filter@v1.0. The data structure allows specifying zero or more criteria from the following structure:

    {\n  \"schema_issuer_id\": \"<schema_issuer_id>\",\n  \"schema_name\": \"<schema_name>\",\n  \"schema_version\": \"<schema_version>\",\n  \"schema_id\": \"<schema_identifier>\",\n  \"issuer_id\": \"<issuer_id>\",\n  \"cred_def_id\": \"<credential_definition_identifier>\"\n}\n

    The potential holder may not know, and need not specify, all of these criteria. For example, the holder might only know the schema name and the (credential) issuer id. Recall that the potential holder may specify target attribute values and MIME types in the credential preview.

    For example, the JSON structure might look like this:

    {\n  \"schema_issuer_id\": \"did:sov:4RW6QK2HZhHxa2tg7t1jqt\",\n  \"schema_name\": \"bcgov-mines-act-permit.bcgov-mines-permitting\",\n  \"issuer_id\": \"did:sov:4RW6QK2HZhHxa2tg7t1jqt\"\n}\n

    A complete propose-credential message from the Issue Credential protocol 2.0 embeds this format as an attachment in the filters~attach array:

    {\n  \"@id\": \"<uuid of propose message>\",\n  \"@type\": \"https://didcomm.org/issue-credential/%VER/propose-credential\",\n  \"comment\": \"<some comment>\",\n  \"formats\": [\n    {\n      \"attach_id\": \"<attach@id value>\",\n      \"format\": \"anoncreds/credential-filter@v1.0\"\n    }\n  ],\n  \"filters~attach\": [\n    {\n      \"@id\": \"<attach@id value>\",\n      \"mime-type\": \"application/json\",\n      \"data\": {\n        \"base64\": \"ewogICAgInNjaGVtYV9pc3N1ZXJfZGlkIjogImRpZDpzb3Y... (clipped)... LMkhaaEh4YTJ0Zzd0MWpxdCIKfQ==\"\n      }\n    }\n  ]\n}\n
    "},{"location":"features/0771-anoncreds-attachments/#credential-offer-format","title":"Credential Offer format","text":"

    This format is used to clarify the structure and semantics (but not the concrete data values) of a potential credential, in offers sent from issuer to potential holder.

    The identifier for this format is anoncreds/credential-offer@v1.0. It must follow the structure of a Credential Offer as defined in the AnonCreds specification.

    The JSON structure might look like this:

    {\n    \"schema_id\": \"4RW6QK2HZhHxa2tg7t1jqt:2:bcgov-mines-act-permit.bcgov-mines-permitting:0.2.0\",\n    \"cred_def_id\": \"4RW6QK2HZhHxa2tg7t1jqt:3:CL:58160:default\",\n    \"nonce\": \"57a62300-fbe2-4f08-ace0-6c329c5210e1\",\n    \"key_correctness_proof\" : <key_correctness_proof>\n}\n

    A complete offer-credential message from the Issue Credential protocol 2.0 embeds this format as an attachment in the offers~attach array:

    {\n    \"@type\": \"https://didcomm.org/issue-credential/%VER/offer-credential\",\n    \"@id\": \"<uuid of offer message>\",\n    \"replacement_id\": \"<issuer unique id>\",\n    \"comment\": \"<some comment>\",\n    \"credential_preview\": <json-ld object>,\n    \"formats\" : [\n        {\n            \"attach_id\" : \"<attach@id value>\",\n            \"format\": \"anoncreds/credential-offer@v1.0\"\n        }\n    ],\n    \"offers~attach\": [\n        {\n            \"@id\": \"<attach@id value>\",\n            \"mime-type\": \"application/json\",\n            \"data\": {\n                \"base64\": \"ewogICAgInNjaGVtYV9pZCI6ICI0Ulc2UUsySFpoS... (clipped)... jb3JyZWN0bmVzc19wcm9vZj4KfQ==\"\n            }\n        }\n    ]\n}\n
    "},{"location":"features/0771-anoncreds-attachments/#credential-request-format","title":"Credential Request format","text":"

    This format is used to formally request a credential. It differs from the Credential Offer above in that it contains a cryptographic commitment to a link secret; an issuer can therefore use it to bind a concrete instance of an issued credential to the appropriate holder. (In contrast, the credential offer describes the schema and cred definition, but not enough information to actually issue to a specific holder.)

    The identifier for this format is anoncreds/credential-request@v1.0. It must follow the structure of a Credential Request as defined in the AnonCreds specification.

    The JSON structure might look like this:

    {\n    \"entropy\" : \"e7bc23ad-1ac8-4dbc-92dd-292ec80c7b77\",\n    \"cred_def_id\" : \"4RW6QK2HZhHxa2tg7t1jqt:3:CL:58160:default\",\n    // Fields below can depend on Cred Def type\n    \"blinded_ms\" : <blinded_master_secret>,\n    \"blinded_ms_correctness_proof\" : <blinded_ms_correctness_proof>,\n    \"nonce\": \"fbe22300-57a6-4f08-ace0-9c5210e16c32\"\n}\n

    A complete request-credential message from the Issue Credential protocol 2.0 embeds this format as an attachment in the requests~attach array:

    {\n  \"@id\": \"cf3a9301-6d4a-430f-ae02-b4a79ddc9706\",\n  \"@type\": \"https://didcomm.org/issue-credential/%VER/request-credential\",\n  \"comment\": \"<some comment>\",\n  \"formats\": [\n    {\n      \"attach_id\": \"7cd11894-838a-45c0-a9ec-13e2d9d125a1\",\n      \"format\": \"anoncreds/credential-request@v1.0\"\n    }\n  ],\n  \"requests~attach\": [\n    {\n      \"@id\": \"7cd11894-838a-45c0-a9ec-13e2d9d125a1\",\n      \"mime-type\": \"application/json\",\n      \"data\": {\n        \"base64\": \"ewogICAgInByb3Zlcl9kaWQiIDogImRpZDpzb3Y6YWJjeHl.. (clipped)... DAtNTdhNi00ZjA4LWFjZTAtOWM1MjEwZTE2YzMyIgp9\"\n      }\n    }\n  ]\n}\n
    "},{"location":"features/0771-anoncreds-attachments/#credential-format","title":"Credential format","text":"

    A concrete, issued AnonCreds credential may be transmitted over many protocols, but is specifically expected as the final message in the Issue Credential Protocol 2.0. The identifier for this format is anoncreds/credential@v1.0.

    This is a credential that's designed to be held but not shared directly. It is stored in the holder's wallet and used to derive a novel ZKP or W3C-compatible verifiable presentation just in time for each sharing of credential material.

    The encoded values of the credential MUST follow the encoding algorithm as described in Encoding Attribute Data. It must follow the structure of a Credential as defined in the AnonCreds specification.

    The JSON structure might look like this:

    {\n    \"schema_id\": \"4RW6QK2HZhHxa2tg7t1jqt:2:bcgov-mines-act-permit.bcgov-mines-permitting:0.2.0\",\n    \"cred_def_id\": \"4RW6QK2HZhHxa2tg7t1jqt:3:CL:58160:default\",\n    \"rev_reg_id\", \"EyN78DDGHyok8qw6W96UBY:4:EyN78DDGHyok8qw6W96UBY:3:CL:56389:CardossierOrgPerson:CL_ACCUM:1-1000\",\n    \"values\": {\n        \"attr1\" : {\"raw\": \"value1\", \"encoded\": \"value1_as_int\" },\n        \"attr2\" : {\"raw\": \"value2\", \"encoded\": \"value2_as_int\" }\n    },\n    // Fields below can depend on Cred Def type\n    \"signature\": <signature>,\n    \"signature_correctness_proof\": <signature_correctness_proof>\n    \"rev_reg\": <revocation registry state>\n    \"witness\": <witness>\n}\n

    An exhaustive description of the format is out of scope here; it is more completely documented in the AnonCreds Specification.

    "},{"location":"features/0771-anoncreds-attachments/#proof-request-format","title":"Proof Request format","text":"

    This format is used to formally request a verifiable presentation (proof) derived from an AnonCreds-style ZKP-oriented credential.

    The format can also be used to propose a presentation, in which case the nonce field MUST NOT be provided. The nonce field is required when the proof request is used to request a proof.

    The identifier for this format is anoncreds/proof-request@v1.0. It must follow the structure of a Proof as defined in the AnonCreds specification.

    Here is a sample proof request that embodies the following: \"Using a government-issued ID, disclose the credential holder\u2019s name and height, hide the credential holder\u2019s sex, get them to self-attest their phone number, and prove that their age is at least 18\":

    {\n    \"nonce\": \"2934823091873049823740198370q23984710239847\",\n    \"name\":\"proof_req_1\",\n    \"version\":\"0.1\",\n    \"requested_attributes\":{\n        \"attr1_referent\": {\"name\":\"sex\"},\n        \"attr2_referent\": {\"name\":\"phone\"},\n        \"attr3_referent\": {\"names\": [\"name\", \"height\"], \"restrictions\": <restrictions specifying government-issued ID>}\n    },\n    \"requested_predicates\":{\n        \"predicate1_referent\":{\"name\":\"age\",\"p_type\":\">=\",\"p_value\":18}\n    }\n}\n
    "},{"location":"features/0771-anoncreds-attachments/#proof-format","title":"Proof format","text":"

    This is the format of an AnonCreds-style ZKP. The raw values encoded in the presentation MUST be verified against the encoded values using the encoding algorithm as described in Encoding Attribute Data.
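The encoding algorithm referenced above (Encoding Attribute Data in the AnonCreds specification) is commonly implemented as: a value that is a 32-bit signed integer encodes as itself, and anything else encodes as the big-endian integer form of the SHA-256 hash of its UTF-8 string. The sketch below follows that common implementation; the specification text is normative.

```python
import hashlib

def encode_attribute(raw) -> str:
    """Encode a raw attribute value per the commonly used Aries/AnonCreds
    scheme (sketch, not normative)."""
    s = str(raw)
    try:
        i = int(s)
        # 32-bit signed integers encode as themselves.
        if -(2**31) <= i < 2**31:
            return str(i)
    except ValueError:
        pass
    # Everything else: SHA-256 of the UTF-8 string, as a big-endian integer.
    return str(int.from_bytes(hashlib.sha256(s.encode()).digest(), "big"))
```

A verifier recomputes this encoding from each revealed raw value and checks it against the encoded value carried in the proof.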

    The identifier for this format is anoncreds/proof@v1.0. It must follow the structure of a Presentation as defined in the AnonCreds specification.

    A proof that responds to the previous proof request sample looks like this:

    {\n  \"proof\":{\n    \"proofs\":[\n      {\n        \"primary_proof\":{\n          \"eq_proof\":{\n            \"revealed_attrs\":{\n              \"height\":\"175\",\n              \"name\":\"1139481716457488690172217916278103335\"\n            },\n            \"a_prime\":\"5817705...096889\",\n            \"e\":\"1270938...756380\",\n            \"v\":\"1138...39984052\",\n            \"m\":{\n              \"master_secret\":\"375275...0939395\",\n              \"sex\":\"3511483...897083518\",\n              \"age\":\"13430...63372249\"\n            },\n            \"m2\":\"1444497...2278453\"\n          },\n          \"ge_proofs\":[\n            {\n              \"u\":{\n                \"1\":\"152500...3999140\",\n                \"2\":\"147748...2005753\",\n                \"0\":\"8806...77968\",\n                \"3\":\"10403...8538260\"\n              },\n              \"r\":{\n                \"2\":\"15706...781609\",\n                \"3\":\"343...4378642\",\n                \"0\":\"59003...702140\",\n                \"DELTA\":\"9607...28201020\",\n                \"1\":\"180097...96766\"\n              },\n              \"mj\":\"134300...249\",\n              \"alpha\":\"827896...52261\",\n              \"t\":{\n                \"2\":\"7132...47794\",\n                \"3\":\"38051...27372\",\n                \"DELTA\":\"68025...508719\",\n                \"1\":\"32924...41082\",\n                \"0\":\"74906...07857\"\n              },\n              \"predicate\":{\n                \"attr_name\":\"age\",\n                \"p_type\":\"GE\",\n                \"value\":18\n              }\n            }\n          ]\n        },\n        \"non_revoc_proof\":null\n      }\n    ],\n    \"aggregated_proof\":{\n      \"c_hash\":\"108743...92564\",\n      \"c_list\":[ 6 arrays of 257 numbers between 0 and 255]\n    }\n  },\n  \"requested_proof\":{\n    \"revealed_attrs\":{\n      \"attr1_referent\":{\n        \"sub_proof_index\":0,\n        
\"raw\":\"Alex\",\n        \"encoded\":\"1139481716457488690172217916278103335\"\n      }\n    },\n    \"revealed_attr_groups\":{\n      \"attr4_referent\":{\n        \"sub_proof_index\":0,\n        \"values\":{\n          \"name\":{\n            \"raw\":\"Alex\",\n            \"encoded\":\"1139481716457488690172217916278103335\"\n          },\n          \"height\":{\n            \"raw\":\"175\",\n            \"encoded\":\"175\"\n          }\n        }\n      }\n    },\n    \"self_attested_attrs\":{\n      \"attr3_referent\":\"8-800-300\"\n    },\n    \"unrevealed_attrs\":{\n      \"attr2_referent\":{\n        \"sub_proof_index\":0\n      }\n    },\n    \"predicates\":{\n      \"predicate1_referent\":{\n        \"sub_proof_index\":0\n      }\n    }\n  },\n  \"identifiers\":[\n    {\n      \"schema_id\":\"NcYxiDXkpYi6ov5FcYDi1e:2:gvt:1.0\",\n      \"cred_def_id\":\"NcYxi...cYDi1e:2:gvt:1.0:TAG_1\",\n      \"rev_reg_id\":null,\n      \"timestamp\":null\n    }\n  ]\n}\n
    "},{"location":"features/0771-anoncreds-attachments/#implementations","title":"Implementations","text":"

    The following lists the implementations (if any) of this RFC. Please do a pull request to add your implementation. If the implementation is open source, include a link to the repo or to the implementation within the repo. Please be consistent in the \"Name\" field so that a mechanical processing of the RFCs can generate a list of all RFCs supported by an Aries implementation.

    Name / Link Implementation Notes"},{"location":"features/0780-data-urls-images/","title":"RFC 0780: Use Data URLs for Images and More in Credential Attributes","text":""},{"location":"features/0780-data-urls-images/#summary","title":"Summary","text":"

    Some credentials include attributes that are not simple strings or numbers, such as images or JSON data structures. When complex data is put in an attribute, the issuer SHOULD issue the attribute as a Data URL, as defined in IETF RFC 2397, and whose use is described in this Mozilla Developer Documentation article.

    On receipt of all credentials and presentations, holders and verifiers SHOULD check all string attributes to determine if they are Data URLs. If so, they SHOULD securely process the data according to the metadata information in the Data URL, including:

    This allows, for example, an Aries Mobile Wallet to detect that a data element is an image and how it is encoded, and display it for the user as an image, not as a long (long) string of gibberish.

    "},{"location":"features/0780-data-urls-images/#motivation","title":"Motivation","text":"

    Holders and verifiers want to enable a delightful user experience when an issuer issues attributes that contain something other than strings or numbers, such as an image or a JSON data structure. In such cases, the holder and verifiers need a way to know the format of the data so it can be processed appropriately and displayed usefully. While the Aries community encourages the use of the Overlays Capture Architecture specification as outlined in RFC 0755 OCA for Aries for such information, there will be times where an OCA Bundle is not available for a given credential. In the absence of an OCA Bundle, the holders and verifiers of such attributes need data type information for processing and displaying the attributes.

    "},{"location":"features/0780-data-urls-images/#tutorial","title":"Tutorial","text":"

    An issuer wants to issue a verifiable credential that contains an image, such as a photo of the holder to which the credential is issued. Issuing such an attribute is typically done by converting the image to a base64 string. This is handled by the various verifiable credential formats supported by Aries issuers. The challenge is to convey to the holder and verifiers that the attribute is not \"just another string\" that can be displayed on screen to the user. By making the attribute a Data URL, the holder and verifiers can detect the type and encoding of the attribute, process it, and display it correctly.

    For example, this image (from the IETF RFC 2397 specification):

    can be issued as the attribute photo in a verifiable credential with its value as a Data URL as follows:

    {\n\"photo\": \"data:image/png;base64,R0lGODdhMAAwAPAAAAAAAP///ywAAAAAMAAwAAAC8IyPqcvt3wCcDkiLc7C0qwyGHhSWpjQu5yqmCYsapyuvUUlvONmOZtfzgFzByTB10QgxOR0TqBQejhRNzOfkVJ+5YiUqrXF5Y5lKh/DeuNcP5yLWGsEbtLiOSpa/TPg7JpJHxyendzWTBfX0cxOnKPjgBzi4diinWGdkF8kjdfnycQZXZeYGejmJlZeGl9i2icVqaNVailT6F5iJ90m6mvuTS4OK05M0vDk0Q4XUtwvKOzrcd3iq9uisF81M1OIcR7lEewwcLp7tuNNkM3uNna3F2JQFo97Vriy/Xl4/f1cf5VWzXyym7PHhhx4dbgYKAAA7\"\n}\n

    The syntax of a Data URL is described in IETF RFC 2397. The simplified form is: data:[<mediatype>][;base64],<data>

    A holder or verifier receiving a credential or presentation MUST check whether each attribute is a string and, if so, whether it is a Data URL (likely by using a regular expression). If it is a Data URL, it SHOULD be securely processed accordingly.
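    The detection and parsing described above can be sketched as follows. This is a minimal illustration, not from the RFC: the regular expression covers only the simple Data URL form (it does not handle mediatype parameters such as charset), and the function name is illustrative.

```python
import base64
import re

# Minimal Data URL pattern per IETF RFC 2397: data:[<mediatype>][;base64],<data>
DATA_URL_RE = re.compile(r"^data:(?P<mediatype>[^;,]+)?(?P<b64>;base64)?,(?P<data>.*)$", re.DOTALL)

def parse_data_url(value: str):
    """Return (mediatype, data bytes) if value is a Data URL, else None."""
    match = DATA_URL_RE.match(value)
    if not match:
        return None  # an ordinary string attribute, display as-is
    # RFC 2397: an omitted mediatype defaults to text/plain
    mediatype = match.group("mediatype") or "text/plain"
    raw = match.group("data")
    data = base64.b64decode(raw) if match.group("b64") else raw.encode()
    return mediatype, data
```

    A holder would call this on each string attribute and, when it returns a mediatype such as image/png, render the bytes as an image rather than showing the raw string.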

    Aries Data URL verifiable credential attributes MUST include the <MIME type>.

    "},{"location":"features/0780-data-urls-images/#image-size","title":"Image Size","text":"

    A separate issue from the use of Data URLs is how large an image (or other data type) can be put into an attribute and issued as a verifiable credential. That is an issue that is dependent on the verifiable credential implementation and other factors. For AnonCreds credentials, the attribute will be treated as a string, a hash will be calculated over the string, and the resulting number will be signed--just as for any string. The size of the image does not matter. However, there may be other components in your deployment that might impact how big an attribute in a credential can be. Many in the community have successfully experimented with the use of images in credentials, so consulting others on the question might be helpful.

    For the purpose of this RFC, the amount of data in the attribute is not relevant.
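    The claim that size does not matter follows from the encoding step: the attribute string is reduced to a fixed-size number before signing. A common convention used by some Aries/AnonCreds implementations is sketched below; the function name and the exact bounds check are illustrative, not normative.

```python
import hashlib

I32_BOUND = 2**31  # values in (-2^31, 2^31) commonly pass through unchanged

def encode_attribute(raw: str) -> str:
    """Encode a raw attribute string as the decimal string that gets signed."""
    try:
        i = int(raw)
        if -I32_BOUND <= i < I32_BOUND:
            return str(i)  # small integers are signed as-is
    except (ValueError, TypeError):
        pass
    # Everything else (including a large image as a Data URL) is hashed to a
    # fixed-size integer, so the original size is irrelevant to the signature.
    return str(int.from_bytes(hashlib.sha256(raw.encode()).digest(), "big"))
```

    Note the consequence for verification: a verifier sees only the hash-derived number in the cryptographic proof, so the raw value must be revealed alongside it and re-encoded to check the match.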

    "},{"location":"features/0780-data-urls-images/#security","title":"Security","text":"

    As noted in this Mozilla Developer Documentation and this Mozilla Security Blog Post about Data URLs, Data URLs are blocked from being used in the Address Bar of all major browsers. That is because Data URLs may contain HTML that can contain anything, including HTML forms that collect data from users. Since Aries holder and verifier agents are not general purpose content presentation engines (as are browsers), the use of Data URLs is less of a security risk. Regardless, holders and verifiers MUST limit their processing of attributes containing Data URLs to displaying the data, and not executing the data. Further, Aries holders and verifiers MUST stay up to date on dependency vulnerabilities, such as images constructed to exploit vulnerabilities in libraries that display images.

    "},{"location":"features/0780-data-urls-images/#reference","title":"Reference","text":"

    References for implementing this RFC are:

    "},{"location":"features/0780-data-urls-images/#drawbacks","title":"Drawbacks","text":"

    The Aries community is moving to the use of the Overlays Capture Architecture specification to provide a more generalized way to accomplish the same thing (understanding the meaning, format and encoding of attributes), so this RFC is duplicating a part of that capability. That said, it is easier and faster for issuers to start using, and for holders and verifiers to detect and use.

    Issuers may choose to issue Data URLs with MIME types not commonly known to Aries holder and verifier components. In such cases, the holder or verifier MUST NOT display the data.

    Even if the MIME type of the data is known to the holders and verifiers, it may not be obvious how to present the data on screen in a useful way. For example, an attribute holding a JSON data structure with an array of values may not easily be displayed.

    "},{"location":"features/0780-data-urls-images/#rationale-and-alternatives","title":"Rationale and alternatives","text":"

    We considered using the same approach as is used in RFC 0441 Present Proof Best Practices of a special suffix (_img) for the attribute name in a credential to indicate that the attribute held an image. However, that provides far less information than this approach (e.g., what type of image?), and its use is limited to images. This RFC defines a far more complete, standard, and useful approach.

    As noted in the drawbacks section, this same functionality can (and should) be achieved with the broad deployment of the Overlays Capture Architecture specification and RFC 0755 OCA for Aries. However, the full deployment of RFC 0755 OCA for Aries will take some time, and in the meantime, this is a \"quick and easy\" alternate solution that is useful alongside OCA for Aries.

    "},{"location":"features/0780-data-urls-images/#prior-art","title":"Prior art","text":"

    In the use cases we are aware of where issuers put images and JSON structures into attributes, there was no indicator of the attribute content, and the holders and verifiers were assumed to either \"know\" about the data content based on the type of credential, or they just displayed the data as a string.

    "},{"location":"features/0780-data-urls-images/#unresolved-questions","title":"Unresolved questions","text":"

    Should this RFC define a list (or the location of a list) of MIME types that Aries issuers can use in credential attributes?

    For supported MIME types that do not have obvious display methods (such as JSON), should there be a convention for how to display the data?

    "},{"location":"features/0780-data-urls-images/#implementations","title":"Implementations","text":"

    The following lists the implementations (if any) of this RFC. Please do a pull request to add your implementation. If the implementation is open source, include a link to the repo or to the implementation within the repo. Please be consistent in the \"Name\" field so that a mechanical processing of the RFCs can generate a list of all RFCs supported by an Aries implementation.

    Implementation Notes may need to include a link to test results.

    Name / Link Implementation Notes"},{"location":"features/0793-unqualfied-dids-transition/","title":"Aries RFC 0793: Unqualified DID Transition","text":""},{"location":"features/0793-unqualfied-dids-transition/#summary","title":"Summary","text":"

    Historically, Aries use of the Indy SDK's wallet included the use of 'unqualified DIDs', or DIDs without a did: prefix and method. This RFC documents the process of migrating any such DIDs still in use to fully qualified DIDs.

    This process involves the adoption of the Rotate DID protocol and algorithm 4 of the Peer DID Method, then the rotation from the unqualified DIDs to any fully qualified DID, with preference for did:peer:4.

    The adoption of these specs will further prepare the Aries community for adoption of DIDComm v2 by providing an avenue for adding DIDComm v2 compatible endpoints.

    Codebases that do not use unqualified DIDs MUST still adopt DID Rotation and did:peer:4 as part of this process, even if they have no unqualified DIDs to rotate.

    This RFC follows the guidance in RFC 0345 about community-coordinated updates to (try to) ensure that independently deployed, interoperable agents remain interoperable throughout this transition.

    The transition from the unqualified to qualified DIDs will occur in four steps:

    The community coordination triggers between the steps above will be as follows:

    "},{"location":"features/0793-unqualfied-dids-transition/#motivation","title":"Motivation","text":"

    To enable agent builders to independently update their code bases and deployed agents while maintaining interoperability.

    "},{"location":"features/0793-unqualfied-dids-transition/#tutorial","title":"Tutorial","text":"

    The general mechanism for this type of transition is documented in RFC 0345 about community-coordinated updates.

    The specific sequence of events to make this particular transition is outlined in the summary section of this RFC.

    "},{"location":"features/0793-unqualfied-dids-transition/#reference","title":"Reference","text":"

    See the summary section of this RFC for the details of this transition.

    "},{"location":"features/0793-unqualfied-dids-transition/#drawbacks","title":"Drawbacks","text":"

    None identified.

    "},{"location":"features/0793-unqualfied-dids-transition/#rationale-and-alternatives","title":"Rationale and alternatives","text":"

    This approach balances the speed of adoption with the need for independent deployment and interoperability.

    "},{"location":"features/0793-unqualfied-dids-transition/#prior-art","title":"Prior art","text":"

    The approach outlined in RFC 0345 about community-coordinated updates is a well-known pattern for using deprecation to make breaking changes in an ecosystem. That said, this is the first attempt to use this approach in Aries. Adjustments to the transition plan will be made as needed, and RFC 0345 will be updated based on lessons learned in executing this plan.

    "},{"location":"features/0793-unqualfied-dids-transition/#unresolved-questions","title":"Unresolved questions","text":""},{"location":"features/0793-unqualfied-dids-transition/#implementations","title":"Implementations","text":"

    The following table lists the status of various agent code bases and deployments with respect to the steps of this transition. Agent builders MUST update this table as they complete steps of the transition.

    Name / Link Implementation Notes Aries Protocol Test Suite No steps completed Aries Framework - .NET No steps completed Trinsic.id No steps completed Aries Cloud Agent - Python No steps completed Aries Static Agent - Python No steps completed Aries Framework - Go No steps completed Connect.Me No steps completed Verity No steps completed Pico Labs No steps completed IBM No steps completed IBM Agent No steps completed Aries Cloud Agent - Pico No steps completed Aries Framework JavaScript No steps completed"},{"location":"features/0794-did-rotate/","title":"Aries RFC 0794: DID Rotate 1.0","text":""},{"location":"features/0794-did-rotate/#summary","title":"Summary","text":"

    This protocol signals the change of DID in use between parties.

    This protocol is only applicable to DIDComm v1 - in DIDComm v2 use the more efficient DID Rotation header.

    "},{"location":"features/0794-did-rotate/#motivation","title":"Motivation","text":"

    This mechanism allows a party in a relationship to change the DID they use to identify themselves in that relationship. This may be used to switch DID methods, but also to switch to a new DID within the same DID method. For non-updatable DID methods, this allows updating DID Doc attributes such as service endpoints. Inspired by (but different from) the DID rotation feature of the DIDComm Messaging (DIDComm v2) spec.

    "},{"location":"features/0794-did-rotate/#implications-for-software-implementations","title":"Implications for Software Implementations","text":"

    Implementations will need to consider how data (public keys, DIDs and the ID for the relationship) related to the relationship is managed. If the relationship DIDs are used as identifiers, those identifiers may need to be updated during the rotation to maintain data integrity. For example, both parties might have to retain and be able to use as identifiers for the relationship the existing DID and the rotated to DID, and their related keys for a period of time until the rotation is complete.

    "},{"location":"features/0794-did-rotate/#tutorial","title":"Tutorial","text":""},{"location":"features/0794-did-rotate/#name-and-version","title":"Name and Version","text":"

    DID Rotate 1.0

    URI: https://didcomm.org/did-rotate/1.0/"},{"location":"features/0794-did-rotate/#roles","title":"Roles","text":"

    rotating_party: this party is rotating the DID in use for this relationship. They send the rotate message.

    observing_party: this party is notified of the DID rotation.

    "},{"location":"features/0794-did-rotate/#messages","title":"Messages","text":""},{"location":"features/0794-did-rotate/#rotate","title":"Rotate","text":"

    Message Type URI: https://didcomm.org/did-rotate/1.0/rotate

    to_did: The new DID to be used to identify the rotating_party

    {\n    \"@id\": \"123456780\",\n    \"@type\": \"https://didcomm.org/did-rotate/1.0/rotate\",\n    \"to_did\": \"did:example:newdid\"\n}\n

    The rotating_party is expected to receive messages on both the existing and new DIDs and their associated keys for a reasonable period that MUST extend at least until the following ack message has been received.

    This message MUST be sent using AuthCrypt or as a signed message in order to establish the provenance of the new DID. In Aries implementations, messages sent within the context of a relationship are by default sent using AuthCrypt. Proper provenance prevents injection attacks that seek to take over a relationship. Any rotate message received without being authcrypted or signed MUST be discarded and not processed.

    DIDComm v1 uses public keys as the outer message identifiers. This means that rotation to a new DID using the same public key will not result in a change for new inbound messages. The observing_party must not assume that the new DID uses the same keys as the existing relationship.
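    The rotate-handling rules above (discard unauthenticated messages, resolve the new DID, respond with an ack or a problem-report) can be sketched as follows. This is a hypothetical handler, not from any specific framework: resolve_did, send_message, and the connection record attribute are illustrative, and message @id generation and timers are omitted.

```python
def handle_rotate(msg, conn, authcrypted, resolve_did, send_message):
    """Sketch of observing_party processing of a rotate message."""
    if not authcrypted:
        return  # rotate messages that are not authcrypted or signed MUST be discarded
    new_did = msg["to_did"]
    if resolve_did(new_did) is None:
        # Cannot resolve the new DID: MUST return a problem-report instead of an ack
        send_message(conn, {
            "@type": "https://didcomm.org/did-rotate/1.0/problem-report",
            "~thread": {"pthid": msg["@id"]},
            "description": {"en": "DID Unresolvable", "code": "e.did.unresolvable"},
            "problem_items": [{"did": new_did}],
        })
        return
    # Record the new DID; the old DID should remain usable for a grace period
    conn.their_new_did = new_did
    send_message(conn, {
        "@type": "https://didcomm.org/did-rotate/1.0/ack",
        "~thread": {"thid": msg["@id"]},
    })
```

    The new-key caveat above is why the handler keys the update off the resolved DID Document rather than assuming the existing relationship keys carry over.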

    "},{"location":"features/0794-did-rotate/#ack","title":"Ack","text":"

    Message Type URI: https://didcomm.org/did-rotate/1.0/ack

    This message has been adopted from the ack protocol (https://github.com/hyperledger/aries-rfcs/tree/main/features/0015-acks).

    This message is still sent to the prior DID to acknowledge the receipt of the rotation. Following messages will be sent to the new DID.

    In order to correctly process out-of-order messages, the observing_party may choose to accept messages sent to the old DID for a reasonable period. This allows messages sent before the rotation but received after it to be processed in the case of out-of-order message delivery.

    In this message, the thid (Thread ID) MUST be included to allow the rotating_party to correlate it with the sent rotate message.

    {\n    \"@id\": \"123456780\",\n    \"@type\": \"https://didcomm.org/did-rotate/1.0/ack\",\n      \"~thread\"          : {\n        \"thid\": \"<id of rotate message>\"\n    }\n}\n
    "},{"location":"features/0794-did-rotate/#problem-report","title":"Problem Report","text":"

    Message Type URI: https://didcomm.org/did-rotate/1.0/problem-report

    This message has been adopted from the report-problem protocol (https://github.com/hyperledger/aries-rfcs/blob/main/features/0035-report-problem/README.md).

    If the observing_party receives a rotate message with a DID that they cannot resolve, they MUST return a problem-report message.

    The description code MUST be set to one of the following: - e.did.unresolvable - used for a DID whose method is supported, but which will not resolve - e.did.method_unsupported - used for a DID method for which the observing_party does not support resolution. - e.did.doc_unsupported - used for a DID for which the observing_party does not find information sufficient for a DIDComm connection in the resolved DID Document. This would include compatible key types and a DIDComm-capable service endpoint.

    Upon receiving this message, the rotating_party MUST NOT complete the rotation and SHOULD resolve the issue. Further rotation attempts MUST happen in a new thread.

    {\n  \"@type\"            : \"https://didcomm.org/did-rotate/1.0/problem-report\",\n  \"@id\"              : \"an identifier that can be used to discuss this error message\",\n  \"~thread\"          : {\n        \"pthid\": \"<id of rotate message>\"\n    },\n  \"description\"      : { \"en\": \"DID Unresolvable\", \"code\": \"e.did.unresolvable\" },\n  \"problem_items\"    : [ {\"did\": \"<did_passed_in_rotate>\"} ]\n}\n
    "},{"location":"features/0794-did-rotate/#hangup","title":"Hangup","text":"

    Message Type URI: https://didcomm.org/did-rotate/1.0/hangup

    This message is sent by the rotating_party to inform the observing_party that they are done with the relationship and will no longer be responding.

    There is no response message.

    Use of this message does not require or indicate that all data has been deleted by either party, just that interaction has ceased.

    {\n    \"@id\": \"123456780\",\n    \"@type\": \"https://didcomm.org/did-rotate/1.0/hangup\"\n}\n
    "},{"location":"features/0794-did-rotate/#prior-art","title":"Prior art","text":"

    This protocol is inspired by the rotation feature of DIDComm Messaging (DIDComm v2). The implementation differs in important ways. The DIDComm v2 method is a post-rotation operation: the first message sent AFTER the rotation contains the prior DID and a signature authorizing the rotation. This is efficient, but requires the use of a message header and a higher level of integration with message processing. This protocol is a pre-rotation operation: notifying the other party of the new DID in advance is a less efficient but simpler approach. This was done to minimize adoption pain. The pending move to DIDComm v2 will provide the efficiency.

    "},{"location":"features/0794-did-rotate/#implementations","title":"Implementations","text":"

    The following lists the implementations (if any) of this RFC. Please do a pull request to add your implementation. If the implementation is open source, include a link to the repo or to the implementation within the repo. Please be consistent in the \"Name\" field so that a mechanical processing of the RFCs can generate a list of all RFCs supported by an Aries implementation.

    Implementation Notes may need to include a link to test results.

    Name / Link Implementation Notes"},{"location":"features/0804-didcomm-rpc/","title":"0804: DIDComm Remote Procedure Call (DRPC)","text":""},{"location":"features/0804-didcomm-rpc/#summary","title":"Summary","text":"

    The DIDComm Remote Procedure Call (DRPC) protocol enables a JSON-RPC-based request-response interaction to be carried out across a DIDComm channel. The protocol is designed to enable custom interactions between connected agents, and to allow for the rapid prototyping of experimental DIDComm protocols. An agent sends a DIDComm message to request a JSON-RPC service be invoked by another agent, and gets back the JSON-RPC-format response in subsequent DIDComm message. The protocol enables any request to be conveyed that the other agent understands. Out of scope of this protocol is how the requesting agent discovers the services available from the responding agent, and how the two agents know the semantics of the specified JSON-RPC requests and responses. By using DIDComm between the requesting and responding agents, the security and privacy benefits of DIDComm are accomplished, and the generic parameters of the requests allow for flexibility in how and where the protocol can be used.

    "},{"location":"features/0804-didcomm-rpc/#motivation","title":"Motivation","text":"

    There are several use cases that are driving the initial need for this protocol.

    "},{"location":"features/0804-didcomm-rpc/#app-attestation","title":"App Attestation","text":"

    A mobile wallet needs to get an app attestation verifiable credential from the wallet publisher. To do that, the wallet and publisher need to exchange information specific to the attestation process with the Google and Apple stores. The sequence is as follows:

    The wallet and service are using instances of three protocols (two DRPC and one Issue Credential) to carry out a full business process. Each participant must have knowledge of the full business process--there is nothing inherent in the DRPC protocol about this process, or how it is being used. The DRPC protocol is included to provide a generic request-response mechanism that alleviates the need for formalizing special purpose protocols.

    App attestation is a likely candidate for having its own DIDComm protocol. This use of DRPC is ideal for developing and experimenting with the necessary agent interactions before deciding whether a use-specific protocol is needed and what its semantics should be.

    "},{"location":"features/0804-didcomm-rpc/#video-verification-service","title":"Video Verification Service","text":"

    A second example of using the DRPC protocol is to implement a custom video verification service that is used by a specific mobile wallet implementation and a proprietary backend service prior to issuing a credential to the wallet. Since the interactions are with a proprietary service, an open specification does not make sense, but the use of DIDComm is valuable. In this example, the wallet communicates over DIDComm to a Credential Issuer agent that (during verification) proxies the requests/responses to a backend (\"behind the firewall\") service. The wallet is implemented to use DRPC protocol instances to initiate the verification and receive the actions needed to carry out the steps of the verification (take picture, take video, instruct movements, etc.), sending to the Issuer agent the necessary data. The Issuer conveys the requests to the verification service and the responses back to the mobile wallet. At the end of the process, the Issuer can see the result of the process, and decide on the next actions between it and the mobile wallet, such as issuing a credential.

    Again, after using the DRPC protocol for developing and experimenting with the implementation, the creators of the protocol can decide to formalize their own custom, end-to-end protocol, or continue to use DRPC protocol instances. Importantly, they can begin development without any Aries framework customizations or plugins by using DRPC.

    "},{"location":"features/0804-didcomm-rpc/#tutorial","title":"Tutorial","text":""},{"location":"features/0804-didcomm-rpc/#name-and-version","title":"Name and Version","text":"

    This is the DRPC protocol. It is uniquely identified by the URI:

    \"https://didcomm.org/drpc/1.0\"\n
    "},{"location":"features/0804-didcomm-rpc/#key-concepts","title":"Key Concepts","text":"

    This RFC assumes that you are familiar with DID communication.

    The protocol consists of a DIDComm request message carrying an arbitrary JSON-RPC request to a responding agent, and a second message that carries the result of processing the request back to the client of the first message. The interpretation of the request, how to carry out the request, the content of the response, and the interpretation of the response, are all up to the business logic (controllers) of the participating agents. There is no discovery of remote services offered by agents--it is assumed that the two participants are aware of the DRPC capabilities of one another through some other means. For example, from the App Attestation use case, functionality to carry out the app attestation process, and the service to use it is built into the mobile wallet.

    For those unfamiliar with JSON-RPC, the tl;dr is that it is a very simple request-response protocol using JSON where the only data shared is:

    The response is likewise simple:

    An example of a simple JSON-RPC request/response pair from the specification is:

    --> {\"jsonrpc\": \"2.0\", \"method\": \"subtract\", \"params\": [42, 23], \"id\": 1}\n<-- {\"jsonrpc\": \"2.0\", \"result\": 19, \"id\": 1}\n

    A JSON-RPC request may be a batch of requests, each with a different id value, and the response a similar array, with an entry for each of the requests.

    JSON-RPC follows a similar \"parameters defined by the message type\" pattern as DIDComm. As a result, in this protocol we do not need to add any special handling around the params such as Base64 encoding, signing, headers and so on, as the parties interacting with the protocol by definition must have a shared understanding of the content of the params and can define any special handling needed amongst themselves.

    It is expected (although not required) that an Aries Framework receiving a DRPC message will simply pass the request from the client to its associated \"business logic\" (controller), and wait on the controller to provide the response content to be sent back to the original client. Apart from the message processing applied to all inbound and outbound messages, the Aries Framework will not perform any of the actual processing of the request.
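    The controller-side handling described above can be sketched as follows. This is a hypothetical illustration, not from the RFC: the METHODS table and function names are invented, the empty-response placeholder for notifications is an assumption, and DIDComm threading decorators are omitted. The -32601 "Method not found" error code is the standard one from the JSON-RPC 2.0 specification.

```python
# Illustrative method table the controller exposes over DRPC.
METHODS = {"subtract": lambda params: params[0] - params[1]}

def json_rpc_response(rpc: dict):
    """Return a JSON-RPC response object, or None for a notification (no id)."""
    if "id" not in rpc:
        return None  # per this protocol, notifications get no JSON-RPC response
    method = METHODS.get(rpc.get("method"))
    if method is None:
        # Standard JSON-RPC error for an unknown method
        return {"jsonrpc": "2.0", "id": rpc["id"],
                "error": {"code": -32601, "message": "Method not found"}}
    return {"jsonrpc": "2.0", "id": rpc["id"], "result": method(rpc.get("params", []))}

def handle_drpc_request(msg: dict) -> dict:
    """Build the DRPC response message for a request message (threading omitted)."""
    return {"@type": "https://didcomm.org/drpc/1.0/response",
            "response": json_rpc_response(msg["request"]) or {}}
```

    Note that even an unknown method produces a response message (carrying a JSON-RPC error), not a problem-report; the problem-report path is reserved for requests that are not recognizable as JSON-RPC at all.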

    "},{"location":"features/0804-didcomm-rpc/#roles","title":"Roles","text":"

    There are two roles, adopted from the JSON-RPC specification, in the protocol client and server:

    "},{"location":"features/0804-didcomm-rpc/#states","title":"States","text":""},{"location":"features/0804-didcomm-rpc/#client-states","title":"Client States","text":"

    The client agent goes through the following states:

    The state transition table for the client is:

    State / Events Send Request Receive Response Start Transition to request-sent request-sent Transition to complete completed problem-report received Transition to abandoned abandoned"},{"location":"features/0804-didcomm-rpc/#server-states","title":"Server States","text":"

    The server agent goes through the following states:

    The state transition table for the server is:

    State / Events Receive Request Send Response or Problem Report Start Transition to request-received request-received Transition to complete completed"},{"location":"features/0804-didcomm-rpc/#messages","title":"Messages","text":"

    The following are the messages in the DRPC protocol. The response message handles all positive responses, so the ack (RFC 0015 ACKs) message is NOT adopted by this protocol. The RFC 0035 Report Problem is adopted by this protocol in the event that a request is not recognizable as a JSON-RPC message and as such, a JSON-RPC response message cannot be created. See the details below in the Problem Report Message section.

    "},{"location":"features/0804-didcomm-rpc/#request-message","title":"Request Message","text":"

    The request message is sent by the client to initiate the protocol. The message contains the JSON-RPC information necessary for the server to process the request, prepare the response, and send the response message back to the client. It is assumed the client knows what types of requests the server is prepared to receive and process. If the server does not know how to process the request, JSON-RPC defines a standard error response, outlined in the response message section below. How the client and server coordinate that understanding is out of scope of this protocol.

    The request message uses the same JSON items as JSON-RPC, skipping the id in favor of the existing DIDComm @id and thread handling.

      {\n    \"@type\": \"https://didcomm.org/drpc/1.0/request\",\n    \"@id\": \"2a0ec6db-471d-42ed-84ee-f9544db9da4b\",\n    \"request\" : {\"jsonrpc\": \"2.0\", \"method\": \"subtract\", \"params\": [42, 23], \"id\": 1}\n  }\n

    The items in the message are as follows:

    Per the JSON-RPC specification, if the id field of a JSON-RPC request is omitted, the server should not respond. In this DRPC DIDComm protocol, the server is always expected to send a response, but MUST NOT include a JSON-RPC response for any JSON-RPC request for which the id is omitted. This is covered further in the response message section (below).

    "},{"location":"features/0804-didcomm-rpc/#response-message","title":"Response Message","text":"

    A response message is sent by the server, following the processing of the request, to convey the output of the processing to the client. As with the request, the format is essentially that of a JSON-RPC response.

    If the request is unrecognizable as a JSON-RPC message such that a JSON-RPC message cannot be generated, the server SHOULD send a RFC 0035 Report Problem message to the client.

    It is assumed the client understands what the contents of the response message means in the context of the protocol instance. How the client and server coordinate that understanding is out of scope of this protocol.

      {\n    \"@type\": \"https://didcomm.org/drpc/1.0/response\",\n    \"@id\": \"63d6f6cf-b723-4eaf-874b-ae13f3e3e5c5\",\n    \"response\": {\"jsonrpc\": \"2.0\", \"result\": 19, \"id\": 1}\n  }\n

    The items in the message are as follows:

    As with all DIDComm messages that are not the first in a protocol instance, a ~thread decorator MUST be included in the response message.
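    For example, a response message carrying the required ~thread decorator might look like the following sketch, where the thid value is the @id of the corresponding request message (the identifiers are reused from the examples above):

      {\n    \"@type\": \"https://didcomm.org/drpc/1.0/response\",\n    \"@id\": \"63d6f6cf-b723-4eaf-874b-ae13f3e3e5c5\",\n    \"~thread\": {\"thid\": \"2a0ec6db-471d-42ed-84ee-f9544db9da4b\"},\n    \"response\": {\"jsonrpc\": \"2.0\", \"result\": 19, \"id\": 1}\n  }\n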

    The special handling of the case where all of the JSON-RPC requests are notifications (described above) is to simplify the DRPC handling, making it easy to know when a DRPC protocol instance is complete. If a response message were not always required, the DRPC handler would have to inspect the request message, looking for ids, to determine when the protocol completes.

    If the server does not understand how to process a given JSON-RPC request, a response error SHOULD be returned (as per the JSON-RPC specification) with:
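    For example (the @id value is illustrative), a response to a JSON-RPC request whose method the server does not know could carry the standard JSON-RPC 2.0 method-not-found error, code -32601, defined in the JSON-RPC specification:

      {\n    \"@type\": \"https://didcomm.org/drpc/1.0/response\",\n    \"@id\": \"8f6c7424-2a71-4bf1-9712-837d3fd62af3\",\n    \"response\": {\"jsonrpc\": \"2.0\", \"error\": {\"code\": -32601, \"message\": \"Method not found\"}, \"id\": 1}\n  }\n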

    "},{"location":"features/0804-didcomm-rpc/#problem-report-message","title":"Problem Report Message","text":"

    An RFC 0035 Report Problem message SHOULD be sent by the server instead of a response message only if the request is unrecognizable as a JSON-RPC message. Any JSON-RPC errors MUST be provided to the client by the server via the response message, not a problem-report. The client MUST NOT respond to a response message, even if the response message is not a valid JSON-RPC response. This is because once the server sends the response, the protocol is in the completed state (from the server's perspective) and so is subject to deletion. As such, a follow-up problem-report message would have an invalid thid (thread ID) and (at best) be thrown away by the server.
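    A problem report for an unrecognizable request might look like the following sketch, structured per RFC 0035; the description and code values here are illustrative assumptions, not defined by this protocol:

      {\n    \"@type\": \"https://didcomm.org/report-problem/1.0/problem-report\",\n    \"@id\": \"a7b5a87a-31fd-4ac1-9a0f-a1a8f1f0e243\",\n    \"~thread\": {\"thid\": \"<@id of the request message>\"},\n    \"description\": {\"en\": \"The request is not recognizable as a JSON-RPC message\", \"code\": \"not-json-rpc\"}\n  }\n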

    "},{"location":"features/0804-didcomm-rpc/#constraints","title":"Constraints","text":"

    The primary constraint with this protocol is that the two parties using the protocol must understand one another--what JSON-RPC request(s) to use, what parameters to provide, how to process those requests, what the response means, and so on. It is not a protocol to be used between arbitrary parties, but rather one where the parties have knowledge outside of DIDComm of one another and their mutual capabilities.

    On the other hand, that constraint enables great flexibility for explicitly collaborating agents (such as a mobile wallet and the agent of its manufacturer) to accomplish request-response transactions over DIDComm without needing to define additional DIDComm protocols. More complex interactions can be accomplished by carrying out a sequence of DRPC protocol instances between agents.

    The flexibility of the DRPC protocol allows for experimenting with specific interactions between agents that could later evolve into formal DIDComm \"fit for purpose\" protocols.

    "},{"location":"features/0804-didcomm-rpc/#reference","title":"Reference","text":""},{"location":"features/0804-didcomm-rpc/#codes-catalog","title":"Codes Catalog","text":"

    A JSON-RPC request codes catalog could be developed over time and be included in this part of the RFC. This might be an intermediate step in transitioning a given interaction implemented using DRPC into a formally specified interaction. On the other hand, simply defining a full DIDComm protocol will often be a far better approach.

    At this time, there are no codes to be cataloged.

    "},{"location":"features/0804-didcomm-rpc/#drawbacks","title":"Drawbacks","text":"

    Anything that can be done by using the DRPC protocol can be accomplished by a formally defined protocol specific to the task to be accomplished. The advantage of the DRPC protocol is that pairs of agent instances that are explicitly collaborating can use this protocol without having to first define a task-specific protocol.

    "},{"location":"features/0804-didcomm-rpc/#rationale-and-alternatives","title":"Rationale and alternatives","text":"

    We considered not supporting the notification and batch forms of the JSON-RPC specification, and decided it made sense to allow for the full support of the JSON-RPC specification, including requests of those forms. That said, we also found that the concept of not having a DRPC response message in some (likely, rare) cases based on the contents of the request JSON item (e.g., when all of the ids are omitted from the JSON-RPC requests) would unnecessarily complicate the DIDComm protocol instance handling about when it is complete. As a result, a DRPC response message is always required.

    This design builds on the experience of implementations of this kind of feature using RFC 0095 Basic Message and RFC 0335 HTTP Over DIDComm, and tries to incorporate the learnings gained from both of those implementations.

    Based on feedback on an earlier version of this RFC, we also looked at using gRPC, rather than JSON-RPC, as the core of this protocol. Our assessment was that gRPC is a much heavier weight mechanism, requiring effort between the parties comparable to defining a full DIDComm protocol in order to implement what will often be a very simple request-response transaction.

    The use of params, leaving the content and semantics of the params up to the client and server, means that they can define the appropriate handling of the parameters. This eliminates the need for the protocol to define, for example, that some data needs to be Base64 encoded for transmission, or that some values need to be cryptographically signed. Such details are left to the participants and how they are using the protocol.

    "},{"location":"features/0804-didcomm-rpc/#prior-art","title":"Prior art","text":"

    This protocol has similar goals to the RFC 0335 HTTP Over DIDComm protocol, but takes a lighter weight, more flexible approach. We expect that implementing HTTP over DIDComm using this protocol will be as easy as using RFC 0335 HTTP Over DIDComm, where the JSON-RPC request's params data structure holds the headers and body elements for the HTTP request. On the other hand, using the explicit RFC 0335 HTTP Over DIDComm might be a better choice if it is available and is exactly what is needed.

    One of the example use cases for this protocol has been implemented by \"hijacking\" the RFC 0095 Basic Message protocol to carry out the needed request/response actions. This approach is less than ideal in that:

    "},{"location":"features/0804-didcomm-rpc/#unresolved-questions","title":"Unresolved questions","text":""},{"location":"features/0804-didcomm-rpc/#implementations","title":"Implementations","text":"

    The following lists the implementations (if any) of this RFC. Please do a pull request to add your implementation. If the implementation is open source, include a link to the repo or to the implementation within the repo. Please be consistent in the \"Name\" field so that a mechanical processing of the RFCs can generate a list of all RFCs supported by an Aries implementation.

    Implementation Notes may need to include a link to test results.

    Name / Link Implementation Notes"},{"location":"features/0809-w3c-data-integrity-credential-attachment/","title":"Aries RFC 0809: W3C Verifiable Credential Data Integrity Attachment format for requesting and issuing credentials","text":""},{"location":"features/0809-w3c-data-integrity-credential-attachment/#summary","title":"Summary","text":"

    This RFC registers an attachment format for use in the issue-credential V2 protocol based on W3C Verifiable Credentials with Data Integrity Proofs from the VC Data Model.

    "},{"location":"features/0809-w3c-data-integrity-credential-attachment/#motivation","title":"Motivation","text":"

    The Issue Credential protocol needs an attachment format to be able to exchange W3C verifiable credentials. It is desirable to make use of specifications developed in an open standards body, such as the Credential Manifest for which the attachment format is described in RFC 0511: Credential-Manifest Attachment format. However, the Credential Manifest is not finished and ready yet, and therefore there is a need to bridge the gap between standards.

    "},{"location":"features/0809-w3c-data-integrity-credential-attachment/#tutorial","title":"Tutorial","text":"

    Complete examples of messages are provided in the reference section.

    "},{"location":"features/0809-w3c-data-integrity-credential-attachment/#reference","title":"Reference","text":""},{"location":"features/0809-w3c-data-integrity-credential-attachment/#credential-offer-attachment-format","title":"Credential Offer Attachment Format","text":"

    Format identifier: didcomm/w3c-di-vc-offer@v0.1

    {\n  \"data_model_versions_supported\": [\"1.1\", \"2.0\"],\n  \"binding_required\": true,\n  \"binding_method\": {\n    \"anoncreds_link_secret\": {\n      \"nonce\": \"1234\",\n      \"cred_def_id\": \"did:key:z6MkwXG2WjeQnNxSoynSGYU8V9j3QzP3JSqhdmkHc6SaVWoT/credential-definition\",\n      \"key_correctness_proof\": \"<key_correctness_proof>\"\n    },\n    \"didcomm_signed_attachment\": {\n      \"algs_supported\": [\"EdDSA\"],\n      \"did_methods_supported\": [\"key\", \"web\"],\n      \"nonce\": \"1234\"\n    }\n  },\n  \"credential\": {\n    \"@context\": [\n      \"https://www.w3.org/2018/credentials/v1\",\n      \"https://w3id.org/security/data-integrity/v2\",\n      {\n        \"@vocab\": \"https://www.w3.org/ns/credentials/issuer-dependent#\"\n      }\n    ],\n    \"type\": [\"VerifiableCredential\"],\n    \"issuer\": \"did:key:z6MkwXG2WjeQnNxSoynSGYU8V9j3QzP3JSqhdmkHc6SaVWoT\",\n    \"issuanceDate\": \"2024-01-10T04:44:29.563418Z\",\n    \"credentialSubject\": {\n      \"height\": 175,\n      \"age\": 28,\n      \"name\": \"Alex\",\n      \"sex\": \"male\"\n    }\n  }\n}\n
    "},{"location":"features/0809-w3c-data-integrity-credential-attachment/#credential-offer-exceptions","title":"Credential Offer Exceptions","text":"

    To allow for validation of the credential according to the corresponding VC Data Model version, the credential in the offer MUST be conformant to the corresponding VC Data Model version, except for the exceptions listed below. This still allows the credential to be validated, knowing which deviations are possible.

    The list of exceptions is as follows:

    "},{"location":"features/0809-w3c-data-integrity-credential-attachment/#credential-request-attachment-format","title":"Credential Request Attachment Format","text":"

    Format identifier: didcomm/w3c-di-vc-request@v0.1

    This format is used to request a verifiable credential. The JSON structure might look like this:

    {\n  \"data_model_version\": \"2.0\",\n  \"binding_proof\": {\n    \"anoncreds_link_secret\": {\n      \"entropy\": \"<random-entropy>\",\n      \"cred_def_id\": \"did:key:z6MkwXG2WjeQnNxSoynSGYU8V9j3QzP3JSqhdmkHc6SaVWoT/credential-definition\",\n      \"blinded_ms\": {},\n      \"blinded_ms_correctness_proof\": {},\n      \"nonce\": \"<random-nonce>\"\n    },\n    \"didcomm_signed_attachment\": {\n      \"attachment_id\": \"<@id of the attachment>\"\n    }\n  }\n}\n
    "},{"location":"features/0809-w3c-data-integrity-credential-attachment/#credential-attachment-format","title":"Credential Attachment Format","text":"

    Format identifier: didcomm/w3c-di-vc@v0.1

    This format is used to transmit a verifiable credential. The JSON structure might look like this:

    {\n  \"credential\": {\n    // vc with proof object or array\n  }\n}\n

    It is up to the issuer to pick an appropriate cryptographic suite to sign the credential. The issuer may use the cryptographic binding material provided by the holder to select the cryptographic suite. For example, when the anoncreds_link_secret binding method is used, the issuer should use a DataIntegrityProof with the anoncredsvc-2023 cryptographic suite. When a holder provides a signed attachment as part of the binding proof using the EdDSA JWA alg, the issuer could use a DataIntegrityProof with the eddsa-rdfc-2022 cryptographic suite. However, it is not required for the cryptographic suite used for the signature on the credential to be in any way related to the cryptographic suite used for the binding proof, unless the binding method explicitly requires this (for example the anoncreds_link_secret binding method).

    A complete issue-credential message from the Issue Credential protocol 2.0 might look like this:

    {\n  \"@id\": \"284d3996-ba85-45d9-964b-9fd5805517b6\",\n  \"@type\": \"https://didcomm.org/issue-credential/2.0/issue-credential\",\n  \"comment\": \"<some comment>\",\n  \"formats\": [\n    {\n      \"attach_id\": \"5b38af88-d36f-4f77-bb7a-2f04ab806eb8\",\n      \"format\": \"didcomm/w3c-di-vc@v0.1\"\n    }\n  ],\n  \"credentials~attach\": [\n    {\n      \"@id\": \"5b38af88-d36f-4f77-bb7a-2f04ab806eb8\",\n      \"mime-type\": \"application/json\",\n      \"data\": {\n        \"base64\": \"ewogICAgICAgICAgIkBjb250ZXogWwogICAgICAg...(clipped)...RNVmR0SXFXZhWXgySkJBIgAgfQogICAgICAgIH0=\"\n      }\n    }\n  ]\n}\n
    "},{"location":"features/0809-w3c-data-integrity-credential-attachment/#binding-methods","title":"Binding Methods","text":"

    The attachment format supports different methods to bind the credential to the receiver of the credential. In the offer message the issuer can indicate which binding methods are supported in the binding_method object. Each key represents the id of a supported binding method.

    This section defines a set of binding methods supported by this attachment format, but other binding methods may be used. Based on the binding method, the request needs to include a binding_proof object where the key matches the key of the binding method from the offer.

    "},{"location":"features/0809-w3c-data-integrity-credential-attachment/#anoncreds-link-secret","title":"AnonCreds Link Secret","text":"

    Identifier: anoncreds_link_secret

    This binding method is intended to be used in combination with a credential containing an AnonCreds proof.

    "},{"location":"features/0809-w3c-data-integrity-credential-attachment/#binding-method-in-offer","title":"Binding Method in Offer","text":"

    The structure of the binding method in the offer MUST match the structure of the Credential Offer as defined in the AnonCreds specification, with the exclusion of the schema_id key.

    {\n  \"nonce\": \"1234\",\n  \"cred_def_id\": \"did:key:z6MkwXG2WjeQnNxSoynSGYU8V9j3QzP3JSqhdmkHc6SaVWoT/credential-definition\",\n  \"key_correctness_proof\": {\n    /* key correctness proof object */\n  }\n}\n
    "},{"location":"features/0809-w3c-data-integrity-credential-attachment/#binding-proof-in-request","title":"Binding Proof in Request","text":"

    The structure of the binding proof in the request MUST match the structure of the Credential Request as defined in the AnonCreds specification.

    {\n  \"anoncreds_link_secret\": {\n    \"entropy\": \"<random-entropy>\",\n    \"blinded_ms\": {\n      /* blinded ms object */\n    },\n    \"blinded_ms_correctness_proof\": {\n      /* blinded ms correctness proof object */\n    },\n    \"nonce\": \"<random-nonce>\"\n  }\n}\n
    "},{"location":"features/0809-w3c-data-integrity-credential-attachment/#binding-in-credential","title":"Binding in Credential","text":"

    The issued credential should be bound to the holder by including the blinded link secret in the credential as defined in the Issue Credential section of the AnonCreds specification. Credentials bound using the AnonCreds link secret binding method MUST contain a proof with a proof.type value of DataIntegrityProof and a cryptosuite value of anoncredsvc-2023, and conform to the AnonCreds W3C Verifiable Credential Representation.

    "},{"location":"features/0809-w3c-data-integrity-credential-attachment/#didcomm-signed-attachment","title":"DIDComm Signed Attachment","text":"

    Identifier: didcomm_signed_attachment

    This binding method leverages DIDComm signed attachments to bind a credential to a specific key and/or identifier.

    "},{"location":"features/0809-w3c-data-integrity-credential-attachment/#binding-method-in-offer_1","title":"Binding Method in Offer","text":"
    {\n  \"didcomm_signed_attachment\": {\n    \"algs_supported\": [\"EdDSA\"],\n    \"did_methods_supported\": [\"key\"],\n    \"nonce\": \"b19439b0-4dc9-4c28-b796-99d17034fb5c\"\n  }\n}\n
    "},{"location":"features/0809-w3c-data-integrity-credential-attachment/#binding-proof-in-request_1","title":"Binding Proof in Request","text":"

    The binding proof in the request points to an appended attachment containing the signed attachment.

    {\n  \"didcomm_signed_attachment\": {\n    \"attachment_id\": \"<@id of the attachment>\"\n  }\n}\n
    "},{"location":"features/0809-w3c-data-integrity-credential-attachment/#signed-attachment-content","title":"Signed Attachment Content","text":"

    The attachment MUST be signed by including a signature in the jws field of the attachment. The data MUST be a JSON document encoded in the base64 field of the attachment. The structure of the signed attachment is described below.

    JWS Payload

    {\n  \"nonce\": \"<request_nonce>\"\n}\n

    Protected Header

    {\n  \"alg\": \"EdDSA\",\n  \"kid\": \"did:key:z6MkkwiqX7BvkBbi37aNx2vJkCEYSKgHd2Jcgh4AUhi4YY1u#z6MkkwiqX7BvkBbi37aNx2vJkCEYSKgHd2Jcgh4AUhi4YY1u\"\n}\n

    A signed binding request attachment appended to a request message might look like this:

    {\n  \"@id\": \"284d3996-ba85-45d9-964b-9fd5805517b6\",\n  \"@type\": \"https://didcomm.org/issue-credential/2.0/request-credential\",\n  \"comment\": \"<some comment>\",\n  \"formats\": [\n    {\n      \"attach_id\": \"5b38af88-d36f-4f77-bb7a-2f04ab806eb8\",\n      \"format\": \"didcomm/w3c-di-vc-request@v0.1\"\n    }\n  ],\n  \"~attach\": [\n    {\n      \"@id\": \"123\",\n      \"mime-type\": \"application/json\",\n      \"data\": {\n        \"base64\": \"<base64-encoded-json-attachment-content>\",\n        \"jws\": {\n          \"protected\": \"eyJhbGciOiJFZERTQSIsImlhdCI6MTU4Mzg4... (bytes omitted)\",\n          \"signature\": \"3dZWsuru7QAVFUCtTd0s7uc1peYEijx4eyt5... (bytes omitted)\"\n        }\n      }\n    }\n  ],\n  \"credentials~attach\": [\n    {\n      \"@id\": \"5b38af88-d36f-4f77-bb7a-2f04ab806eb8\",\n      \"mime-type\": \"application/json\",\n      \"data\": {\n        \"base64\": \"ewogICAgICAgICAgIkBjb250ZXogWwogICAgICAg...(clipped)...RNVmR0SXFXZhWXgySkJBIgAgfQogICAgICAgIH0=\"\n      }\n    }\n  ]\n}\n
    "},{"location":"features/0809-w3c-data-integrity-credential-attachment/#binding-in-credential_1","title":"Binding in Credential","text":"

    The issued credential should be bound to the holder by including the DID in the credential as credentialSubject.id or holder.
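    For example, a credential bound to the holder could include the DID from the kid of the signed attachment's protected header as the subject identifier; the following fragment is an illustrative sketch reusing the DID and subject data from the earlier examples:

      {\n    \"credentialSubject\": {\n      \"id\": \"did:key:z6MkkwiqX7BvkBbi37aNx2vJkCEYSKgHd2Jcgh4AUhi4YY1u\",\n      \"name\": \"Alex\"\n    }\n  }\n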

    "},{"location":"features/0809-w3c-data-integrity-credential-attachment/#drawbacks","title":"Drawbacks","text":""},{"location":"features/0809-w3c-data-integrity-credential-attachment/#rationale-and-alternatives","title":"Rationale and alternatives","text":"

    RFC 0593: JSON-LD Credential Attachment, W3C VC API allows issuance of credentials using only linked data signatures, while RFC 0592: Indy Attachment supports issuance of AnonCreds credentials. This attachment format aims to support issuance of the credentials covered by both of those earlier attachment formats (with AnonCreds credentials now expressed in the W3C model), as well as supporting additional features such as issuance of W3C JWT VCs, credentials with multiple proofs, and cryptographic binding of the credential to the holder.

    "},{"location":"features/0809-w3c-data-integrity-credential-attachment/#prior-art","title":"Prior art","text":"

    The attachment format in this RFC is heavily inspired by RFC 0593: JSON-LD Credential Attachment, W3C VC API and OpenID for Verifiable Credential Issuance.

    "},{"location":"features/0809-w3c-data-integrity-credential-attachment/#unresolved-questions","title":"Unresolved questions","text":""}]} \ No newline at end of file diff --git a/main/sitemap.xml.gz b/main/sitemap.xml.gz index ca4f8397..cfbca6b3 100644 Binary files a/main/sitemap.xml.gz and b/main/sitemap.xml.gz differ