diff --git a/docs/developer-docs/cost-estimations-and-examples.mdx b/docs/developer-docs/cost-estimations-and-examples.mdx
index 190f12f810..052c66d681 100644
--- a/docs/developer-docs/cost-estimations-and-examples.mdx
+++ b/docs/developer-docs/cost-estimations-and-examples.mdx
@@ -4,13 +4,14 @@ keywords: [beginner, concept, cycles, cycles costs, cycles cost estimations]
import { MarkdownChipRow } from "/src/components/Chip/MarkdownChipRow";
-# Cycles cost estimations and examples
+# Pricing calculator
## Overview
-To get a rough estimate of how much your project may cost, below are common project architectures and their estimated monthly/yearly cost. These estimates are broken down into the groups:
+To get a rough estimate of how much your project may cost, below is a pricing calculator that can be used to estimate the cost of a dapp deployed on ICP. Costs are charged to a canister in cycles, whose price is fixed against the price of [XDR](/docs/current/concepts/glossary#xdr), where **1 trillion cycles equals 1 XDR**.
+Cycles costs are calculated based on:
- **Messaging**: Calls that are made to a canister's methods. Costs depend on the type of call being sent (query, update, inter-canister, etc), the size of the message's request and response bytes, and the total amount of messages a canister sends and receives.
@@ -24,125 +25,45 @@ Note that query calls are currently free and do not incur a cost.
- **Special features**: Include HTTPS outcalls, transmissions using the [Bitcoin API](/docs/current/references/bitcoin-how-it-works), and transmissions using the [chain-key signing API](/docs/current/references/t-sigs-how-it-works/).
-## Units and fiat value
+## ICP pricing calculator
+
+The calculator below estimates costs in USD. You will need to load your project with [cycles](gas-cost.mdx) by converting ICP tokens into cycles. [Learn more about how to obtain cycles](/docs/current/developer-docs/defi/cycles/converting_icp_tokens_into_cycles).
+
+
+
+
-The price of cycles is fixed against the price of [XDR](/docs/current/concepts/glossary#xdr), where **1 trillion cycles equals 1 XDR**.
-This documentation will use the following units:
-| Abbr. | Name | In numbers | Cycles XDR value | Cycles USD value |
-|-------|------|------------|------------------|------------------|
-| T | Trillion | 1_000_000_000_000 | 1 | 1.34 |
-| B | Billion | 1_000_000_000 | 0.001 | 0.00134 |
-| M | Million | 1_000_000 | 0.000001 | 0.00000134 |
-| k | Thousand | 1_000 | 0.000000001 | 0.00000000134 |
-| – | (one) | 1 | 0.000000000001 | 0.00000000000134|
-
-## Sample project architectures
-
-:::caution
-The estimates below are simply to demonstrate what different sample architectures may cost. The actual cost of your project will vary. The estimates below should only be used for gaining an idea of what a project may cost. For exact costs, calculate them using the [cycles and transmission costs chart](gas-cost.mdx).
-:::
-
-:::info
-These estimates use a 13-node subnet. Costs will be different if deployed on a 34-node subnet. Please refer to the [cycles and transmission costs chart](gas-cost.mdx).
-:::
-
-
-### Single canister
-
-The cost estimate for a single canister that provides a service used by 5 users, each stores 100KB of data per user, uses inter-canister calls, and stores 5GiB of data. The average number of request and response bytes per message is 351 and the number of instructions executed per message is 1_000_000. There are 100 daily messages generated per user.
The canister performs 50 daily tasks, with 1_000_000 instructions executed per task. - - -| Transaction group | Cost per month in cycles | Cost per month in USD | Cost per year in cycles | Cost per year in USD | -| ----------------- | ------------------------ | --------------------- | ----------------------- | -------------------- | -| Messaging | 19B | $0.03 USD | 228B | $0.36 USD | -| Execution | 25.2B | $0.03 USD | 302.4B | $0.36 USD | -| Storage | 1.53T | $2.05 USD | 18.36T | $24.60 USD | -| HTTPS outcalls | 0 | $0 USD | 0 | $0 USD | - - -### Simple static website - -The cost estimate for a simple static website using a single canister for the website's assets. It is not called by other canisters or performs HTTPS outcalls. It stores 5GiB of data, has 100 total users, and has 10 daily active users that each generate 50 messages per day. The average number of request and response bytes per message is 351, and the number of instructions executed per message is 1_000_000. - - -| Transaction group | Cost per month in cycles | Cost per month in USD | Cost per year in cycles | Cost per year in USD | -| ----------------- | ------------------------ | --------------------- | ----------------------- | -------------------- | -| Messaging | 1.06B | $0.01 USD | 12.72B | $0.12 USD | -| Execution | 1.65B | $0.02 USD | 19.8B | $0.24 USD | -| Storage | 1.53T | $2.05 USD | 18.36T | $24.60 USD | -| HTTPS outcalls | 0 | $0 USD | 0 | $0 USD | - - -:::caution -When considering developing a website on ICP, there are several important benefits to consider that traditional Web2 web hosting services often hold behind additional paywalls, such as fees for multiple developers, access to workflows that use third-party services, advanced frontend functionality, site analysis, and advanced security functions. - -On ICP, the fees broken down in this document are the only fees that are charged for developing. Developers only pay for exactly what is used by their project's canisters in terms of resources. No features are restricted behind additional paywalls. -::: - - -### Smart contract web dapp - -The cost estimate for a simple smart contract-powered web dapp that uses two canisters. This dapp has 100 total users, 10 daily users that generate 1_000 messages per day. The average number of request and response bytes per message is 245, and the number of instructions executed per message is 1_442_185. The dapp stores 100KB of data per user and stores 10GiB of user-independent data. - - -| Transaction group | Cost per month in cycles | Cost per month in USD | Cost per year in cycles | Cost per year in USD | -| ----------------- | ------------------------ | --------------------- | ----------------------- | -------------------- | -| Messaging | 901B | $1.21 USD | 1.0812T | $14.52 USD | -| Execution | 1.34T | $1.79 USD | 16.08T | $21.48 USD | -| Storage | 3.07T | $4.11 USD | 36.84T | $49.32 USD | -| HTTPS outcalls | 0 | $0 USD | 0 | $0 USD | - - -### Social media dapp with two canisters - -The cost estimate for a project that creates a social media dapp using two canisters with 200 total users, 50 daily users that generate 6_127 messages per day. The average number of request and response bytes per message is 245, and the number of instructions executed per message is 1_442_185. This project also uses 2000 HTTPS outcalls per day, with an average of 250 request and response bytes per outcall. The project stores 100KB of data per user and 25GiB of user-independent storage. 
- - -| Transaction group | Cost per month in cycles | Cost per month in USD | Cost per year in cycles | Cost per year in USD | -| ----------------- | ------------------------ | --------------------- | ----------------------- | -------------------- | -| Messaging | 25.7T | $34.38 USD | 308.4T | $412.56 USD | -| Execution | 3.79T | $50.74 USD | 45.48T | $608.88 USD | -| Storage | 7.67T | $10.28 USD | 92.04T | $123.36 USD | -| HTTPS outcalls | 3.18T | $4.26 USD | 38.16T | $51.12 USD | - - -### Decentralized service using threshold ECDSA and HTTPS outcalls - -The cost estimate for a project that creates a decentralized service with 5_000 total users, 100 daily users that generate 4_400 messages per day. The average number of request and response bytes per message is 500, and the number of instructions executed per message is 437_253. This project also uses 3691 HTTPS outcalls per day, with an average of 332 request and response bytes per outcall. The project stores 100KB of data per user and 150GiB of user-independent storage. - -| Transaction group | Cost per month in cycles | Cost per month in USD | Cost per year in cycles | Cost per year in USD | -| ----------------- | ------------------------ | --------------------- | ----------------------- | -------------------- | -| Messaging | 198T | $265.36 USD | 2_376T | $3_184.32 USD | -| Execution | 402T | $538.90 USD | 4_824T | $6_466.80 USD | -| Storage | 46.1T | $61.83 USD | 553.2T | $741.96 USD | -| HTTPS outcalls | 6.01T | $8.06 USD | 72.12T | $96.72 USD | - -:::caution -This example resembles that of an enterprise-level project. For reference, the [Orally](https://orally.network/) enterprise application on ICP averages between 35_000 and 46_000 HTTPS outcalls per month. - -An enterprise-level project of this size could potentially cost several thousands of dollars if deployed on a traditional Web2 platform. Web2 infrastructure services often charge additional fees that scale with the number of requests that your project serves per day or per month. -::: - - -### Instant messaging dapp with thousands of canisters - -The cost estimate for a project that creates a messaging dapp where each user's data is stored in its own canister with 15_000 total users. 1_500 of these users are active daily, generating 5_700 messages each per day. The average number of request and response bytes per message is 624, and the number of instructions executed per message is 74_983. The project stores 10MB of data per user and 750GiB of user-independent storage. - - -| Transaction group | Cost per month in cycles | Cost per month in USD | Cost per year in cycles | Cost per year in USD | -| ----------------- | ------------------------ | --------------------- | ----------------------- | -------------------- | -| Messaging | 464T | $622.12 USD | 5_568T | $7_465.44 USD | -| Execution | 382T | $511.89 USD | 4_584T | $6_142.68 USD | -| Storage | 276T | $369.73 USD | 3_312T | $4_436.76 USD | -| HTTPS outcalls | 0 | $0.00 USD | 0 | $0.00 USD | - -:::caution -In this example, a new canister is created for each user. That means each time a new user signs up for the dapp a cost of 100_000_000_000 is charged. This additional cost should be considered when choosing a similar architecture. -::: - ## How is the number of cycles charged to a canister estimated? 
The number of cycles charged to the canister can be estimated using the following parameters:
diff --git a/roadmap/roadmap.d.ts b/roadmap/roadmap.d.ts
index 267804784d..830d1d17c0 100644
--- a/roadmap/roadmap.d.ts
+++ b/roadmap/roadmap.d.ts
@@ -5,21 +5,19 @@ interface RootObject {
}
interface Milestone {
- name: string;
- description: string;
+ name?: string;
milestone_id: string;
- eta: null | string | string;
+ eta: null | string;
status?: string;
elements: Element[];
+ description?: string;
}
interface Element {
title: string;
overview: string;
- description: string;
forum: string;
proposal: string;
- wiki?: string;
docs: string;
eta?: string;
status: string;
@@ -28,3 +26,4 @@
imported?: boolean;
milestone_id?: string;
}
+
diff --git a/roadmap/roadmap.json b/roadmap/roadmap.json
index 525a3823a4..0b06d61ff5 100644
--- a/roadmap/roadmap.json
+++ b/roadmap/roadmap.json
@@ -5,7 +5,6 @@
"milestones": [
{
"name": "Reduced End-to-end Latency",
- "description": "This milestone marks a significant reduction of user-perceived end-to-end latency for processing ingress messages. It is achieved by multiple concerted measures at different levels of the protocol stack and brings the user experience of Web3 closer to Web2.",
"milestone_id": "Tokamak",
"eta": null,
"status": "in_progress",
@@ -13,10 +12,8 @@
{
"title": "QUIC-based transport and P2P layer for consensus",
"overview": "A QUIC-based implementation of the transport and P2P layer for consensus that cuts down latency and increases throughput.",
- "description": "The initial transport and P2P layer of ICP was based on TCP. This feature realizes a QUIC-based re-implementation of the transport and P2P layer for consensus that cuts down latency and increases throughput compared to the original implementation.",
"forum": "",
"proposal": "",
- "wiki": "",
"docs": "",
"eta": "Q1 2024",
"status": "deployed",
@@ -27,10 +24,8 @@
{
"title": "Synchronous message submission endpoint",
"overview": "Providing a synchronous ingress submission endpoint in addition to the current asynchronous endpoint to reduce perceived end-to-end client-observed latency.",
- "description": "A synchronous message submission endpoint considerably reduces the end-to-end client-observed latency for submitted messages as it eliminates the need for the client polling for the response message.",
"forum": "",
"proposal": "",
- "wiki": "",
"docs": "",
"eta": "Q2 2024",
"status": "in_progress",
@@ -41,10 +36,8 @@
{
"title": "Latency-aware ingress routing",
"overview": "Boundary nodes route ingress messages to subnet nodes with lower network distance in terms of latency instead of random nodes.",
- "description": "Currently, Boundary Nodes route messages to random nodes of the target subnet of the message. Selecting the destination node of the message based on its \"distance\" in terms of network latency can help significantly reduce the user-perceived latency.",
"forum": "",
"proposal": "",
- "wiki": "",
"docs": "",
"eta": "",
"status": "in_progress",
@@ -56,7 +49,6 @@
},
{
"name": "Increased Storage Capacity and Throughput",
- "description": "Increased storage capacity of ICP subnets, storage per canister smart contract, and message throughput between subnets. Those improvements lead to each subnet being able to host larger workloads and better utilization of ICP’s hardware resources.
The world's largest smart contracts can now grow even larger, far surpassing smart contracts on any other chain.", "milestone_id": "Stellarator", "eta": null, "status": "in_progress", @@ -64,10 +56,8 @@ { "title": "Increase Stable Memory Limit to 500GiB", "overview": "Increasing the stable memory limit of a canister to 500GiB.", - "description": "This feature will increase the stable memory limit of a canister to 500GiB for all canisters, allowing canisters to hold more state. The API will remain unchanged. A single message, however, will not be able to write to more than 8GiB.", "forum": "https://forum.dfinity.org/t/increased-canister-smart-contract-memory/6148/128", "proposal": "", - "wiki": "", "docs": "", "eta": "Q2 2024", "status": "in_progress", @@ -78,10 +68,8 @@ { "title": "Log-structured merge tree (LSMT) storage layer", "overview": "Rewrite the storage layer to rely on log-structured merge trees (LSMTs) instead of XFS reflinking. This will give more fine-grained control over what happens during checkpointing and thus enable further, more targeted, performance improvements.", - "description": "The state manager currently relies on XFS reflinks to achieve copy-on-write-like functionality upon writing checkpoints. While this is conceptually nice it delegates some tasks with high optimization potential to the file system. The goal of this feature is to implement an LSMT-based storage layer which will enable optimizations further down the road. We also expect some immediate positive performance impact, such as improved checkpointing times, from this feature.", "forum": "", "proposal": "", - "wiki": "", "docs": "", "eta": "", "status": "deployed", @@ -92,10 +80,8 @@ { "title": "Support 1TB of replicated storage", "overview": "Protocol improvements and optimizations of on-disk access and for syncing to reach 1TB of replicated state.", - "description": "This feature captures the goal to support 1TB of replicated storage per subnet. Work on this feature includes protocol improvements and optimizations related to interacting with the replicated state on disk and when syncing it via state sync. It also includes efforts to make sure that node machines match the hardware requirements in terms of physical disk size etc.", "forum": "", "proposal": "", - "wiki": "", "docs": "", "eta": "", "status": "in_progress", @@ -107,10 +93,8 @@ { "title": "Improved consensus throughput and latency", "overview": "Improved consensus throughput and latency by better, and less bursty, node bandwidth use. Achieved through not including full messages, but only their hashes and other metadata, in blocks.", - "description": "The goal of this feature is to better use the bandwidth of nodes by not including full messages, but only their hashes in blocks. This leads to a less bursty bandwidth usage of nodes compared to sending full blocks when being a block maker. The result of this is increased consensus throughput.", "forum": "", "proposal": "", - "wiki": "", "docs": "", "eta": "", "status": "in_progress", @@ -121,10 +105,8 @@ { "title": "Motoko orthogonal persistence", "overview": "Offering simple and scalable persistence across upgrades without needing to use stable memory in Motoko.", - "description": "Motoko already offers orthogonal persistence where objects are automatically retained across upgrades if reachable from stable variables. 
This simple-to-use and safe model will now be made as scalable as stable memory, by implementing instantaneous upgrades without serialization to and from stable memory and by allowing the same amount of orthogonally persistent data as with stable memory. For this purpose, the main memory obtains a self-descriptive format that can be retained across upgrades, while its address space is extended to 64-bit to pass the 4GB data limit.", "forum": "", "proposal": "", - "wiki": "", "docs": "", "eta": "", "status": "in_progress", @@ -135,10 +117,8 @@ { "title": "Asynchronous Checkpointing", "overview": "Replicas currently persist the replicated state to disk every couple hundreds of rounds in a process called checkpointing. Currently some parts of checkpointing are done synchronously, which leads to drops in the execution rate in the checkpointing rounds and users have to wait longer for their responses during these rounds. The goal of this feature is to make more checkpointing steps run asynchronously in the background to make the IC's performance more consistent and predictable.", - "description": "", "forum": "", "proposal": "", - "wiki": "", "docs": "", "eta": "", "status": "in_progress", @@ -157,10 +137,8 @@ { "title": "Increase Stable Memory Limit to 32GiB", "overview": "Increasing the stable memory limit of a canister from 8GiB to 32 GiB.", - "description": "Currently, a canister can access 8GiB of stable memory. This feature will increase that limit to 32GiB for all canisters, allowing canisters to hold more state. The API will remain unchanged. A single message however will not be able to write to more than 8GiB.", "forum": "https://forum.dfinity.org/t/increased-canister-smart-contract-memory/6148/128", "proposal": "", - "wiki": "", "docs": "", "eta": "October 2022", "status": "deployed", @@ -171,10 +149,8 @@ { "title": "HTTPS Outcalls from Canisters", "overview": "Enables canisters to make calls to HTTP(S)-based servers. Trustless integration with Web2.", - "description": "This feature directly integrates Web 3.0 with the Web 2.0 worlds by enabling canister smart contracts on the Internet Computer to make HTTP(S) outcalls to Web 2.0 services outside the blockchain in a completely trustless manner. Using this feature, one can realize a substantial part of the functionalities currently offered by blockchain oracle services, just with better security guarantees and at a lower cost. Possible use cases include directly obtaining market data from HTTP servers for DeFi dapps and decentralized insurance services, sending notifications to end users via traditional communications channels, or implementing, by also using the threshold ECDSA feature, an Ethereum integration entirely in canister space.", "forum": "https://forum.dfinity.org/t/long-term-r-d-general-integration-proposal/9383", "proposal": "https://dashboard.internetcomputer.org/proposal/35639", - "wiki": "", "docs": "https://internetcomputer.org/https-outcalls", "eta": "2022", "status": "deployed", @@ -185,10 +161,8 @@ { "title": "Support 450GB replicated storage", "overview": "Enhance replicated storage further to 450GB per subnet. Requires protocol improvements and optimizations.", - "description": "This feature captures the goal to support 450GB of replicated storage on subnets. Work on this feature includes protocol improvements and optimizations related to interacting with the replicated state on disk and when syncing it via state sync. 
It also includes work to make sure that node machines match the hardware requirements in terms of physical disk size etc.", "forum": "", "proposal": "", - "wiki": "", "docs": "", "eta": "", "status": "deployed", @@ -199,10 +173,8 @@ { "title": "Wasm-native stable memory", "overview": "Letting stable read and write operations directly access replicated memory, similar to how Wasm load and store operations access the the heap.", - "description": "The goal of introducing Wasm-native stable memory is to improve the performance of stable reads and writes by letting these operations directly access stable memory in the same way Wasm loads and stores access the Wasm heap. This will make direct use of stable memory more practical and it will not require canister developers to make any changes to how they use stable memory.", "forum": "https://forum.dfinity.org/t/proposal-wasm-native-stable-memory/15966", "proposal": "https://dashboard.internetcomputer.org/proposal/88812", - "wiki": "", "docs": "", "eta": "2023", "status": "deployed", @@ -213,10 +185,8 @@ { "title": "Increase Stable Memory Limit to 48GiB", "overview": "Increasing the stable memory limit of a canister from 32GiB to 48 GiB.", - "description": "Currently, a canister can access 32GiB of stable memory. This feature will increase that limit to 48GiB for all canisters, allowing canisters to hold more state. The API will remain unchanged. A single message however will not be able to write to more than 8GiB.", "forum": "https://forum.dfinity.org/t/increased-canister-smart-contract-memory/6148/128", "proposal": "", - "wiki": "", "docs": "", "eta": "2023", "status": "deployed", @@ -226,10 +196,8 @@ { "title": "Increase Stable Memory Limit to 96GiB", "overview": "Increasing the stable memory limit of a canister from 48GiB to 96GiB.", - "description": "Currently, a canister can access 48GiB of stable memory. This feature will increase that limit to 96GiB for all canisters, allowing canisters to hold more state. The API will remain unchanged. A single message however will not be able to write to more than 8GiB.", "forum": "https://forum.dfinity.org/t/increased-canister-smart-contract-memory/6148/128", "proposal": "", - "wiki": "", "docs": "", "eta": "Jan 2023", "status": "deployed", @@ -239,10 +207,8 @@ { "title": "Bazel-based Build System", "overview": "Reducing build and testing times through a Bazel-based build system. More aggressive caching. Only perform necessary steps.", - "description": "As the code base of the Internet Computer grows, build and testing times increase. By switching to a Bazel-based build system, unnecessary build steps are automatically skipped, artifacts can be cached more broadly, and the CI times are significantly reduced. This will affect all community members who verify Internet Computer builds before voting on upgrades.", "forum": "", "proposal": "", - "wiki": "", "docs": "", "eta": "July 2023", "status": "deployed", @@ -253,10 +219,8 @@ { "title": "Support 750GB replicated storage", "overview": "Enhance replicated storage further to 750GB per subnet. Requires protocol improvements and optimizations.", - "description": "This feature captures the goal to support 750GB of replicated storage on subnets. Work on this feature includes protocol improvements and optimizations related to interacting with the replicated state on disk and when syncing it via state sync. 
It also includes work to make sure that node machines match the hardware requirements in terms of physical disk size etc.", "forum": "", "proposal": "", - "wiki": "", "docs": "", "eta": "", "status": "deployed", @@ -267,10 +231,8 @@ { "title": "Locally timing out canister requests", "overview": "Timing out canister-to-canister messages during high load.", - "description": "As opposed to ingress messages, canister-to-canister messages can not timeout at the moment. This feature will introduce the possiblity for the IC to timeout canister-to-canister requests in high-load phases.", "forum": "https://forum.dfinity.org/t/long-term-r-d-scalability-proposal/9387", "proposal": "", - "wiki": "", "docs": "", "eta": "December 2022", "status": "deployed", @@ -281,10 +243,8 @@ { "title": "High-replication Subnets", "overview": "Subnets with higher replication factor than app subnets. Initially ~30 nodes. For highly security-critical dapps such as financial applications.", - "description": "By increasing the replication factor of a subnet, the security increases as the subnet can tolerate more faults or malicious actors. Highly sensitive dapps, such as DAOs, demand such security levels. With this feature, a developer can choose to deploy a dapp on a high-replication subnet. As there are more replica machines on such subnets, the cycle costs will be higher than on regular application subnets.", "forum": "https://forum.dfinity.org/t/introducing-the-first-fiduciary-subnet/17594", "proposal": "", - "wiki": "", "docs": "", "eta": "December 2022", "status": "deployed", @@ -295,10 +255,8 @@ { "title": "Composite queries", "overview": "Making queries composable so that a query can call other queries. Supported within a subnet initially.", - "description": "Canisters have two types of methods: updates and queries. In contrast to updates, queries are not composable. In other words, a query cannot call other queries. A composite query is a new type of query that can call other queries. This feature will make it easier for developers to build scalable dapps that shard data across multiple canisters.", "forum": "https://forum.dfinity.org/t/proposal-composite-queries/15979", "proposal": "https://dashboard.internetcomputer.org/proposal/87599", - "wiki": "", "docs": "", "eta": "2023", "status": "deployed", @@ -309,10 +267,8 @@ { "title": "Canister Timers", "overview": "Timer API to schedule tasks to be run on a canister. Improves on heartbeat API by configuring the frequency.", - "description": "This feature introduces a new Timer API that will allow canisters to schedule any number of periodic tasks with configurable frequency using a high-level timer library implemented in Motoko and Rust. This is an improvement over the existing Heartbeat API that has no way of configuring the frequency.", "forum": "https://forum.dfinity.org/t/heartbeat-improvements-timers-community-consideration/14201/", "proposal": "https://dashboard.internetcomputer.org/proposal/88293", - "wiki": "", "docs": "", "eta": "Q1 2023", "status": "deployed", @@ -323,10 +279,8 @@ { "title": "Network Scalability: State Sync, Certification, and XNet", "overview": "Improving network scalability in terms of state sync, certification, and XNet calls", - "description": "This feature ensures that the Internet Computer meets future scalability requirements in terms of number of subnets and size of their growing canister state. 
The main focus is on the scalability of the XNet communication protocol and the state sync protocol, including state certification.", "forum": "https://forum.dfinity.org/t/long-term-r-d-scalability-proposal/9387/3", "proposal": "https://dashboard.internetcomputer.org/proposal/35648", - "wiki": "", "docs": "", "eta": "February 2022", "status": "deployed", @@ -337,10 +291,8 @@ { "title": "Subnet Splitting MVP", "overview": "Subnet splitting allows a subnet and its canisters and state to be split into two subnets with minimal interruption of canisters on the subnet.", - "description": "The Internet Computer is designed to have unbounded capacity by scaling out to different subnet blockchains. Each subnet, however, has a bounded capacity: It is limited in how many messages it can process and how much canister memory it can hold. If a subnet becomes overloaded then the canisters on that subnet may become less responsive or unable to increase their memory usage. Subnet splitting aims to address such issues by providing functionality to split a single subnet into two subnets. The MVP version can be viewed as a first step towards the vision of full subnet splitting layed out in the forum post linked below. The goal is to have a fully functional and end to end verifiable process which consists of a series of NNS proposals. Compared to full subnet splitting the MVP version cuts some corners in terms of automation and minmizing downtime but otherwise follows the same ideas so that the MVP version can be turned into full subnet splitting in future incremental steps.", "forum": "https://forum.dfinity.org/t/long-term-r-d-subnet-splitting-proposal/9402/4", "proposal": "https://dashboard.internetcomputer.org/proposal/35672", - "wiki": "", "docs": "", "eta": "", "status": "deployed", @@ -351,10 +303,8 @@ { "title": "Deterministic Time Slicing", "overview": "A single message execution can span multiple IC rounds. Important for canister upgrades or inherently long-lasting computations.", - "description": "Deterministic time slicing allows for long(er) running, multi-round computations by suspending the execution at the end of one round and resuming it in the next. The feature is currently enabled on all application and verified application subnets. All messages except for queries are automatically sliced and executed in multiple rounds. The instruction limit for such messages has been increased from 5 billion instructions to 20 billion instructions. Further increases will follow after the \"Configurable Wasm Heap Limit\" feature ships.", "forum": "https://forum.dfinity.org/t/deterministic-time-slicing/10635", "proposal": "", - "wiki": "", "docs": "", "eta": "2023", "status": "deployed", @@ -365,10 +315,8 @@ { "title": "A subnet can support 100K+ canisters", "overview": "Addressing the biggest bottlenecks that prevent 100K+ canisters per subnet. Important for dapps that have a canister per user.", - "description": "It has been observed that performance degrades (e.g. reduced finalization rate) when a subnet holds many canisters, but in the long term it should be able to support 100K+ canisters on a single subnet. 
This feature includes work around identifying and addressing the biggest bottlenecks when a subnet is running with many canisters in order to ensure a smooth operation even when many thousands of canisters exist on the same subnet.", "forum": "", "proposal": "", - "wiki": "", "docs": "", "eta": "", "status": "deployed", @@ -379,10 +327,8 @@ { "title": "Improved I/O Handling in State Manager", "overview": "Improvements of interaction with the disk in the state manager. This includes reducing the number of interactions or taking them out of the state machine loop.", - "description": "This feature is a collection of improvements to how we interact with the disk in the state manager. By reducing the number of interactions, or taking them out of the state machine loop, we can reduce the instances where the state manager blocks, or otherwise interferes with, execution. The work in this feature is important to make sure that the IC can keep up with the scalability requirements.", "forum": "https://forum.dfinity.org/t/long-term-r-d-scalability-proposal/9387", "proposal": "", - "wiki": "", "docs": "", "eta": "", "status": "deployed", @@ -393,9 +339,7 @@ { "title": "Larger Wasms", "overview": "Allow devs to deploy and update Wasm files larger than the current 2 MB limit resulting from the ingress size limit. Achieved by chunking large Wasms and reassembling fragments before installation or update.", - "description": "The Wasm size for ICP canisters is constrained to the maximum ingress message size of 2 MB because canister installation and upgrade are performed using single ingress messages. 2 MB Wasm files are not sufficient for more complex canisters. The support for larger Wasm files will be implemented by chunking a large Wasm file, uploading the chunks via multiple ingress messages, reconstituting the large Wasm file, and then performing the actual installation or upgrade.", "proposal": "", - "wiki": "", "docs": "", "is_community": false, "in_beta": false, @@ -406,10 +350,8 @@ { "title": "Replica-signed queries", "overview": "With this feature replicas will start signing query responses. This allows users to verify that intermediaries, e.g., boundary nodes, did not tamper with the response.", - "description": "Regular queries on the IC are executed by a single replica. This feature will introduce signatures by the replicas, which will prevent dishonest intermediaries (such as boundary nodes) to change the content of a query response. Signed query responses are also a precondition for the follow up feature called certified queries which will execute queries against a quorum comprising more than 1/3 of the subnet nodes. If all responses agree on the response payload and contain valid signatures one can conclude that the query response an authentic response relative to the IC's state.", "forum": "https://forum.dfinity.org/t/feature-discussion-replica-signed-queries/21793", "proposal": "", - "wiki": "", "docs": "", "eta": "", "status": "deployed", @@ -419,10 +361,8 @@ { "title": "Reassigning Nodes to Different Subnets", "overview": "Allowing nodes to be reassigned from one subnet to another subnet through NNS proposals, without redeploying.", - "description": "This featured enables nodes to be reassigned to other subnets through simple NNS proposals rather than redeploying nodes from scratch. 
Nodes now leave old subnet “gracefully”, without counting the departing node in the budget of faulty/malicious nodes in the subnet.", "forum": "", "proposal": "", - "wiki": "", "docs": "", "eta": "March 2022", "status": "deployed", @@ -433,10 +373,8 @@ { "title": "Observability Solution for Nodes", "overview": "Consensus-verified on-chain data of node behaviour. Enables monitoring availability of nodes and penalizing non-performing nodes.", - "description": "Currently, no publicly verifiable data is available on chain about the behavior of nodes. This data is crucial for monitoring the availability of nodes and penalising nodes that are off line. A solution is explored to have metrics of node performance available on chain.", "forum": "", "proposal": "", - "wiki": "", "docs": "", "eta": "", "status": "deployed", @@ -447,10 +385,8 @@ { "title": "Reduction of P2P latency", "overview": "Latency improvements of P2P messaging through improved protocol and implementation. Reduces the latency of the consensus protocol and thus improves overall performance.", - "description": "", "forum": "", "proposal": "", - "wiki": "", "docs": "", "eta": "", "status": "deployed", @@ -468,10 +404,8 @@ { "title": "Configurable Wasm memory limit", "overview": "Introduce an explicitly configurable Wasm heap limit. Exceeding the limit alerts the developer instead of bricking the canister when hitting the 4GiB limit.", - "description": "The Wasm heap of canisters is limited to 4GiB. The limit is fundamental and cannot be increased because of 32-bit memory addresses. If a canister uses all of the available heap space, it will start producing out-of-memory errors and may stop working, which could lead to data loss and bricking of the canister. The developer may not realize this until it is too late to fix the issue. This feature aims to introduce an explicit Wasm heap limit that can be configured in the canister settings. The default value for this limit will be a conservative amount, such as 3GiB. If a canister tries to use more memory than the limit, it will receive an out-of-memory error. This will alert the developer to the potential memory issue and allow them to safely upgrade the canister to a version that uses less memory.", "forum": "https://forum.dfinity.org/t/proposal-configurable-wasm-heap-limit/17794", "proposal": "https://dashboard.internetcomputer.org/proposal/105322", - "wiki": "", "docs": "", "eta": "2024", "status": "in_progress", @@ -482,11 +416,9 @@ { "title": "Best-effort messaging", "overview": "Extend canister messaging by an additional message type that enables more scalable and responsive dApps.", - "description": "The current model of guaranteed responses for inter-canister messages based on an extension of the actor model has a few issues that lead to bad user experience. 
This is the first iteration towards implementing the broader strategy (as described in the motion proposal linked below).", "status": "in_progress", "forum": "https://forum.dfinity.org/t/scalable-messaging-model/26920", "proposal": "https://dashboard.internetcomputer.org/proposal/127668", - "wiki": "", "docs": "", "eta": "2024", "is_community": false, @@ -496,11 +428,9 @@ { "title": "Small guaranteed-response messages", "overview": "Extend canister messaging by an additional message type with the same guarantees as current messages with a tighter upper bound on the message size to allow for more messages that can be in flight at the same time.", - "description": "The current model of guaranteed responses for inter-canister messages based on an extension of the actor model has a few issues that lead to bad user experience. This is the second iteration towards implementing the broader strategy (as described in the motion proposal linked below).", "status": "future", "forum": "https://forum.dfinity.org/t/scalable-messaging-model/26920", "proposal": "https://dashboard.internetcomputer.org/proposal/127668", - "wiki": "", "docs": "", "eta": "", "is_community": false, @@ -509,10 +439,8 @@ { "title": "Query call metrics", "overview": "Metrics about query calls executed for a canister during a time period. Requirement for a fair charging of cycles for query calls.", - "description": "Metrics about query calls executed for a canister during a time period. Requirement for a fair charging of cycles for query calls.", "forum": "", "proposal": "", - "wiki": "", "docs": "", "eta": "", "status": "in_progress", @@ -522,10 +450,8 @@ { "title": "Charging for query calls", "overview": "Charge a fair price in cycles for executed query calls. Currently query calls are free and only update calls are charged for.", - "description": "Charge a fair price in cycles for executed query calls. Currently query calls are free and only update calls are charged for.", "forum": "", "proposal": "", - "wiki": "", "docs": "", "eta": "", "status": "", @@ -536,12 +462,10 @@ { "title": "Increasing instruction limits for query and update calls", "overview": "Raising instruction limits of query and update calls. They have limitations of a few billion instructions currently.", - "description": "Both query and update calls currently have limits in terms of how many instructions can be executed per call. This feature is about increasing those limits by further protocol enhancements. In the current design of the ICP, there are both inherent and practical limits by how much the instructions limits of canister calls can be raised and this feature will need to find a sweet spot within this space to obtain a decent ratio of improvement for the resources spent. Deterministic time slicing (DTS) has already raised the limits of update calls considerably by spanning them over multiple rounds. DTS offers some more potential to increase the number of instructions executed by update calls, but this is limited to what can be computed within a single epoch (the rounds between two checkpoints in a subnet).", "eta": "", "status": "", "forum": "", "proposal": "", - "wiki": "", "docs": "", "is_community": false, "in_beta": false @@ -549,10 +473,8 @@ { "title": "Dapp-level metrics", "overview": "Dapp-level metrics allow for obtaining insights into statistics of dapps, e.g., the daily / weekly / monthly active users of a dapp.", - "description": "Dapp-level metrics are an important aspect of on-chain data analytics. 
Stakeholder groups such as the dapp developers and potential investors in the dapp or the overall ecosystem have an interest in being able to observe metrics like daily active users (DAUs). The collected metrics can not only be made available within the ICP ecosystem, but also in the systems of major crypto data aggregators, which helps make the data available to a much broader audience beyond the ICP ecosystem. It is still to be determined at which place the metrics are to be collected, e.g., at the protocol level, the Boundary Nodes, through dapps themselves, or a combination thereof.", "forum": "", "proposal": "", - "wiki": "", "docs": "", "eta": "", "status": "future", @@ -563,10 +485,8 @@ { "title": "Wallet apps (\"Wapps\")", "overview": "Wapps are the next evolutionary step of apps: Install your personal apps directly on the blockchain and keep control over your data.", - "description": "Wapps, which stands for Wallet Apps, are the next step in the evolution of apps. Coming from a world where people install apps on their personal mobile devices from the App Store or Google Play, Wapps are canisters installed on the blockchain from an on-chain store on ICP. This is the next evolution of apps and dapps, where users install their own canister for a dapp and this canister holds all the user's data related to this dapp. Thus, the user remains in full control over their apps and data and can easily delete their data if they wish to not participate in the dapp any more. This makes Wapps a strong concept for app decentralization and privacy.", "forum": "", "proposal": "", - "wiki": "", "docs": "", "eta": "", "status": "future", @@ -577,9 +497,7 @@ { "title": "Support of 1M canisters per subnet", "overview": "Allowing up to 1 million canisters on a subnet, up from the current 100K canisters.", - "description": "The current protocol implementation supports up to around 100K canisters per subnet. This is not sufficient in the long term, for example, for dapp architectures that spawn a canister per user and want to grow to millions of users as they would require many subnets just to host one dapp, while not hitting other subnet limitations such as compute or storage. The goal is to grow the supported number of canisters per subnet to around one million. This requires, among other things, revisiting the current architecture of the backing storage of canisters and how checkpointing of the storage is done. A key idea is to differentiate between active and passive canisters, i.e., such that have been active in a given DKG interval and such that have not, respectively. For non-active canisters, processing done during checkpointing, such as writing the canister's backing file to SSD, can be skipped.", "proposal": "", - "wiki": "", "docs": "", "eta": "", "status": "", @@ -591,10 +509,8 @@ { "title": "Support multiple TB of replicated storage", "overview": "Enhance replicated storage further to multiple terabytes per subnet. Requires protocol improvements and optimizations.", - "description": "This feature is the follow up feature to supporting 1TB of replicated storage. The goal is to support multiple terabytes of replicated storage on a subnets. Work on this feature includes protocol improvements and optimizations related to interacting with the replicated state on disk and when syncing it via state sync. 
It also includes work to make sure that node machines match the hardware requirements in terms of physical disk size etc.", "forum": "", "proposal": "", - "wiki": "", "docs": "", "eta": "", "status": "future", @@ -606,10 +522,8 @@ { "title": "Canister control audit trail", "overview": "Seamless audit trail of management actions on canisters (creation, update, deletion). Allows to trace the sequence of installed canister versions to source code through a Wasm hash.", - "description": "The current canister management architecture does not create a seamless audit trail of every change to a canister, such as creation and updates, by its controllers. This implies that there is no seamless audit trail of the code that has run on canisters. This feature addresses this issue by auditing every controller action performed on a canister and storing the associated metadata permanently in the subnet. I.e., all Wasm hashes of the canister's history are logged with timestamps and further metadata in a seamless audit trail. Using this feature and deterministic builds for canisters allows anyone to verify which code has been running at a given time in the canister.", "forum": "", "proposal": "", - "wiki": "", "docs": "", "eta": "", "status": "in_progress", @@ -619,10 +533,8 @@ { "title": "Improved state certification", "overview": "The protocol requires certifying parts of the replicated state every round, which includes hashing the respective parts of the state. The faster this is, the more time can be spent on other things during the round. This feature is about optimizing certification times.", - "description": "Certifying parts of the replicated state includes building a hash tree containing hashes of all the data that needs to be certified. The goal of this feature is to investigate how (and if) we could profit from caching parts of the trees to improve the certification time.", "forum": "", "proposal": "", - "wiki": "", "docs": "", "eta": "", "status": "future", @@ -633,10 +545,8 @@ { "title": "Messages to specified canister version", "overview": "Messages to canisters can be required to be only executed by a specific version of the call target to help guarantee that the intended (e.g., audited) version of the code is running.", - "description": "This feature allows for ingress or inter-canister messages to specify a specific version of the to-be-called canister. If the canister's actual version does not match the requested version, the call is rejected by the system. The benefit of this is security for the caller as the caller can be assured which version of the canister, and thus which code, is executed by the canister. For canisters that perform critical tasks, e.g., such with financial implications, this functionality is crucial in preventing unintended consequences when a canister has been upgraded and would perform unintended actions.", "forum": "", "proposal": "", - "wiki": "", "docs": "", "eta": "", "status": "", @@ -646,10 +556,8 @@ { "title": "Capabilities", "overview": "Capabilities — untamperable self-contained permission tokens — allow users or canisters to perform actions with canisters. Authority can be delegated by delegating a capability to another entity.", - "description": "Currently, it is hard for a user to delegate authority over actions to smart contracts, e.g., to allow a dapp to perform token transfers on behalf of the user. Capabilities are untamperable tokens that grant an entity access rights that can be delegated to other entities. 
A capability on the IC will be realized as signed token that grants the recipient authority to perform actions defined in the capability. A first solution can be realized on the application layer, a full solution will integrate into the protocol stack so that capabilities are managed and evaluated by the system when inter-canister calls are executed.", "forum": "", "proposal": "", - "wiki": "", "docs": "", "eta": "", "status": "", @@ -659,10 +567,8 @@ { "title": "Canister groups", "overview": "Guarantee collocation of canisters on the same subnet by adding them to canister groups. Groups of canisters are always moved together between subnets.", - "description": "Communication between canisters on the same subnet is significantly faster than between subnets. Since inter-canister communication latency is less of a concern, one can scale dapps through multi-canister architectures, as long as the canisters are guaranteed to be collocated. Since upcoming load-balancing mechanisms need to move canisters between subnets, we need to avoid splitting up canisters that are part of the same dapp. Canister groups would allow a dapp developer to explicitly indicate which canisters \"belong\" together and which should always be located on the same subnet.", "forum": "https://forum.dfinity.org/t/canister-groups/16015", "proposal": "", - "wiki": "", "docs": "", "eta": "", "status": "", @@ -673,10 +579,8 @@ { "title": "Secure XNet cycles protocol", "overview": "Securely sending cycles from a less-trusted subnet, e.g., a UTOPIA subnet, rented subnet, or low-replication subnet, to a regular ICP subnet.", - "description": "Future extensions of ICP such as private subnets or subnets with a smaller number of nodes may not fulfill the conditions of cycles security and thus require that inter-subnet calls initiated on such subnets to other ICP subnets be secured accordingly. This feature will look into the challenges of securely handling cycles in those types of subnets.", "forum": "", "proposal": "", - "wiki": "", "docs": "", "eta": "", "status": "", @@ -686,11 +590,9 @@ { "title": "Messaging model enhancements", "overview": "Enhancing canister messaging by means such as named callbacks and one-shot messages.", - "description": "The current model of guaranteed responses for inter-canister messages based on an extension of the actor model has a few issues. Some of those issues are addressed with best-effort messaging and small guaranteed-response messages already. The enhancements in this feature address remaining issues with further extensions to the messaging model. Named callbacks are a solution where a call's response is not targeted at a specific memory address of the calling canister, but a named function, to allow upgrades even with pending responses. One-shot messages are canister messages without a response message, thus more aligned with the pure actor model.", "status": "future", "forum": "", "proposal": "", - "wiki": "", "docs": "", "eta": "", "is_community": false, @@ -699,11 +601,9 @@ { "title": "Queries at specified block height", "overview": "Execute a query at a specified block height. Prerequisite for use cases like replicating queries on multiple replicas, XNet queries or databases.", - "description": "Queries today are executed on the state at a block height determined by the replica executing the query. 
For having queries executed by multiple replicas to have update-like security guarantees, yet being more efficient than updates, it needs to be ensured that the query is executed on the same state height on each of the targeted replicas to receive the same response for honest replicas.", "status": "future", "forum": "", "proposal": "", - "wiki": "", "docs": "", "eta": "", "is_community": false, @@ -712,11 +612,9 @@ { "title": "Hyper-scalable inter-subnet message routing", "overview": "Hyper-scalable XNet messaging protocol and implementation supporting a practically unlimited number of subnets. The current implementation works well up to a reasonably large number of subnets.", - "description": "The current inter-subnet message routing protocol uses a point-to-point architecture where a subnet pulls XNet messages from the other subnets. This architecture does not scale to tens of thousands of subnets and needs to be improved to allow for unbounded future scaling. The solution may involve a hierarchic addressing scheme or routing via log n many subnets that are topologically \"in between\" the source and target.", "status": "future", "forum": "", "proposal": "", - "wiki": "", "docs": "", "eta": "", "is_community": false, @@ -725,11 +623,9 @@ { "title": "Canister migration", "overview": "Migrating canisters between subnets. Important for balancing subnet utilization and scaling the IC.", - "description": "Canister smart contracts on ICP are created on a specific subnet and so far cannot be moved to another subnet. The canister migration feature enables canisters to be moved to other subnets. This requires challenges in the space of canister addressing to be resolved, as well as the state migration to another subnet. Canister migration will enable more flexible and dynamic management of canisters, similar to how VMs or containers can be migrated in Web 2 public cloud or container orchestration environments.", "status": "future", "forum": "", "proposal": "", - "wiki": "", "docs": "", "eta": "", "is_community": false, @@ -738,11 +634,9 @@ { "title": "Subnet deletion", "overview": "Enable subnets to be deleted via NNS proposals.", - "description": "Currently, subnets on the Internet Computer cannot be deleted, but only changed in terms of their topology. This feature aims to extend the protocol so that it is also possible to subnets via NNS proposals.", "status": "future", "forum": "", "proposal": "", - "wiki": "", "docs": "", "eta": "", "in_beta": false @@ -750,11 +644,9 @@ { "title": "Libraries for large data transfers", "overview": "Provide more generalized abstractions to overcome message size limits and make them invisible to developers.", - "description": "The message size limit of 2MB for a single message on ICP requires dapps to chunk data transfers in order to transfer data sets larger than the limit. Abstracting message size limits through user-space libraries helps avoid that every dapp hitting those limits reimplements this functionality, e.g., chunking, on their own. 
This would also establish a quasi standard for this problem domain and thus make devs more efficient.", "status": "future", "forum": "", "proposal": "", - "wiki": "", "docs": "", "eta": "", "is_community": true, @@ -763,10 +655,8 @@ { "title": "Wasm deduplication", "overview": "Deduplicating Wasm code for running many instances of the same Wasm as is done in a-canister-per-user architectures.", - "description": "Dapps that are architected based on a one-canister-per-dapp model have the same Wasm file stored in a potentially large number of canisters. This unnecessarily consumes large amounts of subnet storage and costs cycles for storage and when updating the Wasm on each of the canisters. This feature is about a deduplication of Wasm files for such usage scenarios so that all canisters essentially can share a single Wasm file. There are some challenges in this area to be overcome and questions to be answered, e.g., does this require a \"master canister\" whose Wasm is managed and acts as template for the other canisters in the dapp, and what happens if this master canister is deleted.", "forum": "https://forum.dfinity.org/t/what-do-you-need-from-icp-in-2024/25726/9", "proposal": "", - "wiki": "", "docs": "", "eta": "", "status": "", @@ -776,11 +666,9 @@ { "title": "Reduction of call latencies", "overview": "Reduce call latencies for the most-used call patterns that negatively affect dapp responsiveness.", - "description": "The responsiveness of a dapp is affected mainly by the various kinds of call patterns used on ICP: Query calls, update calls, cross-canister calls, http outcalls, etc. Improving the platform for better dapp responsiveness requires an assessment of the different call patterns, their latencies, and how frequently they are used in dapps and what the worst offenders are for different dapp architectures. Based on this, efforts need to taken to improve where optimizations have the largest positive effect on overall dapp responsiveness. It should be noted that the ICP stack has already been quite aggressively optimized in some parts and that latency reduction is expected to hit inherent (physical) limitations, e.g., the message propagation times between the geographically distributed nodes of a subnet, sooner or later in those part of the stack.", "status": "", "forum": "", "proposal": "", - "wiki": "", "docs": "", "eta": "", "is_community": true, @@ -789,11 +677,9 @@ { "title": "WASI V2 support", "overview": "WASI V2 is the upcoming standard for running Wasm programs outside of browser environments. It is expected to become the de-facto standard and be targeted by many libraries and thus should be supported by ICP.", - "description": "WASI V2 is the upcoming standard for running Wasm programs outside of browser environments. It is expected to become the de-facto standard and be targeted by many libraries and thus should be supported by ICP.", "status": "", "forum": "", "proposal": "", - "wiki": "", "docs": "", "eta": "", "is_community": true, @@ -802,10 +688,8 @@ { "title": "HTTPS outcalls V2", "overview": "Additional feature for HTTPS outcalls: IPv4 support, single-node outcalls, and fire & forget outcalls. Allows for reduced latencies and reduced cycle costs.", - "description": "HTTPS outcalls on ICP are a feature that allow smart contracts to interact with Web2 services based on the HTTP protocol. This feature extends the existing HTTPS outcalls with additional functionality that make them better applicable for multiple scenarios. 
Multiple new functionalities are envisioned to be implemented. IPv4 support: This requires certain ICP nodes to obtain IPv4 addresses and allows making outcalls to a much broader range of services than only IPv6-enabled services. Single-node outcalls: Only one node in the subnet, instead of all nodes, performs the outcall. This reduces security in exchange for improved performance for non-critical outcalls. Fire & forget outcalls: Outcalls whose response is not required and is ignored, which is interesting for non-critical calls, e.g., sending push notifications to users.", "forum": "", "proposal": "", - "wiki": "", "docs": "", "eta": "", "status": "", @@ -815,10 +699,8 @@ { "title": "HTTPS outcalls V3", "overview": "Support HTTPS outcalls also for queries, instead of only update calls.", - "description": "HTTPS outcalls on ICP are a feature that allows smart contracts to interact with Web2 services based on the HTTP protocol. This feature addresses HTTPS outcalls for queries in addition to update calls. Doing HTTPS outcalls in queries requires a different architecture and has different trust requirements compared to outcalls in update calls. Queries are executed on a single replica only, without consensus being involved. The same would hold for HTTPS outcalls invoked from queries: They are made from a single replica and do not involve consensus. Thus, they are very similar to how HTTPS calls are made by the typical Web2 service. A challenge w.r.t. this feature is the latency increase for queries when supporting outcalls.", "forum": "", "proposal": "", - "wiki": "", "docs": "", "eta": "", "status": "", @@ -828,10 +710,8 @@ { "title": "XNet composite queries", "overview": "Extending composite queries to work also across subnets, which gives full power to composite queries and simplifies building large-scale decentralized systems.", - "description": "Currently, composite queries only work within a subnet when canisters send queries to other canisters. This feature extends composite queries to work for canisters across subnet boundaries. This enables new architecture patterns for dapps where canisters can use queries instead of update calls to obtain data from canisters on different subnets. ", "forum": "", "proposal": "", - "wiki": "", "docs": "", "eta": "", "status": "", @@ -841,10 +721,8 @@ { "title": "Capacity reservations for system calls", "overview": "Allowing canisters to reserve capacity for certain crucial system calls such as chain-key signing or HTTPS outcalls. Reserved capacity is guaranteed to be available to the canister.", - "description": "Allowing canisters to reserve capacity for certain crucial system calls such as chain-key signing or HTTPS outcalls. Reserved capacity is guaranteed to be available to the canister.", "forum": "", "proposal": "", - "wiki": "", "docs": "", "eta": "", "status": "", @@ -854,10 +732,8 @@ { "title": "Replica key lifecycle V2 — Proactive security", "overview": "Apply key rotation or resharing to a broader range of keys managed by the replica. Improved security against temporarily compromised replicas.", - "description": "Key rotation or resharing for the private keys of a replica can achieve proactive security for the protocol. Proactive security ensures that in case of a temporary node compromise, security can be restored after the compromise. Currently, only a subset of the keys used by replicas are subject to periodic key resharing or key rotation.
This feature extends proactive security measures to further keys that are not yet covered.", "forum": "", "proposal": "", - "wiki": "", "docs": "", "eta": "", "status": "", @@ -867,10 +743,8 @@ { "title": "Resilience against slow nodes", "overview": "Punishing ICP nodes that are constantly not performing to specification, i.e., failing too frequently to produce a block successfully when requested or being offline for a considerable time.", - "description": "The current version of ICP does not actively protect against nodes that are too slow, i.e., cannot keep up with the subnet in terms of processing blocks. Such nodes are constantly behind and thus do not contribute to the subnet. The protection mechanism against slow nodes will detect consistently slow nodes, eject them from the subnet, and replace them with new nodes.", "forum": "", "proposal": "", - "wiki": "", "docs": "", "eta": "", "status": "", @@ -880,10 +754,8 @@ { "title": "User-paid messages", "overview": "Using the regular gas model for specific settings: The caller, instead of the canister, pays for update and query calls.", - "description": "Using the regular gas model for specific settings: The caller, instead of the canister, pays for update and query calls.", "forum": "", "proposal": "", - "wiki": "", "docs": "", "eta": "", "status": "", @@ -893,10 +765,8 @@ { "title": "Subnet splitting V2", "overview": "An enhanced version of subnet splitting where all the heavy lifting is done by the protocol.", - "description": "The current first version of subnet splitting requires that the orchestration of the splitting be done by people through multiple NNS proposals. In order to further decentralize this, subnet splitting should be controlled by a single NNS proposal and all the orchestration should be done entirely by the protocol. An even more advanced variant could be fully autonomous and automatically split a subnet when it reaches its capacity. This more advanced scenario would bring ICP conceptually closer to what public cloud systems are doing in terms of autonomous capacity management.", "forum": "", "proposal": "", - "wiki": "", "docs": "", "eta": "", "status": "", @@ -906,10 +776,8 @@ { "title": "P2P unicast", "overview": "Utilizing unicast at the P2P layer to communicate artefacts instead of gossip to decrease bandwidth requirements and improve throughput.", - "description": "Utilizing unicast at the P2P layer to communicate artefacts instead of gossip to decrease bandwidth requirements and improve throughput.", "forum": "", "proposal": "", - "wiki": "", "docs": "", "eta": "", "status": "", @@ -919,10 +787,8 @@ { "title": "Blob storage", "overview": "Storage of large amounts of data in a subnet using a new blob storage architecture with the cost of around 2x the SSD cost. More efficient and cheaper compared to regular storage on ICP.", - "description": "The storage currently available on ICP is full-featured RAM-like storage that can be arbitrarily read or written by canisters. This generality comes with a cost in terms of replication factor and protocol overhead and a corresponding price tag. Many use cases can benefit from cheaper read-heavy storage where large objects of data can be written once and then only read or deleted (as a whole). The reduced capabilities of this kind of storage w.r.t. ICP's current storage allow it to be realized with substantially less space overhead and thus for a more competitive price.
The idea is to use erasure-coded blob storage that has a substantially reduced data replication as compared to today's ICP storage, while maintaining integrity and availability at the cost of increased overhead when reading the data. ICP can greatly benefit from the introduction of this kind of storage by fully utilizing the available physical storage of the nodes and thereby offering a much improved overall platform, potentially offering terabytes of additional storage per subnet in addition to the regular replicated storage. A canister author can choose which storage architecture to use for different parts of their use cases and thus benefit from reduced cycles cost and more storage capacity.", "forum": "", "proposal": "", - "wiki": "", "docs": "", "eta": "", "status": "future", @@ -932,10 +798,8 @@ { "title": "Blob streaming (download)", "overview": "High-throughput download of data from a subnet's blob storage. The implementation of this can have the bulk data bypass consensus when being read by a user.", - "description": "Storage blobs need to be read by canisters and dapps. If erasure coding will be the choice for the storage architecture, reading an erasure-coded blob requires one to read data from a number of subnet replicas greater than the 1/3 subnet threshold and recombining the blocks read to the data. Blob streaming realizes this for large storage objects in a chunked fashion to enable more efficient downloads of large contiguous amounts of blob data, e.g., an on-chain video stream or other object like an image. When a user is reading blob data, which is the main envisioned scenario, the bulk data can bypass consensus.", "forum": "", "proposal": "", - "wiki": "", "docs": "", "eta": "", "status": "future", @@ -945,10 +809,8 @@ { "title": "Blob streaming (upload)", "overview": "High-throughput upload of data to a subnet's blob storage. An optimized implementation of this can have the bulk data bypass consensus, if uploaded by a user.", - "description": "Blob data needs to be written by canisters and dapps. If erasure coding will be the choice for the storage architecture, writing data to erasure-coded storage requires one to write data to a number of subnet replicas greater than the 1/3 subnet threshold. Streaming does so for large storage objects in a chunked fashion to enable more efficient contiguous uploads of large amounts of blob data, e.g., an on-chain video stream or other object like an image. When a user is writing blob data, which is the main envisioned scenario, the bulk data can bypass consensus.", "forum": "", "proposal": "", - "wiki": "", "docs": "", "eta": "", "status": "future", @@ -958,10 +820,8 @@ { "title": "Eternal storage", "overview": "A storage tier on ICP for which a one-off payment for an unlimited storage duration instead of periodic rent is paid.", - "description": "The regular replicated memory on ICP requires a rent payment in cycles which is accounted for and charged for per consensus round. This feature intends to introduce a storage tier which rather requires a one-off payment in cycles for an unlimited storage duration.", "forum": "", "proposal": "", - "wiki": "", "docs": "", "eta": "", "status": "future", @@ -971,10 +831,8 @@ { "title": "Structured data storage", "overview": "Library for structured data storage in stable memory to enable more powerful and convenient out-of-the-box data storage and querying.", - "description": "Canisters do not have access to relational databases, but rather rely on storing their data in stable memory. 
This model requires programmers to take care of implementing their own data structures for efficient storage and querying of data. This feature is about implementing libraries that gives users a convenient higher-level storage abstraction that is well aligned with ICPs storage architecture. The feature complements stable memory data structures.", "forum": "", "proposal": "", - "wiki": "", "docs": "", "eta": "", "status": "future", @@ -984,10 +842,8 @@ { "title": "Inter-subnet messaging across trust zones", "overview": "Sovereign subnets are typically more centralized than ICP mainnet. Thus, messaging between subnets needs to consider the trust zones the subnets are in to maintain overall security.", - "description": "With sovereign subnets that are under the control of one entity or a consortium, the canister origin, or, in case of multiple subnets operated by the entity or consortium, even the subnet origin of messages cannot be entirely trusted, i.e., could have been tampered with by the operator of the sovereign subnet(s). This difference in the trust model when compared to regular subnets needs to be taken into consideration for inter-subnet messaging. A potential approach to solving this can be trust zones — sets of subnets where all networks trust each other and can freely communicate with each other. Communication between different trust zones (and mainnet, which can also be considered another trust zone) may be subject to constraints.", "forum": "", "proposal": "", - "wiki": "", "docs": "", "eta": "", "status": "", @@ -997,10 +853,8 @@ { "title": "Using cycles across trust zones", "overview": "Transferring cycles from a sovereign network to ICP mainnet or subnets in different trust zones can, because of their weaker decentralization properties, not rely on directly sending cycles.", - "description": "Cycles sent from sovereign subnets to other sovereign subnets or ICP mainnet cannot be trusted because sovereign subnets are more centralized and the entity controlling them could compromise the subnet and illegitimately create cycles in unlimited amounts. A potential solution to this is to have the operator of a sovereign subnet deposit cycles into a ledger on the target subnet it wants to communicate with (or on a suitable subnet on mainnet) from which cycles are deducted when calling into the target subnet.", "forum": "", "proposal": "", - "wiki": "", "docs": "", "eta": "", "status": "", @@ -1016,21 +870,17 @@ "description": "Today, users have to blindly trust AI models running on centralized servers with no visibility into how data is used and how AI models produce responses. Decentralized AI solves this problem by bringing the trustworthiness, security, verifiability, and resilience of smart contracts to AI applications.", "milestones": [ { - "name": "Onchain AI Inference of Larger Models", - "description": "In order to perform AI inference and training of larger models on chain, canister smart contracts need to run more compute- and memory-intensive workloads. This milestone expands the compute and memory capabilities of canisters and paves the way for future GPU hardware acceleration. A special focus is being placed on the developer experience by building tools to simplify writing AI applications.", "milestone_id": "Ignition", - "status": "in_progress", "eta": null, + "status": "in_progress", "elements": [ { "title": "API for AI computations", "overview": "API for smart contracts that allows them to run hardware-accelerated AI computations. 
Initially, hardware acceleration will rely on the CPU, paving the way for GPU hardware acceleration in the future.", - "description": "AI computations are typically a graph of a large number of floating-point operations. Abstracting those operations into an API streamlines the development of AI applications, and paves the way to introduce optimizations to these specialized workloads.", "status": "in_progress", "milestone_id": "Ignition", "forum": "", "proposal": "", - "wiki": "", "docs": "", "is_community": false, "in_beta": false @@ -1038,11 +888,9 @@ { "title": "Wasm64 execution environment", "overview": "The execution environment is lifted to Wasm64 with the benefit of a 64-bit address space and its much larger addressable memory, allowing developers to load larger models in main memory.", - "description": "The execution environment is lifted to Wasm64 with the benefit of a 64-bit address space and its much larger addressable memory.", "status": "in_progress", "forum": "", "proposal": "", - "wiki": "", "docs": "", "is_community": false, "in_beta": false, @@ -1051,12 +899,32 @@ { "title": "Tooling and libraries for developing AI smart contracts", "overview": "Providing developer tooling and libraries for building AI smart contracts on ICP, reusing existing AI ecosystem tools wherever possible.", - "description": "Developers need tooling and libraries to implement AI smart contracts. Ideally, the existing AI ecosystem is reused as much as possible, but specialized tools and libraries might be needed for developing on-chain AI applications.", "status": "", "milestone_id": "Ignition", "forum": "", "proposal": "", - "wiki": "", + "docs": "", + "is_community": false, + "in_beta": false + }, + { + "title": "Public specification for GPU-enabled nodes", + "overview": "The ICP community needs to agree on and adopt the hardware specification for replica nodes with GPUs.", + "status": "", + "milestone_id": "Ignition", + "forum": "", + "proposal": "", + "docs": "", + "is_community": false, + "in_beta": false + }, + { + "title": "AI-specialized subnets with GPU-enabled nodes", + "overview": "AI-specialized subnets are created from nodes with GPUs and host AI smart contracts for training and inference of large models.", + "status": "", + "milestone_id": "Ignition", + "forum": "", + "proposal": "", + "docs": "", "is_community": false, "in_beta": false @@ -1065,7 +933,6 @@ }, { "name": "Onchain AI Inference", - "description": "Allow smart contracts to run inference using AI models with millions of parameters fully on chain. The focus of this milestone is performance. There are performance optimizations that can be implemented both in the WebAssembly engine and the AI inference engine. The expected speedup is 10x and more.", "milestone_id": "Cyclotron", "eta": "July 15, 2024", "status": "deployed", @@ -1073,11 +940,9 @@ { "title": "Faster deterministic floating-point operations", "overview": "Accelerating deterministic floating-point computations in the Wasm engine.", - "description": "Floating-point operations are crucial for AI inference. In order to run on chain inside a smart contract, these operations need to be deterministic.
The goal of this feature is to optimize the part of the Wasm engine that makes these operations deterministic.", "status": "deployed", "forum": "", "proposal": "", - "wiki": "", "docs": "https://medium.com/@dfinity/the-next-step-for-deai-on-chain-inference-enabling-face-recognition-589183203fc2", "is_community": false, "in_beta": false, @@ -1086,11 +951,9 @@ { "title": "Wasm SIMD instructions", "overview": "Deterministic SIMD (Single Instruction, Multiple Data) support in the Wasm execution engine for better performance of AI inference.", - "description": "SIMD stands for Single Instruction, Multiple Data. It is a set of CPU instructions that allow executing multiple operations with a single instruction. The arithmetic operations used in AI can be expressed as SIMD instructions. This feature introduces deterministic SIMD support in the Wasm execution engine, leading to an expected multiple-times performance improvement of AI operations.", "status": "deployed", "forum": "", "proposal": "", - "wiki": "", "docs": "https://medium.com/@dfinity/the-next-step-for-deai-on-chain-inference-enabling-face-recognition-589183203fc2", "is_community": false, "in_beta": false, @@ -1099,11 +962,9 @@ { "title": "Optimizing AI inference engine", "overview": "This feature brings SIMD support to the open source inference engine used on ICP to leverage the upcoming support of SIMD operations in Wasm.", - "description": "This feature brings SIMD support to the open source inference engine used on ICP to leverage the upcoming support of SIMD operations in Wasm.", "status": "deployed", "forum": "", "proposal": "", - "wiki": "", "docs": "https://medium.com/@dfinity/the-next-step-for-deai-on-chain-inference-enabling-face-recognition-589183203fc2", "is_community": false, "in_beta": false, @@ -1120,10 +981,8 @@ { "title": "Increase the message instruction limit to 40 billion instructions", "overview": "Increasing the message limit from 20 to 40 billion instructions to facilitate longer-running computations, which is crucial for AI inference.", - "description": "Previously, the instruction limit was 20 billion instructions. This feature doubles the instruction limit for update message in order to allow them to run longer computations. This is made possible by the deterministic time slicing. Future work will increase this limit more and also increase the limit for queries.", "forum": "", "proposal": "", - "wiki": "", "docs": "", "eta": "", "status": "deployed", @@ -1142,37 +1001,9 @@ { "title": "Increase the instruction limit in queries from 5 billion instructions to 40 billion instructions.", "overview": "The goal of this feature is to bring the instruction limit of queries on par with that of update calls by introducing a way for canisters to opt in into query charging.", - "description": "Currently, queries have a lower instruction limit compared to the update calls because canisters do not pay for query execution. 
The goal of this feature is to bring the instruction limit of queries on par with that of update calls by introducing a way for canisters to opt in into query charging.", "status": "future", "forum": "", "proposal": "", - "wiki": "", - "docs": "", - "is_community": false, - "in_beta": false - }, - { - "title": "Public specification for GPU-enabled nodes", - "overview": "The ICP community needs to agree and adopt the hardware specification for replica nodes with GPUs.", - "description": "The ICP community needs to agree and adopt the hardware specification for replica nodes with GPUs.", - "status": "", - "milestone_id": "Ignition", - "forum": "", - "proposal": "", - "wiki": "", - "docs": "", - "is_community": false, - "in_beta": false - }, - { - "title": "AI-specialized subnets with GPU-enabled nodes", - "overview": "AI-specialized subnets are created from nodes with GPUs and host AI smart contracts for training and inference of large models.", - "description": "AI-specialized subnets are created from nodes with GPUs and host AI smart contracts for training and inference of large models.", - "status": "", - "milestone_id": "Ignition", - "forum": "", - "proposal": "", - "wiki": "", "docs": "", "is_community": false, "in_beta": false @@ -1187,7 +1018,6 @@ "milestones": [ { "name": "Chain Fusion Supports Solana", - "description": "This milestone enables Chain Fusion for the Solana network, bringing Solana and ICP closer together, combining the powers of the two networks. Dapps leveraging the capabilities of both networks look and feel like single-network dapps.", "milestone_id": "Helium", "eta": null, "status": "in_progress", @@ -1195,12 +1025,10 @@ { "title": "Threshold EdDSA signing", "overview": "Threshold EdDSA support using cryptographic multiparty computation (MPC). Enables trustless integrations with all chains using EdDSA on the Ed25519 curve like Solana or Cardano.", - "description": "ICP already has a suite of threshold signing protocols for realizing threshold ECDSA signatures. Important use cases, however, require EdDSA threshold signatures. An example for the EdDSA requirement is the direct integration with blockchains using EdDSA to sign transactions, such as Cardano or Solana. Schnorr and EdDSA are very similar because EdDSA is based on a variant of Schnorr using a specific family of elliptic curves, thus they can share the same high-level protocol.", "status": "in_progress", "eta": "2024", "forum": "", "proposal": "", - "wiki": "", "docs": "", "is_community": true, "in_beta": false, @@ -1209,10 +1037,8 @@ { "title": "Solana RPC canister", "overview": "RPC canister connecting to Solana RPC providers to integrate with the Solana network. Enables two-way communication with the Solana network.", - "description": "Analogous to what has been done for Ethereum and EVM chains in general, this feature defines an RPC canister for Solana that allows for communicating with the Solana network, i.e., reading from and writing to the Solana blockchain. Using multiple RPC providers allows for reducing trust in any single entity, thereby making the approach more decentralized.", "forum": "", "proposal": "", - "wiki": "", "docs": "", "eta": "", "status": "upcoming", @@ -1224,7 +1050,6 @@ }, { "name": "Chain Fusion Supports EVM Chains", - "description": "ICP’s Chain Fusion technology has full support for Ethereum and other EVM chains. 
ICP smart contracts can read from and write to EVM chains using a decentralized approach as well as sign transactions in a trustless manner using threshold ECDSA signing. This allows ICP smart contracts to augment EVM-based smart contracts with additional functionality through ICP's superpowers, transfer tokens on other chains, and call smart contracts on EVM chains.", "milestone_id": "Tritium", "eta": "May 23, 2024", "status": "deployed", @@ -1232,11 +1057,9 @@ { "title": "RPC canister for Ethereum & EVM integration", "overview": "Integration of ICP with EVM blockchains via a canister that accesses JSON RPC providers through HTTPS outcalls. Works for Ethereum and other EVM chains with RPC providers.", - "description": "This feature is about an RPC-based integration of the Internet Computer with EVM chains, initially targeting the Ethereum network, then also Ethereum L2s and other chains with Ethereum RPC providers. The integration builds a canister smart contract offering a subset of the EVM RPC API on chain, while itself connecting to multiple RPC providers and communicating with them. For security-critical queries multiple RPC providers can be queried and their responses must match unanimously in order to be considered a valid response to the caller of the canister API. This reduces trust in any single entity, thereby making the approach more decentralized.", "status": "deployed", "forum": "", "proposal": "", - "wiki": "", "docs": "", "eta": "", "is_community": true, @@ -1246,11 +1069,9 @@ { "title": "Threshold ECDSA signing latency & throughput improvements", "overview": "Reducing the latency of threshold ECDSA signing operations by reducing the number of required consensus rounds through protocol improvements. Also results in a throughput improvement.", - "description": "The ICP's initial implementation of threshold ECDSA, also referred to as chain-key ECDSA, requires seven consensus rounds to compute a signature while consuming an available precomputed value (pre-signature). The protocol can be adapted to require fewer rounds for computing a signature, thus noticeably reducing signing latency. Also, the throughput for threshold signing is expected to increase as a byproduct of the optimizations performed. The optimizations will benefit all applications using chain-key ECDSA.", "status": "deployed", "forum": "", "proposal": "", - "wiki": "", "docs": "", "eta": "", "is_community": true, @@ -1260,10 +1081,8 @@ { "title": "Chain-key ERC20 (ckERC20) tokens & ckUSDC", "overview": "A subset of Ethereum's ERC20 tokens on ICP in the form of \"twin tokens\" called ckERC20 (chain-key ERC20). ckUSDC is to be among the first ckERC20 tokens.", - "description": "This feature established the infrastructure to bring a subset of Ethereum's ERC20 tokens over to the IC in the form of chain-key tokens, i.e., twin tokens of the original tokens on the Ethereum network. The implementation uses HTTPS outcalls to multiple Ethereum JSON RPC providers for reading from and writing to the Ethereum network. Threshold ECDSA signing (threshold ECDSA) is used for creating the required transactions on the Ethereum network in a completely trustless manner. 
USDC is the first envisioned ERC20 token that will be deployed on ICP as ckUSDC using this infrastructure.", "forum": "https://forum.dfinity.org/t/long-term-r-d-integration-with-the-ethereum-network/9382", "proposal": "", - "wiki": "", "docs": "", "eta": "", "status": "deployed", @@ -1276,7 +1095,6 @@ }, { "name": "Chain Fusion Supports Bitcoin ordinals and runes", - "description": "ICP’s Chain Fusion technology has full support for Bitcoin protocols such as BRC20 and Runes. So far, ICP’s fully-on-chain capabilities have been largely applied to use cases where Bitcoin is used as a means of value transfer, its original purpose. This milestone provides additional functionality that enables smart contracts to index and issue prominent types of Bitcoin inscriptions in a completely decentralized manner for the first time, thereby putting ICP into a unique position among all chains.", "milestone_id": "Deuterium", "eta": "August 13, 2024", "status": "deployed", @@ -1284,12 +1102,10 @@ { "title": "Threshold Schnorr signing", "overview": "Threshold Schnorr signing support based on multiparty computation (MPC). Enables trustless integrations with the Bitcoin network using Schnorr-BIP340 for supporting inscriptions.", - "description": "ICP already has a suite of threshold signing protocols for realizing threshold ECDSA signatures. Important use cases, however, require Schnorr-BIP340 threshold signatures. Examples for Schnorr-BIP340 are ordinals and other forms of inscriptions on the Bitcoin network.", "status": "deployed", "eta": "", "forum": "", "proposal": "", - "wiki": "", "docs": "", "is_community": true, "in_beta": false, @@ -1298,12 +1114,10 @@ { "title": "On-chain Bitcoin block headers", "overview": "Exposing an on-chain API to access all Bitcoin block headers to enable dapps to trustlessly access the full Bitcoin block content.", - "description": "The current Bitcoin integration exposes the full UTXO state of the Bitcoin network on chain on a canister API. This is sufficient for many Bitcoin use cases, but not for the novel use cases of inscriptions, such as those using the BRC20 or Runes protocols. Those require access to certain parts of Bitcoin blocks, specifically the SegWit area, where the inscriptions are stored. Making all block headers available on chain through an API allows anyone to trustlessly verify any Bitcoin block obtained over an untrusted channel, e.g., through HTTPS outcalls to a block explorer. This allows for indexing Bitcoin ordinals in a completely decentralized and trustless manner fully on chain.", "status": "deployed", "eta": "", "forum": "", "proposal": "", - "wiki": "", "docs": "", "is_community": true, "in_beta": false, @@ -1320,10 +1134,8 @@ { "title": "Direct Bitcoin Integration", "overview": "Native integration with the Bitcoin blockchain. Trustless reading from and writing to the Bitcoin network from canister smart contracts.", - "description": "Direct integration of the Internet Computer with the Bitcoin blockchain enables canister smart contracts to receive, hold and transfer bitcoin. With this feature, neither additional trust assumptions, nor additional parties, such as bridges, are required. 
Bitcoin integration relies on threshold ECDSA signatures that make it possible for a subnet to sign on behalf of a canister with a secret-shared key.", "forum": "https://forum.dfinity.org/t/direct-integration-with-bitcoin/6147", "proposal": "https://dashboard.internetcomputer.org/proposal/20586", - "wiki": "", "docs": "https://internetcomputer.org/bitcoin-integration", "eta": "2022", "status": "deployed", @@ -1334,10 +1146,8 @@ { "title": "Threshold ECDSA Signatures (a.k.a. Chain-key ECDSA)", "overview": "Threshold ECDSA protocol suite based on multi-party computation (MPC). Enables trustless integrations with ECDSA-based chains.", - "description": "This feature enables canister smart contracts to have an ECDSA public key and to sign with regard to it. The corresponding secret key is threshold-shared among the nodes of a large subnet. Threshold ECDSA signatures are a prerequisite for the direct integration between the Internet Computer and Bitcoin, Ethereum, and possibly further ECDSA-based blockchains in the future.", "forum": "https://forum.dfinity.org/t/threshold-ecdsa-signatures/6152", "proposal": "https://dashboard.internetcomputer.org/proposal/21340", - "wiki": "", "docs": "https://internetcomputer.org/docs/current/developer-docs/integrations/t-ecdsa/", "eta": "2022", "status": "deployed", @@ -1348,10 +1158,8 @@ { "title": "ECDSA key rotation and resharing", "overview": "Periodic key rotation and resharing to improve resilience against adaptive attacks against the threshold ECDSA protocol.", - "description": "To make the threshold ECDSA feature as secure as possible, all ECDSA secret shares are periodically refreshed by resharing the secret key. The encryption keys that are used in this distributed key generation protocol are also regularly updated by the nodes. This makes it harder for an attacker to steal sufficiently many ECDSA key shares, as the attack now has to be performed in a small time window.", "forum": "https://forum.dfinity.org/t/threshold-ecdsa-signatures/6152/245", "proposal": "", - "wiki": "", "docs": "", "eta": "2022", "status": "deployed", @@ -1362,10 +1170,8 @@ { "title": "Chain-key Bitcoin (ckBTC) token", "overview": "Twin token of Bitcoin on ICP, realized with the native Bitcoin integration and threshold ECDSA. Fast, low-fee Bitcoin transfers on ICP.", - "description": "Canister smart contracts on the Internet Computer can control and hold real bitcoin on the Bitcoin network. However, Bitcoin transactions are slow and expensive. To address this limitation, we introduce a new token called \"Chain-Key Bitcoin\", or \"ckBTC\", which is an analogue of Bitcoin on the Internet Computer. A canister smart contract that builds on the Bitcoin integration, especially its chain-key ECDSA technology, is able to receive bitcoin and issue ckBTC to the sender. Vice versa, users can use this canister to redeem their ckBTC for real bitcoin. As ckBTC is a token that lives on the Internet Computer, it can be transacted efficiently and with low fees. 
The ckBTC token is backed 1:1 with real bitcoin that is publicly-verifiably held 100% on chain.", "forum": "https://forum.dfinity.org/t/chain-key-bitcoin-ckbtc-bitcoin-wrapped-by-a-smart-contract/17606/", "proposal": "https://dashboard.internetcomputer.org/proposal/50135", - "wiki": "", "docs": "", "eta": "April 2023", "status": "deployed", @@ -1376,11 +1182,9 @@ { "title": "X-chain token minter V1 (ERC20-ICP)", "overview": "First version of the X-chain token minter bringing the ICP token to Ethereum.", - "description": "One way of integrating with other blockchains such as Ethereum is to bring tokens from the Internet Computer to other chains. This feature is about bringing the ICP token to the Ethereum network by deploying an MVP of a cross-chain token minter for ERC20-ICP, i.e., ICP as an ERC20 token, on Ethereum. This allows, among other things, for ICP to be traded on Ethereum-based DEXs like Uniswap. This effort is expected to bring greater visibility and utility to the ICP token.", "status": "deployed", "forum": "", "proposal": "", - "wiki": "", "docs": "", "eta": "2023-09-23", "is_community": true, @@ -1389,10 +1193,8 @@ { "title": "Chain-key Ether (ckETH) token", "overview": "Ethereum's Ether token on ICP in the form of a \"twin token\" called ckETH (chain-key ETH).", - "description": "This feature brings Ethereum's Ether (ETH) token over to the IC in the form of a chain-key token, i.e., a twin token of the original ETH token on the Ethereum network. The ckETH token is important to the IC due to the potential liquidity it can bring over to the ecosystem. The initial implementation is done using HTTPS outcalls to multiple Ethereum JSON RPC providers for communication with the Ethereum network. Once available in the future, the full Ethereum integration of the Internet Computer will be leveraged for communicating with the Ethereum network in a completely trustless way. Chain-key ECDSA signing (threshold ECDSA) is used for trustlessly creating the required transactions for the Ethereum network.", "forum": "https://forum.dfinity.org/t/long-term-r-d-integration-with-the-ethereum-network/9382", "proposal": "", - "wiki": "", "docs": "", "eta": "", "status": "deployed", @@ -1411,11 +1213,9 @@ { "title": "EVM in a canister", "overview": "A low-fee, high-performance EVM implemented as a set of canisters on ICP with trustless integration with the Bitcoin network.", - "description": "The Bitfinity EVM is implemented as a set of canister smart contracts deployed on ICP. Bitfinity offers lower fees, lower latency, and higher throughput than Ethereum. Thanks to chain-key cryptography, the EVM is tightly integrated with the Bitcoin network in a trustless manner.", "status": "in_progress", "forum": "", "proposal": "", - "wiki": "", "docs": "", "eta": "", "is_community": true, @@ -1424,11 +1224,9 @@ { "title": "Threshold ECDSA — Throughput improvements", "overview": "Improving throughput of chain-key ECDSA via multiple protocol improvements such as batching, parallel processing of crypto operations, and cryptographic protocol improvements.", - "description": "This feature addresses optimizations of the chain-key ECDSA (i.e., threshold ECDSA) implementation of ICP to improve throughput.
Possible approaches to reach those goals are to introduce batching techniques to process batches of cryptographic operations, thereby improving performance, or to further parallelize cryptographic operations.", "status": "in_progress", "forum": "", "proposal": "", - "wiki": "", "docs": "", "eta": "future", "is_community": true, @@ -1437,10 +1235,8 @@ { "title": "Chain-key SOL (ckSOL) token", "overview": "Solana's SOL token on ICP in the form of a \"twin token\" called ckSOL (chain-key SOL).", - "description": "This feature brings Solana's SOL token over to the IC in the form of a chain-key token, i.e., a twin token of the original SOL token on the Solana network. The ckSOL token is important to the IC due to the potential liquidity it can bring over to the ecosystem. The initial implementation is done using HTTPS outcalls to multiple Solana RPC providers for communication with the Solana network. Chain-key EdDSA signing (using the Ed25519 elliptic curve) is used for trustlessly creating the required transactions for the Solana network.", "forum": "", "proposal": "", - "wiki": "", "docs": "", "eta": "", "status": "", @@ -1451,11 +1247,9 @@ { "title": "Oracles with signed responses", "overview": "Adding exchange rate oracles that threshold sign their responses. Required for adoption of ICP's trustless oracles in other blockchain ecosystems.", - "description": "Typical oracle use cases require an oracle to sign its responses. This ensures end-to-end traceability and hence accountability of the data provided by oracles. This feature is about providing oracles that sign their responses on ICP that can be used by other chains. Specifically, the exchange rate canister (XRC) is of interest here as it is a completely trustless oracle that runs fully on chain without relying on any trusted parties. Such a signing XRC oracle can be leveraged by dapps on other blockchains, thereby strengthening their trust model and reducing oracle cost.", "status": "", "forum": "", "proposal": "", - "wiki": "", "docs": "", "eta": "", "is_community": true, @@ -1464,11 +1258,9 @@ { "title": "Chain-Key BRC20 (ckBRC20) tokens", "overview": "A generic chain-key variant of BRC20 tokens on ICP to support ordinals.", - "description": "BRC20 is an experimental fungible token standard on the Bitcoin network. Chain-key BRC20 brings BRC20 tokens to ICP using chain-key technology to create twin tokens in ICP, thereby helping strengthen ICP as a fully-decentralized smart contract platform and DeFi layer for Bitcoin.", "status": "future", "forum": "", "proposal": "", - "wiki": "", "docs": "", "eta": "", "is_community": false, @@ -1477,10 +1269,8 @@ { "title": "Direct Ethereum integration", "overview": "Direct integration of the Internet Computer with the Ethereum blockchain enables canisters to call smart contracts on Ethereum and vice versa in a trustless manner.", - "description": "Direct integration of the Internet Computer with the Ethereum blockchain will enable smart contracts on the Internet Computer to call smart contracts on Ethereum and vice versa in a trustless manner, without using any (trusted) intermediaries such as bridges.", "forum": "https://forum.dfinity.org/t/long-term-r-d-integration-with-the-ethereum-network/9382/6", "proposal": "https://dashboard.internetcomputer.org/proposal/35635", - "wiki": "", "docs": "", "eta": "", "status": "future", @@ -1491,10 +1281,8 @@ { "title": "On-chain Bitcoin explorer", "overview": "Fully on-chain Bitcoin explorer running on ICP.
Adds a missing piece of decentralized infrastructure to the Bitcoin ecosystem.", - "description": "The world's first blockchain, Bitcoin, has a large ecosystem including tools like block explorers. Besides the blockchain itself, all tools are pure Web2 applications typically hosted in the public cloud. This feature implements a fully on-chain Bitcoin block explorer, i.e., brings decentralization and stronger security due to reduced reliance on centralized servers also to the broader landscape of the Bitcoin ecosystem.", "forum": "", "proposal": "", - "wiki": "", "docs": "", "eta": "", "status": "future", @@ -1504,11 +1292,9 @@ { "title": "On-chain Ethereum block explorer", "overview": "Ethereum block explorer running fully on chain on ICP. Adding a missing piece of decentralized infrastructure to the Ethereum ecosystem.", - "description": "ICP has the mantra of having 100% on-chain dapp experiences. One such 100% on-chain dapp that is intended to be hosted following this paradigm is an Ethereum block explorer. This would close the gap for Ethereum of block explorers being Web2 applications and would help bring Ethereum closer to a fully on-chain decentralized ecosystem, including its core components that are currently hosted off chain.", "status": "future", "forum": "", "proposal": "", - "wiki": "", "docs": "", "eta": "future", "is_community": true, @@ -1517,11 +1303,9 @@ { "title": "Larger threshold for threshold signing and key backup", "overview": "Increasing the threshold of the ECDSA signing and key backup schemes from 1/3 towards up to 1/2 by giving up some availability for stronger security.", - "description": "This feature is about increasing the threshold for threshold ECDSA signing and corresponding key backup from the current threshold of 1/3 to a threshold of up to 1/2. This would further improve the security of threshold signing on ICP, while making a reasonable tradeoff against the liveness properties of the protocol.", "status": "future", "forum": "", "proposal": "", - "wiki": "", "docs": "", "eta": "future", "is_community": true, @@ -1530,11 +1314,9 @@ { "title": "Threshold signing with substantially increased throughput", "overview": "Considerably improving threshold signing throughput for threshold Schnorr & threshold EdDSA, based on a new cryptographic protocol architecture.", - "description": "Both threshold ECDSA and EdDSA protocols allow for substantial performance improvements by building upon an entirely new protocol architecture. For ECDSA, throughput improvements of up to one to two orders of magnitude are expected; for EdDSA, up to three orders of magnitude.", "status": "future", "forum": "", "proposal": "", - "wiki": "", "docs": "", "eta": "future", "is_community": true, @@ -1543,11 +1325,9 @@ { "title": "X-chain token minter V2 (ERC20-ICRC)", "overview": "Enhanced version of the X-chain token minter, generalizing the approach from ICP only to ICRC-1 tokens to be brought to Ethereum.", - "description": "This feature is about generalizing the approach of the ERC20-ICP X-chain token minter to a broader range of ICRC-1 tokens that can be projected onto the Ethereum network or other EVM networks.
Those tokens can, for example, comprise tokens like chain-key tokens (e.g., ckBTC, ckETH) as well as SNS tokens or other important ICRC-1 tokens on ICP which can be made available on Ethereum.", "status": "future", "forum": "", "proposal": "", - "wiki": "", "docs": "", "eta": "future", "is_community": true, @@ -1556,11 +1336,9 @@ { "title": "X-chain token minter V3", "overview": "A further enhanced version of the X-chain token minter, bringing ICP and ICRC-1 tokens to other blockchain networks such as Ethereum L2s and Solana.", - "description": "This feature is the next step in the generalization of the X-chain token minting approach. It broadens the approach to chains other than Ethereum, such as Solana or Ethereum Layer 2 chains.", "status": "future", "forum": "", "proposal": "", - "wiki": "", "docs": "", "eta": "future", "is_community": true, @@ -1569,11 +1347,9 @@ { "title": "ICRC-1 interface for native BTC, ERC20 and others", "overview": "Exposing an ICRC-1 interface to tokens of token standards from other networks, such as BTC or ERC20. This will make X-chain tokens on ICP easier to use.", - "description": "Exposing an ICRC-1 interface to tokens of token standards from other networks, such as BTC or ERC20. This will make X-chain tokens on ICP easier to use.", "status": "future", "forum": "", "proposal": "", - "wiki": "", "docs": "", "eta": "future", "is_community": true, @@ -1582,11 +1358,9 @@ { "title": "Ethereum and Bitcoin optimistic logging and settlement", "overview": "Settling a fingerprint of a canister's state to ultra-high-replication blockchains such as the Bitcoin or Ethereum networks. The first version is optimistic in that the settled value cannot be publicly verified.", - "description": "Both the Ethereum and Bitcoin networks have a substantially larger number of nodes amongst which consensus is executed than ICP subnets. For certain high-integrity use cases such as digital assets, regularly settling a state hash of a canister's state to one of those networks establishes an untamperable record of the canister's state on those networks. This leverages the massive node distribution of those networks to assure the canister smart contract state integrity in addition to what ICP provides. The approach is optimistic in the sense that the settlement does not include a proof of the correct state being settled.", "status": "future", "forum": "", "proposal": "", - "wiki": "", "docs": "", "eta": "future", "is_community": true, @@ -1595,10 +1369,8 @@ { "title": "Native USDC integration through CCTP", "overview": "Bringing USD Coin (USDC) natively to ICP via integration with Circle's CCTP (Cross-Chain Transfer Protocol).", - "description": "A future step after bringing USDC to ICP as ckUSDC, i.e., as a twin token from the Ethereum network, this feature is about a \"native\" integration through Circle's Cross-Chain Transfer Protocol (CCTP). This direct integration avoids one level of indirection of going through the Ethereum network and is the end goal for USDC integration on the Internet Computer.", "forum": "", "proposal": "", - "wiki": "", "docs": "", "eta": "", "status": "future", @@ -1609,11 +1381,9 @@ { "title": "BTC ordinals indexer and inscriber", "overview": "BTC ordinals indexer and inscriber running fully on chain on ICP. Adding a missing decentralized piece of infrastructure to the Bitcoin ecosystem to help it become more decentralized.", - "description": "BTC ordinals indexer and inscriber running fully on chain on ICP.
Adding a missing decentralized piece of infrastructure to the Bitcoin ecosystem to help them become more decentralized outside of the actual blockchain network.", "status": "", "forum": "", "proposal": "", - "wiki": "", "docs": "", "eta": "", "is_community": true, @@ -1629,7 +1399,6 @@ "milestones": [ { "name": "vetKeys for Decentralized Key Management", - "description": "vetKeys technology realizes a decentralized key management service, allowing users to derive cryptographic keys on demand, in a fully decentralized way. This allows developers to build dapps where users’ data is encrypted, addressing privacy needs on a public blockchain.", "milestone_id": "Niobium", "eta": null, "status": "in_progress", @@ -1637,10 +1406,8 @@ { "title": "ICP integrates the vetKeys protocol", "overview": "Implementing threshold key derivation to allow for threshold decryption and other use cases. Canisters can store end-to-end-encrypted user data.", - "description": "Empower dapps to perform encryption, threshold decryption, and BLS signing on the IC by allowing canisters to call a threshold key derivation interface. This feature will expose canisters with a new threshold key derivation interface, which enables users to securely obtain cryptographic keys from the Internet Computer. The keys are generated by the replicas of a subnet running a threshold protocol, which keeps the key encrypted at all times using a user public key so that not even the replicas can access it. Integrating this feature will enable canisters to store end-to-end encrypted user data (e.g., storage, messaging, social networks) without having to rely on browser storage for user-side secrets, as well as enabling transaction privacy within canisters (e.g., closed-bid auctions, front-running prevention).", "forum": "https://forum.dfinity.org/t/threshold-key-derivation-privacy-on-the-ic/16560", "proposal": "", - "wiki": "", "docs": "https://internetcomputer.org/docs/current/developer-docs/integrations/vetkeys/technology-overview", "eta": "2024", "status": "", @@ -1652,10 +1419,8 @@ { "title": "User-space libraries and example dapps for vetKeys", "overview": "Implementation of libraries and examples that make it easier for users and developers to use vetKeys technology and improve the security and privacy of dapps.", - "description": "Implementation of libraries that make it easier for users and developers to use vetKeys technology and improve the security and privacy of dapps. Libraries should be complemented with example dapps and documentation explaining how they should be used in practice.", "forum": "", "proposal": "", - "wiki": "", "docs": "", "eta": "2024", "status": "", @@ -1675,10 +1440,8 @@ { "title": "vetKeys API mock implementation", "overview": "For facilitating the community discussions regarding the API of vetKeys, a mock implementation has been provided in a canister and used to converge to the final API.", - "description": "For facilitating the community discussions regarding the API of vetKeys a mock implementation for discussion purposes of the proposed vetKeys API has been provided in a canister. This has served a great purpose for the discussions with the community and deciding on the final API to use. 
The community could already familiarize itself with the API and implement dapp logic against it.", "forum": "", "proposal": "", - "wiki": "", "docs": "", "eta": "", "status": "deployed", @@ -1697,10 +1460,8 @@ { "title": "Disaster recovery for SEV-SNP-enabled subnets", "overview": "SEV-SNP raises the level of security, but also makes disaster recovery harder. SEV-SNP-enabled subnets need to be made recoverable, e.g., by securely escrowing their secure enclave key material on the NNS.", - "description": "While SEV-SNP considerably increases the security posture of ICP nodes, and therefore subnets, against attackers targeting their data confidentiality or integrity, it makes disaster recovery considerably harder because all computations are performed entirely in the secure enclave of the CPU, and RAM and storage are encrypted with inaccessible keys encapsulated in the OS image of each node. \n In the case of a disaster regarding an SEV-SNP-enabled subnet, there must still be a way to recover the subnet, but without opening new attack vectors on the nodes or subnet. A promising architecture is to escrow the encryption keys used by the secure computing platform on the NNS so that in case of subnet recovery, a new subnet can obtain the keys, pull the encrypted state of the broken subnet, and continue operating on this state.", "forum": "", "proposal": "", - "wiki": "", "docs": "", "eta": "", "status": "", @@ -1710,7 +1471,6 @@ { "title": "SEV Support for Replica Nodes", "overview": "Enabling the SEV-SNP trusted computing technology for the Replica VM to protect sensitive data from unauthorized access.", - "description": "To increase the security of the Replica VMs in terms of integrity and confidentiality, they can be protected via trusted execution as offered by AMD SEV-SNP. This feature will provide the verified Replica VM as a sealed environment protecting sensitive data within the VM from unauthorized access.", "forum": "https://forum.dfinity.org/t/amd-sev-virtual-machine-support/6156", "proposal": "https://forum.dfinity.org/t/long-term-r-d-tee-enhanced-ic-proposal/9384/4", "docs": "", @@ -1723,10 +1483,8 @@ { "title": "SEV-SNP-enabled ECDSA subnet", "overview": "The threshold ECDSA signing and backup subnets are enabled with SEV-SNP to raise the protection level against data exfiltration attacks by entities with access to the nodes.", - "description": "ICP nodes currently run in an operating mode where computations are not secured by a secure enclave and where data stored in RAM is not encrypted. This provides opportunities for attackers, including dishonest node providers, to penetrate a node and get hold of secret data such as private keys and non-public user data. SEV-SNP is a security technology integrated in the CPUs used in the ICP nodes that allows for nodes securely attesting to other nodes or community-operated \"validators\" that they run the correct ICP software stack and secures the node by performing all computations in a secure enclave and encrypting all data that goes to RAM or storage. Enabling SEV-SNP in the setting of ICP nodes is challenging due to the complex security and availability requirements of ICP subnets and the involved key management, particularly w.r.t.
software upgrades.", "forum": "", "proposal": "", - "wiki": "", "docs": "", "eta": "", "status": "", @@ -1736,10 +1494,8 @@ { "title": "SEV-SNP for app subnets", "overview": "Regular app subnets receive SEV-SNP protection to make them harder to tamper with and to enable attestation of running the correct software image.", - "description": "Besides \"high-profile\" subnets such as threshold ECDSA subnets that host highly valuable key material, regular app subnets will also receive the elevated protection resulting from a deployment of SEV-SNP-secured nodes. Deploying this technology on app subnets makes those subnets considerably more resilient against attacks having the goals of extracting data (violating confidentiality) or tampering with data (violating integrity). This is a crucial step in even better protecting user data that resides on all the nodes of a subnet where the canister containing the data is hosted. This form of protection is also beneficial regarding regulatory compliance with data protection regulations, e.g., the European GDPR.", "forum": "", "proposal": "", - "wiki": "", "docs": "", "eta": "", "status": "", @@ -1749,10 +1505,8 @@ { "title": "Fully-homomorphic encryption (FHE)", "overview": "Fully-homomorphic encryption (FHE) allows for arbitrary computations on encrypted data and only authorized parties to decrypt the result. Blockchain data privacy.", - "description": "In traditional compute and current blockchains, all computations happen on plaintext data on the machines. Thus, data is exposed to threats such as unintentional exposure or hacking attacks. FHE allows for computations to be performed on encrypted data, i.e., users encrypt input data on the user side and submit ciphertexts on which the processing is performed. The decryption is done again on the user side, thus the node machines never observe plaintext data.", "forum": "", "proposal": "", - "wiki": "", "docs": "", "eta": "", "status": "", @@ -1762,10 +1516,8 @@ { "title": "Cryptographic multi-party computation (MPC)", "overview": "Running MPC on the blockchain nodes allows for privacy-preserving arbitrary computations to be made, with only authorized parties learning the results.", - "description": "Data privacy on a blockchain is at risk because of plaintext data being stored on the blockchain nodes. Cryptographic multi-party computation enables nodes of a subnet to jointly perform arbitrary computation using interactive cryptographic protocols without needing to see each other's inputs. This can be used to enhance privacy considerably by users secret-sharing their inputs across the nodes and then nodes performing computations on the shares without ever seeing the plaintext data. Cryptographic MPC is one of the main enablers for stronger privacy in distributed systems as it completely obviates the need for the node machines \"seeing\" any plaintext user inputs.", "forum": "", "proposal": "", - "wiki": "", "docs": "", "eta": "", "status": "", @@ -1782,7 +1534,6 @@ "milestones": [ { "name": "Edge Infrastructure is Fully Decentralized", - "description": "ICP is accessible through a decentralized edge infrastructure, which is split into NNS-controlled API Boundary Nodes and HTTP Gateways.
The NNS will appoint node machines to run API Boundary Nodes and anyone will be able to run HTTP Gateways, enabling a much more decentralized ICP edge infrastructure with a diverse set of service providers.", "milestone_id": "Solenoid", "eta": null, "status": "in_progress", @@ -1790,11 +1541,9 @@ { "title": "API Boundary Nodes (NNS controlled)", "overview": "In the new boundary-node architecture, API boundary nodes are placed under the full control of the NNS and function as the edge of the IC.", - "description": "This feature introduces API boundary nodes. The existing monolithic boundary node undergoes division into an API boundary node and an HTTP gateway to enhance separation of concerns. The NNS will exercise full control over the API boundary node, enabling functions like routing API calls to replicas and implementing rate limiting. API boundary nodes will be directly accessible from the Internet, facilitating IC-native client connectivity. Filtering and deny-listing responsibilities are delegated to the HTTP gateway.", "status": "in_progress", "forum": "https://forum.dfinity.org/t/boundary-node-roadmap/15562", "proposal": "https://dashboard.internetcomputer.org/proposal/35671", - "wiki": "", "docs": "", "is_community": false, "in_beta": false, @@ -1803,10 +1552,8 @@ { "title": "HTTP Gateways", "overview": "The HTTP Gateway becomes a standalone component as part of the new Boundary Node architecture.", - "description": "The HTTP gateway enables web access to dapps hosted on the IC by implementing the [HTTP gateway protocol](/docs/current/references/http-gateway-protocol-spec). Currently integrated into the monolithic boundary node, it will transition to a standalone component in the future. HTTP gateways, independent of NNS control, offer functionalities such as TLS termination, static asset caching, enforcement of denylists for legal compliance, and translation of HTTP requests to IC API calls.", "forum": "https://forum.dfinity.org/t/boundary-node-roadmap/15562", "proposal": "https://dashboard.internetcomputer.org/proposal/35671", - "wiki": "", "docs": "", "eta": "", "status": "in_progress", @@ -1818,10 +1565,8 @@ { "title": "Community-owned HTTP Gateways", "overview": "As part of the new boundary-node architecture, the HTTP gateway is turned into a standalone service, which is easily deployable by anyone, including end users.", - "description": "HTTP gateways resulting from the new Boundary Node architecture are easily deployable on servers and end user machines to allow anyone to run them. This facilitates the emergence of community-owned HTTP gateways, leading to a much stronger decentralization of the ICP's edge infrastructure.", "forum": "https://forum.dfinity.org/t/boundary-node-roadmap/15562", "proposal": "https://dashboard.internetcomputer.org/proposal/35671", - "wiki": "", "docs": "", "eta": "", "status": "upcoming", @@ -1833,7 +1578,6 @@ }, { "name": "Decentralized access logs and metrics", - "description": "This milestone establishes visibility into the Internet Computer's edge infrastructure by making aggregated API boundary node access logs publicly accessible. This provides developers with valuable insights into their dapps' usage patterns and facilitates the generation of user statistics, offering key information on traffic sources. 
To guarantee the integrity of these logs, the milestone focuses on leveraging trusted execution environments, specifically AMD SEV-SNP.", "milestone_id": "Levitron", "eta": null, "status": "future", @@ -1841,10 +1585,8 @@ { "title": "SEV-SNP-protected API Boundary Nodes", "overview": "Security for API boundary nodes is improved using trusted execution (AMD SEV-SNP), enabling anyone to attest that the correct software is running.", - "description": "This feature aims to enhance security in the API boundary node through trusted execution, leveraging AMD SEV-SNP. This ensures that the software stack running on the API boundary nodes remains untampered and verifiable by anyone (through remote attestation). Consequently, API calls served by these nodes are shielded from access by the Node Providers, as well as boosting confidence in the accuracy and integrity of reported metrics.", "forum": "https://forum.dfinity.org/t/long-term-r-d-boundary-nodes-proposal/9401", "proposal": "https://dashboard.internetcomputer.org/proposal/35671", - "wiki": "", "docs": "", "eta": "", "milestone_id": "Levitron", @@ -1856,10 +1598,8 @@ { "title": "Public API Boundary Node logs and metrics", "overview": "Anyone can analyze the API boundary node access logs to gain insights into app usage patterns etc.", - "description": "This feature enables anyone to analyze the API boundary node access logs by making them publicly available. This allows developers to better understand the usage patterns (e.g., what endpoints of the canister are being called and by which applications), as well as provide usage statistics (i.e., daily active users).", "forum": "", "proposal": "", - "wiki": "", "milestone_id": "Levitron", "docs": "", "eta": "", @@ -1878,10 +1618,8 @@ { "title": "Gen 2 Replica Node Hardware Specification", "overview": "Specification of the Gen 2 hardware for ICP nodes. Vendor independent. Includes new features such as SEV-SNP CPUs.", - "description": "Components of the first replica node hardware generation are becoming obsolete. By providing a second generation hardware specification, node providers will be able to buy new replica nodes. The new specification is vendor-independent and includes new features such as an SEV-enabled CPU. The specification is described in detail on the [Internet Computer Wiki](https://wiki.internetcomputer.org/wiki/Node_provider_hardware), and has so far been validated for two specific configurations (Asus and Dell). In addition, the first nodes based on this second generation hardware specification have been deployed by independent node providers on the IC network.", "forum": "", "proposal": "", - "wiki": "", "docs": "", "eta": "January 2023", "status": "deployed", @@ -1892,10 +1630,8 @@ { "title": "HostOS upgrades", "overview": "NNS-controlled HostOS upgrades enable also the HostOS to be kept up-to-date and security patched via the NNS.", - "description": "The IC's HostOS, the operating system installed on bare metal machines that hosts the GuestOS is currently not managed by the NNS. This feature makes the HostOS also managed by the NSS and thus upgradeable through proposals. 
This is a requirement for further features such as IPv4 support in GuestOS or HostOS SSH access.", "forum": "", "proposal": "", - "wiki": "", "docs": "", "eta": "Feb 2024", "status": "deployed", @@ -1905,10 +1641,8 @@ { "title": "ICOS Boundary Nodes", "overview": "Porting the boundary node to the replica's IC OS to simplify operations.", - "description": "This feature ports the boundary nodes to IC OS, the operating system currently used for replica nodes. Aligning the two images and their build processes, greatly simplifies operations.", "forum": "https://forum.dfinity.org/t/long-term-r-d-boundary-nodes-proposal/9401", "proposal": "https://dashboard.internetcomputer.org/proposal/35671", - "wiki": "", "docs": "", "eta": "August 2022", "status": "deployed", @@ -1919,10 +1653,8 @@ { "title": "Canister SEO", "overview": "Enables dapps on ICP to be indexed by search engines and previewed on social media. Boundary nodes redirect requests from crawlers and bots to raw in order to avoid loading the service worker.", - "description": "This features enables dapps on the Internet Computer to be indexed by search engines and previewed on social media (e.g., Twitter cards). Boundary nodes redirect requests from crawlers and bots (e.g., Googlebot) to raw in order to avoid loading the service worker.", "forum": "", "proposal": "", - "wiki": "", "docs": "", "eta": "September 2022", "status": "deployed", @@ -1933,10 +1665,8 @@ { "title": "Canister Deny-listing on Boundary Nodes", "overview": "Node providers can deny-list canisters to comply with local regulations and orders. Decentralized content blocking.", - "description": "This feature gives node providers decentralized means to specify a deny-list for canisters. As node providers are the first to be contacted if questionable content is being stored in a canister smart contract on ICP, adding a deny-listing feature will allow them to make independent decisions about blocking such content, while users can choose which boundary node to use.", "forum": "https://forum.dfinity.org/t/path-forward-on-leveraging-boundary-nodes-for-content-filtering/10911", "proposal": "", - "wiki": "", "docs": "", "eta": "October 2022", "status": "deployed", @@ -1947,10 +1677,8 @@ { "title": "Custom Domain Names", "overview": "Custom domain names for canisters on ICP. DNS entry of domain redirects to boundary nodes.", - "description": "This feature enables custom domains on the Internet Computer, so users will not be restricted to using the canister_id.ic0.app domains. Users can configure the DNS entries of their domain to redirect traffic to the boundary nodes and signal the boundary nodes of the canister to which the traffic should be forwarded. Boundary nodes automatically manage the required certificates for HTTPS.", "forum": "https://forum.dfinity.org/t/custom-domains-for-ic0-app-community-consideration/6162", "proposal": "https://dashboard.internetcomputer.org/proposal/35671", - "wiki": "", "docs": "https://internetcomputer.org/docs/current/developer-docs/production/custom-domain/", "eta": "February 2023", "status": "deployed", @@ -1961,10 +1689,8 @@ { "title": "WebSocket support for canisters", "overview": "WebSocket-based communication with canister smart contracts through an IC WebSocket gateway. Enables the canister to push messages to the client.", - "description": "WebSockets provide a bi-directional communication channel between the client (dapp frontend) and the canister (dapp backend).
This enables, among others, the canister to push notifications directly to the client and to dynamically update dapp content.", "forum": "https://forum.dfinity.org/t/23872", "proposal": "", - "wiki": "", "docs": "https://github.com/omnia-network/ic-websocket-gateway", "eta": "", "status": "deployed", @@ -1975,10 +1701,8 @@ { "title": "Certified Headers", "overview": "Flexible certification of canister-defined response headers besides the response body.", - "description": "Canisters currently only support certification of the response body (e.g., static assets). dapps, however, have varying certification needs (e.g., certifying specific header fields in addition to the body). This feature introduces flexible certification that allows the dapp developer to specify the header fields and assets to be certified.", "forum": "https://forum.dfinity.org/t/announcing-response-verification-v2/19135/1", "proposal": "", - "wiki": "", "docs": "", "eta": "March 2023", "status": "deployed", @@ -1989,10 +1713,8 @@ { "title": "Secure access to ICP without service worker", "overview": "Relying on local and remote HTTP gateways to securely access ICP dapps from the browser.", - "description": "To enable secure web access to dapps, the Internet Computer initially used a service worker to handle HTTP requests and verify responses. However, this impacted user and developer experience negatively (e.g., loading time, missing support for SEO). By the end of 2023, the service worker was replaced with two HTTP gateways: the IC HTTP Proxy for local use and icx-proxy for remote access. This setup provides users and developers a familiar web2 experience familiarity while offering the choice of a trusted local gateway.", "forum": "https://forum.dfinity.org/t/deprecating-the-service-worker/23401", "proposal": "", - "wiki": "", "docs": "", "eta": "Q4 2023", "status": "deployed", @@ -2002,10 +1724,8 @@ { "title": "Replica Node Storage Upgrade", "overview": "Upgrading the storage of existing replica nodes to 32 TB of NVME SSD.", - "description": "Thanks to significant improvements in the state manager, state synchronization is no longer a bottleneck when it comes to supporting larger states. Supporting larger states, however, requires more storage on replica nodes. With this upgrade, node providers are supported in extending the storage of their existing replica nodes. The storage upgrade is in full progress and expected to be completed in the 2nd/3th Quarter of 2023.", "forum": "", "proposal": "", - "wiki": "", "docs": "", "eta": "", "status": "deployed", @@ -2024,10 +1744,8 @@ { "title": "Tooling for node provider self onboarding", "overview": "Node provider onboarding through improved tooling and a GUI. Better UX for node provider onboarding than using the command line.", - "description": "Providing a graphical front-end for the node provider onboarding process to allow for a more user-friendly and less technically complex onboarding experience.", "forum": "", "proposal": "", - "wiki": "", "docs": "", "eta": "", "status": "in_progress", @@ -2037,10 +1755,8 @@ { "title": "Subnet rental", "overview": "Anyone can rent subnets created with possibly different decentralization properties than mainnet, and have full control over the subnet's resources.", - "description": "Subnet rental follows the idea of having the NNS create subnets that possibly have different decentralization properties than mainnet and allow tenants to rent them. 
For example, a for-rent subnet could have all its nodes located in one country, e.g., Switzerland. The deviation from ICPs usual decentralization properties may be interesting for a tenant to reap benefits such as regulatory compliance. The tenant of a rental subnet can use its resources alone without sharing with the public and needs to cover the full subnet costs in terms of node provider rewards, including a markup. Due to their different decentralization properties compared with mainnet, for-rent subnets need, much like sovereign subnets, special consideration w.r.t. inter-subnet communication with other subnets or ICP mainnet. See other features in this category for further details.", "forum": "", "proposal": "", - "wiki": "", "docs": "", "eta": "", "status": "in_progress", @@ -2050,11 +1766,9 @@ { "title": "Certification libraries", "overview": "The libraries assist developers in managing certification at the appropriate level of abstraction, addressing the complexity of securely interacting with the IC.", - "description": "This feature offers a suite of certification libraries tailored for developers, addressing the complexity of securely interacting with the IC end-to-end. Certification is paramount for security, yet often intricate and IC-specific. These libraries provide varying levels of abstraction, empowering developers to certify and verify data with ease.", "status": "in_progress", "forum": "", "proposal": "", - "wiki": "", "docs": "https://github.com/dfinity/response-verification", "is_community": false, "in_beta": false @@ -2062,10 +1776,8 @@ { "title": "Public access to a subset of node metrics", "overview": "Giving node providers access to a subset of node metrics to help them find out what is failing in case of node issues.", - "description": "Currently, node providers do not have access to logs and metrics from their nodes. To fully decentralize ICP, node providers themselves need to be able to access certain metrics about nodes in order to find out what is wrong with their nodes. This enables node providers to autonomously fix issues with their nodes without any DFINITY involvement. This is a major prerequisite to a fully decentralized operation of the Internet Computer.", "forum": "", "proposal": "", - "wiki": "", "docs": "", "eta": "", "status": "in_progress", @@ -2075,10 +1787,8 @@ { "title": "Off-chain observability stack for Node Providers", "overview": "Observability solution for Node Providers to enable them to independently triage node health and take corrective action in case of problems.", - "description": "The node providers (a) do not have access to the logs and metrics from their node, and (b) do not have access to the alerts we have in place. Therefore, the node providers cannot monitor the health status of their nodes. The objective is to make an observability solution available for node providers that allows them to independently triage the node health and decide whether a node needs to be redeployed, and to independently understand an underlying cause of a node being unhealthy.", "forum": "", "proposal": "", - "wiki": "", "docs": "", "eta": "", "status": "in_progress", @@ -2089,10 +1799,8 @@ { "title": "Alerting of NP in case of Node Failure", "overview": "Alert Node Providers if one of their nodes starts misbehaving or underperforming.", - "description": "This feature provides a solution for independent node providers to be alerted if their node(s) start misbehaving or underperforming, which would require the node provider's attention.
Since node providers do not have direct access to their nodes and node metrics, alerting in case of a node being unhealthy would significantly help the operational activities of node providers.", "forum": "", "proposal": "", - "wiki": "", "docs": "", "eta": "", "status": "in_progress", @@ -2103,10 +1811,8 @@ { "title": "HTTP Asset Caching", "overview": "Building on response verification v2, HTTP gateways support caching of static assets such as HTML pages, JavaScript code, and images.", - "description": "This feature enhances user experience by empowering developers to define caching parameters (e.g., time-to-live) for their assets such as HTML pages, JS sources, and images. These caching directives are then made accessible to HTTP gateways, like the icx-proxy on the boundary nodes or the local IC HTTP proxy. By leveraging caching at the edge layer, particularly for static content, a significant portion of traffic can be served from caches instead of querying canisters.", "forum": "https://forum.dfinity.org/t/long-term-r-d-boundary-nodes-proposal/9401", "proposal": "https://dashboard.internetcomputer.org/proposal/35671", - "wiki": "", "docs": "", "eta": "", "status": "future", @@ -2117,10 +1823,8 @@ { "title": "Certified queries", "overview": "Enable certified queries by having boundary nodes send a query to multiple replicas and aggregating the signed responses. Better efficiency than fully-replicated calls, and comparable security.", - "description": "While query calls have very good performance, they only allow to provide the additional security benefits of subnet certifications when dealing with static assets (i.e., via certified variables). Certified queries will allow to secure all query calls (even for dynamic responses) by having the boundary nodes issuing the query to multiple replicas at once, aggregating their signed responses and sending that back to the client.", "forum": "", "proposal": "", - "wiki": "", "docs": "", "eta": "", "status": "upcoming", @@ -2131,10 +1835,8 @@ { "title": "SEV-SNP-protected HTTP Gateway", "overview": "Trusted execution (AMD SEV-SNP) enables trustless HTTP gateways, empowering users to remotely attest that the correct software image is running.", - "description": "This feature focuses on securing the HTTP gateway through trusted execution, utilizing AMD SEV-SNP. This enables end-users to independently verify the integrity of the gateway's software stack, ensuring it remains untampered. Consequently, users can have confidence that the gateway does not intercept or tamper with the traffic passing through it.", "forum": "", "proposal": "", - "wiki": "", "docs": "", "is_community": false, "in_beta": false, @@ -2143,11 +1845,9 @@ { "title": "CDN canisters on edge infrastructure", "overview": "Read-only canisters run on the ICP edge infrastructure in a non-replicated manner, bringing read-heavy data closer to the edge, analogous to CDNs.", - "description": "Canister caching involves replicating and deploying heavily accessed canisters on the ICP edge infrastructure. By copying the canister to the edge node, read access scalability is achieved, bringing data closer to users. This approach bears conceptual resemblance to Content Distribution Networks (CDNs). Deploying the ICP's canister execution environment on the edge enables non-replicated query operations from these canisters. Developers must indicate whether caching is suitable for their canisters, considering factors like data confidentiality. 
This form of caching complements the regular canister response caching, forming an additional aspect of the ICP caching strategy.", "status": "future", "forum": "", "proposal": "", - "wiki": "", "docs": "", "is_community": false, "in_beta": false @@ -2155,10 +1855,8 @@ { "title": "Chain Name System (CNS)", "overview": "A decentralized naming system built into the ICP that allows translating user-friendly domains into various records (e.g., A records as used in DNS or wallet addresses), while ensuring verifiability and reliability.", - "description": "The Chain Name System (CNS) offers a decentralized alternative to web2's Domain Name System (DNS) and expands it for web3 with new record types like wallet addresses. CNS isn't merely a top-level domain (TLD) but a full ecosystem centered around a DAO-managed root. It provides essential tools like resolvers, DNS gateways, and naming canisters (the web3 nameserver), enabling anyone to serve as a TLD provider. CNS empowers users with enhanced control, security, and transparency in domain management.", "forum": "https://forum.dfinity.org/t/technical-working-group-naming-system/21236", "proposal": "", - "wiki": "", "docs": "https://github.com/dfinity/cns", "eta": "", "status": "future", @@ -2168,10 +1866,8 @@ { "title": "Node Provider Remuneration V3", "overview": "Improves the V2 remuneration model by considering things like different multipliers per region, nodes per data center, penalties, or remuneration based on metrics such as cycles burnt.", - "description": "The remuneration model V2 is being implemented to reward node providers for operating the new Replica HW Specification. This V2 remuneration does not yet take into account penalties for unhealthy nodes or remuneration based on decentralization metrics per country, city, or data center. Remuneration V3 aims to add these features into a fully automated remuneration solution.", "forum": "", "proposal": "", - "wiki": "", "docs": "", "eta": "", "status": "", @@ -2182,10 +1878,8 @@ { "title": "Strengthened node provider checks and audits", "overview": "Strengthening the checks of entities intending to join ICP as node providers. May also include node provider audits.", - "description": "As participation on the ICP network for node providers has been growing and more and more node providers join the network, it is becoming increasingly important to implement strengthened checks w.r.t. node providers, their identities, and data centers their machines are located in. This is important to prevent sybil attacks and nodes hosted in insufficiently secured locations.", "forum": "", "proposal": "", - "wiki": "", "docs": "", "eta": "", "status": "", @@ -2195,10 +1889,8 @@ { "title": "Penalties for non-compliant node providers", "overview": "Non-compliant node providers are penalized economically or excluded from future network participation. This creates a cryptoeconomic incentive to encourage honest node provider behaviour.", - "description": "Non-compliant node providers are penalized economically or excluded from future network participation. 
This creates a cryptoeconomic incentive to encourage honest node provider behaviour.", "forum": "", "proposal": "", - "wiki": "", "docs": "", "eta": "", "status": "", @@ -2208,10 +1900,8 @@ { "title": "Autonomous capacity management", "overview": "Making capacity management of the ICP network more decentralized and autonomous, driven by community proposals.", - "description": "Aspects related to capacity management of ICP, such as managing the node pool, creating subnets, splitting subnets, or migrating canister groups to another subnet are currently driven by proposals mostly made by DFINITY teams. This should be made more decentralized so that this responsibility can be tranferred to the community in a first step, requiring the community to be able to monitor relevant metrics accordingly. In a final further step, the protocols itself could handle capacity management in a fully autonomous manner, not requiring proposals.", "forum": "", "proposal": "", - "wiki": "", "docs": "", "eta": "", "status": "", @@ -2221,10 +1911,8 @@ { "title": "Community can run system tests", "overview": "Enabling the community to run system tests for the ICP codebase, using Kubernetes as container orchestrator.", - "description": "Currently, the testing facility for running system tests for the ICP codebase is internal to DFINITY and not yet based on Kubernetes. As part of the larger strategy to receive more community contributions to the ICP codebase in the future, the community needs to be enabled to run the system tests for ICP. This requires the system testing framework to migrate to the Kubernetes container orchestrator and to be made available to the public. This feature is a key enabler to future community contributions to the ICP codebase.", "forum": "", "proposal": "", - "wiki": "", "docs": "", "eta": "", "status": "", @@ -2234,10 +1922,8 @@ { "title": "Decentralized backup and recovery", "overview": "Enabling decentralized backup and recovery of canisters. Recovered canisters are guaranteed to not have been tampered with.", - "description": "Enabling decentralized backup and recovery of canisters. Recovered canisters are guaranteed to not have been tampered with.", "forum": "", "proposal": "", - "wiki": "", "docs": "", "eta": "", "status": "", @@ -2247,10 +1933,8 @@ { "title": "Decentralized virtual personhood validation", "overview": "Scalable, decentralized, virtual proof of personhood to distinguish people from machines.", - "description": "Being able to distinguish people from machines is important for a blockchain environment where people are not identified when onboarding. There are different ways how this can be achieved. Integrating proof of personhood with ICP will improve decentralization on the Internet Computer, offering greater voting power and rewards to real validated users as opposed to unknown entities. Users can also present the proof of personhood toward dapps, which can provide greater privileges and rewards. 
As one option, virtual people parties are a scalable proof of personhood, whereby randomly assigned groups of users validate each other through interaction.", "forum": "https://forum.dfinity.org/t/long-term-r-d-people-parties-proof-of-human-proposal/9636", "proposal": "https://dashboard.internetcomputer.org/proposal/35668", - "wiki": "", "docs": "", "eta": "", "status": "", @@ -2261,10 +1945,8 @@ { "title": "Public contributions to IC repository", "overview": "Allowing for contributions by the public to the IC source code repository.", - "description": "Currently the IC repo is open source, but does not allow for contributions by the public community. This feature aims to make the IC repository available for public contributions. Due to the complexity of the ICP codebase and therefore contributions to it, this is a longer-term goal. A first required step for this is to move the IC repository from GitLab to GitHub.", "forum": "", "proposal": "", - "wiki": "", "docs": "", "eta": "", "status": "future", @@ -2275,10 +1957,8 @@ { "title": "Gen 3 Replica Node Hardware Specification", "overview": "Specification of the Gen 3 hardware for the third generation of ICP nodes.", - "description": "Following the first and second generations of ICP nodes, there will be a third generation of nodes requiring a new hardware specification, the Gen 3 replica node hardware specification.", "forum": "", "proposal": "", - "wiki": "", "docs": "", "eta": "", "status": "", @@ -2288,10 +1968,8 @@ { "title": "Rewards-driven DRE", "overview": "Rewards-driven decentralized reliability (DRE) and capacity management on ICP.", - "description": "Decentralized reliability, the Web3 analogon of Web2's SRE, is about keeping ICP running reliably. This currently involves DFINITY teams making relevant proposals related to operational aspects of the ICP. This should be further decentralized to involve and incentivize the broader community and increasingly transition DRE to the community. In the long-term future, eligible aspects of DRE should be automated and executed through the protocol without requiring community or DFINITY intervention.", "forum": "", "proposal": "", - "wiki": "", "docs": "", "eta": "", "status": "", @@ -2307,103 +1985,89 @@ "description": "An identity solution that is privacy-preserving, self-sovereign, and user-friendly is a fundamental building block for web3 adoption. Internet Identity provides a robust authentication solution based on passkeys, with easy onboarding and support for identity attributes while protecting the users’ privacy.", "milestones": [ { - "name": "Verifiable Credentials Platform is Live", - "description": "With this milestone, ICP offers the infrastructure and tooling to issue, share, and consume credentials in a privacy-preserving fashion. Users are in control of their credentials, giving them a self-sovereign digital identity. 
At the same time, new credential issuers, such as a KYC service, go live on ICP and will enable new use cases such as security tokens subject to financial regulations.", - "milestone_id": "Separatrix", - "eta": "June 18, 2024", - "status": "deployed", + "name": "Signer standards in use by ICP dapps and wallets", + "milestone_id": "Synchrotron", + "eta": null, + "status": "in_progress", "elements": [ { - "title": "Verifiable credentials", - "overview": "Verifiable credentials are digital representations of data (qualifications, achievements, or attributes) that are cryptographically secured and portable, enabling efficient and trustworthy sharing of personal data while maintaining privacy and control.", - "description": "The verifiable credentials protocol facilitates the issuance, presentation, and verification of digital credentials in a decentralized and interoperable manner. Based on W3C's Verifiable Credentials Data Model and the work of the IC's Wallet and Identity Standards Working Group, the protocol outlines a structured approach for parties to issue credentials containing information about individuals, digitally sign them using cryptographic methods to ensure integrity and authenticity, and present these credentials to relying parties when needed. Relying parties can then verify the credentials' authenticity by verifying the digital signatures and checking against decentralized or centralized registries as necessary, enabling trustful interactions while preserving individuals' privacy and control over their data.", + "title": "Identity signer standards", + "overview": "Standards that enable an untrusted relying party (e.g., canister or web app) to request a signer to sign a transaction for ICP after user approval. Alternative to II's delegation model for high-security use cases or when stable identities are required with different canisters.", "forum": "", "proposal": "", - "wiki": "", "docs": "https://github.com/dfinity/wg-identity-authentication", "eta": "", - "status": "deployed", + "status": "in_progress", "is_community": true, - "in_beta": false, - "milestone_id": "Separatrix" + "in_beta": true, + "milestone_id": "Synchrotron" }, { - "title": "SSI SDK for Relying Parties", - "overview": "We will be providing tooling, libraries and standards so that relying parties looking to integrate to ICPs VC platform can do so correctly and efficiently.", - "description": "Tooling and standards that help community devs to easily build relying parties helps bootstrap a strong identity ecosystem on ICP, providing ICP an edge over other blockchain networks.", + "title": "Transaction approval on Ledger's ICP app", + "overview": "Signer standards are coming to the Ledger HW ICP app. Specifically, the Leger ICP app will acquire the ability to interact with dapps that support signer standards and display ICRC-21 compatible messages to the user.", "forum": "", "proposal": "", - "wiki": "", - "docs": "", + "docs": "https://github.com/dfinity/wg-identity-authentication", "eta": "", - "status": "deployed", + "status": "in_progress", "is_community": true, "in_beta": true, - "milestone_id": "Separatrix" + "milestone_id": "Synchrotron" }, { - "title": "Verifiable credentials playground", - "overview": "A dapp that introduces the concepts of verifiable credentials to prospective identity issuers and relying parties with the aim to facilitate adoption.", - "description": "The verifiable credentials playground comprises both an issuer and a relying party dapp. 
The dapps implement a full end-to-end experience using ICP's verifiable credentials capabilities. The dapps are intended for educational purposes and can act as a template for issuer and relying party dapps building on the verifiable credentials feature.", - "status": "deployed", + "title": "Support II authentication to Web2 services", + "overview": "Secure, privacy-enhancing authentication for Web2 by bringing ICP's II authentication to Web2 services.", "forum": "", "proposal": "", - "wiki": "", "docs": "", + "eta": "", + "status": "in_progress", "is_community": false, "in_beta": true, - "milestone_id": "Separatrix" + "milestone_id": "Synchrotron" } ] }, { - "name": "Signer standards in use by ICP dapps and wallets", - "description": "With Synchrotron, ICP finally acquires the capability to create a vibrant and diverse ecosystem of signers, dapps and canisters that can interact with each other, without having to specifically integrate with any particular component.", - "milestone_id": "Synchrotron", - "eta": "", - "status": "in_progress", + "name": "Verifiable Credentials Platform is Live", + "milestone_id": "Separatrix", + "eta": "June 18, 2024", + "status": "deployed", "elements": [ { - "title": "Identity signer standards", - "overview": "Standards that enable an untrusted relying party (e.g., canister or web app) to request a signer to sign a transaction for ICP after user approval. Alternative to II's delegation model for high-security use cases or when stable identities are required with different canisters.", - "description": "Users will be able to use a single wallet address across many different applications, making wallets built on the Internet Computer and relying on Internet Identity for key management to be portable. This way, users can bring their digital assets with them across applications and services while relying on Internet Identity for key management.", + "title": "Verifiable credentials", + "overview": "Verifiable credentials are digital representations of data (qualifications, achievements, or attributes) that are cryptographically secured and portable, enabling efficient and trustworthy sharing of personal data while maintaining privacy and control.", "forum": "", "proposal": "", - "wiki": "", "docs": "https://github.com/dfinity/wg-identity-authentication", "eta": "", - "status": "in_progress", + "status": "deployed", "is_community": true, - "in_beta": true, - "milestone_id": "Synchrotron" + "in_beta": false, + "milestone_id": "Separatrix" }, { - "title": "Transaction approval on Ledger's ICP app", - "overview": "Signer standards are coming to the Ledger HW ICP app. Specifically, the Leger ICP app will acquire the ability to interact with dapps that support signer standards and display ICRC-21 compatible messages to the user.", - "description": "Ledger users will be able to confirm and authorize a digital currency transaction initiated through a compatible dapp. 
The Ledger device will first validate the BLS-signed incoming message and after the user reviews the transaction details, e.g., recipient address, amount, and any associated fees, the Ledger device will digitally signing and send the transaction.", + "title": "SSI SDK for Relying Parties", + "overview": "We will be providing tooling, libraries and standards so that relying parties looking to integrate to ICPs VC platform can do so correctly and efficiently.", "forum": "", "proposal": "", - "wiki": "", - "docs": "https://github.com/dfinity/wg-identity-authentication", + "docs": "", "eta": "", - "status": "in_progress", + "status": "deployed", "is_community": true, "in_beta": true, - "milestone_id": "Synchrotron" + "milestone_id": "Separatrix" }, { - "title": "Support II authentication to Web2 service", - "overview": "Secure, privacy-enhancing authentication for Web2 by bringing ICP's II authentication to Web2 services.", - "description": "Secure, privacy-enhancing authentication for Web2 by bringing ICP's II authentication to Web2 services.", + "title": "Verifiable credentials playground", + "overview": "A dapp that introduces the concepts of verifiable credentials to prospective identity issuers and relying parties with the aim to facilitate adoption.", + "status": "deployed", "forum": "", "proposal": "", - "wiki": "", "docs": "", - "eta": "", - "status": "in_progress", "is_community": false, "in_beta": true, - "milestone_id": "Synchrotron" + "milestone_id": "Separatrix" } ] }, @@ -2416,10 +2080,8 @@ { "title": "Device registration via QR code", "overview": "Registering another device for an internet identity and linking it to an existing device via scanning a QR code.", - "description": "By employing QR code technology, users can simply scan the code with their device's camera, instantly initiating the device registration process without the need for manual input of lengthy identifiers or complicated setup procedures. This seamless approach reduces the likelihood of errors and frustration typically associated with traditional methods, enhancing user satisfaction and accelerating the onboarding process. ", "forum": "", "proposal": "", - "wiki": "", "docs": "", "eta": "", "status": "deployed", @@ -2429,10 +2091,8 @@ { "title": "Recovery phrase verification", "overview": "Requiring the user to confirm a few random elements of a new recovery phrase to assure they have a copy of the phrase.", - "description": "Verifying that a user has copied down their recovery phrase is crucial for ensuring the security and accessibility of their account. This step mitigates the risk of data loss or account compromise by confirming that the user has accurately recorded the recovery phrase, which serves as their lifeline for regaining access to their account in the event of password loss or device failure.", "forum": "", "proposal": "", - "wiki": "", "docs": "", "eta": "", "status": "deployed", @@ -2442,10 +2102,8 @@ { "title": "Protected recovery phrase", "overview": "Requiring knowledge of the previous recovery phrase in order to remove or replace it. 
Avoids accidental deletion / replacement of the recovery phrase, or through compromised WebAuthn key.", - "description": "Verifying that a user has copied down their recovery phrase before they lock it mitigates the risk of data loss or account compromise by confirming that the user has accurately recorded the recovery phrase, which serves as their lifeline for regaining access to their account in the event of password loss or device failure.", "forum": "", "proposal": "", - "wiki": "", "docs": "", "eta": "", "status": "deployed", @@ -2455,10 +2113,8 @@ { "title": "II stable memory migration", "overview": "Moving identities from heap storage to stable memory data structures. Massively increases available storage for identities and reduces risks of bricking the canister during updates.", - "description": "By updating the storage for identities, we enabled the improvement of II through new features, including username support.", "forum": "", "proposal": "", - "wiki": "", "docs": "", "eta": "", "status": "deployed", @@ -2468,10 +2124,8 @@ { "title": "II archive canister", "overview": "Archive canister for recording any changes to identities managed by II. Provides full traceability regarding IIs.", - "description": "", "forum": "", "proposal": "", - "wiki": "", "docs": "", "eta": "", "status": "deployed", @@ -2481,10 +2135,8 @@ { "title": "II canister subnet migration", "overview": "Migration of the II canister from the NNS governance subnet to another high-replication subnet.", - "description": "", "forum": "", "proposal": "", - "wiki": "", "docs": "", "eta": "", "status": "deployed", @@ -2494,10 +2146,8 @@ { "title": "DoS protection based on Captchas", "overview": "Captchas are employed to protect against the creation of an unlimited number of IIs by single entities.", - "description": "", "forum": "", "proposal": "", - "wiki": "", "docs": "", "eta": "", "status": "deployed", @@ -2507,10 +2157,8 @@ { "title": "Identity creation rate limiting", "overview": "Enforcing rate limits on the number of new IIs that can be created per time interval to avoid filling up storage of the II canister.", - "description": "", "forum": "", "proposal": "", - "wiki": "", "docs": "", "eta": "", "status": "deployed", @@ -2520,10 +2168,8 @@ { "title": "II metrics", "overview": "Providing II-related metrics for the dashboard. Shows the authentication means users have to authenticate to II.", - "description": "The team now collects the bounce rate and number of daily and monthly authentications using II. This data helps product teams understand how frequently users interact with the product, identify areas for improvement, and make informed decisions to enhance user experience and drive growth, without infringing on user privacy.", "forum": "", "proposal": "", - "wiki": "", "docs": "", "eta": "", "status": "deployed", @@ -2533,10 +2179,8 @@ { "title": "II Temporary Keys", "overview": "Social dapps requested that users can onboard to Internet Identity without creating passkeys to reduce friction during onboarding.", - "description": "The browser generates a private / public key pair that is used to create an identity (and for subsequent authentication). This key pair is then encrypted using both a user-provided password and non-extractable key material local to the browser.
This ensures that the encryption on the private key cannot be broken using brute-force attacks even if the user choses a very weak password (which is likely, since it needs to be entered often).", "forum": "", "proposal": "", - "wiki": "", "docs": "", "eta": "", "status": "deployed", @@ -2546,10 +2190,8 @@ { "title": "Streamlined II onboarding", "overview": "Streamlined onboarding experience for obtaining a new II. Crucial for bringing more end users into the ICP ecosystem.", - "description": "The Internet Identity onboarding experience was reduced from 10 steps to 3 steps, and includes tutorials to explain potentially complex concepts.", "forum": "", "proposal": "", - "wiki": "", "docs": "", "eta": "", "status": "deployed", @@ -2559,10 +2201,8 @@ { "title": "Alternative origins", "overview": "Alternative origins enable domain migration for services users authenticate with. Multiple alternative origins can represent a service and users will have the same principal for those.", - "description": "Dapps that may be accessible through more than one domain can still have a user have single identity when they authenticate with Internet Identity.", "forum": "", "proposal": "", - "wiki": "", "docs": "", "eta": "", "status": "deployed", @@ -2572,10 +2212,8 @@ { "title": "Login security in stages", "overview": "Fast onboarding to Internet Identity and deferred handling of recovery factors such as seed phrases and additional devices.", - "description": "Users can onboard to Internet Identity without creating recovery phrases and adding devices. However, they are notified every authentication thereafter to improve the security of their Internet Identity through additional recovery and passkeys.", "forum": "", "proposal": "", - "wiki": "", "docs": "", "eta": "", "status": "deployed" @@ -2588,28 +2226,12 @@ "milestone_id": "Future features", "eta": "none", "elements": [ - { - "title": "Configurable II", - "overview": "Dapps can customize their configuration of II to meet the needs of their users and their product.", - "description": "Dapps can configure Internet Identity so that only certain authentication methods are exposed to their users. For example, a temporary key may be effective for onboarding users to social dapps, but it is not a secure method for signing financial transactions or managing digital assets. If a user clears their browser history, then the temporary key will be lost, and the user will lose access to the assets associated with it. Therefore, financial dapps may choose not to expose temporary keys as an authentication method for their users.", - "forum": "", - "proposal": "", - "wiki": "", - "docs": "", - "eta": "", - "status": "future", - "is_community": true, - "in_beta": false, - "milestone_id": "" - }, { "title": "Resident passkeys", "overview": "Explore the potential of resident passkeys to remove the requirement for users to remember their identity numbers.", - "description": "We want to double-down on ICP's adoption of passkeys. 
Passkeys are now used by all the major web2 and web3 players, the aim here is to reduce sign-up and sign-in friction as much as possible by identifying how II can use resident passkeys.", "status": "", "forum": "", "proposal": "", - "wiki": "", "docs": "", "is_community": false, "in_beta": false, @@ -2618,11 +2240,9 @@ { "title": "ICP DID (Decentralised Identifier) method", "overview": "Specify a canonical ICP DID method for DID creation, resolution, update, delegation and deletion.", - "description": "Specify a canonical ICP DID method for DID creation, resolution, update, delegation and deletion.", "status": "", "forum": "", "proposal": "", - "wiki": "", "docs": "", "is_community": false, "in_beta": false, @@ -2631,11 +2251,9 @@ { "title": "Verifiable Credentials protocol extension", "overview": "Extension of the Verifiable Credentials protocol to allow the VC issuer frontend to participate in credentials sharing.", - "description": "The current VC protocol only interacts with the issuer backend when credentials are shared. We want to extend the protocol and allow the VC Issuer frontend to participate in credential sharing. This will unlock significant capabitilies including better UX for existing issuers and improved integration when issuers separate front-end and back-end canisters.", "status": "", "forum": "", "proposal": "", - "wiki": "", "docs": "", "is_community": false, "in_beta": false, @@ -2644,10 +2262,8 @@ { "title": "Verifiable Credentials Billing", "overview": "Billing for provisioning of verifiable credentials. Enables a business model for identity providers.", - "description": "A payment model for verifiable credentials would likely include fees for issuance, verification, and premium services, with options for subscription-based access, transaction fees, bulk discounts, and micropayments, aiming to cover costs related to credential management and validation while ensuring scalability, affordability, and compliance with privacy and security standards.", "forum": "", "proposal": "", - "wiki": "", "docs": "https://github.com/dfinity/wg-identity-authentication", "eta": "", "status": "future", @@ -2658,10 +2274,8 @@ { "title": "External services authentication to canisters", "overview": "Allowing users to authenticate to canisters via Web2-based authentication services. True Web2-Web3-X-SSO.", - "description": "Allowing users to authenticate to canisters via Web2-based authentication services. True Web2-Web3-X-SSO.", "forum": "", "proposal": "", - "wiki": "", "docs": "", "eta": "", "status": "", @@ -2672,10 +2286,8 @@ { "title": "Usernames instead of Anchors", "overview": "Usernames are easier to remember and better reflect what users are used to from Web2 to identify to services.", - "description": "Transition from a sequential numbering system (anchors) to a username system to enhance the user experience and streamline the onboarding experience by reducing the education required to create an Internet Identity. The shift aims to create a more user-friendly environment, allowing individuals to easily remember and identify with their chosen usernames, as opposed to impersonal and forgettable numerical sequences. This transition is motivated by a strategic move towards a more engaging and user-centric platform that prioritizes personalization and ease-of-use. This will also streamline the onboarding process because users will create their identifier instead of being given one, which is more consistent with conventional authentication systems. 
", "forum": "", "proposal": "", - "wiki": "", "docs": "", "eta": "", "status": "future", @@ -2686,10 +2298,8 @@ { "title": "Decentralized KYC", "overview": "Decentralized on-chain KYC system that provides a real-world identity backbone to ICP.", - "description": "Many use cases, such as those related to securities, require users to be subject to KYC and AML verification. This feature builds an on-chain KYC system that provides this functionality to any dapp on ICP, thereby being an important foundation for many types of financial applications.", "forum": "", "proposal": "", - "wiki": "", "docs": "", "eta": "", "status": "", @@ -2699,23 +2309,30 @@ { "title": "Social recovery", "overview": "Recovery of an identity in case the device or credentials are lost by recovering the underlying key from the user's social circle who hold key shares.", - "description": "Social recovery allows users to designate a set of pre-selected contacts who can collectively vouch for their identity in case of emergencies or loss of access. When needed, the user can initiate a recovery process, prompting these trusted contacts to validate their identity. This method enhances security and resilience by distributing the responsibility for identity recovery among trusted peers, reducing reliance on single points of failure such as password recovery emails or security questions, while ensuring user privacy and control over their digital identity.", "forum": "", "proposal": "", - "wiki": "", "docs": "", "eta": "", "status": "future", "is_community": true, "in_beta": false }, + { + "title": "Configurable II", + "overview": "Dapps can customize their configuration of II to meet the needs of their users and their product.", + "forum": "", + "proposal": "", + "docs": "", + "eta": "", + "status": "", + "is_community": true, + "in_beta": false + }, { "title": "Root domain name independence", "overview": "Make internet identities independent of the domain II was executing in when the II has been created. Important to ensure II anchors remain accessible even if II's root domain needs to be switched.", - "description": "Enables II to switch domains if necessary for branding or accessibility purposes. This initiative makes internet identities independent of the domain II was executing in when the II has been created. Important to ensure II anchors remain accessible even if II's root domain needs to be switched.", "forum": "", "proposal": "", - "wiki": "", "docs": "", "eta": "", "status": "future", @@ -2725,10 +2342,8 @@ { "title": "Identity management", "overview": "Bringing comprehensive identity management features to II: Giving users a 360° view on their identities, such as their open sessions, credential sharing history, or pseudonyms used with parties.", - "description": "Users will be able to manage various identity-related details, including their passkeys, recovery phrases, credentials, and wallet addresses. Users will be able to view and manage their sessions. This holistic perspective allows users to monitor how their identity is being utilized, track access permissions, and detect any potential misuse or unauthorized access. 
By managing their identity in this manner, users can enhance their privacy, security, and autonomy.", "forum": "", "proposal": "", - "wiki": "", "docs": "", "eta": "", "status": "future", @@ -2738,10 +2353,8 @@ { "title": "ZK-II (Zero-knowledge II)", "overview": "Cryptographically privacy-preserving II, based on vetKeys, that unconditionally prevents linkability of identities.", - "description": "Cryptographically privacy-preserving II, based on vetKeys, that unconditionally prevents linkability of identities.", "forum": "", "proposal": "", - "wiki": "", "docs": "", "eta": "", "status": "future", @@ -2758,7 +2371,6 @@ "milestones": [ { "name": "Oisy, a Multi-chain Wallet Powered by Chain Fusion", - "description": "Oisy is a wallet with a unique combination of properties. It is the first smart contract wallet that is self-custodial, multi-chain and uses passkeys for authentication, while being fully accessible through the browser. Oisy supports ICP-based dapps through ICRC-21, EVM-based dapps through WalletConnect, and easily converts between native Ethereum tokens and their ck twins.", "milestone_id": "Toroidal", "eta": null, "status": "in_progress", @@ -2766,11 +2378,9 @@ { "title": "Oisy", "overview": "Shift Oisy's ownership model to a canister-per-user model. Under this change, the digital assets for each Oisy user will reside in a user-controlled canister.", - "description": "Shift Oisy's ownership model to a canister-per-user model. Under this change, the digital assets for each Oisy user will reside in a user-controlled canister.", "status": "in_progress", "forum": "", "proposal": "", - "wiki": "", "docs": "", "is_community": false, "in_beta": false, @@ -2779,11 +2389,9 @@ { "title": "Signer standards in Oisy", "overview": "Oisy is upgraded to support the signer standards to foster a vibrant and diverse ecosystem of signers and relying parties. Users can then safely transact with digital assets using their Oisy wallet and signer standards compatible dApps.", - "description": "Oisy is upgraded to support the signer standards. The benefit of signer standards is in fostering a vibrant and diverse ecosystem of signers and relying parties. Users can then safely transact with digital assets using their Oisy wallet and signer standards compatible dApps.", "status": "in_progress", "forum": "", "proposal": "", - "wiki": "", "docs": "", "is_community": false, "in_beta": false, @@ -2793,7 +2401,6 @@ }, { "name": "The Orbit Multi-Custody Asset Framework", - "description": "Orbit is a comprehensive digital assets framework, enabling simple to advanced rules over user digital assets. Initially tailored for token management, it provides robust support for 1-out-of-M to complex approval policies for financial transactions. It also supports secure management of infrastructure like canister installations and upgrades. Teams and businesses can confidently handle ICP, ckETH, and compatible tokens using Orbit's advanced security features, vital for those seeking a stable and reliable management over their treasury.", "milestone_id": "Poloidal", "eta": null, "status": "in_progress", @@ -2801,11 +2408,9 @@ { "title": "Orbit Multi custody wallet for ICP", "overview": "Users can securely share ownership of the ICP token with a fully on-chain wallet. 
From simple 1-out-of-2 configurations to sophisticated approval policy rules, Orbit provides comprehensive support for secure financial transactions with a flexible user-role-based system.", - "description": "Users can securely share ownership of the ICP token with a fully on-chain wallet. From simple 1-out-of-2 configurations to sophisticated approval policy rules, Orbit provides comprehensive support for secure financial transactions with a flexible user-role-based system.", "status": "in_progress", "forum": "", "proposal": "", - "wiki": "", "docs": "", "is_community": false, "in_beta": false, @@ -2814,11 +2419,9 @@ { "title": "Secure canister management", "overview": "Teams building their product vision require secure shared access to dapp control. Orbit's rule engine enables policies specifying how many users are needed to approve canister management operations like upgrades and installations, preventing a single member from seizing control.", - "description": "Teams building their product vision require secure shared access to dapp control. Orbit's rule engine enables policies specifying how many users are needed to approve canister management operations like upgrades and installations, preventing a single member from seizing control.", "status": "", "forum": "", "proposal": "", - "wiki": "", "docs": "", "is_community": false, "in_beta": false, @@ -2827,11 +2430,9 @@ { "title": "Orbit supports ICRC-1", "overview": "Users can manage non-ICP tokens like ckBTC and ckETH with confidence using Orbit's advanced security features, ensuring secure multi-token management.", - "description": "Users can manage non-ICP tokens like ckBTC and ckETH with confidence using Orbit's advanced security features, ensuring secure multi-token management.", "status": "", "forum": "", "proposal": "", - "wiki": "", "docs": "", "is_community": false, "in_beta": false, @@ -2848,11 +2449,9 @@ { "title": "ICRC-1: Fungible tokens on ICP", "overview": "Fungible token standard for the Internet Computer, improving on the ICP token standard. Accounts using principal-subaccount pairs instead of hashed addresses.", - "description": "The Internet Computer community needs a token standard for fungible tokens besides ICP. ICRC-1 is the first standard in the ICRC (Internet Computer Request for Comments) series of standards. ICRC-1 defines a fungible token standard that is intended for use for any token besides ICP. It features a simplified model of for addresses, namely through principal-subaccount pairs instead of using the hash of the principal and subaccount as address as done for the ICP token. ICRC-1 provides only an active transfer flow, delegations similar to Ethereum's approve and transfer from are added with ICRC-2.", "status": "deployed", "forum": "", "proposal": "", - "wiki": "", "docs": "", "is_community": false, "in_beta": false @@ -2860,11 +2459,9 @@ { "title": "ICRC-2: Approve and transfer_from for ICRC-1 tokens", "overview": "Extending ICRC-1 with ERC-20-style approve / transfer_from functionality, but enhanced for the IC.", - "description": "ICRC-1 is the fungible token standard of the Internet Computer Protocol for fungible tokens besides ICP. ICRC-2 adds the functionality of approve and transfer_from that is well known from Ethereum's ERC-20 standard to ICRC-1. This allows a token holder to approve another party, the spender, to transfer a given amount of the token holder's tokens. The spender can make use of the approval as long as it is valid to transfer tokens on behalf of the token holder.
ICRC-2 complements ICRC-1 with much simplified transfer flows for many relevant use cases.", "status": "deployed", "forum": "", "proposal": "", - "wiki": "", "docs": "", "is_community": false, "in_beta": false @@ -2872,11 +2469,9 @@ { "title": "Oisy", "overview": "Network custodial wallet for EVM blockchains. Naming derived from Open-Internet-Services-like (OISy).", - "description": "", "status": "deployed", "forum": "", "proposal": "", - "wiki": "", "docs": "", "is_community": false, "in_beta": false @@ -2884,10 +2479,8 @@ { "title": "ICRC-3: transaction log for ICRC ledgers standard", "overview": "Standard for the transaction log format for ICRC ledgers. Prerequisite for unified integration of ICRC tokens with centralized exchanges.", - "description": "Ledgers on ICP need to implement their tx log in user space as ICP blocks are neither exposed to the public nor retained indefinitely. ICRC-3 defines the format of the block log for ICRC-compliant fungible token and NFT ledgers.", "forum": "", "proposal": "", - "wiki": "", "docs": "", "eta": "March 2024", "status": "deployed", @@ -2897,11 +2490,9 @@ { "title": "On-ramp for ICP and other tokens", "overview": "On-ramping for ICP and further tokens to simplify the onboarding experience", - "description": "", "status": "deployed", "forum": "", "proposal": "", - "wiki": "", "docs": "", "is_community": false, "in_beta": false @@ -2909,10 +2500,8 @@ { "title": "ICRC-7: Basic non-fungible token (NFT) standard", "overview": "Basic non-fungible token (NFT) standard for the Internet Computer featuring batch APIs.", - "description": "ICRC-7 defines the basic NFT standard for the Internet Computer. ICRC-7 is a basic NFT standard for NFTs without a contained marketplace. The API is simple yet comprises batch functionality for higher throughput of operations for both query and update calls. This standard is expected to unify the NFT-related development on ICP and help clean up the fragmented NFT standards landscape.", "forum": "", "proposal": "", - "wiki": "", "docs": "", "eta": "", "status": "deployed", @@ -2922,10 +2511,8 @@ { "title": "ICRC-37: Approve / transfer_from extension for ICRC-7.", "overview": "Approve / transfer_from extension for the ICRC-7 basic NFT standard for the Internet Computer. Simplifies many scenarios by avoiding error scenarios of regular transfers.", - "description": "ICRC-37 extends ICRC-7 with approve and transfer_from semantics. This semantics is prominently usef for tokens in the Ethereum ecosystem and allows a token holder to approve a spender to transfer tokens on their behalf. The spender can, as long as an approval is valid, transfer tokens on behalf of their owner. This simplifies many use cases where certain errors otherwise make their handling tricky. Together with ICRC07, this standard is expected to help unify the NFT-related development on ICP and help clean up the fragmented NFT standards landscape.", "forum": "", "proposal": "", - "wiki": "", "docs": "", "eta": "", "status": "deployed", @@ -2935,11 +2522,9 @@ { "title": "ICRC-3 implementation for ICRC-1 ledgers", "overview": "Implementation of the ICRC-3 transaction log standard for the ICRC-1 ledger implementation and rollout to SNS, ckBTC, and ckETH ledger deployments.", - "description": "The ICRC-3 standard needs to be implemented in the ICRC-1 ledger implementation. This enhances the ICRC-1 ledgers with a standardized way of accessing the block log. 
This affects all deployments of ICRC-1 ledgers, e.g., also the SNS token ledgers.", "status": "deployed", "forum": "", "proposal": "", - "wiki": "", "docs": "", "eta": "", "is_community": true, @@ -2956,12 +2541,10 @@ { "title": "Rosetta for ICRC ledgers", "overview": "Implement the Rosetta standard for ICRC-based ledgers. Enables ICRC tokens to be handled by centralized exchanges.", - "description": "Rosetta is a quasi standard put forth by Coinbase for the integration of blockchain ledgers with centralized exchanges. Besides a Rosetta implementation for the ICP ledger, an implementation for ICRC ledgers using ICRC-3 as block storage format is implemented in the scope of this feature.", "status": "in_progress", "eta": "2024", "forum": "", "proposal": "", - "wiki": "", "docs": "", "is_community": true, "in_beta": false @@ -2969,10 +2552,8 @@ { "title": "Trust Wallet integration", "overview": "Integrating ICP with the widely-used Trust Wallet project to facilitate adoption of ICP beyond the current audience.", - "description": "Trust Wallet is a widely-used wallet integrating with a wide range of blockchain platforms. Integrating with Trust Wallet may help improve adoption of ICP. Integrating with Trust Wallet requires enabling the Trust Wallet Core library to sign ICP transactions, adding ICP to the Trust Wallet token repository, and providing a Rosetta node for them to access the ledger.", "forum": "", "proposal": "", - "wiki": "", "docs": "", "eta": "", "status": "in_progress", @@ -2982,10 +2563,8 @@ { "title": "ICRC-4: Batch transfers for fungible tokens standard", "overview": "Extension standard for ICRC-1 defining batch transfers for ICRC-1 tokens. Batching transactions improves throughput and can reduce cost.", - "description": "Some use cases of token ledgers such as token distributions or specifically airdrops can benefit from the added support of batch transactions by a ledger. A batch transaction allows the caller to batch a larger number of individual transactions into a single canister method invocation. This not only saves on cycles for the method call overhead, but also allows for drastically increasing throughput of the ledger, which would otherwise be constrained by the subnet's ingress of XNet capacity.", "forum": "", "proposal": "", - "wiki": "", "docs": "", "eta": "", "status": "in_progress", @@ -2995,10 +2574,8 @@ { "title": "ICRC-21 canister call consent messages", "overview": "Standard for a protocol for obtaining human-readable consent messages for canister calls.", - "description": "This specification describes a protocol for obtaining human-readable consent messages for canister calls. These messages are intended to be shown to users to help them make informed decisions about whether to approve a canister call / sign a transaction. The protocol is designed in such a way that it can be used interactively (e.g. in a browser-based signer) or non-interactively (e.g. in a cold signer).", "forum": "", "proposal": "", - "wiki": "", "docs": "", "eta": "", "status": "in_progress", @@ -3008,10 +2585,8 @@ { "title": "ICRC-22 payment request formats", "overview": "Standard for expressing payment requests for tokens on ICP as URLs and thus also QR codes.", - "description": "This standard defines the format of URLs for expressing the parameters required for making payments. 
The URLs can be expressed as a QR code to realize a simple visual channel between devices, e.g., a payment terminal and a user's mobile device.", "forum": "", "proposal": "", - "wiki": "", "docs": "", "eta": "", "status": "in_progress", @@ -3021,11 +2596,9 @@ { "title": "Implement ICRC-21 in Ledger devices", "overview": "The implementation of ICRC-21 in Ledger devices will allow us to support generic transactions.", - "description": "The implmentation of ICRC-21 in Ledger devices will allow us to support generic transactions.", "status": "in_progress", "forum": "", "proposal": "", - "wiki": "", "docs": "", "is_community": false, "in_beta": false @@ -3033,11 +2606,9 @@ { "title": "Chain-key token (ck token) support in the ICP wallet", "overview": "Support for chain-key tokens (ck tokens) in the ICP wallet.", - "description": "Support for chain-key tokens (ck tokens) in the ICP wallet.", "status": "", "forum": "", "proposal": "", - "wiki": "", "docs": "", "is_community": false, "in_beta": false @@ -3045,10 +2616,8 @@ { "title": "Ledger Live staking integration", "overview": "Ledger Live acquires the ability to stake ICP.", - "description": "Ledger live acquires the ability to stake ICP.", "forum": "", "proposal": "", - "wiki": "", "docs": "", "eta": "", "status": "", @@ -3058,10 +2627,8 @@ { "title": "ICRC-7/-37 NFT standard in mobile wallets", "overview": "Bringing the ICRC-7 and ICRC-37 standards for NFTs on ICP to mobile wallets.", - "description": "Bringing the ICRC-7 and ICRC-37 standards for NFTs on ICP to mobile wallets.", "forum": "", "proposal": "", - "wiki": "", "docs": "", "eta": "", "status": "", @@ -3071,11 +2638,9 @@ { "title": "Institutional custody integration for custody & staking of ICP and ICRC tokens", "overview": "An integration of the Internet Computer blockchain with institutional custody providers to support custody and staking for ICP and ICRC tokens.", - "description": "An integration of the Internet Computer blockchain with institutional custody providers to support custody and staking for ICP and ICRC tokens.", "status": "", "forum": "", "proposal": "", - "wiki": "", "docs": "", "eta": "", "is_community": false, @@ -3084,10 +2649,8 @@ { "title": "Ledger metrics", "overview": "This feature collects and exposes certain metrics of ledgers on the Internet Computer. Both the ICP ledger as well as ICRC-1/-2/-3 ledgers receive metrics support.", - "description": "This feature collects and exposes certain metrics of ledgers on the Internet Computer. Both the ICP ledger as well as ICRC-1/-2/-3 ledgers receive metrics support.", "forum": "", "proposal": "", - "wiki": "", "docs": "", "eta": "", "status": "", @@ -3104,7 +2667,6 @@ "milestones": [ { "name": "Active Liquid Democracy", - "description": "The NNS and SNS DAOs facilitate active liquid democracies by lowering the entry barrier for participation and providing better inputs for well-informed decisions. This includes direct voting as well as delegating some voting decisions to others. As a result, more users stake tokens and participate in DAOs to form lively communities. 
The resulting network effects foster additional adoption and growth.", "milestone_id": "Plasma", "eta": null, "status": "in_progress", @@ -3112,7 +2674,6 @@ { "title": "Engagement platform for named neurons", "overview": "Make information about named neurons’ voting behavior easily accessible for voters to make informed decisions whom to follow.", - "description": "Named neurons can register in the NNS so that other neurons can find them and follow them on some proposal topics. While their names and the last few ballots can be found by their followers, additional context is often missing. This feature facilitates a more engaged community, where named neurons can share more information that can then be used by potential followers to make well-informed decisions. For example, this could include the (future) voting intentions of named neurons, whether they always intend to vote on certain topics and what values they generally support, some context on why the named neurons voted in a certain way and more accessible voting history of the named neurons.", "forum": "", "proposal": "", "docs": "", @@ -3125,7 +2686,6 @@ { "title": "Increase and facilitate active participation in SNS DAOs", "overview": "Further increase voting activity in the SNS DAOs by introducing an easy way to find experts and follow them.", - "description": "Unlike the NNS, there is currently no notion of named neurons in the SNSs. Therefore it is harder for users to find actively voting neurons that they can follow. The goal of this feature is to change this and make it easier for actively voting neurons to identify themselves and for other users to find and follow them.", "forum": "", "proposal": "", "docs": "", @@ -3138,7 +2698,6 @@ { "title": "Encouraging diligent active voting", "overview": "Further increase voting participation of neurons that make well-founded, informed decisions.", - "description": "Improve the experience for neurons to make well-founded, informed decisions. For example, make it easier for named neurons to fully commit to always voting on some topics. This enables new neurons to become experts on some topics and may then be followed by other neurons. Since following is done based on the topics of proposals, one way to achieve this is to reorganize topics so that they can be reasonably covered by individual named neurons.", "forum": "", "proposal": "", "docs": "", @@ -3151,7 +2710,6 @@ { "title": "SNS communities portal", "overview": "Introduce a landing page for each SNS DAO to the NNS dapp to foster the DAO communities.", - "description": "This feature has the goal of improving the experience on the NNS dapp to create a sense of community for the individual SNS DAOs and make all information concerning one DAO more readily accessible. To achieve this, one could introduce a new landing page for each SNS DAO which includes a summary of the DAO, for example the name and a description, but also summarizes the actionable proposals where a logged-in user can still vote on and links to the user’s neurons in that SNS.", "forum": "", "proposal": "", "docs": "", @@ -3164,10 +2722,8 @@ { "title": "Simplify neuron following experience", "overview": "Simplifications of neuron following in the NNS and SNS to enhance UX for governance participants.", - "description": "Today, it is quite cumbersome to set up several neurons at the same time, especially in cases where a user intends to follow a more diverse set of neurons based on the proposal topic. 
This is because neuron followees are saved each time they are set for a topic. A potential solution would be to allow the user to create a followee setup, and save it once they are happy with it, or even do batch neuron actions. Another goal is to improve the NNS dapp so that special cases for proposal following become intuitively visible, for example if some proposals are not covered by the “catch-all” following or are categorized as critical.", "forum": "", "proposal": "", - "wiki": "", "docs": "", "eta": "", "status": "future", @@ -3186,10 +2742,8 @@ { "title": "NNS FE Dapp Re-write", "overview": "Replacing Flutter with Svelte for the NNS Frontend dapp. Significant speed and UX improvements expected.", - "description": "The NNS FE dapp introduced at genesis was built on Flutter with mediocre usability. This feature replaces Flutter with Svelte — a technology stack better suited for the Internet Computer. The replacement led to significant improvements in dapp speed and user interface.", "forum": "https://forum.dfinity.org/t/nns-dapp-towards-new-ui-ux-including-test-link/13952", "proposal": "", - "wiki": "", "docs": "", "eta": "July 2022", "status": "deployed", @@ -3200,10 +2754,8 @@ { "title": "NNS spam protection (remove financial incentive)", "overview": "Protecting the NNS against proposal spam by removing the financial incentive for such spam by allocating voting rewards differently.", - "description": "This roadmap item is based on the community proposal 80970. In the current voting reward set-up, there is a financial incentive for spam because the overall reward pot is always handed out and by submitting governance spam propoals you can skew the voting reward allocation. This change removes this incentive by allocating voting rewards under the assumption that every neuron voted on everything. This change also includes moving the governance weight back to 20 and the tracking of allocated vs not allocated voting rewards.", "forum": "https://forum.dfinity.org/t/reproposal-spam-prevention-convert-from-system-based-rewards-to-voter-based-rewards/15352/1", "proposal": "https://dashboard.internetcomputer.org/proposal/80970", - "wiki": "", "docs": "", "eta": "Q1 23", "status": "deployed", @@ -3214,10 +2766,8 @@ { "title": "Carbon Footprint and Sustainability Policy", "overview": "Conducting a carbon footprint / environmental impact assessment of ICP. Footprint of a subnet, source of electricity, total environmental cost of a tx on ICP.", - "description": "This roadmap item is based on the community proposal 55487. Step 1: Conduct a carbon footprint / environmental impact assessment — either through internal resources or hiring an external consultant to answer basic questions about what the carbon footprint of running an IC Subnet is, where that electricity is sourced, and what the total cost per transaction is on the IC blockchain. Based on the learnings define more activities. Update: The Internet Computer Footprint Report is now available [here](https://assets.carboncrowd.io/reports/ICF.pdf). Step 2: “Energy consumption” reporting panel to the IC Network Status dashboard. Update: Power/Energy consumption is now reported in real time on the IC Dashboard homepage, as well as individually for given nodes, see example node [here](https://dashboard.internetcomputer.org/node/25p5a-3yzir-ifqqt-5lggj-g4nxg-v2qe2-vxw57-qkxtd-wjohn-kfbfp-bqe). 
Carbon Crowd also launched a [Sustainability Dashboard](https://app.carboncrowd.io/).", "forum": "https://forum.dfinity.org/t/sustainability-nns-proposal/11976", "proposal": "https://dashboard.internetcomputer.org/proposal/55487", - "wiki": "", "docs": "", "eta": "Q2 23", "status": "deployed", @@ -3228,10 +2778,8 @@ { "title": "Service Nervous System (SNS)", "overview": "A DAO factory that allows for proposal-based no-code creation of a DAO, including an initial decentralization swap. Available as part of the ICP governance framework.", - "description": "This SNS rollout will include several features: **1.** SNSs that are provided as a protocol function (deployed on an SNS subnet and facilitating maintainable upgrades), **2.** A first version of voting rewards for SNSs that can be further customised in the future, **3.** Decentralization swaps that decentralize a dapp, where participants provide ICP tokens in exchange for SNS tokens. **4.** A NNS frontend dapp extension that allows end users to participate in the decentralization swap. **5.** Tooling to help users initialize an SNS.", "forum": "", "proposal": "https://dashboard.internetcomputer.org/proposal/65132", - "wiki": "", "docs": "", "eta": "Q1 23", "status": "deployed", @@ -3242,10 +2790,8 @@ { "title": "Community Fund", "overview": "A first version of a community fund that provides means for the NNS community to have a \"treasury\" to invest in projects on ICP.", - "description": "This feature implements a first version of a community fund that provides means for the NNS community to have a \"treasury\" to invest in projects on the Internet Computer. In this first version neurons which have enabled the “community fund” feature may expose their maturity to the decisions of the NNS to invest in SNS decentalization swaps. Note: This feature has been renamed to “Neuron's Fund” later.", "forum": "https://forum.dfinity.org/t/community-fund-revised-design-proposal/14691", "proposal": "https://dashboard.internetcomputer.org/proposal/74820", - "wiki": "", "docs": "", "eta": "Q1 23", "status": "deployed", @@ -3256,10 +2802,8 @@ { "title": "Restriction for SNS swap participation", "overview": "Constraining participation in decentralization swaps of SNSs based on geographic location. Includes custom disclaimers to be confirmed by users before participating.", - "description": "When an SNS is launched, it goes through a decentralization swap. During the swap, participants provide ICP and in return receive a share of the SNS DAO’s governance power in the form of SNS neurons. In the current design, swaps are open for anyone to participate. Different projects in the community have requested a feature that allows SNSs to restrict participation in the swap by geographic location. In addition, this feature enables swap participants to be presented with a custom confirmation text that they need to confirm before being allowed to participate.", "forum": "", "proposal": "", - "wiki": "", "docs": "", "eta": "Q2 23", "status": "deployed", @@ -3270,10 +2814,8 @@ { "title": "One-proposal SNS initialization", "overview": "Simplifying decentralization of a dapp from two NNS proposals plus manual steps to a single proposal.", - "description": "Currently, the process of decentralizing a dapp through the SNS platform requires two NNS proposals plus a few manual steps. 
Once this feature is implemented, the creation of a SNS will be done by a single proposal which the NNS community votes on.", "forum": "", "proposal": "", - "wiki": "", "docs": "", "eta": "Q3 23", "status": "deployed", @@ -3284,7 +2826,6 @@ { "title": "Neurons' Fund Enhancements: Matched Funding", "overview": "Enhancing the Neuron's Fund, e.g., to scale with swap participation by the community or having a 10% cap on participation.", - "description": "Based on collected experience and community feedback from recent SNS launches and forum discussions, we propose four enhancements to the Community Fund framework: Introduction of a ‘Matched Funding’ scheme: Instead of a fixed ICP amount, the fund’s contribution to SNS swaps should scale in line with direct participation, i.e., match the funding of the Neuron's Fund with the organic funding received during the SNS launch, allowing for a more accurate reflection of market signals. Implementation of a 10% Participation Cap: To streamline adjustments when neurons opt out during SNS proposal voting, we suggest a cap in relationship to the totals funds available. This ensures the fund’s contribution to a single SNS never exceeds 10% of the total available funds at the proposal execution time. Consequently, this automatically adjusts the fund’s participation if neurons opt out. Renaming of the ‘Community Fund’ to ‘Neurons’ Fund’: This change aims to clarify misconceptions about the fund. It emphasizes that the fund comprises neurons owned by private individuals who are exposing their maturity to promising SNS DAOs. It is suggested to release this cosmetic change next week. Reduction of the Maximum Swap Duration: A potential fund contribution is tied up and cannot be utilized for other SNS launches for the duration of the swap. To prevent a potentially unsuccessful swap from blocking a fund contribution for an extended period, we propose shortening the maximum swap duration from the current 90 days to 14 days.", "forum": "", "proposal": "", "docs": "", @@ -3297,10 +2838,8 @@ { "title": "Ability to mint SNS tokens and revised thresholds for voting", "overview": "", - "description": "Some SNS projects have requested SNS token minting functionality to fine tune the DAO's tokenomics. This feature implements token minting for SNS projects. In addition, this feature defines some of the SNS proposals as critical and increases their voting thresholds. See the forum link for the details.", "forum": "https://forum.dfinity.org/t/new-sns-ability-to-mint-sns-tokens-revised-thresholds-for-voting/23382", "proposal": "", - "wiki": "", "docs": "", "eta": "Q1 24", "status": "deployed", @@ -3311,10 +2850,8 @@ { "title": "Node Provider Remuneration V2", "overview": "The remuneration scheme v2 improves on the original v1 scheme by reducing remuneration for regions having already many nodes and reducing also for additional nodes of a node provider.", - "description": "For the further growth of the IC network, the NNS agreed on a new replica hardware specification. The new specification is generic, i.e. not vendor specific. It is ready for upcoming ICP improvements. For example, it supports VM memory encryption and attestation which will further increase the security of dapps running on ICP. The new specification results in different captical expenses for the independent node providers running replica nodes. Consequently, a new NP reward structure (remuneration) is required. 
Based on feedback and discussion within the community, this remuneration is based on: — Higher rewards for the first nodes of a new NP in order to attract more NPs in an effort to improve ownership decentralization. — More refined rewards for nodes in new geographies, like South America, Africa, Asia and Australia, to stimulate further geographical decentralization. IC wiki: https://wiki.internetcomputer.org/wiki/Node_Provider_Remuneration", "forum": "", "proposal": "", - "wiki": "", "docs": "", "eta": "", "status": "deployed", @@ -3325,12 +2862,10 @@ { "title": "Enable ICRC-2 for SNS ledgers", "overview": "Update the SNS framework to enable the rollout of ICRC-2 on the SNSs’ ledger canisters.", - "description": "", "status": "deployed", "eta": "Q1 2024", "forum": "", "proposal": "", - "wiki": "", "docs": "", "is_community": false, "in_beta": false @@ -3338,10 +2873,8 @@ { "title": "Safeguards for critical SNS proposals, including the SNS treasury", "overview": "Additional safeguards for transferring funds from an SNS treasury.", - "description": "Some SNS proposals are more critical than others in that they have a big impact. This feature introduces additional measures to make it harder for them to be adopted, such as requiring a higher approval threshold. In addition, this feature introduces limits for treasury and minting proposals.", "forum": "", "proposal": "", - "wiki": "", "docs": "", "eta": "Q1 2024", "status": "deployed", @@ -3352,10 +2885,8 @@ { "title": "Update of SNS ledger parameters", "overview": "Enables parameters of SNS ledgers, such as the SNS token name, to be updated through SNS upgrade proposals.", - "description": "", "forum": "", "proposal": "", - "wiki": "", "docs": "", "eta": "", "status": "deployed", @@ -3365,11 +2896,9 @@ { "title": "Increase reward for SNS launch proposals to 20", "overview": "Increase the reward weight of NNS proposals of topic \"SNS & Neuron’s Fund\" to 20 to incentivize more active voting and have the same rules as for proposals of topic \"Governance\"", - "description": "", "status": "deployed", "forum": "", "proposal": "", - "wiki": "", "docs": "", "is_community": false, "in_beta": false @@ -3385,11 +2914,9 @@ { "title": "Visual indicator of actionable proposals in the NNS dapp", "overview": "Users will see a ‘notification’-like indicator in the NNS dapp next to the name of different projects that show how many open governance proposals there are that the user can vote on.", - "description": "Users will see a ‘notification’-like indicator in the NNS dapp next to the different projects that show how many open governance proposals there are that the user has eligible neurons to vote on. This allows users to quickly get a glance of all the proposals that they can still vote on, instead of having to click through every project that may have open proposals.", "status": "in_progress", "forum": "", "proposal": "", - "wiki": "", "docs": "", "is_community": false, "in_beta": false @@ -3397,7 +2924,6 @@ { "title": "Periodic Confirmation of Neuron Followees", "overview": "Require neuron owners to confirm their neurons’ following settings periodically.", - "description": "This roadmap item is based on the community proposal 55651. Caveat: A periodic reconfirmation of neuron following would presumably (at least initially) result in an active voting power of below 50% for non-governance topics. Thus, even if all voters voted, the system would have to wait for the end of the voting period, which is problematic in case of urgent updates (e.g. 
update of a subnet). This limitation needs to be mitigated.", "forum": "https://forum.dfinity.org/t/periodic-confirmation-of-neuron-followees/12109", "proposal": "https://dashboard.internetcomputer.org/proposal/55651", "docs": "", @@ -3410,7 +2936,6 @@ { "title": "Governance portfolio reporting capabilities", "overview": "Reporting functionality that allows users to obtain summary information about their neurons.", - "description": "Currently, it is difficult for users to obtain detailed historic information on received voting rewards and the neuron spawning history. In practice, users need to manually track their actions periodically or use third-party canisters to obtain this information. This feature enables ICP to provide reports to users regarding voting rewards and neurons", "forum": "", "proposal": "", "docs": "", @@ -3422,11 +2947,9 @@ { "title": "Allow ecosystem wallets to connect to the NNS dapp", "overview": "Allow ecosystem wallets to control tokens and governance neurons in the NNS dapp to foster an interoperable ICP ecosystem.", - "description": "The ICRC signer standards are being developed in collaboration with the ICP community to enable secure, and standardized communication between canisters. Adopting ICRC signer standards will foster an ecosystem of ICP dapps where the end-user has full control over their tokens, while managing them across dapps is much simplified. The NNS dapp will open up and provide an example implementation using these standards to allow ecosystem wallets to control assets in the NNS dapp. ICP wallets and dapps are encouraged to follow suit and together create an interoperable ICP ecosystem.", "status": "upcoming", "forum": "", "proposal": "", - "wiki": "", "docs": "", "is_community": false, "in_beta": false @@ -3434,11 +2957,9 @@ { "title": "Simplification of NNS neurons", "overview": "NNS neurons have different properties, such as dissolve delay and age. Some of these properties must be in a certain relation at all times. This feature is to abstract away more of these details from the users so that neurons are easier to understand.", - "description": "NNS neurons have different properties, such as dissolve delay and age. Some of these properties must be in a certain relation at all times. This feature is to abstract away more of these details from the users so that neurons are easier to understand.", "status": "future", "forum": "", "proposal": "", - "wiki": "", "docs": "", "is_community": false, "in_beta": false @@ -3446,10 +2967,8 @@ { "title": "NNS Neuron ID Indexing", "overview": "Create an index of all neuron ID values, accessible through a public interface.", - "description": "This roadmap item is based on the community proposal 48491.", "forum": "https://forum.dfinity.org/t/motion-request-for-neuron-indexing/11183", "proposal": "https://dashboard.internetcomputer.org/proposal/48491", - "wiki": "", "docs": "", "eta": "", "status": "future", @@ -3460,10 +2979,8 @@ { "title": "Manual overwriting of following-triggered votes", "overview": "This feature enables manual (overwrite) voting throughout the entire voting period of governance proposals even when a neuron is following another neuron.", - "description": "This feature enables manual (overwrite) voting throughout the entire voting period of governance proposals even when a neuron is following another neuron. 
This roadmap item is based on the community proposal 38985.", "forum": "https://forum.dfinity.org/t/proposal-to-enable-manual-voting-throughout-the-entire-voting-period-of-governance-proposals/9815", "proposal": "https://dashboard.internetcomputer.org/proposal/38985", - "wiki": "", "docs": "", "eta": "", "status": "future", @@ -3474,10 +2991,8 @@ { "title": "Improve staking user experience", "overview": "Simplify and make the ICP staking process self-explanatory in the NNS dapp to eliminate barriers of entry for first-time users.", - "description": "Simplify and make the ICP staking process self-explanatory in the NNS dapp to eliminate barriers of entry for first-time users.", "forum": "", "proposal": "", - "wiki": "", "docs": "", "eta": "", "status": "future", @@ -3488,10 +3003,8 @@ { "title": "Archiving of NNS proposals", "overview": "Storing NNS proposals long term instead of deleting them after a defined time period as done currently to increase transparency and accountability of governance.", - "description": "Currently, NNS proposals are only kept for a limited time and are then deleted. This feature intends to archive governance proposals to enhance transparency and accountability of the governance system.", "forum": "", "proposal": "", - "wiki": "", "docs": "", "eta": "", "status": "future", @@ -3501,10 +3014,8 @@ { "title": "Maintain neurons’ voting history", "overview": "Keeping the voting history of all neurons for better transparency and accountability of ICP governance.", - "description": "Keeping the voting history of all neurons for better transparency and accountability of ICP governance.", "forum": "", "proposal": "", - "wiki": "", "docs": "", "eta": "", "status": "future", @@ -3514,10 +3025,8 @@ { "title": "Private voting", "overview": "Private voting of neuron holders. This helps prevent leakage of information on how neurons voted to ensure that neuron holders can vote freely without fearing consequences.", - "description": "Private voting of neuron holders. This helps prevent leakage of information on how neurons voted to ensure that neuron holders can vote freely without fearing consequences.", "forum": "", "proposal": "", - "wiki": "", "docs": "", "eta": "", "status": "future", @@ -3527,11 +3036,9 @@ { "title": "Visual indicator of actionable proposals in the SNS dapp", "overview": "Users will see a ‘notification’-like indicator in the NNS dapp next to the name of different projects that show how many open governance proposals there are that the user can vote on.", - "description": "Users will see a ‘notification’-like indicator in the SNS dapp next to the different projects that show how many open governance proposals there are that the user has eligible neurons to vote on. 
This allows users to quickly get a glance of all the proposals that they can still vote on, instead of having to click through every project that may have open proposals.", "status": "in_progress", "forum": "", "proposal": "", - "wiki": "", "docs": "", "is_community": false, "in_beta": false @@ -3539,11 +3046,9 @@ { "title": "SNS cycles management", "overview": "Support better automation of top-up of cycles for SNS and SNS-managed canisters for better usability.", - "description": "Support better automation of top-up of cycles for SNS and SNS-managed canisters for better usability.", "status": "", "forum": "", "proposal": "", - "wiki": "", "docs": "", "is_community": false, "in_beta": false @@ -3551,10 +3056,8 @@ { "title": "Neuron fund phase II (end of investment)", "overview": "Addresses the end-of-investment phase of the Neuron's Fund.", - "description": "Addresses the end-of-investment phase of the Neuron's Fund.", "forum": "", "proposal": "", - "wiki": "", "docs": "", "eta": "", "status": "future", @@ -3564,10 +3067,8 @@ { "title": "Hide 0 balances for SNS tokens/neurons", "overview": "Allow users to hide all projects from the list that they have 0 tokens or neurons in. This will make the UI cleaner, and provide a better overview of DAOs the user cares about.", - "description": "Allow users to hide all projects from the list that they have 0 tokens or neurons in. This will make the UI cleaner, and provide a better overview of DAOs the user cares about.", "forum": "", "proposal": "", - "wiki": "", "docs": "", "eta": "", "status": "future", @@ -3578,10 +3079,8 @@ { "title": "Improve staking user experience", "overview": "Simplify and make the SNS staking process self-explanatory in the NNS dapp to eliminate barriers of entry for first-time users.", - "description": "Simplify and make the SNS staking process self-explanatory in the NNS dapp to eliminate barriers of entry for first-time users.", "forum": "", "proposal": "", - "wiki": "", "docs": "", "eta": "", "status": "future", @@ -3592,10 +3091,8 @@ { "title": "SNS swap participation with additional tokens", "overview": "Allow users to participate in SNS decentralization swaps with different tokens than ICP, e.g., ckBTC or ckETH. Also a combination of tokens may be applicable.", - "description": "Allow users to participate in SNS decentralization swaps with different tokens than ICP, e.g., ckBTC or ckETH. 
Also a combination of tokens may be applicable.", "forum": "", "proposal": "", - "wiki": "", "docs": "", "eta": "", "status": "future", @@ -3605,10 +3102,8 @@ { "title": "Standalone \"SNS\" DAO", "overview": "Allow creating a DAO using the SNS governance code, but without going through an NNS proposal, without NNS orchestration, and without the DAO canisters being deployed on the SNS subnet.", - "description": "Allow creating a DAO using the SNS governance code, but without going through an NNS proposal, without NNS orchestration, and without the DAO canisters being deployed on the SNS subnet.", "forum": "", "proposal": "", - "wiki": "", "docs": "", "eta": "", "status": "future", @@ -3618,10 +3113,8 @@ { "title": "Simplified SNS life cycle", "overview": "Enhancements to the SNS life-cycle to improve the experience of SNS canister upgrades for SNS communities, e.g., through API versioning and better auditability of error messages on failed upgrades.", - "description": "Enhancements to the SNS life-cycle to improve the experience of SNS canister upgrades for SNS communities, e.g., through API versioning and better auditability of error messages on failed upgrades.", "forum": "", "proposal": "", - "wiki": "", "docs": "", "eta": "", "status": "future", @@ -3638,7 +3131,6 @@ "milestones": [ { "name": "Canister DevOps", - "description": "This milestone provides substantial simplifications and improvements for the developer lifecycle of canisters, particularly regarding their development and operations. Snapshotting capabilities, a powerful logging infrastructure and improved error handling as well as push of relevant events brings canisters closer to Web2 services in terms of DevOps.", "milestone_id": "Beryllium", "eta": null, "status": "in_progress", @@ -3646,11 +3138,9 @@ { "title": "Canister snapshots", "overview": "Allow for canister snapshots to be created on chain. Snapshots can be exported to and imported from off-chain storage.", - "description": "Data of canisters on ICP can currently not be easily exported by its controller, or exported data imported into another canister. Such functionality can be written by the canister developer, but should rather come as a platform feature of ICP. This feature brings snapshotting capabilities for canisters, the export of the snapshot to the off-chain world, and import of snapshots back into a canister.", "status": "in_progress", "forum": "https://forum.dfinity.org/t/canister-backup-and-restore-community-consideration/22597", "proposal": "", - "wiki": "", "docs": "", "eta": "Q2 2024", "is_community": true, @@ -3660,10 +3150,8 @@ { "title": "Canister logging", "overview": "New APIs for writing and reading canister runtime logs. The logs survive upgrades and traps and ensure developers are able to record key error data.", - "description": "This allows a developer to store and retrieve runtime logs of canisters deployed to mainnet through a dedicated memory buffer. The logs survive upgrades and traps and ensure developers are able to record key error data.", "forum": "https://forum.dfinity.org/t/canister-logging/21300", "proposal": "", - "wiki": "", "docs": "", "eta": "2024-08-28", "status": "deployed", @@ -3675,10 +3163,8 @@ { "title": "Canister lifecycle hooks", "overview": "Push model for canisters receiving notifications from the ICP, e.g., when they are low on cycles. 
More resource efficient than periodic pulling.", - "description": "Currently, developers have to actively monitor their canisters by periodically polling the cycle balance and the memory usage of the canisters. Periodic polling is inefficient in terms of resource usage and difficult to maintain for dapps with many canisters. This feature aims to improve the monitoring and observability of canisters by introducing a push model, where the canister is automatically notified when it is low on cycles and memory.", "forum": "https://forum.dfinity.org/t/canister-lifecycle-hooks/17089", "proposal": "https://dashboard.internetcomputer.org/proposal/106146", - "wiki": "", "docs": "", "eta": "", "status": "in_progress", @@ -3690,11 +3176,9 @@ { "title": "Standardized canister response codes", "overview": "Standardize canister response codes, particularly error codes, to enable better composability of services from canister smart contracts.", - "description": "Standardize canister response standard, particularly error codes, to enable better composability of services from canister smart contracts.", "status": "upcoming", "forum": "", "proposal": "", - "wiki": "", "docs": "", "is_community": true, "in_beta": false, @@ -3703,10 +3187,8 @@ { "title": "Actionable error messages and backtraces", "overview": "Improve error messages by providing more actionable information to developers such as backtraces, error codes, and links to the documentation explaining how to fix the error.", - "description": "Improve error messages by providing more actionable information to developers such as backtraces, error codes, and links to the documentation explaining how to fix the error.", "forum": "", "proposal": "", - "wiki": "", "docs": "", "eta": "", "status": "in_progress", @@ -3718,7 +3200,6 @@ }, { "name": "Canister Resource Management", - "description": "This milestone improves the cycles management experience on ICP through the introduction of the Cycles Ledger. End users can hold cycles seamlessly and can more easily manage their canisters cycles balance. Furthermore, with the exposure of a key set of metrics, getting insights into your canisters’ operations has been greatly improved, especially with the ability to see a breakdown of your cycles usage and which endpoints are the most expensive ones to call.", "milestone_id": "Thorium", "eta": null, "status": "in_progress", @@ -3726,10 +3207,8 @@ { "title": "Cycles and instruction insights", "overview": "Give canister controllers insights on cycles consumption of their canisters to help them optimize cycles consumption.", - "description": "Tracking down where cycles are consumed during the operation of a canister is currently a tedious job requiring manual efforts. This feature provides people who operate canisters insights into where cycles were spent. These insights can be used to optimize the cycles consumption of canisters.", "forum": "", "proposal": "", - "wiki": "", "docs": "", "eta": "", "status": "upcoming", @@ -3741,10 +3220,8 @@ { "title": "Cycles Ledger", "overview": "The Cycles Ledger replaces the cycles wallet as the recommended solution for managing cycles across projects.", - "description": "The Cycles Ledger replaces the cycles wallet as the recommended solution for managing cycles across projects. Prior to the development of the Cycles Ledger, the Cycles Wallet had been a source of confusion for many newcomers in the ecosystem. 
It is not a critical path for developing dapps on the Internet Computer and requires extensive prerequisite knowledge in order to be used effectively. Going forward, dfx will use the Cycles Ledger to make it simpler for developers to deploy code to the mainnet. The Cycles Wallet project will be deprecated in dfx, but developers will still be able to install and use it manually.", "forum": "", "proposal": "", - "wiki": "", "docs": "", "eta": "Q2 2024", "status": "in_progress", @@ -3756,10 +3233,8 @@ { "title": "Live canister metrics", "overview": "Expose key canister metrics in realtime that can be queried, such as a breakdown of its memory usage (heap vs. stable) or received calls per second.", - "description": "It is crucial that canisters be manageable and monitorable, much like traditional Web2 services in cloud environments. This requires that key canister metrics be available through a canister API. Besides the cycles insights that are part of another feature, relevant further metrics would be a memory consumption breakdown (heap and stable), or received calls per second. Such metrics are crucial for devs in order to ensure their dapps are running reliably and to take mitigating steps in case of issues.", "forum": "", "proposal": "", - "wiki": "", "docs": "", "eta": "", "status": "", @@ -3778,10 +3253,8 @@ { "title": "Bitcoin Integration with DFX", "overview": "Configure and run a Bitcoin adapter on the dev machine supporting the development with the native Bitcoin integration.", - "description": "This feature allows developers to spin up a Bitcoin adapter in dfx that connects to a Bitcoin node running on the developer's machine. Configuration parameters in dfx.json specify how to connect to the Bitcoin daemon.", "forum": "", "proposal": "", - "wiki": "", "docs": "", "eta": "August 2022", "status": "deployed", @@ -3792,10 +3265,8 @@ { "title": "dfx Deps", "overview": "Enable canister developers to pull third party canisters into their local environment in order to build integrations that would otherwise require building code from source.", - "description": "dfx deps allows canister developers to pull third party canisters into their local environment in order to build integrations that would otherwise require building code from source.", "forum": "", "proposal": "", - "wiki": "", "docs": "", "eta": "Q3 23", "status": "deployed", @@ -3806,10 +3277,8 @@ { "title": "DFX Quickstart", "overview": "New command that guides the developer through the steps necessary to ensure a successful deployment to mainnet.", - "description": "This feature introduces a new command that guides the developer through the steps necessary to ensure a successful deployment to mainnet. It also serves up useful information, such as the developer's current identity, ICP balance, the list of locally running canisters (future), and more.", "forum": "", "proposal": "", - "wiki": "", "docs": "", "eta": "Q2 23", "status": "deployed", @@ -3820,10 +3289,8 @@ { "title": "Asset Caching", "overview": "Enable caching of assets served by the asset canister by giving it time-to-live information.", - "description": "Boundary nodes only cache queries for a very short amount of time. Assets (HTML pages, JS sources, images, etc) are not cached. The asset canister does not provide TTL information as to when the assets should expire. 
This feature gives the assets time-to-live information and expose it on the boundary nodes as well as the service worker.", "forum": "", "proposal": "", - "wiki": "", "docs": "", "eta": "Q4 22", "status": "deployed", @@ -3834,10 +3301,8 @@ { "title": "SNS Quill", "overview": "Provides all the commands developers need to build and interact with an SNS. Based on the original Quill project.", - "description": "SNS Quill provides all the commands developers need to build and interact with an SNS both locally and on mainnet. It is based on the original Quill project, and will be integrated back into Quill at a future date.", "forum": "", "proposal": "", - "wiki": "", "docs": "", "eta": "August 2022", "status": "deployed", @@ -3848,10 +3313,8 @@ { "title": "System-wide DFX", "overview": "Allows for running dfx as a system-wide process instead of being project specific. Makes it easier to start and stop canisters, run tests, and develop integrations.", - "description": "Today, the execution environment provided by dfx is project specific. This feature removes this limitation and allows developers to run dfx as a system-wide process. This quality-of-life improvement will make it easier to start and stop canisters, run tests, and develop integrations. Available in DFX 0.12.0.", "forum": "", "proposal": "", - "wiki": "", "docs": "", "eta": "November 2022", "status": "deployed", @@ -3862,10 +3325,8 @@ { "title": "DFX keyring integration", "overview": "This feature integrates DFX with the OS keyring for seamless decryption of password-protected identities.", - "description": "This feature integrates DFX with the OS keyring for seamless decryption of password-protected identities.", "forum": "", "proposal": "", - "wiki": "", "docs": "", "eta": "Q4 22", "status": "deployed", @@ -3876,10 +3337,8 @@ { "title": "SNS Tooling", "overview": "Enhance dfx for developing and testing dapps for SNS-based decentralization. Code and test a swap locally, simulate it on mainnet, and manage a dapp after launch.", - "description": "We want to enable more developers to decentralize their dapps through the SNS. dfx now has more tools and capabilities for you to develop your code and test your swap locally, run a simulated swap on mainnet, and manage your dapp after it has been launched.", "forum": "", "proposal": "", - "wiki": "", "docs": "", "eta": "Q3 23", "status": "deployed", @@ -3890,10 +3349,8 @@ { "title": "PocketIC", "overview": "The PocketIC is a canister testing system that allows running a \"pocket version\" of ICP, including multiple subnets. This is the preferred way of testing dapps.", - "description": " is a system for testing canisters. is spun up on a developer's machine and behaves similarly to the IC mainnet. It provides a shortcut to the IC's execution environment, stripping away the networking and consensus layers, while the execution environment is the same that executes canisters on mainnet. It supports multiple subnets, thus offers a powerful platform for testing dapps efficiently. 
is intended to become the preferred solution for testing canisters.", "forum": "https://forum.dfinity.org/t/pocketic-testing-canisters-in-python/22490", "proposal": "", - "wiki": "", "docs": "https://internetcomputer.org/docs/current/developer-docs/setup/pocket-ic", "eta": "Dec 2023", "status": "deployed", @@ -3903,10 +3360,8 @@ { "title": "Improved Unit Testing for Motoko", "overview": "Brings a number of enhancements to unit testing in Motoko: Support for watch mode, VSCode extension GUI, and more.", - "description": "This brings a number of enhancements to unit testing in Motoko, including support for watch mode, VSCode extension GUI, and more. See the PR [here](https://github.com/dfinity/motoko-base/pull/527)", "forum": "", "proposal": "", - "wiki": "", "docs": "", "eta": "Q1 23", "status": "deployed", @@ -3917,10 +3372,8 @@ { "title": "React Native Starter Template", "overview": "A starter project template for React Native comprising Agent-JS for an integration with ICP.", - "description": "This feature provides a starter template to use as the basis of an integration between React Native, Agent-JS, and the IC that developers can use to build fully native mobile apps for the Internet Computer.", "forum": "", "proposal": "", - "wiki": "", "docs": "", "eta": "Q4 22", "status": "deployed", @@ -3931,10 +3384,8 @@ { "title": "Motoko Base Library Enhancements", "overview": "Community-requested data structures and functionality for the Motoko Base Library.", - "description": "This feature brings long sought after data structures and functionality to the Motoko Base Library. Additions and enhancements to the Motoko Base Library will be ongoing and recurring as a set of quarterly deliverables.", "forum": "", "proposal": "", - "wiki": "", "docs": "", "eta": "Q4 22", "status": "deployed", @@ -3945,10 +3396,8 @@ { "title": "Motoko Incremental Garbage Collector", "overview": "Blazing fast new garbage collector (GC) for Motoko, based on incremental GC.", - "description": "We are redesigning Motoko's garbage collector to be blazing fast. We're utilizing a design known as an incremental garbage collector to achieve considerable performance improvements. This is a large effort, and we currently expect the new garbage collector to be available towards the end of 2023.", "forum": "", "proposal": "", - "wiki": "", "docs": "", "eta": "Q3 23", "status": "deployed", @@ -3959,10 +3408,8 @@ { "title": "Motoko Formatter & VSCode Extension", "overview": "Better dev UX for Motoko in VSCode. Motoko formatter, prettier integration, and other VSCode extensions.", - "description": "This feature brings significant enhancements to the Motoko developer experience with a new and improved formatter, prettier integration, and numerous new features to the VSCode extension.", "forum": "https://forum.dfinity.org/t/we-heard-you-motoko-vs-code-extension-improvements/15933", "proposal": "", - "wiki": "", "docs": "", "eta": "Q4 22", "status": "deployed", @@ -3973,10 +3420,8 @@ { "title": "Motoko let-else Binding", "overview": "A let-else binding allows a failure block to be run in case of a binding failure. Avoid deeply nested switch statements and have more readable code.", - "description": "This Motoko language feature allows a failure block to be run in case of a binding failure. 
The main motivation for this feature is to avoid deeply nested switch statements that lead to less readable code.", "forum": "https://forum.dfinity.org/t/solution-in-moc-0-8-3-let-else-match-and-take-in-motoko-do-for-variants-was-when/13427/6", "proposal": "", - "wiki": "", "docs": "", "eta": "Q1 23", "status": "deployed", @@ -3987,10 +3432,8 @@ { "title": "Motoko Dev Server", "overview": "Live-reload environment for Motoko projects that allows for rapid prototyping and a friction-free development experience.", - "description": "The motoko-dev-server is a live-reload environment for Motoko projects that allows for rapid prototyping and a friction-free development experience.", "forum": "", "proposal": "", - "wiki": "", "docs": "", "eta": "Q2 2023", "status": "deployed", @@ -4001,10 +3444,8 @@ { "title": "New project builder and templates", "overview": "More powerful workflows for creating new projects using dfx new.", - "description": "The dfx project builder (dfx new) will receive a redesign that includes an improved UI, additional and updated templates, and more configuration options", "forum": "", "proposal": "", - "wiki": "", "docs": "", "eta": "Q1 2024", "status": "deployed", @@ -4015,10 +3456,8 @@ { "title": "Motoko Stable Regions", "overview": "Run-time-based stable memory data structures allocating memory in multiple different regions. Improves Motoko's stable memory support towards composable use cases.", - "description": "The current stable memory module in base has been \"experimental\" for a long time, and requires a more composable API to graduate from this status. Stable regions address the problem that today's ExperimentalStableMemory module only provides a single, monolithic memory that makes it unsuitable for directly building composable software parts. Stable region permit a new API that supports composable use cases. Stable regions also bring Motoko closer to parity with Rust canister development support today, by giving a run-time-system-based analogue of a special Rust library for stable data structures that allocates “pages” for them from stable memory in separate memory regions.", "forum": "", "proposal": "https://github.com/dfinity/motoko/blob/113f9c72edf4ff36bcc6dacc892fdb2f454ac81d/design/StableRegions-20230209.md", - "wiki": "", "docs": "", "eta": "Q3 2023", "status": "deployed", @@ -4029,10 +3468,8 @@ { "title": "Chunked upload of large Wasm files in dfx", "overview": "DFX support for uploading large Wasm files in chunks to meet message size limits. Complements replica support for large Wasm files.", - "description": "DFX now supports uploading canister Wasm files up to 10MB in size. This is accomplished by splitting large modules into chunks less than 2MB in size, uploading all of the chunks, and then combining them on the backend to form a completed wasm.", "forum": "https://forum.dfinity.org/t/allow-installation-of-large-wasm-modules/17372", "proposal": "", - "wiki": "", "docs": "", "eta": "Q1 2024", "status": "deployed", @@ -4042,11 +3479,9 @@ { "title": "Higher-level stable memory libraries", "overview": "Create more friendly, higher-level libraries that abstract away the complexity of working with the Stable Memory API.", - "description": "The stable memory API is a great abstraction for orthogonal persistence on ICP; however, it is a low-level API and can be difficult to use. 
Create more friendly, higher-level libraries that abstract away the complexity of working with the Stable Memory API.", "status": "deployed", "forum": "", "proposal": "", - "wiki": "", "docs": "", "is_community": true, "in_beta": false @@ -4054,10 +3489,8 @@ { "title": "dfx Version Manager", "overview": "The dfx version manager installs and manages versions of dfx.", - "description": "Developers sometimes need to use different versions of dfx. They may be upgrading to a new version of dfx, testing their project with a beta of an upcoming dfx release, or evaluating someone else’s project that specifies a particular dfx version. The dfx version manager (dfxvm) installs and manages dfx installations. This also paves the way to support package manager installs of dfx in the future.", "forum": "", "proposal": "", - "wiki": "", "docs": "", "eta": "2024-02-08", "status": "deployed", @@ -4076,10 +3509,8 @@ { "title": "dfx extensions", "overview": "Plugin architecture for integrating 3rd-party functionality directly into DFX.", - "description": "The ability to extend the capabilities of dfx through an extension system would allow for myriads of integrations including CDKs, the Service Nervous System, Wallets, and more.", "forum": "", "proposal": "", - "wiki": "", "docs": "", "eta": "Q3 2024", "status": "in_progress", @@ -4090,11 +3521,9 @@ { "title": "dfx ergonomics", "overview": "Better developer experience when using the ICP SDK.", - "description": "Ergonomic and quality-of-life improvements to dfx, simplifications, and reduction of manual effort.", "status": "in_progress", "forum": "", "proposal": "", - "wiki": "", "docs": "", "eta": "", "is_community": true, @@ -4103,11 +3532,9 @@ { "title": "Tutorials", "overview": "More and improved tutorials for faster successful onboarding of devs into the ICP ecosystem.", - "description": "Tutorials for developers are crucial for new and seasoned developers alike. The experience for newcomers can be refined by having easy-to-follow and effective tutorials, in written and video form.", "status": "in_progress", "forum": "", "proposal": "", - "wiki": "", "docs": "", "eta": "", "is_community": true, @@ -4116,10 +3543,8 @@ { "title": "Streaming support for the Asset Canister", "overview": "Streaming specific byte ranges of large files hosted by the Asset Canister.", - "description": "While the Asset Canister supports storage of up to 400 GB of assets, it does not yet have the capability to fetch specific ranges of content for a given file. By supporting certified byte-range requests, the Asset Canister will become much more capable. 
For example, byte range support can enable arbitrary seeking in HTML-based video and audio.", "forum": "", "proposal": "", - "wiki": "", "docs": "", "eta": "Q3 2024", "status": "upcoming", @@ -4130,10 +3555,8 @@ { "title": "Trait-Bound canister development using Candid", "overview": "A new way to synchronize Candid interface definitions with Rust source code that leverages the capabilities of macros and Rust's strong type system.", - "description": "This feature introduces a new way to synchronize Candid interface definitions with Rust source code that leverages the capabilities of macros and Rust's strong type system.", "forum": "", "proposal": "", - "wiki": "", "docs": "", "eta": "Q4 2024", "status": "upcoming", @@ -4144,7 +3567,6 @@ { "title": "IC agents for additional languages", "overview": "Releasing IC agents for additional languages to broaden the support for languages that can interoperate with ICP.", - "description": "IC agents that provide the functionality for a program to interact with the Internet Computer Protocol are available for a number of languages already. In order to facilitate further growth of ICP and new builders joining, it is planned to make IC agents available for a broader set of languages, in part also through a community effort.", "forum": "", "proposal": "", "docs": "", @@ -4156,10 +3578,8 @@ { "title": "Mobile app IC agents", "overview": "IC agents for the Android and iOS mobile app platforms. Simplifies building native Android and iOS apps talking to canisters.", - "description": "Currently, it is not straightforward to build a native mobile app for ICP as the Web-based IC agents need to be used, thus preventing a fully native implementation. This feature realizes IC agents for Android and iOS to simplify the development of native mobile ICP apps.", "forum": "", "proposal": "", - "wiki": "", "docs": "", "eta": "", "status": "", @@ -4169,10 +3589,8 @@ { "title": "Specify placement subnet for new canisters", "overview": "When creating a new canister, the subnet on which it should be placed can be specified.", - "description": "The subnet a canister is deployed to should be able to be specified at deploy time. It will be possible to specify a subnet explicitly or declare that a canister should be deployed next to some other canister by providing an ID.", "forum": "", "proposal": "", - "wiki": "", "docs": "", "eta": "", "status": "future", @@ -4182,10 +3600,8 @@ { "title": "Verifiable canisters", "overview": "Tools for devs, such as TLA+, to help them verify the correctness of their canisters.", - "description": "Having canisters verifiable is crucial for gaining trust by users. One important aspect of verifiability is the use of formal methods, e.g., using TLA+ models as done for many of the DFINITY-authored canisters, with the goal of enhancing assurance of correctness of canisters. This feature intends to bring such techniques from the internal use in DFINITY to the wider community and allow for a more broad-based adoption of those techniques in the community.", "forum": "", "proposal": "", - "wiki": "", "docs": "", "eta": "", "status": "future", @@ -4195,10 +3611,8 @@ { "title": "dfx for macOS ARM arch", "overview": "The dfx toolchain is becoming native to the Mac's ARM-based architecture. 
This is necessary because the Rosetta 2 emulation layer is expected to be removed in the future.", - "description": "The current dfx tooling for Apple Macintosh computers is still based on the x64 instruction set and requires the Rosetta 2 emulator to run on ARM-based Macs. This feature retrofits the dfx toolset to run also on ARM-based Macs natively without an emulator. This will improve performance and make dfx future-proof as Rosetta 2 is temporary and will be removed at some point from macOS.", "forum": "", "proposal": "", - "wiki": "", "docs": "", "eta": "", "status": "future", @@ -4208,10 +3622,8 @@ { "title": "PocketIC shipped and bundled with dfx", "overview": "PocketIC, the new recommended standard tool for testing of dapps, is shipped and packaged with the dfx SDK.", - "description": "Currently, the PocketIC is a standalone tool and shipped independently of ICP's dfx SDK. As a next step, it is integrated with dfx and shipped as a part of it. This integration of ICP's preferred dapp testing solution is another step in improving developer experience and reducing the burden for devs on ICP.", "forum": "", "proposal": "", - "wiki": "", "docs": "", "eta": "", "status": "", @@ -4221,10 +3633,8 @@ { "title": "Candid presentation-layer configuration", "overview": "Candid configuration abstraction enabling type mapping from source language to destination language.", - "description": "A new abstraction to be added to Candid that allows configuring the mapping of types from source language to destination language.", "forum": "", "proposal": "", - "wiki": "", "docs": "", "eta": "", "status": "future", @@ -4234,10 +3644,8 @@ { "title": "Protocol Interface Specification evolution", "overview": "Improving version interoperability between different versions of the Internet Computer Protocol Specification.", - "description": "Currently, new versions of the Internet Computer Protocol Specification are rolled out very homogeneously throughout all the subnets of ICP within a short time period following the release of the new version. With the advent of sovereign subnets like UTOPIA, version diversity of networks will grow and interoperability between different versions will be more challenging. This feature enables subnets to make a certified claim about the protocol version they are running to other subnets. This information can be used to determine whether the subnet is compatible with the protocol version of another subnet that intends to communicate with it.", "forum": "", "proposal": "", - "wiki": "", "docs": "", "eta": "", "status": "future", @@ -4247,10 +3655,8 @@ { "title": "Ingress deduplication", "overview": "Enhancing ingress deduplication to use a larger 24-hour deduplication window for ingress messages compared to the current 5-minute window. Provides stronger guarantees, e.g., for financial applications.", - "description": "Distributed systems like ICP face the problem that submitted messages may not get processed for various reasons, thus requiring them to be resubmitted. Deduplication is used to ensure resubmission does not lead to repeated processing, i.e., it ensures idempotency. The current deduplication mechanism of ingress messages submitted to a canister on ICP deduplicates messages only during a 5-minute time window. This is too small a time window for applications where value is at stake, e.g., financial applications.
This feature improves the deduplication guarantees offered by ICP by realizing a much larger deduplication time window.", "forum": "", "proposal": "", - "wiki": "", "docs": "", "eta": "", "status": "future", @@ -4260,10 +3666,8 @@ { "title": "Easy asset creation", "overview": "Easy and streamlined creation of digital assets (both fungible and non-fungible) through out-of-the-box-deployable ledgers. For example, the creation of a new token through a proposal, without needing to manually deploy (and manage) the corresponding canisters.", - "description": "Currently, the creation of a new digital asset on ICP is not straightforward and involves many steps, considerable technical knowledge, and ongoing maintenance efforts related to canister upgrades: Deploying a token means deploying its token ledger and corresponding auxiliary canisters such as the index canister and maintaining those canisters in terms of software updates and cycles replenishment. This is technically rather involved and requires more technical skills than what should be the case for such a standard operation. This feature is about making the creation of a new digital asset on ICP as streamlined as reasonably possible. The idea is to allow for the creation of a new digital asset on ICP through just a proposal. A first target of this will be tokens following the ICRC-1/2/3 token standards; NFTs based on ICRC-7 and ICRC-37 could be a next step. Once implemented, anybody without deep technical knowledge about ICP's ledgers should be able to deploy a token on ICP, which is anticipated to be a great driver for new tokens on ICP.", "forum": "", "proposal": "", - "wiki": "", "docs": "", "eta": "", "status": "", @@ -4273,11 +3677,9 @@ { "title": "REST- & JSON-centric interfaces", "overview": "REST and JSON-centric philosophy of interacting with ICP. Makes it easier for Web2 devs to onboard the ICP ecosystem.", - "description": "This community request proposes to have stronger support for the REST paradigm and thus also JSON on the ICP, complementing (or even attempting to replace) the current Candid-driven experience. This would help make the developer experience more similar to what Web2 developers are used to and thus simplify dev onboarding on ICP.", "status": "", "forum": "", "proposal": "", - "wiki": "", "docs": "", "is_community": true, "in_beta": false @@ -4285,11 +3687,9 @@ { "title": "Regulatory compliance support", "overview": "Making it easier to write regulatory-compliant dapps.", - "description": "This is a rather generic roadmap item about supporting canister devs in writing regulatory-compliant code. One important aspect in this domain is to have a KYC solution available for dapps to use.", "status": "", "forum": "", "proposal": "", - "wiki": "", "docs": "", "is_community": true, "in_beta": false @@ -4297,10 +3697,8 @@ { "title": "Automatic canister top-up using Web2 payment rails", "overview": "Allowing canisters to be topped up automatically via Web2 payment systems, e.g., credit card transfers.", - "description": "Manually taking care of canister cycles balances is a tedious and error-prone process. Automating this process helps canister developers save time and avoid situations where their canisters run out of cycles and lead to a service degradation for their users.
One approach for this is to automate canister top-ups based on their current cycles balance, using traditional Web2 payment systems such as credit cards for cycles top-ups.", "forum": "", "proposal": "", - "wiki": "", "docs": "", "eta": "", "status": "future", @@ -4310,11 +3708,9 @@ { "title": "Bring Actix to the IC", "overview": "Bring Actix, one of the most powerful Web frameworks in Rust, to the IC. We should not be rebuilding this every time we do a dapp.", - "description": "", "status": "", "forum": "", "proposal": "", - "wiki": "", "docs": "", "is_community": true, "in_beta": false @@ -4322,11 +3718,9 @@ { "title": "A file system on ICP", "overview": "A file system on ICP, to avoid building or emulating one whenever needed by a dapp.", - "description": "Many projects need file-type capabilities. Instead of having each project re-build this functionality on their own, ICP should offer file system capabilities either as part of the system or as a user-space library that can be readily plugged into a dapp.", "status": "", "forum": "", "proposal": "", - "wiki": "", "docs": "", "is_community": true, "in_beta": false @@ -4334,11 +3728,9 @@ { "title": "Easily upload files to ICP", "overview": "Upload files to the IC with a few lines of code.", - "description": "Uploading files is a crucial functionality of many apps in Web2. Being able to easily, i.e., with a few lines of simple code, implement file upload functionality is crucial in order to make ICP competitive with existing Web2 technology stacks. One possible solution to allow for this is to provide a library for file upload that performs the \"heavy lifting\" and can be used with a few lines of code.", "status": "", "forum": "", "proposal": "", - "wiki": "", "docs": "", "is_community": true, "in_beta": false @@ -4346,10 +3738,8 @@ { "title": "Motoko-written interactive Web UIs running in Wasm", "overview": "Implementing interactive Web UIs with Motoko, executing in the browser in Wasm. Analogous to .NET's Blazor.", - "description": "Enabling interactive Web UIs to be written in Motoko, compiled to Wasm, and executed efficiently in the browser is a next step towards providing a more streamlined developer experience in the Motoko ecosystem: Motoko can be used for both the backend and the frontend.", "forum": "", "proposal": "", - "wiki": "", "docs": "", "eta": "", "status": "", @@ -4359,10 +3749,8 @@ { "title": "Motoko editor support V2", "overview": "Improved Motoko support in VSCode, e.g., outline view, unused declaration warnings, organizing imports, Prettier 3 support, and AI-driven coding assistance.", - "description": "This feature tackles improved support of Motoko in VSCode, some examples of which are given next: Outline view for Motoko files, jumping to the implementation of a symbol, warnings for unused declarations, organizing imports, Prettier 3.0 formatting, UX improvements for type checking large projects, better Mops and Vessel integration, improved wrapping rules for logical operators, and Motoko-optimized AI-driven coding assistance.
This is instrumental in creating a more streamlined developer environment for Motoko and accelerating onboarding to the Motoko ecosystem.", "forum": "", "proposal": "", - "wiki": "", "docs": "", "eta": "", "status": "", @@ -4372,10 +3760,8 @@ { "title": "Motoko package manager shipped with dfx", "overview": "Integrating the Mops package manager with dfx to improve DX.", - "description": "Motoko currently has two package managers, Vessel, the original package manager for Motoko, and Mops, a fully on-chain package manager that is the recommended package manager for Motoko. This feature is about integrating the Mops Motoko package manager with dfx to further improve the developer experience.", "forum": "", "proposal": "", - "wiki": "", "docs": "", "eta": "", "status": "", @@ -4385,10 +3771,8 @@ { "title": "More Motoko libraries", "overview": "Implementing, or funding the implementation of, additional Motoko libraries of frequently requested functionality.", - "description": "A limitation of every young programming language like Motoko is that the selection of available open source libraries is substantially smaller than for mainstream languages. Therefore, determine commonly used smart contract functionality that should be offered as Motoko libraries. Invite the community to contribute to evolving the Motoko library ecosystem. At the same time, overhaul the existing Motoko base library for a more homogeneous design of common data structures and functionality that optimally fits the IC.", "forum": "", "proposal": "", - "wiki": "", "docs": "", "eta": "", "status": "", @@ -4398,10 +3782,8 @@ { "title": "Language-interoperability support in Motoko", "overview": "Allowing the integration of libraries from other languages in Motoko (with some limitations).", - "description": "Additionally boost the Motoko library ecosystem by supporting interoperability between Motoko and other languages that compile to Wasm. This would, e.g., allow the usage of Rust libraries in Motoko. The implementation path could start from an MVP with limited support (e.g., only stateless functions that do not save memory across calls, restricted types that can be passed across foreign-language calls) and continue towards potentially full-fledged interoperability enabled through the Wasm component model.", "forum": "", "proposal": "", - "wiki": "", "docs": "", "eta": "", "status": "", @@ -4411,10 +3793,8 @@ { "title": "Support of popular languages", "overview": "Supporting additional popular languages such as Dart, Java, C#, Go, or Swift, to implement canisters. Helping a broader range of devs to onboard ICP.", - "description": "Having a broad selection of programming languages for programming smart contracts is beneficial for adoption as prospective programmers are then less likely to be required to learn a new language in order to onboard on ICP. As ICP's execution is based on WebAssembly (Wasm), any language that compiles to Wasm can in principle be used to implement canister smart contracts. However, without an available canister development kit, using a language is not straightforward.
Thus, besides the already supported languages, Dart, Java, C#, Go, and Swift are example candidates for additional support through CDKs due to their widespread use.", "forum": "", "proposal": "", - "wiki": "", "docs": "", "eta": "", "status": "", @@ -4425,4 +3805,4 @@ } ] } -] +] \ No newline at end of file diff --git a/sidebars.js b/sidebars.js index b7d87a9d16..f9b9dd3466 100644 --- a/sidebars.js +++ b/sidebars.js @@ -165,6 +165,14 @@ const sidebars = { }, ], }, + { + type: "category", + label: "Cost", + items: [ + "developer-docs/gas-cost", + "developer-docs/cost-estimations-and-examples", + ], + }, { type: "category", label: "Maintain", @@ -182,14 +190,6 @@ const sidebars = { "developer-docs/smart-contracts/maintain/storage", "developer-docs/smart-contracts/maintain/trapping", "developer-docs/smart-contracts/maintain/upgrade", - { - type: "category", - label: "Cost", - items: [ - "developer-docs/gas-cost", - "developer-docs/cost-estimations-and-examples", - ], - }, { type: "category", label: "Topping up canisters",