Commit

Masterclass (#127)
* Update MEV param

* Update README

* Update default experiment docs

* Update README notebook

* Update notebooks

* Minor viz updates

* Reformat post-processing

* Update template

* Update template

* Update experiment utils

* Update model docs

* Remove code nb2
BenSchZA authored Sep 3, 2021
1 parent adab607 commit fd46bd5
Showing 13 changed files with 90 additions and 53 deletions.
4 changes: 2 additions & 2 deletions README.md
Original file line number Diff line number Diff line change
@@ -183,9 +183,9 @@ If you receive the following error and you use Anaconda, try: `conda install -c

The [experiments/](experiments/) directory contains modules for configuring and executing simulation experiments, as well as performing post-processing of the results.

The [experiments/notebooks/](experiments/notebooks/) directory contains initial validator-level and network-level experiment notebooks and analyses. These notebooks and analyses do not aim to comprehensively illuminate the Ethereum protocol, but rather to suggest insights into a few salient questions the Ethereum community has been discussing, and to serve as inspiration for researchers building out their own, customized analyses and structural model extensions.

The [experiments/notebooks/README.ipynb](experiments/notebooks/0_README.ipynb) contains an overview of how to execute existing experiment notebooks, and how to configure and execute new ones.
The [Experiment README notebook](experiments/notebooks/0_README.ipynb) contains an overview of how to execute existing experiment notebooks, and how to configure and execute new ones.

#### Notebook 1. Model Validation

7 changes: 5 additions & 2 deletions experiments/default_experiment.py
@@ -1,7 +1,10 @@
"""
The default experiment with default model System Parameters, State Variables, and Simulation Configuration.
The default experiment with default model Initial State, System Parameters, and Simulation Configuration.
The defaults are defined in their respective modules (e.g. `model/system_parameters.py`).
The defaults are defined in their respective modules:
* Initial State in `model/state_variables.py`
* System Parameters in `model/system_parameters.py`
* Simulation Configuration in `experiments/simulation_configuration.py`
"""

from radcad import Simulation, Experiment, Backend
46 changes: 39 additions & 7 deletions experiments/notebooks/0_README.ipynb
@@ -5,7 +5,7 @@
"id": "f959b450",
"metadata": {},
"source": [
"# Experiment Quick-Start Guide"
"# Experiment README"
]
},
{
@@ -63,7 +63,7 @@
"id": "6a3d6ab4",
"metadata": {},
"source": [
"The experiment notebooks will start by importing some standard dependencies:"
"Depending on the chosen template and planned analysis, the required imports might differ slightly from the below standard dependencies:"
]
},
{
@@ -82,8 +82,9 @@
"import copy\n",
"import logging\n",
"import numpy as np\n",
"from pprint import pprint\n",
"import pandas as pd\n",
"import plotly.express as px\n",
"from pprint import pprint\n",
"\n",
"# Project dependencies\n",
"import model.constants as constants\n",
@@ -296,6 +297,25 @@
"df"
]
},
{
"cell_type": "markdown",
"id": "9647375d-7516-4e26-9dd7-eee458d3aab7",
"metadata": {},
"source": [
"We can also use Pandas for numerical analyses:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "8e52f2be-059b-4762-b0e2-a6228d7efdaa",
"metadata": {},
"outputs": [],
"source": [
"# Get the maximum validating rewards in ETH for each subset\n",
"df.groupby('subset')['validating_rewards'].max() / constants.gwei"
]
},
{
"cell_type": "markdown",
"id": "26f2e7ef",
@@ -309,7 +329,7 @@
"id": "6ac53e3b",
"metadata": {},
"source": [
"Once we have the results post-processed and in a Pandas DataFrame, we can use Plotly for plotting our results, or Pandas for numerical analyses:"
"Once we have the results post-processed and in a Pandas DataFrame, we can use Plotly for plotting our results:"
]
},
{
@@ -319,6 +339,18 @@
"metadata": {},
"outputs": [],
"source": [
"# Plot the total validating rewards in ETH for each subset\n",
"px.line(df, x='timestamp', y='validating_rewards_eth', facet_col='subset')"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "658cc61b-001e-4342-b4b3-b071a59c08be",
"metadata": {},
"outputs": [],
"source": [
"# Plot the individual validating rewards in ETH for each subset\n",
"visualizations.plot_validating_rewards(df, subplot_titles=[\"Base Reward Factor = 64\", \"Base Reward Factor = 32\"])"
]
},
@@ -330,7 +362,7 @@
"# Creating New, Customized Experiment Notebooks\n",
"\n",
"If you want to create an entirely new analysis, you'll need to create a new experiment notebook, which entails the following steps:\n",
"* Step 1: Select a base experiment template from the `experiments/templates/` directory to start from. The template [example_analysis.py](../templates/example_analysis.py) gives an example of extending the default experiment to override default State Variables and System Parameters.\n",
"* Step 1: Select an experiment template from the `experiments/templates/` directory to start from. If you'd like to create your own template, the [example_analysis.py](../templates/example_analysis.py) template gives an example of extending the default experiment to override default State Variables and System Parameters that you can copy.\n",
"* Step 2: Create a new notebook in the `experiments/notebooks/` directory, using the [template.ipynb](./template.ipynb) notebook as a guide, and import the experiment from the experiment template.\n",
"* Step 3: Customize the experiment for your specific analysis.\n",
"* Step 4: Execute your experiment, post-process and analyze the results, and create Plotly charts!"
@@ -722,9 +754,9 @@
],
"metadata": {
"kernelspec": {
"display_name": "Python (CADLabs Ethereum Model)",
"display_name": "Python 3",
"language": "python",
"name": "python-cadlabs-eth-model"
"name": "python3"
},
"language_info": {
"codemirror_mode": {
2 changes: 1 addition & 1 deletion experiments/notebooks/1_model_validation.ipynb
@@ -610,7 +610,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.8.5"
"version": "3.8.11"
}
},
"nbformat": 4,
@@ -159,9 +159,13 @@
"source": [
"simulation_1a.model.params.update({\n",
" 'validator_process': [\n",
" lambda _run, _timestep: 3, # Normal adoption: current average active validators per epoch from Beaconscan\n",
" lambda _run, _timestep: 3 * 0.5, # Low adoption: 50%-lower scenario\n",
" lambda _run, _timestep: 3 * 1.5, # High adoption: 50%-higher scenario\n",
" # Normal adoption: historical average newly activated validators per epoch\n",
" # between 15 January 2021 and 15 July 2021 as per https://beaconscan.com/stat/validator\n",
" lambda _run, _timestep: 3,\n",
" # Low adoption: 50%-lower scenario\n",
" lambda _run, _timestep: 3 * 0.5,\n",
" # High adoption: 50%-higher scenario\n",
" lambda _run, _timestep: 3 * 1.5,\n",
" ], # New validators per epoch\n",
"})"
]
@@ -246,7 +250,7 @@
" 'mev_per_block': [\n",
" 0,\n",
" 0,\n",
" 0.1, # ETH - median per-block MEV from https://explore.flashbots.net/\n",
" 0.02, # ETH - median per-block MEV from https://explore.flashbots.net/\n",
" ]\n",
"})"
]
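The two hunks above sweep a `validator_process` System Parameter across three adoption scenarios and set a median `mev_per_block`. Independent of radCAD, the sweep mechanics can be sketched in plain Python; the names `NORMAL_ADOPTION` and `new_validators` below are illustrative, not the repo's:

```python
# Hypothetical sketch (not the repo's code) of the adoption-scenario sweep:
# each entry is a process -- a function of (run, timestep) returning the
# number of newly activated validators for that epoch.

NORMAL_ADOPTION = 3  # historical average new validators per epoch (assumed)

validator_process_sweep = [
    lambda _run, _timestep: NORMAL_ADOPTION,        # normal adoption
    lambda _run, _timestep: NORMAL_ADOPTION * 0.5,  # low: 50%-lower scenario
    lambda _run, _timestep: NORMAL_ADOPTION * 1.5,  # high: 50%-higher scenario
]

def new_validators(process, epochs=10):
    """Total new validators a process yields over a number of epochs."""
    return sum(process(0, t) for t in range(epochs))
```

One simulation subset is then run per list entry, so the three scenarios can be compared in a single experiment.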
@@ -143,10 +143,14 @@
"simulation_1 = copy.deepcopy(simulation)\n",
"simulation_1.model.params.update({\n",
" 'validator_process': [\n",
" lambda _run, _timestep: 3, # Normal adoption: current average active validators per epoch from Beaconscan\n",
" lambda _run, _timestep: 3 * 0.5, # Low adoption: 50%-lower scenario\n",
" lambda _run, _timestep: 3 * 1.5, # High adoption: 50%-higher scenario\n",
" ],\n",
" # Normal adoption: historical average newly activated validators per epoch\n",
" # between 15 January 2021 and 15 July 2021 as per https://beaconscan.com/stat/validator\n",
" lambda _run, _timestep: 3,\n",
" # Low adoption: 50%-lower scenario\n",
" lambda _run, _timestep: 3 * 0.5,\n",
" # High adoption: 50%-higher scenario\n",
" lambda _run, _timestep: 3 * 1.5,\n",
" ], # New validators per epoch\n",
"})\n",
"\n",
"simulation_2 = copy.deepcopy(simulation)\n",
@@ -258,7 +262,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.8.5"
"version": "3.8.11"
}
},
"nbformat": 4,
13 changes: 5 additions & 8 deletions experiments/notebooks/template.ipynb
@@ -62,6 +62,7 @@
"import logging\n",
"import numpy as np\n",
"import pandas as pd\n",
"import plotly.express as px\n",
"\n",
"import experiments.notebooks.visualizations as visualizations\n",
"from experiments.run import run\n",
@@ -86,8 +87,7 @@
"outputs": [],
"source": [
"# Import experiment templates\n",
"import experiments.default_experiment as default_experiment\n",
"import experiments.templates.time_domain_analysis as time_domain_analysis"
"import experiments.default_experiment as default_experiment"
]
},
{
@@ -107,8 +107,7 @@
"outputs": [],
"source": [
"# Create a simulation for each analysis\n",
"simulation_1 = copy.deepcopy(default_experiment.experiment.simulations[0])\n",
"simulation_2 = copy.deepcopy(time_domain_analysis.experiment.simulations[0])"
"simulation_1 = copy.deepcopy(default_experiment.experiment.simulations[0])"
]
},
{
@@ -119,10 +118,8 @@
"source": [
"# Experiment configuration\n",
"simulation_1.model.initial_state.update({})\n",
"simulation_1.model.params.update({})\n",
"\n",
"simulation_2.model.initial_state.update({})\n",
"simulation_2.model.params.update({})"
"simulation_1.model.params.update({})"
]
},
{
@@ -184,7 +181,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.8.5"
"version": "3.8.11"
}
},
"nbformat": 4,
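The template hunks above create each simulation with `copy.deepcopy(default_experiment.experiment.simulations[0])` before customizing it. The reason is that `params.update` mutates the simulation's parameter dictionary in place; a small self-contained sketch (with a hypothetical parameter dictionary, not the repo's) shows why the deep copy matters:

```python
import copy

# Hypothetical sketch of the deep-copy pattern used in template.ipynb:
# customize a copy of the default configuration without mutating the
# shared default object that other notebooks also import.

default_params = {"mev_per_block": [0], "validator_process": [3]}

sim_params = copy.deepcopy(default_params)
sim_params["mev_per_block"] = [0.02]  # customize only this simulation

# The shared defaults are untouched:
assert default_params["mev_per_block"] == [0]
```

Without `copy.deepcopy`, the update would leak into every other simulation built from the same default experiment.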
6 changes: 3 additions & 3 deletions experiments/notebooks/visualizations/__init__.py
@@ -1119,7 +1119,7 @@ def plot_yields_per_subset(df, scenario_names):
pad={"t": 10},
x=0,
xanchor="left",
y=1.15,
y=1.3,
yanchor="top",
)
]
@@ -1202,7 +1202,7 @@ def plot_cumulative_yields_per_subset(df, DELTA_TIME, scenario_names):
pad={"t": 10},
x=0,
xanchor="left",
y=1.15,
y=1.3,
yanchor="top",
)
]
@@ -1570,7 +1570,7 @@ def plot_network_issuance_scenarios(df, simulation_names):
pad={"t": 25},
x=0,
xanchor="left",
y=1.1,
y=1.25,
yanchor="top",
)
]
13 changes: 12 additions & 1 deletion experiments/post_processing.py
@@ -62,7 +62,7 @@ def post_process(df: pd.DataFrame, drop_timestep_zero=True, parameters=parameter
df['revenue_profit_yield_spread_pct'] = df['total_revenue_yields_pct'] - df['total_profit_yields_pct']

# Convert validator rewards from Gwei to ETH
validator_rewards = ['total_online_validator_rewards', 'total_priority_fee_to_validators', 'source_reward', 'target_reward', 'head_reward', 'block_proposer_reward', 'sync_reward', 'whistleblower_rewards']
validator_rewards = [
'validating_rewards',
'validating_penalties',
'total_online_validator_rewards',
'total_priority_fee_to_validators',
'source_reward',
'target_reward',
'head_reward',
'block_proposer_reward',
'sync_reward',
'whistleblower_rewards'
]
df[[reward + '_eth' for reward in validator_rewards]] = df[validator_rewards] / constants.gwei

# Convert validator penalties from Gwei to ETH
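The post-processing hunk above divides every reward column by `constants.gwei` and stores the results in parallel `*_eth` columns. A minimal sketch of that step, using a toy DataFrame and an assumed `GWEI` constant in place of `constants.gwei`:

```python
import pandas as pd

# Hypothetical sketch of the Gwei-to-ETH post-processing step:
# divide each reward column by 10**9 (Gwei per ETH) and write the
# results to a parallel set of '*_eth' columns.
GWEI = 1e9  # 1 ETH = 10**9 Gwei (stands in for constants.gwei)

df = pd.DataFrame({
    "validating_rewards": [2_000_000_000, 3_500_000_000],
    "whistleblower_rewards": [0, 1_000_000_000],
})
reward_columns = ["validating_rewards", "whistleblower_rewards"]
# Vectorized multi-column assignment, as in post_processing.py
df[[c + "_eth" for c in reward_columns]] = df[reward_columns] / GWEI
```

After this step, `df["validating_rewards_eth"]` holds ETH-denominated values alongside the original Gwei columns.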
5 changes: 2 additions & 3 deletions experiments/templates/eth_price_eth_staked_grid_analysis.py
@@ -6,17 +6,16 @@

import numpy as np
import copy
from radcad.utils import generate_cartesian_product_parameter_sweep

from model.state_variables import eth_staked, eth_supply, eth_price_max
from experiments.default_experiment import experiment, TIMESTEPS, DELTA_TIME
from experiments.utils import generate_cartesian_product
from model.types import Stage


# Make a copy of the default experiment to avoid mutation
experiment = copy.deepcopy(experiment)

sweep = generate_cartesian_product({
sweep = generate_cartesian_product_parameter_sweep({
# ETH price range from 100 USD/ETH to the maximum over the last 12 months
"eth_price_samples": np.linspace(start=100, stop=eth_price_max, num=20),
# ETH staked range from current ETH staked to minimum of 2 x ETH staked and 30% of total ETH supply
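The hunk above swaps the local `generate_cartesian_product` helper for radCAD's `generate_cartesian_product_parameter_sweep`. Based on the removed helper (deleted from `experiments/utils.py` in this same commit), its behavior can be sketched as follows; the function name and sample values here are illustrative:

```python
import itertools

# Sketch of the cartesian-product sweep behavior: every combination of
# the swept values becomes one aligned entry per parameter, so index i
# across all parameter lists describes one simulation subset.

def cartesian_product_sweep(sweeps):
    combos = list(itertools.product(*sweeps.values()))
    return {key: [combo[i] for combo in combos]
            for i, key in enumerate(sweeps)}

sweep = cartesian_product_sweep({
    "eth_price_samples": [100, 200],        # 2 price samples
    "eth_staked_samples": [10, 20, 30],     # 3 staked samples
})
# 2 x 3 = 6 aligned entries per parameter
```

This grid structure is what lets the template sweep ETH price against ETH staked in a single experiment.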
2 changes: 1 addition & 1 deletion experiments/templates/example_analysis.py
@@ -17,7 +17,7 @@
}

state_variable_overrides = {
"eth_price": [0],
"eth_price": 0,
}

# Override default experiment System Parameters
14 changes: 0 additions & 14 deletions experiments/utils.py
@@ -28,20 +28,6 @@ def rng_generator(master_seed=1):
return np.random.default_rng(seed_sequence.spawn(1)[0])
else:
return np.random.default_rng(seed_sequence.spawn(1)[0])


def generate_cartesian_product(sweeps):
"""Generates a parameter sweep using a cartesian product of System Parameter dictionary
Args:
sweeps (Dict[str, List]): A cadCAD System Parameter dictionary to sweep
Returns:
Dict: A dictionary containing the cartesian product of all parameters
"""
cartesian_product = list(itertools.product(*sweeps.values()))
params = {key: [x[i] for x in cartesian_product] for i, key in enumerate(sweeps.keys())}
return params


def get_simulation_hash(sim):
5 changes: 3 additions & 2 deletions model/state_update_blocks.py
@@ -38,8 +38,9 @@
state_update_block_validators = {
"description": """
Environmental validator processes:
* New validators
* Online and offline validators
* Validator activation queue
* Validator rotation
* Validator uptime
""",
"policies": {
"policy_validators": validators.policy_validators,
