rerendered docs
pat-alt committed Jul 5, 2023
1 parent 33c11f5 commit 0442cf4
Showing 54 changed files with 16,060 additions and 24,898 deletions.
4 changes: 2 additions & 2 deletions _freeze/docs/src/faq/execute-results/md.json
@@ -1,7 +1,7 @@
{
"hash": "d91317173a1d0b7680d3c49ca54c0d38",
"hash": "6b1c9d828b12ac6f715df3827f568ce3",
"result": {
"markdown": "# Frequently Asked Questions\n\n```@meta\nCurrentModule = ConformalPrediction\n```\n\n\n\nIn this section we attempt to provide some reflections on frequently asked questions about the package and implemented methodologies. If you have a particular question that is not listed here, please feel free to also open an issue. While can answer questions regarding the package with a certain degree of confidence, I **do not pretend** to have any definite answers to methodological questions, but merely reflections (more on this below). \n\n## Package\n\n### Why the interface to `MLJ.jl`?\n\nAn important design choice. `MLJ.jl` is a one-stop shop for common machine learning models and pipelines in Julia. It’s growing fast and the development team is very accessible, friendly and enthusiastic. Conformal Prediction is a model-agnostic approach to uncertainty quantification, so it can be applied to any common (supervised) machine learning model. For these reasons I decided to interface this package to `MLJ.jl`. The idea is that any (supervised) `MLJ.jl` model can be conformalized using `ConformalPrediction.jl`. By leveraging existing `MLJ.jl` functionality for common tasks like training, prediction and model evaluation, this package is light-weight and scalable.\n\n## Methodology\n\nFor methodological questions about Conformal Prediction, my best advice is to consult the literature on the topic. A good place to start is [\"A Gentle Introduction to Conformal Prediction and\nDistribution-Free Uncertainty Quantification\"](https://arxiv.org/pdf/2107.07511.pdf) [@angelopoulos2021gentle]: the tutorial is comprehensive, accessible and continuously updated. Below you will find a list of high-level questions and reflections. \n\n| !!! warning \\\"Disclaimer\\\"\n| I want to emphasize that these are merely my own **reflections**. I provide these to the best of my knowledge and understanding of the topic, but please be aware that I am still on a learning journey myself. I have not read the entire literature on this topic (and won't be able to in the future either). If you spot anything that doesn't look right or sits at odds with something your read in the literature, please open an issue. Even better: if you want to add your own reflections and thoughts, feel free to open a pull request. \n\n### What is Predictive Uncertainty Quantification?\n\nPredictive Uncertainty Quantification deals with quantifying the uncertainty around predictions for the output variable of a supervised model. It is a subset of Uncertainty Quantification, which can also relate to uncertainty around model parameters, for example. I will sometimes use both terms interchangeably, even though I shouldn't (please bare with me, or if you're bothered by a particular slip-up, open a PR). \n\nUncertainty of model parameters is a very important topic itself: we might be interested in understanding, for example, if the estimated effect $\\theta$ of some input variable $x$ on the output variable $y$ is statistically significant. This typically hinges on being able to quantify the uncertainty around the parameter $\\theta$. This package does not offer this sort of functionality. I have so far not come across any work on Conformal Inference that deals with parameter uncertainty, but I also haven't properly looked for it. \n\n### What is the (marginal) coverage guarantee?\n\nThe (marginal) coverage guarantee states that:\n\n> [...] 
the probability that the prediction set contains the correct label [for a fresh test point from the same distribution] is almost exactly $1-\\alpha$.\n> \n> --- @angelopoulos2021gentle\n\nSee @angelopoulos2021gentle for a formal proof of this property or check out this [section](https://www.paltmeyer.com/blog/posts/conformal-regression/#evaluation) or `Pluto.jl` 🎈 [notebook](https://binder.plutojl.org/v0.19.12/open?url=https%253A%252F%252Fraw.githubusercontent.com%252Fpat-alt%252FConformalPrediction.jl%252Fmain%252Fdocs%252Fpluto%252Fintro.jl) to convince yourself through a small empirical exercise. Note that this property relates to a special case of conformal prediction, namely Split Conformal Prediction [@angelopoulos2021gentle].\n\n### What does marginal mean in this context?\n\nThe property is \"marginal\" in the sense that the probability is averaged over the randomness in the data [@angelopoulos2021gentle]. Depending on the size of the calibration set (context: Split Conformal Prediction), the realized coverage or estimated empirical coverage may deviate slightly from the user specified value $1-\\alpha$. To get a sense of this effect, you may want to check out this `Pluto.jl` 🎈 [notebook](https://binder.plutojl.org/v0.19.12/open?url=https%253A%252F%252Fraw.githubusercontent.com%252Fpat-alt%252FConformalPrediction.jl%252Fmain%252Fdocs%252Fpluto%252Fintro.jl): it allows you to adjust the calibration set size and check the resulting empirical coverage. See also Section 3 of @angelopoulos2021gentle.\n\n### Is CP really distribution-free?\n\nThe marginal coverage property holds under the assumption that the input data is exchangeable, which is a minimal distributional assumption. So, in my view, the short answer to this question is \"No\". I believe that when people use the term \"distribution-free\" in this context, they mean that no prior assumptions are being made about the actual form or family of distribution(s) that generate the model parameters and data. If we define \"distribution-free\" in this sense, then the answer to me seems \"Yes\". \n\n### What happens if this minimal distributional assumption is violated?\n\nThen the marginal coverage property does not hold. See [here](https://www.paltmeyer.com/blog/posts/conformal-regression/#are-we-done) for an example. \n\n### What are set-valued predictions?\n\nThis should be clearer after reading through some of the other tutorials and explanations. For conformal classifiers of type `ConformalProbabilisticSet`, predictions are set-valued: these conformal classifiers may return multiple labels, a single label or no labels at all. Larger prediction sets indicate higher predictive uncertainty: for sets of size greater than one the conformal predictor cannot with certainty narrow down its prediction down to a single label, so it returns all labels that meet the specified marginal coverage.\n\n### How do I interpret the distribution of set size?\n\nIt can be useful to plot the distribution of set sizes in order to visually asses how adaptive a conformal predictor is. For more adaptive predictors the distribution of set sizes is typically spread out more widely, which reflects that “the procedure is effectively distinguishing between easy and hard inputs” [@angelopoulos2021gentle]. This is desirable: when for a given sample it is difficult to make predictions, this should be reflected in the set size (or interval width in the regression case). 
Since ‘difficult’ lies on some spectrum that ranges from ‘very easy’ to ‘very difficult’ the set size should very across the spectrum of ‘empty set’ to ‘all labels included’.\n\n### What is aleatoric uncertainty? What is epistemic uncertainty?\n\nLoosely speaking: aleatoric uncertainty relates to uncertainty that cannot be “learned away” by observing more data (think points near the decision boundary); epistemic uncertainty relates to uncertainty that can be “learned away” by observing more data.\n\n## References\n\n",
"markdown": "---\ntitle: Frequently Asked Questions\n---\n\n\n\n```@meta\nCurrentModule = ConformalPrediction\n```\n\n\n\n\nIn this section we attempt to provide some reflections on frequently asked questions about the package and implemented methodologies. If you have a particular question that is not listed here, please feel free to also open an issue. While can answer questions regarding the package with a certain degree of confidence, I **do not pretend** to have any definite answers to methodological questions, but merely reflections (see the disclaimer below). \n\n## Package\n\n### Why the interface to `MLJ.jl`?\n\nAn important design choice. `MLJ.jl` is a one-stop shop for common machine learning models and pipelines in Julia. It’s growing fast and the development team is very accessible, friendly and enthusiastic. Conformal Prediction is a model-agnostic approach to uncertainty quantification, so it can be applied to any common (supervised) machine learning model. For these reasons I decided to interface this package to `MLJ.jl`. The idea is that any (supervised) `MLJ.jl` model can be conformalized using `ConformalPrediction.jl`. By leveraging existing `MLJ.jl` functionality for common tasks like training, prediction and model evaluation, this package is light-weight and scalable.\n\n## Methodology\n\nFor methodological questions about Conformal Prediction, my best advice is to consult the literature on the topic. A good place to start is [\"A Gentle Introduction to Conformal Prediction and\nDistribution-Free Uncertainty Quantification\"](https://arxiv.org/pdf/2107.07511.pdf) [@angelopoulos2021gentle]: the tutorial is comprehensive, accessible and continuously updated. Below you will find a list of high-level questions and reflections. \n\n| !!! warning \\\"Disclaimer\\\"\n| I want to emphasize that these are merely my own **reflections**. I provide these to the best of my knowledge and understanding of the topic, but please be aware that I am still on a learning journey myself. I have not read the entire literature on this topic (and won't be able to in the future either). If you spot anything that doesn't look right or sits at odds with something your read in the literature, please open an issue. Even better: if you want to add your own reflections and thoughts, feel free to open a pull request. \n\n### What is Predictive Uncertainty Quantification?\n\nPredictive Uncertainty Quantification deals with quantifying the uncertainty around predictions for the output variable of a supervised model. It is a subset of Uncertainty Quantification, which can also relate to uncertainty around model parameters, for example. I will sometimes use both terms interchangeably, even though I shouldn't (please bare with me, or if you're bothered by a particular slip-up, open a PR). \n\nUncertainty of model parameters is a very important topic itself: we might be interested in understanding, for example, if the estimated effect $\\theta$ of some input variable $x$ on the output variable $y$ is statistically significant. This typically hinges on being able to quantify the uncertainty around the parameter $\\theta$. This package does not offer this sort of functionality. I have so far not come across any work on Conformal Inference that deals with parameter uncertainty, but I also haven't properly looked for it. \n\n### What is the (marginal) coverage guarantee?\n\nThe (marginal) coverage guarantee states that:\n\n> [...] 
the probability that the prediction set contains the correct label [for a fresh test point from the same distribution] is almost exactly $1-\\alpha$.\n> \n> --- @angelopoulos2021gentle\n\nSee @angelopoulos2021gentle for a formal proof of this property or check out this [section](https://www.paltmeyer.com/blog/posts/conformal-regression/#evaluation) or `Pluto.jl` 🎈 [notebook](https://binder.plutojl.org/v0.19.12/open?url=https%253A%252F%252Fraw.githubusercontent.com%252Fpat-alt%252FConformalPrediction.jl%252Fmain%252Fdocs%252Fpluto%252Fintro.jl) to convince yourself through a small empirical exercise. Note that this property relates to a special case of conformal prediction, namely Split Conformal Prediction [@angelopoulos2021gentle].\n\n### What does marginal mean in this context?\n\nThe property is \"marginal\" in the sense that the probability is averaged over the randomness in the data [@angelopoulos2021gentle]. Depending on the size of the calibration set (context: Split Conformal Prediction), the realized coverage or estimated empirical coverage may deviate slightly from the user specified value $1-\\alpha$. To get a sense of this effect, you may want to check out this `Pluto.jl` 🎈 [notebook](https://binder.plutojl.org/v0.19.12/open?url=https%253A%252F%252Fraw.githubusercontent.com%252Fpat-alt%252FConformalPrediction.jl%252Fmain%252Fdocs%252Fpluto%252Fintro.jl): it allows you to adjust the calibration set size and check the resulting empirical coverage. See also Section 3 of @angelopoulos2021gentle.\n\n### Is CP really distribution-free?\n\nThe marginal coverage property holds under the assumption that the input data is exchangeable, which is a minimal distributional assumption. So, in my view, the short answer to this question is \"No\". I believe that when people use the term \"distribution-free\" in this context, they mean that no prior assumptions are being made about the actual form or family of distribution(s) that generate the model parameters and data. If we define \"distribution-free\" in this sense, then the answer to me seems \"Yes\". \n\n### What happens if this minimal distributional assumption is violated?\n\nThen the marginal coverage property does not hold. See [here](https://www.paltmeyer.com/blog/posts/conformal-regression/#are-we-done) for an example. \n\n### What are set-valued predictions?\n\nThis should be clearer after reading through some of the other tutorials and explanations. For conformal classifiers of type `ConformalProbabilisticSet`, predictions are set-valued: these conformal classifiers may return multiple labels, a single label or no labels at all. Larger prediction sets indicate higher predictive uncertainty: for sets of size greater than one the conformal predictor cannot with certainty narrow down its prediction down to a single label, so it returns all labels that meet the specified marginal coverage.\n\n### How do I interpret the distribution of set size?\n\nIt can be useful to plot the distribution of set sizes in order to visually asses how adaptive a conformal predictor is. For more adaptive predictors the distribution of set sizes is typically spread out more widely, which reflects that “the procedure is effectively distinguishing between easy and hard inputs” [@angelopoulos2021gentle]. This is desirable: when for a given sample it is difficult to make predictions, this should be reflected in the set size (or interval width in the regression case). 
Since ‘difficult’ lies on some spectrum that ranges from ‘very easy’ to ‘very difficult’ the set size should very across the spectrum of ‘empty set’ to ‘all labels included’.\n\n### What is aleatoric uncertainty? What is epistemic uncertainty?\n\nLoosely speaking: aleatoric uncertainty relates to uncertainty that cannot be “learned away” by observing more data (think points near the decision boundary); epistemic uncertainty relates to uncertainty that can be “learned away” by observing more data.\n\n## References\n\n",
"supporting": [
"faq_files"
],
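The FAQ content rendered above states that any supervised `MLJ.jl` model can be conformalized using `ConformalPrediction.jl`. As a rough sketch of that workflow, assuming the package's documented `conformal_model` wrapper and its `coverage` keyword (the data generator and the DecisionTree model below are illustrative choices, not taken from the FAQ):

```julia
using MLJ
using ConformalPrediction

# Synthetic regression data via MLJ's built-in generator.
X, y = make_regression(1000, 2)

# Any supervised MLJ model can serve as the atomic model;
# a decision tree is used here purely for illustration.
DecisionTreeRegressor = @load DecisionTreeRegressor pkg=DecisionTree
model = DecisionTreeRegressor()

# Wrap the atomic model in a conformal predictor (split conformal prediction by default).
conf_model = conformal_model(model; coverage=0.95)

# Standard MLJ workflow: bind to a machine, fit, predict.
mach = machine(conf_model, X, y)
fit!(mach)
predict(mach, X)  # prediction regions rather than point predictions
```

Because the conformal wrapper is itself an `MLJ.jl` model, existing `MLJ.jl` tooling for training, prediction and evaluation carries over, which is the "light-weight and scalable" point the FAQ makes.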
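The marginal coverage discussion in the FAQ can also be probed empirically. A minimal sketch, assuming the `emp_coverage` (empirical coverage) and `ssc` (size-stratified coverage) measures that the package exposes for use with MLJ's `evaluate!`; if those names differ in a given release, the same check amounts to counting how often the true outcome falls inside the predicted region:

```julia
using MLJ
using ConformalPrediction

X, y = make_regression(1000, 2)
DecisionTreeRegressor = @load DecisionTreeRegressor pkg=DecisionTree
conf_model = conformal_model(DecisionTreeRegressor(); coverage=0.95)
mach = machine(conf_model, X, y)

# Empirical coverage should land close to the specified 0.95,
# up to sampling noise that shrinks as the calibration set grows.
results = evaluate!(mach; measure=[emp_coverage, ssc], verbosity=0)
display(results)
```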
