diff --git a/episodes/fairness.md b/episodes/fairness.md
index 70d2622..90f2121 100644
--- a/episodes/fairness.md
+++ b/episodes/fairness.md
@@ -12,8 +12,32 @@ exercises: 0
 
 ::::::::::::::::::::::::::::::::::::: objectives
 
-- Participants will be able to define and differntiate between various notions of fairness in the context of machine learning.
-- Participants will be able to define (and implement?) two different ways of modifying the machine learning modeling process to improve the fairness of a model.
-- Participants will understand the limitations of fairness as a metric for machine learning models.
+- Explain what is meant by bias and fairness in the context of machine learning.
+- Describe two different ways of modifying the machine learning modeling process to improve the fairness of a model.
+- Articulate the limitations of fairness scores.
 
 ::::::::::::::::::::::::::::::::::::::::::::::::
+
+:::::::::::::::::::::::::::::::::::::: challenge
+
+### Matching fairness terminology with definitions
+
+Match the following types of formal fairness with their definitions.
+(A) Individual fairness,
+(B) Equalized odds,
+(C) Demographic parity, and
+(D) Group-level calibration
+
+1. The model is equally accurate across all demographic groups.
+2. Different demographic groups have the same true positive rates and false positive rates.
+3. Similar people are treated similarly.
+4. People from different demographic groups receive each outcome at the same rate.
+::::::::::::::::::::::::::::::::::::::::::::::::::
+
+:::::::::::::: solution
+
+### Solution
+
+A - 3, B - 2, C - 4, D - 1
+
+:::::::::::::::::::::::::
\ No newline at end of file
diff --git a/episodes/problem-definition.md b/episodes/problem-definition.md
index 0ac321e..434b0fc 100644
--- a/episodes/problem-definition.md
+++ b/episodes/problem-definition.md
@@ -12,6 +12,9 @@ exercises: 0
 
 ::::::::::::::::::::::::::::::::::::: objectives
 
-- TODO
+- Judge what tasks are appropriate for machine learning.
+- Understand why the choice of prediction task / target variable is important.
+- Describe how bias can appear in training data.
+
 
 ::::::::::::::::::::::::::::::::::::::::::::::::
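
To make the challenge's definitions of demographic parity and equalized odds concrete, here is a minimal Python sketch (not part of the lesson files above; the toy labels, predictions, and group names are invented for illustration) that computes the per-group quantities each definition compares:

```python
import numpy as np

# Toy data for illustration only: binary true labels, binary model
# predictions, and a demographic group label for each person.
y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0, 1, 0])
y_pred = np.array([1, 0, 1, 0, 0, 1, 1, 0, 0, 0])
group = np.array(["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"])

for g in np.unique(group):
    mask = group == g
    # Demographic parity compares the rate of positive predictions across groups.
    positive_rate = y_pred[mask].mean()
    # Equalized odds compares true positive and false positive rates across groups.
    tpr = y_pred[mask][y_true[mask] == 1].mean()
    fpr = y_pred[mask][y_true[mask] == 0].mean()
    print(f"group {g}: positive rate={positive_rate:.2f}, TPR={tpr:.2f}, FPR={fpr:.2f}")
```

If the positive prediction rates match across groups, the model satisfies demographic parity; if both the TPRs and the FPRs match, it satisfies equalized odds.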