
# Evaluation Toolbox

## Dependencies

- Python 3
- Python packages listed in `requirements.txt`
- MATLAB Engine API for Python
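
Once the dependencies are in place, a quick check can confirm that the Python side can see the MATLAB Engine API (a minimal sketch; it only tests that `matlab.engine` is importable, not that MATLAB starts or a license is available):

```python
# Sanity check: confirm the MATLAB Engine API for Python is importable.
# (This does not launch MATLAB or verify the license.)
try:
    import matlab.engine  # installed from matlabroot/extern/engines/python
    status = 'MATLAB Engine API: OK'
except ImportError:
    status = 'MATLAB Engine API: not installed'
print(status)
```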

## Installation

### A. Install the MATLAB Engine API for Python

#### a. Windows

1. Find your MATLAB root:

   ```
   where matlab
   ```

   For example:

   ```
   C:\Program Files\MATLAB\R2019b\bin\matlab.exe
   ```

2. Go to the Python engine folder:

   ```
   cd matlabroot\extern\engines\python
   ```

   For example:

   ```
   cd "C:\Program Files\MATLAB\R2019b\extern\engines\python"
   ```

3. Install the MATLAB Engine API for Python:

   ```
   python setup.py install
   ```

#### b. Ubuntu

1. Find your MATLAB root:

   ```
   sudo find / -name MATLAB
   ```

   For example:

   ```
   # default MATLAB root
   /usr/local/MATLAB/R2016b
   ```

2. Go to the Python engine folder:

   ```
   cd matlabroot/extern/engines/python
   ```

   For example:

   ```
   cd /usr/local/MATLAB/R2016b/extern/engines/python
   ```

3. Install the MATLAB Engine API for Python:

   ```
   python setup.py install
   ```

### B. Install Required Modules

```
pip install -r requirements.txt
```

### C. Download the PI Evaluation Model

Download the PI evaluation model from Google Drive or Baidu Drive (extraction code: muw3) and place `model.mat` into `MetricEvaluation/utils/sr-metric`.

## Usage

The scripts compute the following evaluation metrics: MA, NIQE, PI, PSNR, BRISQUE, SSIM, MSE, RMSE, MAE, and LPIPS. Note that the SSIM values are computed by `ssim.m`, the MATLAB implementation that includes the suggested downsampling step, available at this link.
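
As a quick reference for the distortion metrics, here is a pure-Python sketch of MSE, RMSE, MAE, and PSNR on flattened 8-bit pixel sequences (illustrative only; the toolbox's own implementations operate on full images and may differ, e.g. when `RGB2YCbCr` is enabled):

```python
import math

def mse(x, y):
    """Mean squared error over two equal-length pixel sequences."""
    return sum((a - b) ** 2 for a, b in zip(x, y)) / len(x)

def rmse(x, y):
    """Root mean squared error."""
    return math.sqrt(mse(x, y))

def mae(x, y):
    """Mean absolute error."""
    return sum(abs(a - b) for a, b in zip(x, y)) / len(x)

def psnr(x, y, peak=255.0):
    """Peak signal-to-noise ratio in dB; infinite for identical inputs."""
    m = mse(x, y)
    return math.inf if m == 0 else 10 * math.log10(peak ** 2 / m)

# Toy ground-truth and SR pixel values (hypothetical data).
gt = [52, 55, 61, 59]
sr = [54, 55, 60, 58]
print(mse(gt, sr), rmse(gt, sr), mae(gt, sr), round(psnr(gt, sr), 2))
```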

### Configurations

#### a. Manual

Manual modification is recommended when you only want to evaluate a single SR method. Edit `Configuration.yml` as follows:

```yaml
Pairs:
  Dataset:
    - Set5
    - Set14
  GTFolder:
    - ../data/GT/Set5
    - ../data/GT/Set14
  SRFolder:
    - ../data/SR/Set5/SPSR_Paper
    - ../data/SR/Set14/SPSR_Paper
RGB2YCbCr: True
evaluate_Ma: False
max_workers: 16
Name: Test
Echo: True
```

- `Pairs`: The entries of `Dataset`, `GTFolder`, and `SRFolder` must be listed in matching order.
- `Dataset`: The datasets to be evaluated.
- `GTFolder`: The folder paths of the ground-truth images.
- `SRFolder`: The folder paths of the SR images.
- `RGB2YCbCr`: Whether to convert the color space in the MATLAB code.
- `evaluate_Ma`: Whether to compute the Ma metric, which takes a lot of time.
- `max_workers`: The maximum number of workers. (Since MATLAB's NIQE computation appears not to be thread-safe, multithreaded computation may cause problems.)
- `Name`: The name of the evaluation run.
- `Echo`: Whether to print scores to the terminal while evaluating.
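
Because the three lists under `Pairs` are matched by position, a small consistency check can catch misaligned configurations early (a sketch using a plain dict standing in for the parsed YAML; the toolbox itself may not perform this check):

```python
# A parsed Configuration.yml would yield a dict like this
# (stand-in for yaml.safe_load, to keep the sketch dependency-free).
config = {
    'Pairs': {
        'Dataset': ['Set5', 'Set14'],
        'GTFolder': ['../data/GT/Set5', '../data/GT/Set14'],
        'SRFolder': ['../data/SR/Set5/SPSR_Paper', '../data/SR/Set14/SPSR_Paper'],
    },
    'RGB2YCbCr': True,
    'evaluate_Ma': False,
    'max_workers': 16,
    'Name': 'Test',
    'Echo': True,
}

def check_pairs(cfg):
    """Ensure Dataset, GTFolder, and SRFolder line up one-to-one."""
    pairs = cfg['Pairs']
    n = len(pairs['Dataset'])
    if len(pairs['GTFolder']) != n or len(pairs['SRFolder']) != n:
        raise ValueError('Dataset, GTFolder and SRFolder must have the same length')
    return list(zip(pairs['Dataset'], pairs['GTFolder'], pairs['SRFolder']))

for dataset, gt, sr in check_pairs(config):
    print(dataset, gt, sr)
```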

#### b. Bash

We also provide `generate_configuration.py` to generate configuration files automatically when you have multiple SR methods to evaluate.

1. Put the SR folders of the different methods into `your_SR_Folder` and the ground-truth folders into `your_GT_Folder`.

2. Edit the `MethodDict` values. Each key is a method name and each value is the list of datasets you want to evaluate it on.

   ```python
   MethodDict = dict()
   MethodDict['EnhanceNet'] = ['BSD100', 'General100', 'Set14']
   MethodDict['SRGAN'] = ['BSD100', 'General100', 'Set14', 'Set5', 'Urban100']
   ```
3. Edit the `dataDict` values.

   ```python
   import os

   fileName = method + '.yml'  # one configuration file per method
   dataDict = dict()
   dataDict['Pairs'] = dict()
   dataDict['Pairs']['Dataset'] = MethodDict[method]
   dataDict['Pairs']['SRFolder'] = []
   dataDict['Pairs']['GTFolder'] = []
   dataDict['Name'] = evaluation_name
   dataDict['RGB2YCbCr'] = RGB2YCbCr
   dataDict['evaluate_Ma'] = evaluate_Ma
   dataDict['max_workers'] = max_workers
   dataDict['Echo'] = True
   for dataset in MethodDict[method]:
       dataDict['Pairs']['SRFolder'].append(str(os.path.join('your_SR_Folder', 'SR', dataset, method)))
       dataDict['Pairs']['GTFolder'].append(str(os.path.join('your_GT_Folder', 'GT', dataset)))
   ```

- `dataDict['Name']` is the name of the evaluation run.
- `dataDict['Echo']` is a boolean that controls whether scores are printed to the terminal while evaluating.
- `your_SR_Folder` is the folder where your SR results are stored.
- `your_GT_Folder` is the folder where your GT datasets are stored.
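
Putting the two steps above together, the per-method generation loop might look like this (a minimal sketch under assumed default settings; `build_config` is a hypothetical helper, and `generate_configuration.py` itself may differ, e.g. in how it writes the YAML files):

```python
import os

# Hypothetical settings mirroring the snippets above.
MethodDict = {
    'EnhanceNet': ['BSD100', 'General100', 'Set14'],
    'SRGAN': ['BSD100', 'General100', 'Set14', 'Set5', 'Urban100'],
}
evaluation_name = 'Test'

def build_config(method, datasets):
    """Build the per-method configuration dict assembled step by step above."""
    return {
        'Pairs': {
            'Dataset': datasets,
            'SRFolder': [os.path.join('your_SR_Folder', 'SR', d, method) for d in datasets],
            'GTFolder': [os.path.join('your_GT_Folder', 'GT', d) for d in datasets],
        },
        'Name': evaluation_name,
        'RGB2YCbCr': True,
        'evaluate_Ma': False,
        'max_workers': 16,
        'Echo': True,
    }

# One configuration (and eventually one .yml file) per method.
configs = {method + '.yml': build_config(method, ds) for method, ds in MethodDict.items()}
print(sorted(configs))  # ['EnhanceNet.yml', 'SRGAN.yml']
```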

### Evaluation

#### a. Manual

Run `evaluate_sr_results.py`:

```
usage: python evaluate_sr_results.py [-h] YAML

positional arguments:
  YAML        configuration file

optional arguments:
  -h, --help  show this help message and exit
```

For example:

```
python evaluate_sr_results.py config/Configuration.yml
```
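
The usage text above corresponds to a minimal argparse setup along these lines (a sketch only; the actual script's parser may differ):

```python
import argparse

def build_parser():
    parser = argparse.ArgumentParser()
    # Single positional argument: the path to the YAML configuration file.
    parser.add_argument('YAML', help='configuration file')
    return parser

# Parse an explicit argument list instead of sys.argv for illustration.
args = build_parser().parse_args(['config/Configuration.yml'])
print(args.YAML)  # config/Configuration.yml
```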

#### b. Bash

If you use the bash configuration method, all of the generated configuration files are saved in the `Configuration` folder. A bash file named `Run.bash` is also generated to run all of the configurations. Start the evaluation with:

```
bash Run.bash
```

### Results

The results are written to the `../evaluate/` folder as follows:

- `../evaluate/<Name>/<Name>.log`: Log file of the evaluation.
- `../evaluate/<Name>/<Name>.xlsx`: An .xlsx file storing the evaluation results for each dataset.
- `../evaluate/<Name>/detail/<Dataset>`: Folders storing detailed per-image evaluation data for each dataset, in .csv or .xlsx format.