
CNNBench: A CNN Design-Space Generation Tool and Benchmark


This repository contains CNNBench, a tool for generating and evaluating Convolutional Neural Network (CNN) architectures pertinent to the domain of machine-learning accelerators. The tool can be used to search over a large space of CNN architectures.

Table of Contents

  • Environment setup
  • Basic run of the tool
  • Developer
  • Cite this work
  • License

Environment setup

Clone this repository

git clone https://github.com/jha-lab/cnn_design-space.git
cd cnn_design-space

Set up the Python environment

  • Using pip:
virtualenv cnnbench
source cnnbench/bin/activate
pip install -r requirements.txt
  • Using conda:
conda env create -f environment.yaml

Basic run of the tool

A basic run of the tool comprises the following:

  • CNNs with modules comprising up to two vertices, each being one of the operations in [MAXPOOL3X3, CONV1X1, CONV3X3] (see the sketch after this list).
  • Each module is stacked three times. A base stem of 3x3 convolution with 128 output channels is used. The stack of modules is followed by global average pooling and a final dense softmax layer.
  • Training on the CIFAR-10 dataset.
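
For a concrete picture of such a module, below is a minimal sketch of a NASBench-style encoding (an upper-triangular adjacency matrix plus per-vertex operation labels); this encoding is an assumption for illustration, and the exact schema stored by CNNBench may differ:

# Hypothetical encoding of one module with two internal vertices:
# input -> CONV3X3 -> MAXPOOL3X3 -> output.
# matrix[i][j] == 1 denotes a directed edge from vertex i to vertex j.
module = {
    "matrix": [
        [0, 1, 0, 0],  # input feeds the CONV3X3 vertex
        [0, 0, 1, 0],  # CONV3X3 feeds the MAXPOOL3X3 vertex
        [0, 0, 0, 1],  # MAXPOOL3X3 feeds the output
        [0, 0, 0, 0],  # output has no outgoing edges
    ],
    "ops": ["input", "conv3x3", "maxpool3x3", "output"],
}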

Download and prepare the CIFAR-10 dataset

cd cnnbench
python dataset_downloader.py

To use another dataset (CIFAR-10, CIFAR-100, MNIST, or ImageNet), pass the corresponding input arguments; see python dataset_downloader.py --help.

Generate computational graphs

python generate_library.py

This will create a .json file of all graphs at dataset/dataset.json, using the SHA-256 hashing algorithm and three modules per stack.
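
For intuition, a deterministic SHA-256 hash over a graph encoding like the one above can be computed as in the minimal sketch below; this is an illustration only, not the exact routine in generate_library.py, which may instead use an isomorphism-invariant iterative hash:

import hashlib
import json

def graph_hash(matrix, ops):
    # Serialize the adjacency matrix and operation labels deterministically,
    # then digest the result with SHA-256.
    payload = json.dumps({"matrix": matrix, "ops": ops}, sort_keys=True)
    return hashlib.sha256(payload.encode("utf-8")).hexdigest()

print(graph_hash([[0, 1], [0, 0]], ["input", "output"]))  # 64-hex-character digest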

Run BOSHNAS

python run_boshnas.py

All training scripts use bash and are implemented with the SLURM scheduler, which must be set up before running the experiments.

Other flags can be used to control the training procedure (see python run_boshnas.py --help). The script uses the SLURM scheduler over multiple compute nodes in a cluster (each node is assumed to have one GPU; this can be changed in the script job_scripts/job_train.sh). SLURM can also be used in scenarios where distributed nodes are not available.
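
As a rough illustration of how one training job per architecture could be dispatched through SLURM, here is a hedged Python sketch; the sbatch flags, the job-script argument, and the dispatch logic are assumptions for illustration and do not reproduce the actual code in run_boshnas.py:

import subprocess

def submit_training_job(graph_hash, gpus_per_node=1):
    # Submit one SLURM job for the given architecture. Assumes (hypothetically)
    # that job_scripts/job_train.sh accepts the graph hash as its argument.
    cmd = [
        "sbatch",
        f"--gres=gpu:{gpus_per_node}",  # one GPU per node by default
        "job_scripts/job_train.sh",
        graph_hash,
    ]
    result = subprocess.run(cmd, capture_output=True, text=True, check=True)
    return result.stdout.strip()  # e.g., "Submitted batch job 12345"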

Developer

Shikhar Tuli. For any questions, comments or suggestions, please reach me at [email protected].

Cite this work

Cite our work using the following BibTeX entry:

@article{tuli2022codebench,
  author    = {Tuli, Shikhar and Li, Chia-Hao and Sharma, Ritvik and Jha, Niraj K.},
  title     = {{CODEBench}: A Neural Architecture and Hardware Accelerator Co-Design Framework},
  year      = {2022},
  month     = dec,
  publisher = {Association for Computing Machinery},
  address   = {New York, NY, USA},
  issn      = {1539-9087},
  url       = {https://doi.org/10.1145/3575798},
  doi       = {10.1145/3575798},
  note      = {Just Accepted},
  journal   = {ACM Trans. Embed. Comput. Syst.}
}

License

BSD-3-Clause. Copyright (c) 2022, Shikhar Tuli and Jha Lab. All rights reserved.

See the LICENSE file for more details.