
[FR] Update OpenVINO 2021.2 (w/ Python 3.8 support) #53

Open

bafu opened this issue Aug 30, 2020 · 7 comments
bafu commented Aug 30, 2020

Description

Importing the openvino Python module fails because OpenVINO 2020.1 does not support Python 3.8.

References

  1. Community discussion, https://community.intel.com/t5/Intel-Distribution-of-OpenVINO/Python-3-8-support/td-p/1183409
  2. Intel official release notes, https://software.intel.com/content/www/us/en/develop/articles/openvino-relnotes.html

bafu commented Jan 31, 2021

@bafu bafu changed the title [Bug Report] OpenVINO 2020.1 does not support Python 3.8 [Bug Report] Update OpenVINO 2021.2 (w/ Python 3.8 support) Jan 31, 2021
@bafu bafu changed the title [Bug Report] Update OpenVINO 2021.2 (w/ Python 3.8 support) [FR] Update OpenVINO 2021.2 (w/ Python 3.8 support) Jan 31, 2021
@bafu bafu self-assigned this Feb 21, 2021

bafu commented Feb 21, 2021

OpenVINO APT installation instructions

Runtime package

$ sudo apt-cache search intel-openvino-runtime-ubuntu20
intel-openvino-runtime-ubuntu20-2021.1.110 - Intel® Deep Learning Deployment Toolkit 2021.1 for Linux*
intel-openvino-runtime-ubuntu20-2021.2.200 - Intel® Deep Learning Deployment Toolkit 2021.2 for Linux*

Developer package

$ sudo apt-cache search intel-openvino-dev-ubuntu20
intel-openvino-dev-ubuntu20-2021.1.110 - Intel® Deep Learning Deployment Toolkit 2021.1 for Linux*
intel-openvino-dev-ubuntu20-2021.2.200 - Intel® Deep Learning Deployment Toolkit 2021.2 for Linux*

Install the latest runtime and developer packages

$ sudo apt install intel-openvino-runtime-ubuntu20-2021.2.200 intel-openvino-dev-ubuntu20-2021.2.200
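After installing, the Python bindings live in per-interpreter-version directories, so it is worth confirming that a directory matching your interpreter exists. A minimal sketch, assuming the /opt/intel/openvino/python/python3.x layout seen in the logs below (openvino_python_dir is a hypothetical helper, not part of OpenVINO):

```python
import sys
from pathlib import Path

def openvino_python_dir(prefix="/opt/intel/openvino"):
    """Return the OpenVINO Python bindings directory that matches the
    running interpreter, e.g. /opt/intel/openvino/python/python3.8."""
    ver = f"python{sys.version_info.major}.{sys.version_info.minor}"
    return Path(prefix) / "python" / ver

# If this directory is missing, the installed package does not ship
# bindings for your Python version.
print(openvino_python_dir(), openvino_python_dir().is_dir())
```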


bafu commented Feb 21, 2021

Installation Issues

  1. OpenVINO 2021 no longer creates the unversioned openvino symlink

$ cd /opt/intel
$ ls -l
total 4
lrwxrwxrwx  1 root root   19 Feb 21 11:20 openvino_2021 -> openvino_2021.2.200
drwxr-xr-x 10 root root 4096 Feb 21 11:20 openvino_2021.2.200

Solution: Manually create a compatible openvino symlink

$ sudo ln -s openvino_2021 openvino

  2. The Python environment is not set up by default

$ source /opt/intel/openvino/bin/setupvars.sh 
[setupvars.sh] WARNING: Can not find OpenVINO Python binaries by path /home/bafu/codes/python
[setupvars.sh] WARNING: OpenVINO Python environment does not set properly
[setupvars.sh] OpenVINO environment initialized

setupvars.sh in 2021.2.200 must be sourced from within /opt/intel/openvino/bin/, not from an arbitrary directory. This differs from the installation guide and from the 2020 release.

Solution: Run source from within /opt/intel/openvino/bin/, or modify setupvars.sh.
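The symlink fix above can also be scripted. A minimal sketch, assuming the /opt/intel layout shown above (ensure_openvino_symlink is a hypothetical helper, and creating links under /opt/intel still requires root):

```python
import os
from pathlib import Path

def ensure_openvino_symlink(intel_dir="/opt/intel"):
    """Create <intel_dir>/openvino -> openvino_2021 if it is missing.

    The 2021 packages only create the versioned openvino_2021 link, so
    tools expecting /opt/intel/openvino break without this.
    """
    intel = Path(intel_dir)
    link = intel / "openvino"
    target = intel / "openvino_2021"
    if not link.exists() and target.exists():
        # Relative symlink, mirroring `sudo ln -s openvino_2021 openvino`
        os.symlink("openvino_2021", link)
    return link
```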


bafu commented Feb 21, 2021

Compatibility Issues

$ bn_openvino -h
/usr/lib/python3/dist-packages/apport/report.py:13: DeprecationWarning: the imp module is deprecated in favour of importlib; see the module's documentation for alternative uses
  import fnmatch, glob, traceback, errno, sys, atexit, locale, imp
Traceback (most recent call last):
  File "/home/bafu/.local/bin/bn_openvino", line 11, in <module>
    load_entry_point('berrynet', 'console_scripts', 'bn_openvino')()
  File "/usr/lib/python3/dist-packages/pkg_resources/__init__.py", line 490, in load_entry_point
    return get_distribution(dist).load_entry_point(group, name)
  File "/usr/lib/python3/dist-packages/pkg_resources/__init__.py", line 2854, in load_entry_point
    return ep.load()
  File "/usr/lib/python3/dist-packages/pkg_resources/__init__.py", line 2445, in load
    return self.resolve()
  File "/usr/lib/python3/dist-packages/pkg_resources/__init__.py", line 2451, in resolve
    module = __import__(self.module_name, fromlist=['__name__'], level=0)
  File "/home/bafu/codes/BerryNet/berrynet/service/openvino_service.py", line 29, in <module>
    from berrynet.engine.openvino_engine import OpenVINOClassifierEngine
  File "/home/bafu/codes/BerryNet/berrynet/engine/openvino_engine.py", line 33, in <module>
    from openvino.inference_engine import IENetwork, IEPlugin
ImportError: cannot import name 'IEPlugin' from 'openvino.inference_engine' (/opt/intel/openvino/python/python3.8/openvino/inference_engine/__init__.py)

Confirmed that IEPlugin has been removed from inference_engine:

In [4]: import openvino.inference_engine 
                                         
In [5]: dir(openvino.inference_engine)   
Out[5]:       
['Blob',
 'BlobBuffer',
 'CDataPtr',                             
 'ColorFormat',
 'DataPtr', 
 'ExecutableNetwork',
 'IECore',          
 'IENetwork',
 'InferRequest',
 'InputInfoCPtr',
 'InputInfoPtr',
 'MeanVariant',
 'OrderedDict',
 'PreProcessChannel',
 'PreProcessInfo',
 'ResizeAlgorithm',
 'StatusCode',
 'TensorDesc',
 'WaitMode',
 '__all__',
 '__builtins__',
...

openvinotoolkit/openvino#1455 (comment)

The IEPlugin is going to be removed in the next major release 2021.1, you should use Core API instead.

That is, IEPlugin should be replaced with IECore.

https://docs.openvinotoolkit.org/2020.1/ie_python_api/classie__api_1_1IECore.html
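For BerryNet's openvino_engine.py, the migration can be sketched as follows, using the 2021.x IECore API (load_network here is a hypothetical wrapper, and the import guard is only so the sketch parses on machines without OpenVINO):

```python
# Migration sketch: the removed IEPlugin load pattern, replaced with IECore.
try:
    from openvino.inference_engine import IECore
except ImportError:  # OpenVINO not installed on this machine
    IECore = None

def load_network(model_xml, model_bin, device="CPU"):
    """2021.x replacement for the old pattern:

        plugin = IEPlugin(device=device)          # removed in 2021.1
        net = IENetwork(model=model_xml, weights=model_bin)
        exec_net = plugin.load(network=net)
    """
    ie = IECore()
    net = ie.read_network(model=model_xml, weights=model_bin)
    return ie.load_network(network=net, device_name=device, num_requests=1)
```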


bafu commented Mar 7, 2021

IECore inference example (benchmark util)

https://github.com/openvinotoolkit/openvino/tree/master/tools/benchmark
https://medium.com/analytics-vidhya/tutorial-on-how-to-run-inference-with-openvino-in-2021-a96e5e7c99f8

/opt/intel/openvino/deployment_tools/tools/benchmark_tool/benchmark_app.py
/opt/intel/openvino/python/python3.8/openvino/tools/benchmark

Model IR compatibility is broken

$ python3 benchmark_app.py -m /usr/share/dlmodels/mobilenet-ssd-fp32-openvino-1.0.0/mobilenet-ssd.xml -i ~/Downloads/dog.jpg -d CPU                                                                                         
/opt/intel/openvino/python/python3.8/openvino/tools/benchmark/utils/utils.py:220: SyntaxWarning: "is not" with a literal. Did you mean "!="?
  if arg_name is not '':
/opt/intel/openvino/python/python3.8/openvino/tools/benchmark/utils/utils.py:226: SyntaxWarning: "is not" with a literal. Did you mean "!="?
  if arg_name is not '':
[Step 1/11] Parsing and validating input arguments
/opt/intel/openvino/python/python3.8/openvino/tools/benchmark/main.py:29: DeprecationWarning: The 'warn' method is deprecated, use 'warning' instead
  logger.warn(" -nstreams default value is determined automatically for a device. "
[ WARNING ]  -nstreams default value is determined automatically for a device. Although the automatic selection usually provides a reasonable performance, but it still may be non-optimal for some cases, for more information look at README. 
[Step 2/11] Loading Inference Engine
[ INFO ] InferenceEngine:
         API version............. 2.1.2021.2.0-1877-176bdf51370-releases/2021/2
[ INFO ] Device info
         CPU
         MKLDNNPlugin............ version 2.1
         Build................... 2021.2.0-1877-176bdf51370-releases/2021/2

[Step 3/11] Setting device configuration
[ WARNING ] -nstreams default value is determined automatically for CPU device. Although the automatic selection usually provides a reasonable performance,but it still may be non-optimal for some cases, for more information look at README.
[Step 4/11] Reading network files
[ ERROR ] The support of IR v3 has been removed from the product. Please, convert the original model using the Model Optimizer which comes with this version of the OpenVINO to generate supported IR version.
Traceback (most recent call last):
  File "/opt/intel/openvino/python/python3.8/openvino/tools/benchmark/main.py", line 182, in run
    ie_network = benchmark.read_network(args.path_to_model)
  File "/opt/intel/openvino/python/python3.8/openvino/tools/benchmark/benchmark.py", line 66, in read_network
    ie_network = self.ie.read_network(model_filename, weights_filename)
  File "ie_api.pyx", line 261, in openvino.inference_engine.ie_api.IECore.read_network
  File "ie_api.pyx", line 285, in openvino.inference_engine.ie_api.IECore.read_network
RuntimeError: The support of IR v3 has been removed from the product. Please, convert the original model using the Model Optimizer which comes with this version of the OpenVINO to generate supported IR version.
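A small pre-flight check can tell an obsolete IR apart from the v10 IRs this release reads, instead of catching the RuntimeError above. A sketch, assuming the version attribute on the <net> root element of IR XML files (ir_version is a hypothetical helper; the attribute layout is as observed in generated IRs and may vary by release):

```python
import xml.etree.ElementTree as ET

def ir_version(xml_path):
    """Read the IR version from a model .xml via the <net version="...">
    root attribute; returns 0 if the attribute is absent."""
    root = ET.parse(xml_path).getroot()
    return int(root.get("version", 0))

# Usage sketch: skip read_network() when ir_version(path) < 10 and ask
# the user to re-run the Model Optimizer instead.
```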


bafu commented Mar 7, 2021

Download and Convert Model: MobileNet SSD

https://docs.openvinotoolkit.org/2021.2/omz_tools_downloader_README.html

downloader.py

$ mkdir ~/openvino-models
$ cd /opt/intel/openvino/deployment_tools/tools/model_downloader
$ python3 downloader.py -o ~/openvino-models --name mobilenet-ssd
################|| Downloading mobilenet-ssd ||################

========== Downloading /home/bafu/openvino-models/public/mobilenet-ssd/mobilenet-ssd.prototxt
... 100%, 28 KB, 18576 KB/s, 0 seconds passed

========== Downloading /home/bafu/openvino-models/public/mobilenet-ssd/mobilenet-ssd.caffemodel
... 100%, 22605 KB, 3796 KB/s, 5 seconds passed

converter.py (the generated IR version is 10)

$ mkdir ~/openvino-models/converted-models
$ python3 converter.py -d ~/openvino-models/ -o ~/openvino-models/converted-models --name mobilenet-ssd
========== Converting mobilenet-ssd to IR (FP16)
Conversion command: /usr/bin/python3 -- /opt/intel/openvino/deployment_tools/model_optimizer/mo.py --framework=caffe --data_type=FP16 --output_dir=/home/bafu/openvino-models/converted-models/public/mobilenet-ssd/FP16 --model_name=mobilenet-ssd '--input_shape=[1,3,300,300]' --input=data '--mean_values=data[127.5,127.5,127.5]' '--scale_values=data[127.5]' --output=detection_out --input_model=/home/bafu/openvino-models/public/mobilenet-ssd/mobilenet-ssd.caffemodel --input_proto=/home/bafu/openvino-models/public/mobilenet-ssd/mobilenet-ssd.prototxt

/opt/intel/openvino_2021.2.200/deployment_tools/model_optimizer/mo/main.py:85: SyntaxWarning: "is" with a literal. Did you mean "=="?
  if op is 'k':
Model Optimizer arguments:
Common parameters:
	- Path to the Input Model: 	/home/bafu/openvino-models/public/mobilenet-ssd/mobilenet-ssd.caffemodel
	- Path for generated IR: 	/home/bafu/openvino-models/converted-models/public/mobilenet-ssd/FP16
	- IR output name: 	mobilenet-ssd
	- Log level: 	ERROR
	- Batch: 	Not specified, inherited from the model
	- Input layers: 	data
	- Output layers: 	detection_out
	- Input shapes: 	[1,3,300,300]
	- Mean values: 	data[127.5,127.5,127.5]
	- Scale values: 	data[127.5]
	- Scale factor: 	Not specified
	- Precision of IR: 	FP16
	- Enable fusing: 	True
	- Enable grouped convolutions fusing: 	True
	- Move mean values to preprocess section: 	None
	- Reverse input channels: 	False
Caffe specific parameters:
	- Path to Python Caffe* parser generated from caffe.proto: 	/opt/intel/openvino/deployment_tools/model_optimizer/mo/front/caffe/proto
	- Enable resnet optimization: 	True
	- Path to the Input prototxt: 	/home/bafu/openvino-models/public/mobilenet-ssd/mobilenet-ssd.prototxt
	- Path to CustomLayersMapping.xml: 	Default
	- Path to a mean file: 	Not specified
	- Offsets for a mean file: 	Not specified
Model Optimizer version: 	2021.2.0-1877-176bdf51370-releases/2021/2
[ WARNING ]  
Detected not satisfied dependencies:
	test-generator: installed: 0.1.2, required: == 0.1.1

Please install required versions of components or use install_prerequisites script
/opt/intel/openvino_2021.2.200/deployment_tools/model_optimizer/install_prerequisites/install_prerequisites_caffe.sh
Note that install_prerequisites scripts may install additional components.

[ SUCCESS ] Generated IR version 10 model.
[ SUCCESS ] XML file: /home/bafu/openvino-models/converted-models/public/mobilenet-ssd/FP16/mobilenet-ssd.xml
[ SUCCESS ] BIN file: /home/bafu/openvino-models/converted-models/public/mobilenet-ssd/FP16/mobilenet-ssd.bin
[ SUCCESS ] Total execution time: 8.33 seconds. 
[ SUCCESS ] Memory consumed: 403 MB. 

========== Converting mobilenet-ssd to IR (FP32)
Conversion command: /usr/bin/python3 -- /opt/intel/openvino/deployment_tools/model_optimizer/mo.py --framework=caffe --data_type=FP32 --output_dir=/home/bafu/openvino-models/converted-models/public/mobilenet-ssd/FP32 --model_name=mobilenet-ssd '--input_shape=[1,3,300,300]' --input=data '--mean_values=data[127.5,127.5,127.5]' '--scale_values=data[127.5]' --output=detection_out --input_model=/home/bafu/openvino-models/public/mobilenet-ssd/mobilenet-ssd.caffemodel --input_proto=/home/bafu/openvino-models/public/mobilenet-ssd/mobilenet-ssd.prototxt

/opt/intel/openvino_2021.2.200/deployment_tools/model_optimizer/mo/main.py:85: SyntaxWarning: "is" with a literal. Did you mean "=="?
  if op is 'k':
Model Optimizer arguments:
Common parameters:
	- Path to the Input Model: 	/home/bafu/openvino-models/public/mobilenet-ssd/mobilenet-ssd.caffemodel
	- Path for generated IR: 	/home/bafu/openvino-models/converted-models/public/mobilenet-ssd/FP32
	- IR output name: 	mobilenet-ssd
	- Log level: 	ERROR
	- Batch: 	Not specified, inherited from the model
	- Input layers: 	data
	- Output layers: 	detection_out
	- Input shapes: 	[1,3,300,300]
	- Mean values: 	data[127.5,127.5,127.5]
	- Scale values: 	data[127.5]
	- Scale factor: 	Not specified
	- Precision of IR: 	FP32
	- Enable fusing: 	True
	- Enable grouped convolutions fusing: 	True
	- Move mean values to preprocess section: 	None
	- Reverse input channels: 	False
Caffe specific parameters:
	- Path to Python Caffe* parser generated from caffe.proto: 	/opt/intel/openvino/deployment_tools/model_optimizer/mo/front/caffe/proto
	- Enable resnet optimization: 	True
	- Path to the Input prototxt: 	/home/bafu/openvino-models/public/mobilenet-ssd/mobilenet-ssd.prototxt
	- Path to CustomLayersMapping.xml: 	Default
	- Path to a mean file: 	Not specified
	- Offsets for a mean file: 	Not specified
Model Optimizer version: 	2021.2.0-1877-176bdf51370-releases/2021/2
[ WARNING ]  
Detected not satisfied dependencies:
	test-generator: installed: 0.1.2, required: == 0.1.1

Please install required versions of components or use install_prerequisites script
/opt/intel/openvino_2021.2.200/deployment_tools/model_optimizer/install_prerequisites/install_prerequisites_caffe.sh
Note that install_prerequisites scripts may install additional components.

[ SUCCESS ] Generated IR version 10 model.
[ SUCCESS ] XML file: /home/bafu/openvino-models/converted-models/public/mobilenet-ssd/FP32/mobilenet-ssd.xml
[ SUCCESS ] BIN file: /home/bafu/openvino-models/converted-models/public/mobilenet-ssd/FP32/mobilenet-ssd.bin
[ SUCCESS ] Total execution time: 8.21 seconds. 
[ SUCCESS ] Memory consumed: 408 MB. 
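For scripted pipelines, the conversion command logged above can be assembled in Python and handed to subprocess. The arguments are copied verbatim from the converter.py log; the convert_caffe_to_ir wrapper itself is hypothetical:

```python
import sys

def convert_caffe_to_ir(model, proto, out_dir, data_type="FP32",
                        mo="/opt/intel/openvino/deployment_tools/model_optimizer/mo.py"):
    """Build the Model Optimizer command line for mobilenet-ssd
    (shapes, mean/scale values, and output layer taken from the log above)."""
    return [sys.executable, mo,
            "--framework=caffe",
            f"--data_type={data_type}",
            f"--output_dir={out_dir}",
            "--model_name=mobilenet-ssd",
            "--input_shape=[1,3,300,300]",
            "--input=data",
            "--mean_values=data[127.5,127.5,127.5]",
            "--scale_values=data[127.5]",
            "--output=detection_out",
            f"--input_model={model}",
            f"--input_proto={proto}"]

# Execute with: subprocess.run(convert_caffe_to_ir(...), check=True)
```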


bafu commented Mar 7, 2021

Benchmark: CPU + MobileNet SSD

Note: The benchmark run takes about one minute (60000 ms duration by default).

$ python3 benchmark_app.py -m ~/openvino-models/converted-models/public/mobilenet-ssd/FP32/mobilenet-ssd.xml -d CPU -i ~/Downloads/dog.jpg
/opt/intel/openvino/python/python3.8/openvino/tools/benchmark/utils/utils.py:220: SyntaxWarning: "is not" with a literal. Did you mean "!="?
  if arg_name is not '':
/opt/intel/openvino/python/python3.8/openvino/tools/benchmark/utils/utils.py:226: SyntaxWarning: "is not" with a literal. Did you mean "!="?
  if arg_name is not '':
[Step 1/11] Parsing and validating input arguments
/opt/intel/openvino/python/python3.8/openvino/tools/benchmark/main.py:29: DeprecationWarning: The 'warn' method is deprecated, use 'warning' instead
  logger.warn(" -nstreams default value is determined automatically for a device. "
[ WARNING ]  -nstreams default value is determined automatically for a device. Although the automatic selection usually provides a reasonable performance, but it still may be non-optimal for some cases, for more information look at README. 
[Step 2/11] Loading Inference Engine
[ INFO ] InferenceEngine:
         API version............. 2.1.2021.2.0-1877-176bdf51370-releases/2021/2
[ INFO ] Device info
         CPU
         MKLDNNPlugin............ version 2.1
         Build................... 2021.2.0-1877-176bdf51370-releases/2021/2

[Step 3/11] Setting device configuration
[ WARNING ] -nstreams default value is determined automatically for CPU device. Although the automatic selection usually provides a reasonable performance,but it still may be non-optimal for some cases, for more information look at README.
[Step 4/11] Reading network files
[ INFO ] Read network took 39.65 ms
[Step 5/11] Resizing network to match image sizes and given batch
[ INFO ] Network batch size: 1
[Step 6/11] Configuring input of the model
[Step 7/11] Loading the model to the device
[ INFO ] Load network took 257.37 ms
[Step 8/11] Setting optimal runtime parameters
[Step 9/11] Creating infer requests and filling input blobs with images
[ INFO ] Network input 'data' precision U8, dimensions (NCHW): 1 3 300 300
/opt/intel/openvino/python/python3.8/openvino/tools/benchmark/utils/inputs_filling.py:91: DeprecationWarning: The 'warn' method is deprecated, use 'warning' instead
  logger.warn(
[ WARNING ] Some image input files will be duplicated: 4 files are required, but only 1 were provided
[ INFO ] Infer Request 0 filling
[ INFO ] Prepare image /home/bafu/Downloads/dog.jpg
[ WARNING ] Image is resized from ((576, 768)) to ((300, 300))
[ INFO ] Infer Request 1 filling
[ INFO ] Prepare image /home/bafu/Downloads/dog.jpg
[ WARNING ] Image is resized from ((576, 768)) to ((300, 300))
[ INFO ] Infer Request 2 filling
[ INFO ] Prepare image /home/bafu/Downloads/dog.jpg
[ WARNING ] Image is resized from ((576, 768)) to ((300, 300))
[ INFO ] Infer Request 3 filling
[ INFO ] Prepare image /home/bafu/Downloads/dog.jpg
[ WARNING ] Image is resized from ((576, 768)) to ((300, 300))
[Step 10/11] Measuring performance (Start inference asyncronously, 4 inference requests using 4 streams for CPU, limits: 60000 ms duration)
[ INFO ] First inference took 12.39 ms
[Step 11/11] Dumping statistics report
Count:      5124 iterations
Duration:   60021.76 ms
Latency:    31.86 ms
Throughput: 85.37 FPS
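As a sanity check on the report: throughput is iterations divided by wall-clock duration, and with four asynchronous requests in flight it is not simply the inverse of the average per-request latency:

```python
# Figures from the benchmark_app report above.
count = 5124            # iterations
duration_ms = 60021.76  # wall-clock duration

throughput_fps = count / (duration_ms / 1000.0)
print(f"{throughput_fps:.2f} FPS")  # matches the reported 85.37 FPS
```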
