
Releases: microsoft/nni

Release 3.0 - 21/8/2023

14 Sep 12:12
36ba04c

Web Portal

  • New look and feel

Neural Architecture Search

  • Breaking change: nni.retiarii is no longer maintained and tested. Please migrate to nni.nas.
    • Inherit nni.nas.nn.pytorch.ModelSpace, rather than use @model_wrapper.
    • Use nni.choice, rather than nni.nas.nn.pytorch.ValueChoice.
    • Use nni.nas.experiment.NasExperiment and NasExperimentConfig, rather than RetiariiExperiment.
    • Use nni.nas.model_context, rather than nni.nas.fixed_arch.
    • Please refer to the quickstart for more changes; a brief migration sketch follows this list.
  • A refreshed experience for constructing model spaces.
    • Enhanced debuggability via freeze() and simplify() APIs.
    • Enhanced expressiveness with nni.choice, nni.uniform, nni.normal, etc.
    • Enhanced customization experience with MutableModule, ModelSpace and ParametrizedModule.
    • Search spaces with constraints are now supported.
  • Improved robustness and stability of strategies.
    • Supported search space types are now enriched for PolicyBasedRL, ENAS and Proxyless.
    • Each step of one-shot strategies can be executed alone: model mutation, evaluator mutation and training.
    • Most multi-trial strategies now support specifying a seed for reproducibility.
    • Performance of strategies has been verified on a set of benchmarks.
  • Strategy/engine middleware.
    • Filtering, replicating, deduplicating or retrying models submitted by any strategy.
    • Merging or transforming models before executing (e.g., CGO).
    • Arbitrarily long chains of middleware.
  • New execution engine.
    • Improved debuggability via SequentialExecutionEngine: trials can run in a single process and breakpoints are effective.
    • The old execution engine is now decomposed into execution engine and model format.
    • Enhanced extensibility of execution engines.
  • NAS profiler and hardware-aware NAS.
    • New profilers profile a model space and quickly compute a profiling result for a sampled architecture or a distribution of architectures (FlopsProfiler, NumParamsProfiler and NnMeterProfiler are officially supported).
    • Profilers can be assembled with arbitrary strategies, including both multi-trial and one-shot.
    • Profilers are extensible; strategies can be assembled with arbitrary customized profilers.
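
For readers migrating from retiarii, here is a minimal sketch of the new-style model space, loosely following the 3.0 quickstart. The tiny architecture, the MutableDropout helper and the sample dict passed to freeze() are illustrative assumptions, so check the quickstart for the exact names:

```python
import nni
import torch.nn as nn
from nni.nas.nn.pytorch import ModelSpace, LayerChoice, MutableDropout

class MyModelSpace(ModelSpace):          # inherit ModelSpace instead of using @model_wrapper
    def __init__(self):
        super().__init__()
        # layer-level choice between two candidate convolutions
        self.conv = LayerChoice([
            nn.Conv2d(3, 16, 3, padding=1),
            nn.Conv2d(3, 16, 5, padding=2),
        ], label='conv')
        # value-level choice: nni.choice replaces nni.nas.nn.pytorch.ValueChoice
        self.dropout = MutableDropout(nni.choice('dropout', [0.25, 0.5, 0.75]))

    def forward(self, x):
        return self.dropout(self.conv(x))

space = MyModelSpace()
print(space.simplify())                              # inspect the raw search space for debugging
model = space.freeze({'conv': 0, 'dropout': 0.5})    # materialize one sample (sample format assumed)
# The space, an evaluator and a strategy are then passed to
# nni.nas.experiment.NasExperiment, which replaces RetiariiExperiment.
```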

Model Compression

  • The compression framework has been refactored; the new import path is nni.contrib.compression.
    • Config list keys are refactored to support more detailed compression configurations (see the sketch after this list). view doc
    • Support fusing multiple compression methods.
    • Support distillation as a basic compression component.
    • Support more compression targets, such as input, output and any registered parameters.
    • Support compressing any module type by customizing module settings.
  • Model compression support in DeepSpeed mode.
  • Fix example bugs.
  • Pruning
    • Pruner interfaces have been fine-tuned for ease of use. view doc
    • Support configuring granularity in pruners. view doc
    • Support different masking modes: multiplying by zero or adding a large negative value.
    • Support manually setting dependency groups and global groups. view doc
    • A new, more powerful pruning speedup is released; applicability and robustness have been greatly improved. view doc
    • The end-to-end transformer compression tutorial has been updated and achieves more aggressive compression. view doc
    • Fix config list in the examples.
  • Quantization
    • Support using Evaluator to handle training/inference.
    • Support more module fusion combinations. view doc
    • Support configuring granularity in quantizers. view doc
    • Bias correction is supported in the Post Training Quantization algorithm.
    • LSQ+ quantization algorithm is supported.
  • Distillation
  • Compression documents are now updated for the new framework; for the old version, please view the v2.10 doc.
  • New compression examples are under nni/examples/compression.
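
A minimal sketch of the refactored config-list style is shown below; the choice of L1NormPruner and the exact key names (op_types, sparse_ratio) are assumptions drawn from the new-framework docs, so verify them against the linked pages:

```python
import torch.nn as nn
from nni.contrib.compression.pruning import L1NormPruner   # new 3.0 import path

model = nn.Sequential(nn.Linear(64, 64), nn.ReLU(), nn.Linear(64, 10))

# new-style config list: finer-grained keys than the v2.x 'sparsity' entry
config_list = [{
    'op_types': ['Linear'],     # which module types to prune
    'sparse_ratio': 0.5,        # assumed key name for the target sparsity
}]

pruner = L1NormPruner(model, config_list)
_, masks = pruner.compress()    # return convention assumed to mirror the old two-value API
```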

Training Services

  • Breaking change: NNI v3.0 cannot resume experiments created by NNI v2.x
  • Local training service:
    • Reduced latency of creating trials
    • Fixed "GPU metric not found"
    • Fixed bugs about resuming trials
  • Remote training service:
    • reuse_mode now defaults to False; setting it to True will fall back to the v2.x remote training service (see the sketch after this list)
    • Reduced latency of creating trials
    • Fixed "GPU metric not found"
    • Fixed bugs about resuming trials
    • Supported viewing trial logs on the web portal
    • Supported automatic recovery after temporary server failures (network fluctuation, out of memory, etc.)
  • Removed IoC and unused training services.
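
For reference, a minimal sketch of opting back into the v2.x-style remote service via the Python experiment API; the field names (reuse_mode, machine_list, ssh_key_file) are assumptions based on the v2.x config reference, so double-check them for your NNI version:

```python
from nni.experiment import Experiment
from nni.experiment.config import RemoteMachineConfig

experiment = Experiment('remote')
# reuse_mode now defaults to False; True falls back to the v2.x remote service
experiment.config.training_service.reuse_mode = True
experiment.config.training_service.machine_list = [
    RemoteMachineConfig(host='192.0.2.1', user='nni', ssh_key_file='~/.ssh/id_rsa'),
]
experiment.config.trial_command = 'python trial.py'   # assumed trial entry point
experiment.config.trial_concurrency = 2
```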

NNI v3.0 Preview Release (v3.0rc1)

10 May 04:36
97a85c8

Web Portal

  • New look and feel

Neural Architecture Search

  • Breaking change: nni.retiarii is no longer maintained and tested. Please migrate to nni.nas.
    • Inherit nni.nas.nn.pytorch.ModelSpace, rather than use @model_wrapper.
    • Use nni.choice, rather than nni.nas.nn.pytorch.ValueChoice.
    • Use nni.nas.experiment.NasExperiment and NasExperimentConfig, rather than RetiariiExperiment.
    • Use nni.nas.model_context, rather than nni.nas.fixed_arch.
    • Please refer to the quickstart for more changes.
  • A refreshed experience for constructing model spaces.
    • Enhanced debuggability via freeze() and simplify() APIs.
    • Enhanced expressiveness with nni.choice, nni.uniform, nni.normal, etc.
    • Enhanced customization experience with MutableModule, ModelSpace and ParametrizedModule.
    • Search spaces with constraints are now supported.
  • Improved robustness and stability of strategies.
    • Supported search space types are now enriched for PolicyBasedRL, ENAS and Proxyless.
    • Each step of one-shot strategies can be executed alone: model mutation, evaluator mutation and training.
    • Most multi-trial strategies now support specifying a seed for reproducibility.
    • Performance of strategies has been verified on a set of benchmarks.
  • Strategy/engine middleware.
    • Filtering, replicating, deduplicating or retrying models submitted by any strategy.
    • Merging or transforming models before executing (e.g., CGO).
    • Arbitrarily long chains of middleware.
  • New execution engine.
    • Improved debuggability via SequentialExecutionEngine: trials can run in a single process and breakpoints are effective.
    • The old execution engine is now decomposed into execution engine and model format.
    • Enhanced extensibility of execution engines.
  • NAS profiler and hardware-aware NAS.
    • New profilers profile a model space and quickly compute a profiling result for a sampled architecture or a distribution of architectures (FlopsProfiler, NumParamsProfiler and NnMeterProfiler are officially supported).
    • Profilers can be assembled with arbitrary strategies, including both multi-trial and one-shot.
    • Profilers are extensible; strategies can be assembled with arbitrary customized profilers.

Compression

  • The compression framework has been refactored; the new import path is nni.contrib.compression.
    • Config list keys are refactored to support more detailed compression configurations. view doc
    • Support fusing multiple compression methods. view doc
    • Support distillation as a basic compression component. view doc
    • Support more compression targets, such as input, output and any registered parameters. view doc
    • Support compressing any module type by customizing module settings. view doc
  • Pruning
    • Pruner interfaces have been fine-tuned for ease of use. view doc
    • Support configuring granularity in pruners. view doc
    • Support different masking modes: multiplying by zero or adding a large negative value.
    • Support manually setting dependency groups and global groups. view doc
    • A new, more powerful pruning speedup is released; applicability and robustness have been greatly improved. view doc
    • The end-to-end transformer compression tutorial has been updated and achieves more aggressive compression. view doc
  • Quantization
    • Support using Evaluator to handle training/inference.
    • Support more module fusion combinations. view doc
    • Support configuring granularity in quantizers. view doc
  • Distillation
  • Compression documents are now updated for the new framework; for the old version, please view the v2.10 doc.
  • New compression examples are under nni/examples/compression.

Training Services

  • Breaking change: NNI v3.0 cannot resume experiments created by NNI v2.x
  • Local training service:
    • Reduced latency of creating trials
    • Fixed "GPU metric not found"
    • Fixed bugs about resuming trials
  • Remote training service:
    • reuse_mode now defaults to False; setting it to True will fall back to the v2.x remote training service
    • Reduced latency of creating trials
    • Fixed "GPU metric not found"
    • Fixed bugs about resuming trials
    • Supported viewing trial logs on the web portal
    • Supported automatic recovery after temporary server failures (network fluctuation, out of memory, etc.)

NNI v2.10 Release

14 Nov 10:58
c31d257

Neural Architecture Search

  • Added trial deduplication for evolutionary search.
  • Fixed the racing issue in RL strategy on submitting models.
  • Fixed an issue introduced by the trial recovery feature.
  • Fixed import error of PyTorch Lightning in NAS.

Compression

  • Supported schema parsing by replacing torch._C.parse_schema in PyTorch 1.8.0 in ModelSpeedup.
  • Fixed a bug where rand_like_with_shape in speedup could easily overflow when dtype=torch.int8.
  • Fixed the propagation error with view tensors in speedup.

Hyper-parameter optimization

  • Supported rerunning trials interrupted by the termination of an NNI experiment when that experiment is resumed.
  • Fixed a dependency issue of the Anneal tuner by making its dependency optional.
  • Fixed a bug where the tuner might lose connection in long experiments.

Training service

  • Fixed a bug where the trial code directory could not contain non-English characters.

Web portal

  • Fixed a column error on the HPO experiment hyper-parameters page by using localStorage.
  • Fixed a link error in the About menu on the WebUI.

Known issues

  • ModelSpeedup does not support non-tensor intermediate variables.

NNI v2.9 Release

07 Sep 14:07
dab51f7

Neural Architecture Search

  • New tutorial of model space hub and one-shot strategy. (tutorial)
  • Add pretrained checkpoints to AutoFormer. (doc)
  • Support loading the checkpoint of a trained supernet into a subnet. (doc)
  • Support viewing and resuming NAS experiments. (doc)

Enhancements

  • Support fit_kwargs in lightning evaluator. (doc)
  • Support drop_path and auxiliary_loss in NASNet. (doc)
  • Support gradient clipping in DARTS. (doc)
  • Add export_probs to monitor the architecture weights.
  • Rewrite configure_optimizers, the functions that step optimizers/schedulers, and other hooks, for simplicity and compatibility with the latest Lightning (v1.7).
  • Align implementation of DifferentiableCell with DARTS official repo.
  • Re-implementation of ProxylessNAS.
  • Move nni.retiarii code-base to nni.nas.

Bug fixes

  • Fix a performance issue caused by tensor formatting in weighted_sum.
  • Fix a misuse of lambda expression in NAS-Bench-201 search space.
  • Fix the gumbel temperature schedule in Gumbel DARTS.
  • Fix the architecture weight sharing when sharing labels in differentiable strategies.
  • Fix the memo reusing in exporting differentiable cell.

Compression

  • New tutorial of pruning transformer model. (tutorial)
  • Add TorchEvaluator, LightningEvaluator, TransformersEvaluator to ease the expression of training logic in pruner. (doc, API)
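
A minimal sketch of the Evaluator-based workflow is shown below; the import path and the training_func signature are paraphrased from the linked docs and should be treated as assumptions:

```python
import nni
import torch
import torch.nn.functional as F
from nni.compression.pytorch import TorchEvaluator   # assumed v2.9 import path

model = torch.nn.Linear(10, 2)
train_loader = [(torch.randn(8, 10), torch.randint(0, 2, (8,))) for _ in range(4)]

def training_func(model, optimizers, criterion, lr_schedulers=None,
                  max_steps=None, max_epochs=None):
    # an ordinary training loop; the pruner invokes it whenever it needs (re)training
    model.train()
    for data, target in train_loader:
        optimizers.zero_grad()
        criterion(model(data), target).backward()
        optimizers.step()

# optimizers handed to evaluators are wrapped with nni.trace so they can be re-created
traced_optimizer = nni.trace(torch.optim.Adam)(model.parameters(), lr=1e-3)
evaluator = TorchEvaluator(training_func, traced_optimizer, F.cross_entropy)
# `evaluator` is then passed to an Evaluator-based pruner in place of trainer/criterion arguments
```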

Enhancements

  • Promote the Evaluator-based API for all pruners; the old API is deprecated and will be removed in v3.0. (doc)
  • Greatly enlarge the set of supported operators in pruning speedup via automatic operator conversion.
  • Support lr_scheduler in pruning by using Evaluator.
  • Support pruning NLP task in ActivationAPoZRankPruner and ActivationMeanRankPruner.
  • Add training_steps, regular_scale, movement_mode, sparse_granularity for MovementPruner. (doc)
  • Add GroupNorm replacement in pruning speedup. Thanks to external contributor @cin-xing.
  • Optimize balance mode performance in LevelPruner.

Bug fixes

  • Fix the invalid dependency_aware mode in scheduled pruners.
  • Fix the bug where bias mask cannot be generated.
  • Fix the bug where max_sparsity_per_layer has no effect.
  • Fix Linear and LayerNorm speedup replacement in NLP task.
  • Fix a failure when tracing LightningModule in pytorch_lightning >= 1.7.0.

Hyper-parameter optimization

  • Fix the bug that weights are not defined correctly in adaptive_parzen_normal of TPE.

Training service

  • Fix the trialConcurrency bug in the K8S training service: use ${envId}_run.sh to replace run.sh.
  • Fix the upload dir bug in the K8S training service: use a separate working directory for each experiment. Thanks to external contributor @amznero.

Web portal

  • Support dict keys in Default metric chart in the detail page.
  • Show experiment error message with small popup windows in the bottom right of the page.
  • Upgrade React router to v6 to fix index router issue.
  • Fix the issue of details page crashing due to choices containing None.
  • Fix the issue of missing dict intermediate dropdown in comparing trials dialog.

Known issues

  • Activation-based pruners cannot support the [batch, seq, hidden] data layout.
  • Failed trials are NOT auto-submitted when experiment is resumed (#4931 is reverted due to its pitfalls).

NNI v2.8 Release

22 Jun 04:57
e8c78bb

Neural Architecture Search

  • Align the user experience of one-shot NAS with multi-trial NAS, i.e., users can use one-shot NAS by specifying the corresponding strategy (doc); see the sketch after this list
  • Support multi-GPU training of one-shot NAS
  • Preview Support loading/retraining the pre-searched models of some search spaces, i.e., 18 models in 4 different search spaces (doc)
  • Support AutoFormer search space in search space hub, thanks to our collaborators @nbl97 and @penghouwen
  • One-shot NAS supports the NAS APIs repeat and cell
  • Refactor RetiariiExperiment to share the common implementation with the HPO experiment
  • CGO supports pytorch-lightning 1.6
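
A small sketch of what the aligned experience looks like; the availability of DARTS under nni.retiarii.strategy is an assumption based on the linked doc:

```python
# v2.8-era sketch: one-shot algorithms are selected the same way as multi-trial strategies
import nni.retiarii.strategy as strategy

multi_trial = strategy.Random()   # multi-trial exploration strategy
one_shot = strategy.DARTS()       # one-shot strategy (name assumed), used identically
# either object is then passed to a RetiariiExperiment together with the model space and evaluator
```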

Model Compression

  • Preview Refactored and improved automatic model compression with a new CompressionExperiment
  • Support customizing the module replacement function for unsupported modules in model speedup (doc)
  • Support module replacement functions for some modules requested by users
  • Support output_padding for convtranspose2d in model speedup, thanks to external contributor @haoshuai-orka

Hyper-Parameter Optimization

  • Make config.tuner.name case insensitive
  • Allow writing configurations of advisor in tuner format, i.e., aligning the configuration of advisor and tuner

Experiment

  • Support launching multiple HPO experiments in one process

  • Internal refactors and improvements

    • Refactor of the logging mechanism in NNI
    • Refactor of NNI manager globals for flexible and high extensibility
    • Migrate dispatcher IPC to WebSocket
    • Decouple locking from the experiment manager logic
    • Use launcher's sys.executable to detect Python interpreter

WebUI

  • Improve user experience of trial ordering in the overview page
  • Fix the update issue in the trial detail page

Documentation

  • A new translation framework for documentation
  • Add a new quantization demo (doc)

Notable Bugfixes

  • Fix TPE import issue for old metrics
  • Fix the issue in TPE nested search space
  • Support RecursiveScriptModule in speedup
  • Fix the issue of failed "implicit type cast" in merge_parameter()

NNI v2.7 Release

18 Apr 13:06
1546962

Documentation

A full-scale upgrade of the documentation, with significant improvements in the reading experience, practical tutorials, and examples.

Hyper-Parameter Optimization

  • [Improvement] TPE and random tuners will not generate duplicate hyperparameters anymore.
  • [Improvement] Most Python APIs now have type annotations.

Neural Architecture Search

  • Jointly search for architecture and hyper-parameters: ValueChoice in evaluator. (doc) See the sketch after this list.
  • Support composition (transformation) of one or several value choices. (doc)
  • Enhanced Cell API (merge_op, preprocessor, postprocessor). (doc)
  • The argument depth in the Repeat API allows ValueChoice. (doc)
  • Support loading state_dict between sub-net and super-net. (doc, example in spos)
  • Support BN fine-tuning and evaluation in SPOS example. (doc)
  • Experimental Model hyper-parameter choice. (doc)
  • Preview Lightning implementation for Retiarii including DARTS, ENAS, ProxylessNAS and RandomNAS. (example usage)
  • Preview A search space hub that contains 10 search spaces. (code)
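
A brief sketch of the two ValueChoice features above; the lightning Classification evaluator and its learning_rate argument are used for illustration, so treat the exact names as assumptions:

```python
import nni.retiarii.nn.pytorch as nn
import nni.retiarii.evaluator.pytorch.lightning as pl

# composition / transformation of a value choice: derived values stay in sync with 'width'
width = nn.ValueChoice([16, 32, 64], label='width')
double_width = width * 2

# ValueChoice in the evaluator: jointly search a hyper-parameter with the architecture
evaluator = pl.Classification(
    learning_rate=nn.ValueChoice([1e-3, 1e-2], label='lr'),
    max_epochs=1,
)
```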

Model Compression

  • Pruning V2 is promoted to the default pruning framework; the old pruning framework is legacy and will be kept for a few releases. (doc)
  • A new pruning mode balance is supported in LevelPruner. (doc)
  • Support coarse-grained pruning in ADMMPruner. (doc)
  • [Improvement] Support more operation types in pruning speedup.
  • [Improvement] Optimize performance of some pruners.

Experiment

  • [Improvement] Experiment.run() no longer stops the web portal on return.

Notable Bugfixes

  • Fixed: the experiment list could not open experiments with a prefix.
  • Fixed: serializer for complex kinds of arguments.
  • Fixed: some typos in code. (thanks @a1trl9 @mrshu)
  • Fixed: cross-layer dependency issue in pruning speedup.
  • Fixed: unchecking a trial did not work in the detail table.
  • Fixed: the name | id filter bug on the experiment management page.

NNI v2.6.1 Release

18 Feb 09:34
70706eb

Bug Fixes

  • Fix a bug where the new TPE does not support dict metrics.
  • Fix a missing comma. (Thanks to @mrshu)

NNI v2.6 Release

19 Jan 08:30
0d3802a

NOTE: NNI v2.6 is the last version that supports Python 3.6. From the next release, NNI will require Python 3.7+.

Hyper-Parameter Optimization

Experiment

  • The legacy experiment config format is now deprecated. (doc of new config)
    • If you are still using the legacy format, nnictl will show the equivalent new config on start. Please save it to replace the old one.
  • nnictl now uses nni.experiment.Experiment APIs as its backend. The output messages of the create, resume, and view commands have changed.
  • Added Kubeflow and Frameworkcontroller support to hybrid mode. (doc)
  • The hidden tuner manifest file has been updated. This should be transparent to users, but if you encounter issues like failed to find tuner, please try to remove ~/.config/nni.

Algorithms

  • Random tuner now supports classArgs seed. (doc)
  • TPE tuner is refactored: (doc); a config sketch follows this list.
    • Support classArgs seed.
    • Support classArgs tpe_args for expert users to customize algorithm behavior.
    • Parallel optimization is now turned on by default. To turn it off, set tpe_args.constant_liar_type to null (or None in Python).
    • parallel_optimize and constant_liar_type have been removed. If you are using them, please update your config to use tpe_args.constant_liar_type instead.
  • Grid search tuner now supports all search space types, including uniform, normal, and nested choice. (doc)
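
For illustration, the refactored options expressed through the Python experiment API (the YAML equivalent uses classArgs; the surrounding experiment fields are just a minimal assumed setup):

```python
from nni.experiment import Experiment

experiment = Experiment('local')
experiment.config.trial_command = 'python trial.py'       # assumed trial entry point
experiment.config.search_space = {'x': {'_type': 'uniform', '_value': [0, 1]}}
experiment.config.tuner.name = 'TPE'
experiment.config.tuner.class_args = {
    'optimize_mode': 'maximize',
    'seed': 42,                                  # new: reproducible runs
    'tpe_args': {'constant_liar_type': None},    # new: turn off parallel optimization
}
```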

Neural Architecture Search

  • Enhancements to serialization utilities (doc) and changes to the recommended practice for customizing evaluators. (doc)
  • Support latency constraint on edge device for ProxylessNAS based on nn-Meter. (doc)
  • Trial parameters are now displayed in a friendlier way in Retiarii experiments.
  • Refactor NAS examples of ProxylessNAS and SPOS.

Model Compression

  • New Pruner Supported in Pruning V2
    • Auto-Compress Pruner (doc)
    • AMC Pruner (doc)
    • Movement Pruning Pruner (doc)
  • Support nni.trace-wrapped Optimizer in Pruning V2. To affect the user experience as little as possible, the optimizer's input parameters are traced. (doc) See the sketch after this list.
  • Optimized the memory usage of Taylor Pruner, APoZ Activation Pruner, and Mean Activation Pruner in V2.
  • Add more examples for Pruning V2.
  • Add document for pruning config list. (doc)
  • The masks_file parameter of ModelSpeedup now accepts a pathlib.Path object. (Thanks to @dosemeion) (doc)
  • Bug Fix
    • Fix Slim Pruner in V2 not sparsifying the BN weight.
    • Fix Simulated Annealing Task Generator generating configs that ignore 0 sparsity.
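
The nni.trace wrapping mentioned above looks roughly like this; only the nni.trace call reflects the documented pattern, and the surrounding model and optimizer settings are illustrative:

```python
import nni
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(32, 32), nn.ReLU(), nn.Linear(32, 10))

# wrap the optimizer class with nni.trace so Pruning V2 can record and replay
# its constructor arguments when it rebuilds the optimizer internally
traced_optimizer = nni.trace(torch.optim.SGD)(model.parameters(), lr=0.01, momentum=0.9)
# the traced optimizer is then passed to a V2 pruner in place of a plain torch.optim.SGD instance
```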

Documentation

  • Supported GitHub feature "Cite this repository".
  • Updated index page of readthedocs.
  • Updated Chinese documentation.
    • From now on, NNI only maintains translation for the most important docs and ensures they are up to date.
  • Reorganized HPO tuners' doc.

Bugfixes

  • Fixed a bug where a numpy array is used as a truth value. (Thanks to @khituras)
  • Fixed a bug in updating search space.
  • Fixed a bug that HPO search space file does not support scientific notation and tab indent.
    • For now NNI does not support mixing scientific notation and YAML features. We are waiting for PyYAML to update.
  • Fixed a bug that causes DARTS 2nd order to crash.
  • Fixed a bug that causes deep copy of mutation primitives (e.g., LayerChoice) to crash.
  • Removed blank space at the bottom of the Web UI overview page.

NNI v2.5 Release

04 Nov 00:55
6a082fe

Model Compression

  • New major version of pruning framework (doc)
    • Iterative pruning is more automated; users can implement iterative pruning with less code.
    • Support exporting intermediate models in the iterative pruning process.
    • The implementations of the pruning algorithms are closer to the papers.
    • Users can easily customize their own iterative pruning by using PruningScheduler.
    • Optimized the basic pruners' underlying mask-generation logic, making it easier to extend with new functions.
    • Optimized the memory usage of the pruners.
  • MobileNetV2 end-to-end example (notebook)
  • Improved QAT quantizer (doc)
    • Support dtype and scheme customization
    • Support DataParallel (DP) multi-GPU training
    • Support load_calibration_config
  • Model speed-up now supports directly loading the mask (doc)
  • Support speedup of depth-wise convolution
  • Support bn-folding for LSQ quantizer
  • Support QAT and LSQ resume from PTQ
  • Added doc for observer quantizer (doc)

Neural Architecture Search

  • NAS benchmark (doc)
    • Support benchmark table lookup in experiments
    • New data preparation approach
  • Improved quick start doc
  • Experimental CGO execution engine (doc)

Hyper-Parameter Optimization

  • New training platform: Alibaba DSW+DLC (doc)
  • Support passing ConfigSpace definition directly to BOHB (doc) (thanks to @khituras)
  • Reformatted experiment config doc
  • Added example config files for Windows (thanks to @politecat314)
  • FrameworkController now supports reuse mode

Fixed Bugs

  • Experiment cannot start due to platform timestamp format (issue #4077 #4083)
  • Cannot use 1e-5 in search space (issue #4080)
  • Dependency version conflict caused by ConfigSpace (issue #3909) (thanks to @jexxers)
  • Hardware-aware SPOS example does not work (issue #4198)
  • Web UI show wrong remaining time when duration exceeds limit (issue #4015)
  • cudnn.deterministic is always set in AMC pruner (#4117) thanks to @mstczuo

And...

New emoticons!

Install from pypi

NNI v2.4 Release

12 Aug 00:36
9a4d0d6

Major Updates

Neural Architecture Search

  • NAS visualization: visualize model graph through Netron (#3878)
  • Support NAS bench 101/201 on Retiarii framework (#3871 #3920)
  • Support hypermodule AutoActivation (#3868)
  • Support PyTorch v1.8/v1.9 (#3937)
  • Support Hardware-aware NAS with nn-Meter (#3938)
  • Enable fixed_arch on Retiarii (#3972)

Model Compression

  • Refactor of ModelSpeedup: auto shape/mask inference (#3462)
  • Added more examples for ModelSpeedup (#3880)
  • Support global sort for Taylor pruning (#3896)
  • Support TransformerHeadPruner (#3884)
  • Support batch normalization folding in QAT quantizer (#3911, thanks to the external contributor @chenbohua3)
  • Support post-training observer quantizer (#3915, thanks to the external contributor @chenbohua3)
  • Support ModelSpeedup for Slim Pruner (#4008)
  • Support TensorRT 8.0.0 in ModelSpeedup (#3866)

Hyper-parameter Tuning

  • Improve HPO benchmarks (#3925)
  • Improve type validation of user defined search space (#3975)

Training service & nnictl

  • Support JupyterLab (#3668 #3954)
  • Support viewing experiment from experiment folder (#3870)
  • Support kubeflow in training service reuse framework (#3919)
  • Support viewing trial log on WebUI for an experiment launched in view mode (#3872)

Minor Updates & Bug Fixes

  • Fix a failure when exiting Retiarii experiments (#3899)
  • Fix exclude not supported in some config_list cases (#3815)
  • Fix a bug in the remote training service in reuse mode (#3941)
  • Improve IP address detection in a modern way (#3860)
  • Fix bug of the search box on WebUI (#3935)
  • Fix bug in url_prefix of WebUI (#4051)
  • Support dict format of intermediate on WebUI (#3895)
  • Fix bug in openpai training service induced by experiment config v2 (#4027 #4057)
  • Improved doc (#3861 #3885 #3966 #4004 #3955)
  • Improved the API export_model in model compression (#3968)
  • Supported UnSqueeze in ModelSpeedup (#3960)
  • Thanks other external contributors: @Markus92 (#3936), @thomasschmied (#3963), @twmht (#3842)