Changelog of v0.x

v0.10.4 (23/04/2024)

New Features & Enhancements

  • Support custom artifact_location in MLflowVisBackend. by @daavoo in #1505
  • Add the supported pytorch versions in README by @zhouzaida in #1512
  • Perform evaluation upon training completion by @LZHgrla in #1529
  • Enable exclude_frozen_parameters for DeepSpeedEngine._zero3_consolidated_16bit_state_dict by @LZHgrla in #1517

Bug Fixes

Docs

v0.10.3 (24/01/2024)

New Features & Enhancements

Bug Fixes

Docs

v0.10.2 (26/12/2023)

New Features & Enhancements

  • Support multi-node distributed training with NPU backend by @shun001 in #1459
  • Use ImportError to cover ModuleNotFoundError by @del-zhenwu in #1438

Bug Fixes

Contributors

A total of 4 developers contributed to this release. Thanks @shun001, @del-zhenwu, @SCZwangxiao, @fanqiNO1

v0.10.1 (22/11/2023)

Bug Fixes

Docs

Contributors

A total of 1 developer contributed to this release. Thanks @fanqiNO1

v0.10.0 (21/11/2023)

New Features & Enhancements

  • Support for installing mmengine without opencv by @fanqiNO1 in #1429
  • Support exclude_frozen_parameters for DeepSpeedStrategy's resume by @LZHgrla in #1424

Bug Fixes

Contributors

A total of 3 developers contributed to this release. Thanks @HIT-cwh, @LZHgrla, @fanqiNO1

v0.9.1 (03/11/2023)

New Features & Enhancements

Bug Fixes

  • Fix new config in visualizer by @HAOCHENYE in #1390
  • Fix using function parameters without initialization in OneCycleLR (#1401) by @whlook in #1403
  • Fix a bug when module is missing in low version of bitsandbytes by @Ben-Louis in #1388
  • Fix ConcatDataset raising error when metainfo is np.array by @jonbakerfish in #1407

Docs

Contributors

A total of 9 developers contributed to this release. Thanks @POI-WX, @whlook, @jonbakerfish, @LZHgrla, @Ben-Louis, @YiyaoYang1, @fanqiNO1, @HAOCHENYE, @zhouzaida

v0.9.0 (10/10/2023)

Highlights

New Features & Enhancements

Docs

Bug Fixes

Contributors

A total of 21 developers contributed to this release. Thanks @LZHgrla, @wangerlie, @wangg12, @RangeKing, @hiyyg, @LRJKD, @KevinNuNu, @zeyuanyin, @Desjajja, @ShuRaymond, @okotaku, @crazysteeaam, @6Vvv, @NrealLzx, @YinAoXiong, @huaibovip, @xuuyangg, @Dominic23331, @fanqiNO1, @HAOCHENYE, @zhouzaida

v0.8.4 (03/08/2023)

New Features & Enhancements

  • Support callable collate_fn for FlexibleRunner by @LZHgrla in #1284

Bug Fixes

Docs

Contributors

A total of 3 developers contributed to this release. Thanks @HAOCHENYE, @zhouzaida, @LZHgrla

v0.8.3 (31/07/2023)

Highlights

  • Support enabling efficient_conv_bn_eval for efficient convolution and batch normalization. See the save memory on GPU documentation for more details
  • Add Llama2 finetune example
  • Support multi-node distributed training with MLU backend
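
The memory saving behind efficient_conv_bn_eval comes from the fact that an eval-mode batch norm is an affine map and can be folded into the preceding convolution. The following is a minimal sketch of that arithmetic on a single scalar channel, not mmengine's implementation; all names here are illustrative.

```python
import math

# Scalar illustration of folding an eval-mode BatchNorm into the
# preceding convolution: BN(conv(x)) == conv'(x) with adjusted
# weight and bias.

def conv(x, w, b):
    return w * x + b  # 1x1 "convolution" on a single channel

def batchnorm(y, mean, var, gamma, beta, eps=1e-5):
    return gamma * (y - mean) / math.sqrt(var + eps) + beta

def fold(w, b, mean, var, gamma, beta, eps=1e-5):
    # Fold the frozen BN statistics into the conv parameters.
    scale = gamma / math.sqrt(var + eps)
    return w * scale, (b - mean) * scale + beta

x, w, b = 2.0, 0.5, 0.1
mean, var, gamma, beta = 0.3, 0.8, 1.2, -0.4

sequential = batchnorm(conv(x, w, b), mean, var, gamma, beta)
w_f, b_f = fold(w, b, mean, var, gamma, beta)
folded = conv(x, w_f, b_f)

print(abs(sequential - folded) < 1e-9)  # the two paths agree
```

Because the folded form never materializes the intermediate conv output for BN, it can save activation memory during training when BN runs in eval mode.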

New Features & Enhancements

Bug Fixes

Docs

Contributors

A total of 9 developers contributed to this release. Thanks @HAOCHENYE, @youkaichao, @josh6688, @i-aki-y, @mmeendez8, @zhouzaida, @gachiemchiep, @KerwinKai, @Li-Qingyun

v0.8.2 (12/07/2023)

Bug Fixes

v0.8.1 (05/07/2023)

New Features & Enhancements

  • Accelerate Config.dump and support converting Lazyxxx to string in ConfigDict.to_dict by @HAOCHENYE in #1232

Bug Fixes

Docs

  • Add a document to introduce how to train a large model by @zhouzaida in #1228

v0.8.0 (30/06/2023)

Highlights

  • Support training with FSDP and DeepSpeed. Refer to the example for more detailed usages.

  • Introduce the pure Python style configuration file:

    • Support navigating to base configuration file in IDE
    • Support navigating to base variable in IDE
    • Support navigating to source code of class in IDE
    • Support inheriting two configuration files containing the same field
    • Load the configuration file without other third-party requirements

    Refer to the tutorial for more detailed usages.

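
The pure Python style can be illustrated with a minimal sketch: a config is an ordinary Python module, so overriding a base variable is just plain assignment, and IDEs can navigate to base files and classes. The field names below are hypothetical, and exec() merely stands in for importing a real base config module; this is not mmengine's loader.

```python
base_cfg = """
# _base_/model.py -- a base config is just a Python module
model = dict(depth=50, num_classes=1000)
"""

derived_cfg = """
# my_config.py -- inherits by reusing the base module's names
model['depth'] = 101          # override a base variable directly
optimizer = dict(lr=0.01)
"""

namespace = {}
exec(base_cfg, namespace)      # stands in for importing the base config
exec(derived_cfg, namespace)

print(namespace['model'])      # {'depth': 101, 'num_classes': 1000}
```

Since the config is evaluated as Python, no third-party parser is needed to load it, which is one of the points listed above.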

New Features & Enhancements

Bug Fixes

  • CheckpointHook should check whether file exists before removing it by @zhouzaida in #1198
  • Fix undefined variable error in Runner by @HAOCHENYE in #1219

Docs

Contributors

A total of 9 developers contributed to this release. Thanks @evdcush, @zhouzaida, @AkideLiu, @joihn, @HAOCHENYE, @edkair, @alexander-soare, @syo093c, @zgzhengSEU

v0.7.4 (03/06/2023)

Highlights

  • Support using ClearML to record experiment data
  • Add Sophia optimizers

New Features & Enhancements

Bug Fixes

Docs

Contributors

A total of 19 developers contributed to this release. Thanks @Hongru-Xiao, @i-aki-y, @Bomsw, @KickCellarDoor, @zhouzaida, @YQisme, @gachiemchiep, @CescMessi, @W-ZN, @Ginray, @adrianjoshua-strutt, @CokeDong, @xin-li-67, @Xiangxu-0103, @HAOCHENYE, @Shiyang980713, @TankNee, @zimonitrome, @gy-7

v0.7.3 (28/04/2023)

Highlights

  • Support using MLflow to record experiment data
  • Support registering callable objects to the registry
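
Registering callable objects means plain functions, not only classes, can live in a registry and be built from a config dict. The sketch below shows the idea with a toy registry; it is not mmengine's Registry, and the names are illustrative.

```python
class Registry:
    """Toy registry accepting any callable -- class or function."""

    def __init__(self, name):
        self.name = name
        self._store = {}

    def register_module(self, obj):
        # Works for classes and plain functions alike.
        self._store[obj.__name__] = obj
        return obj

    def build(self, cfg):
        cfg = dict(cfg)
        fn = self._store[cfg.pop('type')]
        return fn(**cfg)


TRANSFORMS = Registry('transforms')

@TRANSFORMS.register_module
def scale(value, factor):
    return value * factor

print(TRANSFORMS.build(dict(type='scale', value=3, factor=2)))  # 6
```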

New Features & Enhancements

Bug Fixes

Docs

Contributors

A total of 17 developers contributed to this release. Thanks @enkilee, @JunweiZheng93, @sh0622-kim, @jsrdcht, @SheffieldCao, @josh6688, @mzr1996, @zhouzaida, @shufanwu, @Luo-Yihang, @C1rN09, @LEFTeyex, @zccjjj, @Ginray, @HAOCHENYE, @sjiang95, @luomaoling

v0.7.2 (06/04/2023)

Bug Fixes

  • Align the evaluation result in log by @kitecats in #1034
  • Update the logic to calculate the repeat_factors in ClassBalancedDataset by @BIGWangYuDong in #1048
  • Initialize sub-modules in DistributedDataParallel that define init_weights during initialization by @HAOCHENYE in #1045
  • Refactor CheckpointHook unit tests by @HAOCHENYE in #789

Contributors

A total of 3 developers contributed to this release. Thanks @kitecats, @BIGWangYuDong, @HAOCHENYE

v0.7.1 (03/04/2023)

Highlights

  • Support compiling the model and enabling mixed-precision training at the same time
  • Fix the bug where the logs cannot be properly saved to the log file after calling torch.compile

New Features & Enhancements

  • Add mmpretrain to the MODULE2PACKAGE. by @mzr1996 in #1002
  • Support using get_device in the compiled model by @C1rN09 in #1004
  • Make sure the FileHandler stays alive after torch.compile by @HAOCHENYE in #1021
  • Unify the use of print_log and logger.info(warning) by @LEFTeyex in #997
  • Publish models after training if published_keys is set in CheckpointHook by @KerwinKai in #987
  • Enhance the error catching in registry by @HAOCHENYE in #1010
  • Do not print config if it is empty by @zhouzaida in #1028

Bug Fixes

  • Fix missing space between data_time and metric in logs by @HAOCHENYE in #1025

Docs

  • Minor fixes in EN docs to remove or replace unicode chars with ascii by @evdcush in #1018

Contributors

A total of 7 developers contributed to this release. Thanks @LEFTeyex, @KerwinKai, @mzr1996, @evdcush, @C1rN09, @HAOCHENYE, @zhouzaida

v0.7.0 (16/03/2023)

Highlights

  • Support PyTorch 2.0! Accelerate training by compiling models. See the tutorial Model Compilation for details
  • Add EarlyStoppingHook to stop training when the metric does not improve
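
The core of any early-stopping hook is a patience counter over a monitored metric. The sketch below shows that logic in isolation, assuming a higher-is-better metric; it is a simplified illustration, not the API of mmengine's EarlyStoppingHook.

```python
class EarlyStopper:
    """Stop once the metric fails to improve for `patience` checks."""

    def __init__(self, patience=3, min_delta=0.0):
        self.patience = patience
        self.min_delta = min_delta
        self.best = float('-inf')
        self.bad_checks = 0

    def should_stop(self, metric):
        if metric > self.best + self.min_delta:
            self.best = metric       # improvement: reset the counter
            self.bad_checks = 0
        else:
            self.bad_checks += 1     # no improvement this check
        return self.bad_checks >= self.patience


stopper = EarlyStopper(patience=2)
history = [0.71, 0.74, 0.73, 0.75, 0.74, 0.74]  # e.g. val accuracy per epoch
stops = [stopper.should_stop(m) for m in history]
print(stops)  # [False, False, False, False, False, True]
```

A `min_delta` above zero guards against stopping being deferred by tiny, noise-level improvements.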

New Features & Enhancements

  • Add configurations to support torch.compile in Runner by @C1rN09 in #976
  • Support EarlyStoppingHook by @nijkah in #739
  • Disable duplicated warning during distributed training by @HAOCHENYE in #961
  • Add FUNCTIONS root Registry by @HAOCHENYE in #983
  • Save the "memory" field to visualization backends by @enkilee in #974
  • Enable bf16 in AmpOptimWrapper by @C1rN09 in #960
  • Support writing data to vis_backend with prefix by @HAOCHENYE in #972
  • Support exporting logs of different ranks in debug mode by @HAOCHENYE in #968
  • Silence the error when ManagerMixin builds an instance with a duplicate name by @HAOCHENYE in #990

Bug Fixes

Docs

Contributors

A total of 10 developers contributed to this release. Thanks @xin-li-67, @acdart, @enkilee, @YuetianW, @luomaoling, @nijkah, @VoyagerXvoyagerx, @zhouzaida, @HAOCHENYE, @C1rN09

v0.6.0 (24/02/2023)

Highlights

  • Support Apex with ApexOptimWrapper
  • Support analyzing model complexity.
  • Add Lion optimizer.
  • Support using environment variables in the config file.
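
Using environment variables in a config amounts to substituting placeholders (with an optional fallback default) into the config text before it is parsed. The resolver and the `{{$NAME:default}}` placeholder syntax below are an illustrative sketch of the idea, not mmengine's parser.

```python
import os
import re

def resolve_env(cfg_text):
    # Replace {{$NAME:default}} with the environment variable NAME,
    # falling back to the default when NAME is unset.
    pattern = re.compile(r'\{\{\$(\w+):([^}]*)\}\}')
    return pattern.sub(
        lambda m: os.environ.get(m.group(1), m.group(2)), cfg_text)

cfg = "data_root = '{{$DATASET:/data/coco}}'"

print(resolve_env(cfg))                  # uses the default when unset
os.environ['DATASET'] = '/mnt/coco'
print(resolve_env(cfg))                  # picks up the variable
```

This lets the same config file switch dataset paths or hyperparameters per machine without editing the file.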

New Features & Enhancements

Bug Fixes

  • backend_args should not be modified by get_file_backend by @zhouzaida in #897
  • Support updating np.ScalarType data in message_hub by @HAOCHENYE in #898
  • Support rendering Chinese character in Visualizer by @KevinNuNu in #887
  • Fix the bug in DefaultOptimWrapperConstructor when shared parameters do not require grad by @HIT-cwh in #903

Docs

Contributors

A total of 15 developers contributed to this release. Thanks @Eiuyc, @xcnick, @KevinNuNu, @XWHtorrentx, @tonysy, @zhouzaida, @Xiangxu-0103, @Dai-Wenxun, @jbwang1997, @apacha, @C1rN09, @HIT-cwh, @vansin, @HAOCHENYE, @luomaoling.

v0.5.0 (20/01/2023)

Highlights

  • Add BaseInferencer to provide a general inference interface
  • Provide ReduceOnPlateauParamScheduler to adjust learning rate by metric
  • Deprecate support for Python 3.6
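
Adjusting the learning rate "by metric" means reducing it when the monitored value plateaus. The sketch below isolates that logic for a lower-is-better loss; it is a simplified illustration, not the API of ReduceOnPlateauParamScheduler (the same idea as torch.optim.lr_scheduler.ReduceLROnPlateau).

```python
class PlateauReducer:
    """Multiply the LR by `factor` after `patience` checks without
    improvement of the monitored loss."""

    def __init__(self, lr, factor=0.1, patience=2):
        self.lr = lr
        self.factor = factor
        self.patience = patience
        self.best = float('inf')   # monitoring a loss: lower is better
        self.bad_checks = 0

    def step(self, loss):
        if loss < self.best:
            self.best = loss       # improvement: reset the counter
            self.bad_checks = 0
        else:
            self.bad_checks += 1
            if self.bad_checks >= self.patience:
                self.lr *= self.factor
                self.bad_checks = 0
        return self.lr


sched = PlateauReducer(lr=0.1, factor=0.5, patience=2)
losses = [1.0, 0.8, 0.8, 0.8, 0.7]
lrs = [sched.step(l) for l in losses]
print(lrs)  # [0.1, 0.1, 0.1, 0.05, 0.05]
```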

New Features & Enhancements

  • Deprecate support for Python 3.6 by @HAOCHENYE in #863
  • Support non-scalar type metric value by @mzr1996 in #827
  • Remove unnecessary calls and import lazily to speed up imports by @zhouzaida in #837
  • Support ReduceOnPlateauParamScheduler by @LEFTeyex in #819
  • Disable warning of subprocess launched by dataloader by @HAOCHENYE in #870
  • Add BaseInferencer to provide general interface by @HAOCHENYE in #874

Bug Fixes

  • Fix support for Ascend device by @wangjiangben-hw in #847
  • Fix Config failing to parse a base config when there is a . in the tmp path, e.g. tmp/a.b/c by @HAOCHENYE in #856
  • Fix unloaded weights not being initialized when using PretrainedInit by @HAOCHENYE in #764
  • Fix the wrong package name defined in PKG2PROJECT by @HAOCHENYE in #872

Docs

Contributors

A total of 8 developers contributed to this release. Thanks @LEFTeyex, @RangeKing, @yaqi0510, @Xiangxu-0103, @wangjiangben-hw, @mzr1996, @zhouzaida, @HAOCHENYE.

v0.4.0 (28/12/2022)

Highlights

  • Registry supports importing modules automatically
  • Upgrade the documentation and provide the English documentation
  • Provide ProfileHook to profile the running process

New Features & Enhancements

  • Add conf_path in PetrelBackend by @sunyc11 in #774
  • Support multiple --cfg-options. by @mzr1996 in #759
  • Support passing arguments to OptimWrapper.update_params by @twmht in #796
  • Make get_torchvision_model compatible with torch 1.13 by @HAOCHENYE in #793
  • Support flat_decay_mult and fix bias_decay_mult of depth-wise-conv in DefaultOptimWrapperConstructor by @RangiLyu in #771
  • Registry supports importing modules automatically. by @RangiLyu in #643
  • Add profiler hook functionality by @BayMaxBHL in #768
  • Make TTAModel compatible with FSDP. by @HAOCHENYE in #611

Bug Fixes

  • Fix hub.get_model failing on some MMCls models by @C1rN09 in #784
  • Fix BaseModel.to and BaseDataPreprocessor.to to make them consistent with torch.nn.Module by @C1rN09 in #783
  • Fix creating a new logger at PretrainedInit by @xiexinch in #791
  • Fix ZeroRedundancyOptimizer ambiguous error with param groups when PyTorch < 1.12.0 by @C1rN09 in #818
  • Fix MessageHub set resumed key repeatedly by @HAOCHENYE in #839
  • Add progress argument to load_from_http by @austinmw in #770
  • Ensure metrics is not empty when saving best checkpoint by @zhouzaida in #849

Docs

Contributors

A total of 16 developers contributed to this release. Thanks @BayMaxBHL, @RangeKing, @Xiangxu-0103, @xin-li-67, @twmht, @shanmo, @sunyc11, @lyviva, @austinmw, @xiexinch, @mzr1996, @RangiLyu, @MambaWong, @C1rN09, @zhouzaida, @HAOCHENYE

v0.3.2 (24/11/2022)

New Features & Enhancements

  • Send git errors to subprocess.PIPE by @austinmw in #717
  • Add a common TestRunnerTestCase to build a Runner instance. by @HAOCHENYE in #631
  • Align the log by @HAOCHENYE in #436
  • Log the called order of hooks during training process by @songyuc in #672
  • Support setting eta_min_ratio in CosineAnnealingParamScheduler by @cir7 in #725
  • Enhance compatibility of revert_sync_batchnorm by @HAOCHENYE in #695

Bug Fixes

Docs

v0.3.1 (09/11/2022)

Highlights

  • Fix error when saving best checkpoint in ddp-training

New Features & Enhancements

  • Replace print with print_log for those functions called by runner by @HAOCHENYE in #686

Bug Fixes

  • Fix error when saving best checkpoint in ddp-training by @HAOCHENYE in #682

Docs

v0.3.0 (02/11/2022)

New Features & Enhancements

Docs

Bug Fixes

New Contributors

v0.2.0 (11/10/2022)

New Features & Enhancements

Docs

Bug Fixes

  • Fix LogProcessor not smoothing loss when the loss name does not start with loss by @liuyanyi in #539
  • Fix failure to enable detect_anomalous_params in MMSeparateDistributedDataParallel by @HAOCHENYE in #588
  • Fix CheckpointHook behavior unexpected if given filename_tmpl argument by @C1rN09 in #518
  • Fix error argument sequence in FSDP by @HAOCHENYE in #520
  • Fix uploading image in wandb backend by @okotaku in #510
  • Fix loading state dictionary in EMAHook by @okotaku in #507
  • Fix circle import in EMAHook by @HAOCHENYE in #523
  • Fix unit tests that could fail due to MultiProcessTestCase by @HAOCHENYE in #535
  • Remove unnecessary "if statement" in Registry by @MambaWong in #536
  • Fix _save_to_state_dict by @HAOCHENYE in #542
  • Support comparing NumPy array dataset meta in Runner.resume by @HAOCHENYE in #511
  • Use get instead of pop to dump runner_type in build_runner_from_cfg by @nijkah in #549
  • Upgrade pre-commit hooks by @zhouzaida in #576
  • Delete the error comment in registry.md by @vansin in #514
  • Fix some out-of-date unit tests by @C1rN09 in #586
  • Fix typo in MMFullyShardedDataParallel by @yhna940 in #569
  • Update Github Action CI and CircleCI by @zhouzaida in #512
  • Fix unit tests on Windows by @HAOCHENYE in #515
  • Fix merge ci & multiprocessing unit test by @HAOCHENYE in #529

New Contributors