Bump version to mmagic v1.0.0
Z-Fran committed Apr 25, 2023
2 parents f672b21 + 281544f commit 4094cb3
Showing 5 changed files with 50 additions and 40 deletions.
35 changes: 20 additions & 15 deletions README.md
@@ -1,6 +1,10 @@
<div id="top" align="center">
<img src="docs/en/_static/image/mmagic-logo.png" width="500px"/>
<div>&nbsp;</div>
<div align="center">
<font size="10"><b>M</b>ultimodal <b>A</b>dvanced, <b>G</b>enerative, and <b>I</b>ntelligent <b>C</b>reation (MMagic [em'mædʒɪk])</font>
</div>
<div>&nbsp;</div>
<div align="center">
<b><font size="5">OpenMMLab website</font></b>
<sup>
@@ -57,7 +61,7 @@ English | [简体中文](README_zh-CN.md)

We are excited to announce the release of MMagic v1.0.0 that inherits from [MMEditing](https://github.com/open-mmlab/mmediting) and [MMGeneration](https://github.com/open-mmlab/mmgeneration).

-After iterative updates with OpenMMLab 2.0 framework and merged with MMGeneration, MMEditing has become a powerful tool that supports low-level algorithms based on both GAN and CNN. Today, MMEditing embraces the Diffusion Model and transforms into a more advanced and comprehensive AIGC toolkit: **MMagic** (**M**ultimodal **A**dvanced, **G**enerative, and **I**ntelligent **C**reation). MMagic will provide more agile and flexible experimental support for researchers and AIGC enthusiasts, and help you on your AIGC exploration journey.
+After iterative updates with OpenMMLab 2.0 framework and merged with MMGeneration, MMEditing has become a powerful tool that supports low-level algorithms based on both GAN and CNN. Today, MMEditing embraces Generative AI and transforms into a more advanced and comprehensive AIGC toolkit: **MMagic** (**M**ultimodal **A**dvanced, **G**enerative, and **I**ntelligent **C**reation). MMagic will provide more agile and flexible experimental support for researchers and AIGC enthusiasts, and help you on your AIGC exploration journey.

We highlight the following new features.

@@ -110,14 +114,14 @@ Please refer to [migration documents](docs/en/migration/overview.md) to migrate

## 📄 Table of Contents

-- [📖 Introduction](#📖-introduction)
-- [🙌 Contributing](#🙌-contributing)
-- [🛠️ Installation](#🛠️-installation)
-- [📊 Model Zoo](#📊-model-zoo)
-- [🤝 Acknowledgement](#🤝-acknowledgement)
-- [🖊️ Citation](#🖊️-citation)
-- [🎫 License](#🎫-license)
-- [🏗️ ️OpenMMLab Family](#🏗️-️openmmlab-family)
+- [📖 Introduction](#-introduction)
+- [🙌 Contributing](#-contributing)
+- [🛠️ Installation](#%EF%B8%8F-installation)
+- [📊 Model Zoo](#-model-zoo)
+- [🤝 Acknowledgement](#-acknowledgement)
+- [🖊️ Citation](#%EF%B8%8F-citation)
+- [🎫 License](#-license)
+- [🏗️ ️OpenMMLab Family](#%EF%B8%8F-️openmmlab-family)

<p align="right"><a href="#top">🔝Back to top</a></p>
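The rewritten table-of-contents anchors above follow how GitHub slugifies emoji headings. A rough sketch of that rule (an approximation for illustration, not GitHub's published algorithm; `github_anchor` is our own name):

```python
# Approximates GitHub's heading-to-anchor slug: emoji are dropped, but the
# invisible variation selector (U+FE0F) riding on emoji like 🛠️ survives and
# is percent-encoded, which is why "🛠️ Installation" links as
# "#%EF%B8%8F-installation" while "📖 Introduction" links as "#-introduction".
import unicodedata
from urllib.parse import quote

def github_anchor(heading: str) -> str:
    kept = []
    for ch in heading.lower():
        if ch.isalnum() or ch in " -_":
            kept.append(ch)                      # letters, digits, spaces, hyphens
        elif unicodedata.category(ch) in ("Mn", "Cf"):
            kept.append(ch)                      # combining marks such as U+FE0F
        # everything else (emoji, punctuation) is dropped
    return quote("".join(kept).replace(" ", "-"))

print(github_anchor("📖 Introduction"))    # -introduction
print(github_anchor("🛠️ Installation"))   # %EF%B8%8F-installation
```

Plain-emoji entries lose the emoji entirely, so their anchors start with a bare hyphen; that is the fix this hunk applies.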

@@ -143,7 +147,7 @@ The best practice on our main branch works with **Python 3.8+** and **PyTorch 1.

- **Efficient Framework**

-By using MMEngine and MMCV of OpenMMLab 2.0 framework, MMagic decompose the editing framework into different modules and one can easily construct a customized editor framework by combining different module. We can define the training process just like playing with Legos and provide rich components and strategies. In MMagic, you can complete controls on the training process with different levels of APIs. With the support of [MMSeparateDistributedDataParallel](https://github.com/open-mmlab/mmengine/blob/main/mmengine/model/wrappers/seperate_distributed.py), distributed training for dynamic architectures can be easily implemented.
+By using MMEngine and MMCV of OpenMMLab 2.0 framework, MMagic decompose the editing framework into different modules and one can easily construct a customized editor framework by combining different modules. We can define the training process just like playing with Legos and provide rich components and strategies. In MMagic, you can complete controls on the training process with different levels of APIs. With the support of [MMSeparateDistributedDataParallel](https://github.com/open-mmlab/mmengine/blob/main/mmengine/model/wrappers/seperate_distributed.py), distributed training for dynamic architectures can be easily implemented.

<p align="right"><a href="#top">🔝Back to top</a></p>

@@ -189,6 +193,7 @@ python -c "import mmagic; print(mmagic.__version__)"
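The `python -c "import mmagic; print(mmagic.__version__)"` check above prints the installed version. If a script needs to gate on a minimum version, the comparison can be sketched generically (pure Python; `version_tuple` is our own helper, not part of mmagic):

```python
def version_tuple(v: str) -> tuple:
    """Turn '1.0.0' or '1.0.0rc7' into a tuple of the leading integers
    of each dot-separated piece, so versions compare numerically."""
    parts = []
    for piece in v.split("."):
        digits = ""
        for ch in piece:
            if not ch.isdigit():
                break
            digits += ch
        parts.append(int(digits) if digits else 0)
    return tuple(parts)

# A caller would compare against getattr(mmagic, "__version__", "0").
assert version_tuple("1.0.0rc7") == (1, 0, 0)
assert version_tuple("1.0.0") > version_tuple("0.16.1")
```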

**Getting Started**

After installing MMagic successfully, you are ready to play with MMagic! To generate an image from text, you only need a few lines of code with MMagic!

@@ -365,12 +370,12 @@ Please refer to [installation](docs/en/get_started/install.md) for more detailed
</td>
<td>
<ul>
-<li><a href="configs/controlnet/README.md">ControlNet (2023)</a></li>
-<li><a href="configs/dreambooth/README.md">DreamBooth (2022)</a></li>
-<li><a href="configs/stable_diffusion/README.md">Stable-Diffusion (2022)</a></li>
-<li><a href="configs/disco_diffusion/README.md">Disco-Diffusion (2022)</a></li>
-<li><a href="configs/guided_diffusion/README.md">Guided Diffusion (NeurIPS'2021)</a></li>
 <li><a href="projects/glide/configs/README.md">GLIDE (NeurIPS'2021)</a></li>
+<li><a href="configs/guided_diffusion/README.md">Guided Diffusion (NeurIPS'2021)</a></li>
+<li><a href="configs/disco_diffusion/README.md">Disco-Diffusion (2022)</a></li>
+<li><a href="configs/stable_diffusion/README.md">Stable-Diffusion (2022)</a></li>
+<li><a href="configs/dreambooth/README.md">DreamBooth (2022)</a></li>
+<li><a href="configs/controlnet/README.md">ControlNet (2023)</a></li>
</ul>
</td>
<td>
33 changes: 19 additions & 14 deletions README_zh-CN.md
@@ -1,6 +1,10 @@
<div id="top" align="center">
<img src="docs/zh_cn/_static/image/mmagic-logo.png" width="500px"/>
<div>&nbsp;</div>
<div align="center">
<font size="10"><b>M</b>ultimodal <b>A</b>dvanced, <b>G</b>enerative, and <b>I</b>ntelligent <b>C</b>reation (MMagic [em'mædʒɪk])</font>
</div>
<div>&nbsp;</div>
<div align="center">
<b><font size="5">OpenMMLab website</font></b>
<sup>
@@ -57,7 +61,7 @@

We are officially releasing MMagic v1.0.0, which originates from [MMEditing](https://github.com/open-mmlab/mmediting) and [MMGeneration](https://github.com/open-mmlab/mmgeneration).

-After iterative updates of the OpenMMLab 2.0 framework and the merger with MMGeneration, MMEditing has become a powerful tool supporting low-level vision algorithms based on both GAN and CNN. Today, MMEditing embraces the Diffusion Model and is officially renamed **MMagic** (**M**ultimodal **A**dvanced, **G**enerative, and **I**ntelligent **C**reation), transforming into a more advanced and comprehensive open-source AIGC library. MMagic will provide more agile and flexible experimental support for researchers and AIGC enthusiasts, and help you on your AIGC exploration journey.
+After iterative updates of the OpenMMLab 2.0 framework and the merger with MMGeneration, MMEditing has become a powerful tool supporting low-level vision algorithms based on both GAN and CNN. Today, MMEditing embraces Generative AI and is officially renamed **MMagic** (**M**ultimodal **A**dvanced, **G**enerative, and **I**ntelligent **C**reation), committed to building a more advanced and comprehensive open-source AIGC library. MMagic will provide more agile and flexible experimental support for researchers and AIGC enthusiasts, and help you on your AIGC exploration journey.

The highlighted new features of this release are as follows:

@@ -108,14 +112,14 @@

## 📄 Table of Contents

-- [📖 Introduction](#📖-介绍)
-- [🙌 Contributing](#🙌-参与贡献)
-- [🛠️ Installation](#🛠️-安装)
-- [📊 Model Zoo](#📊-模型库)
-- [🤝 Acknowledgement](#🤝-致谢)
-- [🖊️ Citation](#🖊️-引用)
-- [🎫 License](#🎫-许可证)
-- [🏗️ ️Other OpenMMLab projects](#🏗️-️openmmlab-的其他项目)
+- [📖 Introduction](#-介绍)
+- [🙌 Contributing](#-参与贡献)
+- [🛠️ Installation](#%EF%B8%8F-安装)
+- [📊 Model Zoo](#-模型库)
+- [🤝 Acknowledgement](#-致谢)
+- [🖊️ Citation](#%EF%B8%8F-引用)
+- [🎫 License](#-许可证)
+- [🏗️ ️Other OpenMMLab projects](#%EF%B8%8F-️openmmlab-的其他项目)

<p align="right"><a href="#top">🔝Back to top</a></p>

@@ -186,6 +190,7 @@ python -c "import mmagic; print(mmagic.__version__)"

**Getting Started**

After installing MMagic successfully, you can easily get started with MMagic! With just a few lines of code, you can use MMagic to generate an image from text!

@@ -362,12 +367,12 @@ pip3 install -e .
</td>
<td>
<ul>
-<li><a href="configs/controlnet/README.md">ControlNet (2023)</a></li>
-<li><a href="configs/dreambooth/README.md">DreamBooth (2022)</a></li>
-<li><a href="configs/stable_diffusion/README.md">Stable-Diffusion (2022)</a></li>
-<li><a href="configs/disco_diffusion/README.md">Disco-Diffusion (2022)</a></li>
-<li><a href="configs/guided_diffusion/README.md">Guided Diffusion (NeurIPS'2021)</a></li>
 <li><a href="projects/glide/configs/README.md">GLIDE (NeurIPS'2021)</a></li>
+<li><a href="configs/guided_diffusion/README.md">Guided Diffusion (NeurIPS'2021)</a></li>
+<li><a href="configs/disco_diffusion/README.md">Disco-Diffusion (2022)</a></li>
+<li><a href="configs/stable_diffusion/README.md">Stable-Diffusion (2022)</a></li>
+<li><a href="configs/dreambooth/README.md">DreamBooth (2022)</a></li>
+<li><a href="configs/controlnet/README.md">ControlNet (2023)</a></li>
</ul>
</td>
<td>
12 changes: 6 additions & 6 deletions docs/en/changelog.md
@@ -8,9 +8,9 @@ We are excited to announce the release of MMagic v1.0.0 that inherits from [MMEd

Since its inception, MMEditing has been the preferred algorithm library for many super-resolution, editing, and generation tasks, helping research teams win more than 10 top international competitions and supporting over 100 GitHub ecosystem projects. After iterative updates with OpenMMLab 2.0 framework and merged with MMGeneration, MMEditing has become a powerful tool that supports low-level algorithms based on both GAN and CNN.

-Today, MMEditing embraces the Diffusion Model and transforms into a more advanced and comprehensive AIGC toolkit: **MMagic** (**M**ultimodal **A**dvanced, **G**enerative, and **I**ntelligent **C**reation).
+Today, MMEditing embraces Generative AI and transforms into a more advanced and comprehensive AIGC toolkit: **MMagic** (**M**ultimodal **A**dvanced, **G**enerative, and **I**ntelligent **C**reation).

-In MMagic, we have supports 53+ models in multiple tasks such as fine-tuning for stable diffusion, text-to-image, image and video restoration, super-resolution, editing and generation. With excellent training and experiment management support from [MMEngine](https://github.com/open-mmlab/mmengine), MMagic will provide more agile and flexible experimental support for researchers and AIGC enthusiasts, and help you on your AIGC exploration journey. With MMagic, experience more magic in generation! Let's open a new era beyond editing together. More than Editing, Unlock the Magic!
+In MMagic, we have supported 53+ models in multiple tasks such as fine-tuning for stable diffusion, text-to-image, image and video restoration, super-resolution, editing and generation. With excellent training and experiment management support from [MMEngine](https://github.com/open-mmlab/mmengine), MMagic will provide more agile and flexible experimental support for researchers and AIGC enthusiasts, and help you on your AIGC exploration journey. With MMagic, experience more magic in generation! Let's open a new era beyond editing together. More than Editing, Unlock the Magic!

**Highlights**

@@ -51,15 +51,15 @@ For the Diffusion Model, we provide the following "magic" :

- Support video generation based on MultiFrame Render.
MMagic supports the generation of long videos in various styles through ControlNet and MultiFrame Render.
-prompt key words: a handsome man, silver hair, smiling, play basketball
+prompt keywords: a handsome man, silver hair, smiling, play basketball

https://user-images.githubusercontent.com/12782558/227149757-fd054d32-554f-45d5-9f09-319184866d85.mp4

-prompt key words: a girl, black hair, white pants, smiling, play basketball
+prompt keywords: a girl, black hair, white pants, smiling, play basketball

https://user-images.githubusercontent.com/49083766/233559964-bd5127bd-52f6-44b6-a089-9d7adfbc2430.mp4

-prompt key words: a handsome man
+prompt keywords: a handsome man

https://user-images.githubusercontent.com/12782558/227152129-d70d5f76-a6fc-4d23-97d1-a94abd08f95a.mp4

@@ -74,7 +74,7 @@

To improve your "spellcasting" efficiency, we have made the following adjustments to the "magic circuit":

-- By using MMEngine and MMCV of OpenMMLab 2.0 framework, We decompose the editing framework into different modules and one can easily construct a customized editor framework by combining different module. We can define the training process just like playing with Legos and provide rich components and strategies. In MMagic, you can complete controls on the training process with different levels of APIs.
+- By using MMEngine and MMCV of OpenMMLab 2.0 framework, We decompose the editing framework into different modules and one can easily construct a customized editor framework by combining different modules. We can define the training process just like playing with Legos and provide rich components and strategies. In MMagic, you can complete controls on the training process with different levels of APIs.
- Support for 33+ algorithms accelerated by Pytorch 2.0.
- Refactor DataSample to support the combination and splitting of batch dimensions.
- Refactor DataPreprocessor and unify the data format for various tasks during training and inference.
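The DataSample bullet above concerns batching of per-sample metadata. A toy sketch of what "combination and splitting of batch dimensions" means in practice (hypothetical `Sample`/`stack`/`split` names for illustration, not MMagic's actual DataSample API):

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Sample:
    img_shape: tuple
    label: int

def stack(samples: List[Sample]) -> dict:
    """Combine per-sample fields into one batched record."""
    return {
        "img_shape": [s.img_shape for s in samples],
        "label": [s.label for s in samples],
    }

def split(batch: dict) -> List[Sample]:
    """Invert stack(): recover the individual samples."""
    return [Sample(shape, lab)
            for shape, lab in zip(batch["img_shape"], batch["label"])]

batch = stack([Sample((64, 64), 0), Sample((32, 32), 1)])
assert split(batch) == [Sample((64, 64), 0), Sample((32, 32), 1)]
```

Round-tripping like this is what lets a pipeline batch heterogeneous metadata for training and then hand each sample back out for per-image post-processing.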
2 changes: 1 addition & 1 deletion docs/en/get_started/overview.md
@@ -58,7 +58,7 @@ MMagic supports various applications, including:

- **Efficient Framework**

-By using MMEngine and MMCV of OpenMMLab 2.0 framework, MMagic decompose the editing framework into different modules and one can easily construct a customized editor framework by combining different module. We can define the training process just like playing with Legos and provide rich components and strategies. In MMagic, you can complete controls on the training process with different levels of APIs. With the support of [MMSeparateDistributedDataParallel](https://github.com/open-mmlab/mmengine/blob/main/mmengine/model/wrappers/seperate_distributed.py), distributed training for dynamic architectures can be easily implemented.
+By using MMEngine and MMCV of OpenMMLab 2.0 framework, MMagic decompose the editing framework into different modules and one can easily construct a customized editor framework by combining different modules. We can define the training process just like playing with Legos and provide rich components and strategies. In MMagic, you can complete controls on the training process with different levels of APIs. With the support of [MMSeparateDistributedDataParallel](https://github.com/open-mmlab/mmengine/blob/main/mmengine/model/wrappers/seperate_distributed.py), distributed training for dynamic architectures can be easily implemented.

## Get started

8 changes: 4 additions & 4 deletions docs/zh_cn/changelog.md
@@ -8,7 +8,7 @@

Since its inception, MMEditing has been the preferred algorithm library for many image super-resolution, editing, and generation tasks, helping research teams win more than 10 top international competitions and supporting over 100 GitHub ecosystem projects. After iterative updates with the OpenMMLab 2.0 framework and the merger with MMGeneration, MMEditing has become a powerful tool supporting low-level vision algorithms based on both GAN and CNN.

-Today, MMEditing embraces the Diffusion Model and is officially renamed **MMagic** (**M**ultimodal **A**dvanced, **G**enerative, and **I**ntelligent **C**reation), transforming into a more advanced and comprehensive open-source AIGC library.
+Today, MMEditing embraces Generative AI and is officially renamed **MMagic** (**M**ultimodal **A**dvanced, **G**enerative, and **I**ntelligent **C**reation), committed to building a more advanced and comprehensive open-source AIGC library.

In MMagic, we have supported 53+ models across tasks such as fine-tuning for Stable Diffusion, text-to-image generation, image and video restoration, super-resolution, editing, and generation. With excellent training and experiment management support from [MMEngine](https://github.com/open-mmlab/mmengine), MMagic will provide more agile and flexible experimental support for researchers and AIGC enthusiasts, and help you on your AIGC exploration journey. With MMagic, experience more magic in generation! Let's open a new era beyond editing together. More than Editing, Unlock the Magic!

@@ -51,15 +51,15 @@ https://user-images.githubusercontent.com/49083766/233564593-7d3d48ed-e843-4432-

- Support video generation based on MultiFrame Render.
  MMagic supports generating long videos through ControlNet and multi-frame rendering.
-prompt key words: a handsome man, silver hair, smiling, play basketball
+prompt keywords: a handsome man, silver hair, smiling, play basketball

https://user-images.githubusercontent.com/12782558/227149757-fd054d32-554f-45d5-9f09-319184866d85.mp4

-prompt key words: a girl, black hair, white pants, smiling, play basketball
+prompt keywords: a girl, black hair, white pants, smiling, play basketball

https://user-images.githubusercontent.com/49083766/233559964-bd5127bd-52f6-44b6-a089-9d7adfbc2430.mp4

-prompt key words: a handsome man
+prompt keywords: a handsome man

https://user-images.githubusercontent.com/12782558/227152129-d70d5f76-a6fc-4d23-97d1-a94abd08f95a.mp4

