About e-ASPP #212

Open
MangekyoSasuke opened this issue Sep 24, 2023 · 4 comments

@MangekyoSasuke

Hello! I'd like to learn how e-ASPP was written. Is there any reference for it?

@munibkhanali

munibkhanali commented Sep 27, 2023

Hi @MangekyoSasuke,
The description of e-ASPP in the paper is not specific enough, but I have implemented it as far as I understood it from the paper.
I will call it LightEfficientASPP. The author of MODNet (@ZHKKKe) claims that e-ASPP has 1% of ASPP's parameters and 1% of its computational cost, but LightEfficientASPP has 1.8% of the parameters and computational cost.

It would be very kind of @ZHKKKe to comment on LightEfficientASPP.

import torch
import torch.nn as nn

# Conv2dIBNormRelu (Conv2d + IBNorm + ReLU) is assumed to be importable from
# MODNet's src/models/modnet.py.


class LightEfficientASPP(nn.Module):
    def __init__(self, in_channels, dilation_rates=[6, 12, 18], channel_reduction=4):
        super(LightEfficientASPP, self).__init__()

        out_channels = in_channels // channel_reduction

        # Channel reduction: a 1x1 conv that shrinks the input channels
        self.channel_reduction_conv = Conv2dIBNormRelu(
            in_channels, out_channels, kernel_size=1)

        c1_out = out_channels
        # Branches 2 and 3 reduce the channels twice more,
        # down to in_channels // channel_reduction**3
        c2_out = c1_out // channel_reduction
        c2_out = c2_out // channel_reduction

        # Branch 1: depth-wise atrous conv followed by a point-wise conv
        self.conv3x3_1 = nn.Sequential(
            Conv2dIBNormRelu(c1_out, c1_out, kernel_size=3, padding=dilation_rates[0],
                             dilation=dilation_rates[0], groups=c1_out),
            Conv2dIBNormRelu(c1_out, c1_out, kernel_size=1),
        )
        # Branches 2 and 3: grouped atrous convs with larger dilations,
        # each followed by a point-wise conv
        self.conv3x3_2 = nn.Sequential(
            Conv2dIBNormRelu(out_channels, c2_out, kernel_size=3, padding=dilation_rates[1],
                             dilation=dilation_rates[1], groups=c2_out),
            Conv2dIBNormRelu(c2_out, c2_out, kernel_size=1),
        )
        self.conv3x3_3 = nn.Sequential(
            Conv2dIBNormRelu(out_channels, c2_out, kernel_size=3, padding=dilation_rates[2],
                             dilation=dilation_rates[2], groups=c2_out),
            Conv2dIBNormRelu(c2_out, c2_out, kernel_size=1),
        )

        # Recover the original number of channels
        self.recover_channels = Conv2dIBNormRelu(
            c1_out + c2_out + c2_out, in_channels, kernel_size=1)

    def forward(self, x):
        reduced_features = self.channel_reduction_conv(x)
        conv3x3_1 = self.conv3x3_1(reduced_features)
        conv3x3_2 = self.conv3x3_2(reduced_features)
        conv3x3_3 = self.conv3x3_3(reduced_features)
        combined_features = torch.cat([conv3x3_1, conv3x3_2, conv3x3_3], dim=1)
        output = self.recover_channels(combined_features)
        return output
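
A quick smoke test for the module above (a sketch: it assumes MODNet's Conv2dIBNormRelu is importable, and that in_channels is divisible by channel_reduction**3 so the grouped convolutions and IBNorm channel splits stay valid):

# Sanity check: the forward pass should preserve spatial size and channel
# count, and the parameter count can be inspected directly.
module = LightEfficientASPP(in_channels=256, dilation_rates=[6, 12, 18])
x = torch.randn(1, 256, 32, 32)
y = module(x)
print(y.shape)  # expected: torch.Size([1, 256, 32, 32])
print(sum(p.numel() for p in module.parameters()))  # total learnable parameters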

Thank you

@MangekyoSasuke
Author

MangekyoSasuke commented Nov 15, 2023 via email

@vodatvan01

How is the performance?

@munibkhanali

Hi @vodatvan01,
The performance is pretty comparable.
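
To put a number on the cost side of that trade-off, here is a rough sketch for comparing parameter counts. It assumes torchvision's generic ASPP as the baseline, since e-ASPP is apparently not in the released MODNet code; the paper's 1% figure is relative to MODNet's own ASPP, so the exact ratio will differ:

import torch
from torchvision.models.segmentation.deeplabv3 import ASPP

in_channels = 256
baseline = ASPP(in_channels, atrous_rates=[6, 12, 18], out_channels=in_channels)
light = LightEfficientASPP(in_channels, dilation_rates=[6, 12, 18])

n_base = sum(p.numel() for p in baseline.parameters())
n_light = sum(p.numel() for p in light.parameters())
print(f"ASPP: {n_base} params, LightEfficientASPP: {n_light} params "
      f"({100 * n_light / n_base:.2f}%)")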
