
[Checkpointer] Loading from pretrained_models/resnet50d_ra2-464e36ba.pth fvcore.common.checkpoint WARNING: Some model parameters or buffers are not found in the checkpoint: #109

Open
116022017144 opened this issue Feb 20, 2023 · 0 comments

detectron2 INFO: Rank of current process: 0. World size: 1
[02/20 01:25:16] detectron2 INFO: Environment info:


sys.platform linux
Python 3.7.15 (default, Nov 24 2022, 21:12:53) [GCC 11.2.0]
numpy 1.21.5
detectron2 0.5 @/root/anaconda3/envs/sparseinst/lib/python3.7/site-packages/detectron2
Compiler GCC 7.3
CUDA compiler CUDA 11.0
detectron2 arch flags 3.7, 5.0, 5.2, 6.0, 6.1, 7.0, 7.5, 8.0
DETECTRON2_ENV_MODULE
PyTorch 1.7.1 @/root/anaconda3/envs/sparseinst/lib/python3.7/site-packages/torch
PyTorch debug build False
GPU available Yes
GPU 0 A100-SXM4-40GB (arch=8.0)
CUDA_HOME /usr/local/cuda-11.0
Pillow 9.2.0
torchvision 0.8.2 @/root/anaconda3/envs/sparseinst/lib/python3.7/site-packages/torchvision
torchvision arch flags 3.5, 5.0, 6.0, 7.0, 7.5, 8.0
fvcore 0.1.5.post20221122
iopath 0.1.8
cv2 4.6.0


PyTorch built with:

  • GCC 7.3
  • C++ Version: 201402
  • Intel(R) oneAPI Math Kernel Library Version 2021.4-Product Build 20210904 for Intel(R) 64 architecture applications
  • Intel(R) MKL-DNN v1.6.0 (Git Hash 5ef631a030a6f73131c77892041042805a06064f)
  • OpenMP 201511 (a.k.a. OpenMP 4.5)
  • NNPACK is enabled
  • CPU capability usage: AVX2
  • CUDA Runtime 11.0
  • NVCC architecture flags: -gencode;arch=compute_37,code=sm_37;-gencode;arch=compute_50,code=sm_50;-gencode;arch=compute_60,code=sm_60;-gencode;arch=compute_61,code=sm_61;-gencode;arch=compute_70,code=sm_70;-gencode;arch=compute_75,code=sm_75;-gencode;arch=compute_80,code=sm_80;-gencode;arch=compute_37,code=compute_37
  • CuDNN 8.0.5
  • Magma 2.5.2
  • Build settings: BLAS=MKL, BUILD_TYPE=Release, CXX_FLAGS= -Wno-deprecated -fvisibility-inlines-hidden -DUSE_PTHREADPOOL -fopenmp -DNDEBUG -DUSE_FBGEMM -DUSE_QNNPACK -DUSE_PYTORCH_QNNPACK -DUSE_XNNPACK -DUSE_VULKAN_WRAPPER -O2 -fPIC -Wno-narrowing -Wall -Wextra -Werror=return-type -Wno-missing-field-initializers -Wno-type-limits -Wno-array-bounds -Wno-unknown-pragmas -Wno-sign-compare -Wno-unused-parameter -Wno-unused-variable -Wno-unused-function -Wno-unused-result -Wno-unused-local-typedefs -Wno-strict-overflow -Wno-strict-aliasing -Wno-error=deprecated-declarations -Wno-stringop-overflow -Wno-psabi -Wno-error=pedantic -Wno-error=redundant-decls -Wno-error=old-style-cast -fdiagnostics-color=always -faligned-new -Wno-unused-but-set-variable -Wno-maybe-uninitialized -fno-math-errno -fno-trapping-math -Werror=format -Wno-stringop-overflow, PERF_WITH_AVX=1, PERF_WITH_AVX2=1, PERF_WITH_AVX512=1, USE_CUDA=ON, USE_EXCEPTION_PTR=1, USE_GFLAGS=OFF, USE_GLOG=OFF, USE_MKL=ON, USE_MKLDNN=ON, USE_MPI=OFF, USE_NCCL=ON, USE_NNPACK=ON, USE_OPENMP=ON,

[02/20 01:25:16] detectron2 INFO: Command line arguments: Namespace(config_file='configs/sparse_inst_r50vd_dcn_giam_aug_1.yaml', dist_url='tcp://127.0.0.1:49152', eval_only=False, machine_rank=0, num_gpus=1, num_machines=1, opts=[], resume=False)
[02/20 01:25:16] detectron2 INFO: Contents of args.config_file=configs/sparse_inst_r50vd_dcn_giam_aug_1.yaml:
_BASE_: "Base-SparseInst_1.yaml"
MODEL:
  WEIGHTS: "pretrained_models/resnet50d_ra2-464e36ba.pth"
  BACKBONE:
    FREEZE_AT: 0
    NAME: "build_resnet_vd_backbone"
  RESNETS:
    DEFORM_ON_PER_STAGE: [False, False, True, True] # dcn on res4, res5
INPUT:
  CROP:
    ENABLED: True
    TYPE: "absolute_range"
    SIZE: (384, 600)
  MASK_FORMAT: "polygon"
OUTPUT_DIR: "output/sparse_inst_r50vd_dcn_giam_aug"

[02/20 01:25:16] detectron2 INFO: Running with full config:
CUDNN_BENCHMARK: false
DATALOADER:
  ASPECT_RATIO_GROUPING: true
  FILTER_EMPTY_ANNOTATIONS: true
  NUM_WORKERS: 6
  REPEAT_THRESHOLD: 0.0
  SAMPLER_TRAIN: TrainingSampler
DATASETS:
  PRECOMPUTED_PROPOSAL_TOPK_TEST: 1000
  PRECOMPUTED_PROPOSAL_TOPK_TRAIN: 2000
  PROPOSAL_FILES_TEST: []
  PROPOSAL_FILES_TRAIN: []
  TEST:
  - coco_2017_val
  TRAIN:
  - coco_2017_train
GLOBAL:
  HACK: 1.0
INPUT:
  CROP:
    ENABLED: true
    SIZE:
    - 384
    - 600
    TYPE: absolute_range
  FORMAT: RGB
  MASK_FORMAT: polygon
  MAX_SIZE_TEST: 853
  MAX_SIZE_TRAIN: 853
  MIN_SIZE_TEST: 640
  MIN_SIZE_TRAIN:
  - 416
  - 448
  - 480
  - 512
  - 544
  - 576
  - 608
  - 640
  MIN_SIZE_TRAIN_SAMPLING: choice
  RANDOM_FLIP: horizontal
MODEL:
  ANCHOR_GENERATOR:
    ANGLES:
    - - -90
      - 0
      - 90
    ASPECT_RATIOS:
    - - 0.5
      - 1.0
      - 2.0
    NAME: DefaultAnchorGenerator
    OFFSET: 0.0
    SIZES:
    - - 32
      - 64
      - 128
      - 256
      - 512
  BACKBONE:
    FREEZE_AT: 0
    NAME: build_resnet_vd_backbone
  CSPNET:
    NAME: darknet53
    NORM: ''
    OUT_FEATURES:
    - csp1
    - csp2
    - csp3
    - csp4
  DEVICE: cuda
  FPN:
    FUSE_TYPE: sum
    IN_FEATURES: []
    NORM: ''
    OUT_CHANNELS: 256
  KEYPOINT_ON: false
  LOAD_PROPOSALS: false
  MASK_ON: true
  META_ARCHITECTURE: SparseInst
  PANOPTIC_FPN:
    COMBINE:
      ENABLED: true
      INSTANCES_CONFIDENCE_THRESH: 0.5
      OVERLAP_THRESH: 0.5
      STUFF_AREA_LIMIT: 4096
    INSTANCE_LOSS_WEIGHT: 1.0
  PIXEL_MEAN:
  - 123.675
  - 116.28
  - 103.53
  PIXEL_STD:
  - 58.395
  - 57.12
  - 57.375
  PROPOSAL_GENERATOR:
    MIN_SIZE: 0
    NAME: RPN
  PVT:
    LINEAR: false
    NAME: b1
    OUT_FEATURES:
    - p2
    - p3
    - p4
  RESNETS:
    DEFORM_MODULATED: false
    DEFORM_NUM_GROUPS: 1
    DEFORM_ON_PER_STAGE:
    - false
    - false
    - true
    - true
    DEPTH: 50
    NORM: FrozenBN
    NUM_GROUPS: 1
    OUT_FEATURES:
    - res3
    - res4
    - res5
    RES2_OUT_CHANNELS: 256
    RES5_DILATION: 1
    STEM_OUT_CHANNELS: 64
    STRIDE_IN_1X1: false
    WIDTH_PER_GROUP: 64
  RETINANET:
    BBOX_REG_LOSS_TYPE: smooth_l1
    BBOX_REG_WEIGHTS: &id001
    - 1.0
    - 1.0
    - 1.0
    - 1.0
    FOCAL_LOSS_ALPHA: 0.25
    FOCAL_LOSS_GAMMA: 2.0
    IN_FEATURES:
    - p3
    - p4
    - p5
    - p6
    - p7
    IOU_LABELS:
    - 0
    - -1
    - 1
    IOU_THRESHOLDS:
    - 0.4
    - 0.5
    NMS_THRESH_TEST: 0.5
    NORM: ''
    NUM_CLASSES: 80
    NUM_CONVS: 4
    PRIOR_PROB: 0.01
    SCORE_THRESH_TEST: 0.05
    SMOOTH_L1_LOSS_BETA: 0.1
    TOPK_CANDIDATES_TEST: 1000
  ROI_BOX_CASCADE_HEAD:
    BBOX_REG_WEIGHTS:
    - - 10.0
      - 10.0
      - 5.0
      - 5.0
    - - 20.0
      - 20.0
      - 10.0
      - 10.0
    - - 30.0
      - 30.0
      - 15.0
      - 15.0
    IOUS:
    - 0.5
    - 0.6
    - 0.7
  ROI_BOX_HEAD:
    BBOX_REG_LOSS_TYPE: smooth_l1
    BBOX_REG_LOSS_WEIGHT: 1.0
    BBOX_REG_WEIGHTS:
    - 10.0
    - 10.0
    - 5.0
    - 5.0
    CLS_AGNOSTIC_BBOX_REG: false
    CONV_DIM: 256
    FC_DIM: 1024
    NAME: ''
    NORM: ''
    NUM_CONV: 0
    NUM_FC: 0
    POOLER_RESOLUTION: 14
    POOLER_SAMPLING_RATIO: 0
    POOLER_TYPE: ROIAlignV2
    SMOOTH_L1_BETA: 0.0
    TRAIN_ON_PRED_BOXES: false
  ROI_HEADS:
    BATCH_SIZE_PER_IMAGE: 512
    IN_FEATURES:
    - res4
    IOU_LABELS:
    - 0
    - 1
    IOU_THRESHOLDS:
    - 0.5
    NAME: Res5ROIHeads
    NMS_THRESH_TEST: 0.5
    NUM_CLASSES: 80
    POSITIVE_FRACTION: 0.25
    PROPOSAL_APPEND_GT: true
    SCORE_THRESH_TEST: 0.05
  ROI_KEYPOINT_HEAD:
    CONV_DIMS:
    - 512
    - 512
    - 512
    - 512
    - 512
    - 512
    - 512
    - 512
    LOSS_WEIGHT: 1.0
    MIN_KEYPOINTS_PER_IMAGE: 1
    NAME: KRCNNConvDeconvUpsampleHead
    NORMALIZE_LOSS_BY_VISIBLE_KEYPOINTS: true
    NUM_KEYPOINTS: 17
    POOLER_RESOLUTION: 14
    POOLER_SAMPLING_RATIO: 0
    POOLER_TYPE: ROIAlignV2
  ROI_MASK_HEAD:
    CLS_AGNOSTIC_MASK: false
    CONV_DIM: 256
    NAME: MaskRCNNConvUpsampleHead
    NORM: ''
    NUM_CONV: 0
    POOLER_RESOLUTION: 14
    POOLER_SAMPLING_RATIO: 0
    POOLER_TYPE: ROIAlignV2
  RPN:
    BATCH_SIZE_PER_IMAGE: 256
    BBOX_REG_LOSS_TYPE: smooth_l1
    BBOX_REG_LOSS_WEIGHT: 1.0
    BBOX_REG_WEIGHTS: *id001
    BOUNDARY_THRESH: -1
    CONV_DIMS:
    - -1
    HEAD_NAME: StandardRPNHead
    IN_FEATURES:
    - res4
    IOU_LABELS:
    - 0
    - -1
    - 1
    IOU_THRESHOLDS:
    - 0.3
    - 0.7
    LOSS_WEIGHT: 1.0
    NMS_THRESH: 0.7
    POSITIVE_FRACTION: 0.5
    POST_NMS_TOPK_TEST: 1000
    POST_NMS_TOPK_TRAIN: 2000
    PRE_NMS_TOPK_TEST: 6000
    PRE_NMS_TOPK_TRAIN: 12000
    SMOOTH_L1_BETA: 0.0
  SEM_SEG_HEAD:
    COMMON_STRIDE: 4
    CONVS_DIM: 128
    IGNORE_VALUE: 255
    IN_FEATURES:
    - p2
    - p3
    - p4
    - p5
    LOSS_WEIGHT: 1.0
    NAME: SemSegFPNHead
    NORM: GN
    NUM_CLASSES: 54
  SPARSE_INST:
    CLS_THRESHOLD: 0.005
    DATASET_MAPPER: SparseInstDatasetMapper
    DECODER:
      GROUPS: 4
      INST:
        CONVS: 4
        DIM: 256
        KERNEL_DIM: 128
      MASK:
        CONVS: 4
        DIM: 256
      NAME: GroupIAMDecoder
      NUM_CLASSES: 80
      NUM_MASKS: 100
      OUTPUT_IAM: false
      SCALE_FACTOR: 2.0
    ENCODER:
      IN_FEATURES:
      - res3
      - res4
      - res5
      NAME: InstanceContextEncoder
      NORM: ''
      NUM_CHANNELS: 256
    LOSS:
      CLASS_WEIGHT: 2.0
      ITEMS:
      - labels
      - masks
      MASK_DICE_WEIGHT: 2.0
      MASK_PIXEL_WEIGHT: 5.0
      NAME: SparseInstCriterion
      OBJECTNESS_WEIGHT: 1.0
    MASK_THRESHOLD: 0.45
    MATCHER:
      ALPHA: 0.8
      BETA: 0.2
      NAME: SparseInstMatcher
    MAX_DETECTIONS: 100
  WEIGHTS: pretrained_models/resnet50d_ra2-464e36ba.pth
OUTPUT_DIR: output/sparse_inst_r50vd_dcn_giam_aug
SEED: -1
SOLVER:
  AMP:
    ENABLED: false
  AMSGRAD: false
  BACKBONE_MULTIPLIER: 1.0
  BASE_LR: 5.0e-05
  BIAS_LR_FACTOR: 1.0
  CHECKPOINT_PERIOD: 5000
  CLIP_GRADIENTS:
    CLIP_TYPE: value
    CLIP_VALUE: 1.0
    ENABLED: false
    NORM_TYPE: 2.0
  GAMMA: 0.1
  IMS_PER_BATCH: 32
  LR_SCHEDULER_NAME: WarmupMultiStepLR
  MAX_ITER: 270000
  MOMENTUM: 0.9
  NESTEROV: false
  OPTIMIZER: ADAMW
  REFERENCE_WORLD_SIZE: 0
  STEPS:
  - 210000
  - 250000
  WARMUP_FACTOR: 0.001
  WARMUP_ITERS: 1000
  WARMUP_METHOD: linear
  WEIGHT_DECAY: 0.0001
  WEIGHT_DECAY_BIAS: 0.0001
  WEIGHT_DECAY_NORM: 0.0
TEST:
  AUG:
    ENABLED: false
    FLIP: true
    MAX_SIZE: 4000
    MIN_SIZES:
    - 400
    - 500
    - 600
    - 700
    - 800
    - 900
    - 1000
    - 1100
    - 1200
  DETECTIONS_PER_IMAGE: 100
  EVAL_PERIOD: 7330
  EXPECTED_RESULTS: []
  KEYPOINT_OKS_SIGMAS: []
  PRECISE_BN:
    ENABLED: false
    NUM_ITER: 200
VERSION: 2
VIS_PERIOD: 0

[02/20 01:25:16] detectron2 INFO: Full config saved to output/sparse_inst_r50vd_dcn_giam_aug/config.yaml
[02/20 01:25:16] d2.utils.env INFO: Using a generated random seed 16543652
[02/20 01:25:20] d2.engine.defaults INFO: Model:
SparseInst(
  (backbone): ResNet(
    (conv1): Sequential(
      (0): Conv2d(3, 32, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
      (1): FrozenBatchNorm2d(num_features=32, eps=1e-05)
      (2): ReLU(inplace=True)
      (3): Conv2d(32, 32, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
      (4): FrozenBatchNorm2d(num_features=32, eps=1e-05)
      (5): ReLU(inplace=True)
      (6): Conv2d(32, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
    )
    (bn1): FrozenBatchNorm2d(num_features=64, eps=1e-05)
    (act1): ReLU(inplace=True)
    (maxpool): MaxPool2d(kernel_size=3, stride=2, padding=1, dilation=1, ceil_mode=False)
    (layer1): Sequential(
      (0): Bottleneck(
        (conv1): Conv2d(64, 64, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (bn1): FrozenBatchNorm2d(num_features=64, eps=1e-05)
        (act1): ReLU(inplace=True)
        (conv2): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
        (bn2): FrozenBatchNorm2d(num_features=64, eps=1e-05)
        (drop_block): Identity()
        (act2): ReLU(inplace=True)
        (aa): Identity()
        (conv3): Conv2d(64, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (bn3): FrozenBatchNorm2d(num_features=256, eps=1e-05)
        (act3): ReLU(inplace=True)
        (downsample): Sequential(
          (0): Identity()
          (1): Conv2d(64, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
          (2): FrozenBatchNorm2d(num_features=256, eps=1e-05)
        )
      )
      (1): Bottleneck(
        (conv1): Conv2d(256, 64, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (bn1): FrozenBatchNorm2d(num_features=64, eps=1e-05)
        (act1): ReLU(inplace=True)
        (conv2): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
        (bn2): FrozenBatchNorm2d(num_features=64, eps=1e-05)
        (drop_block): Identity()
        (act2): ReLU(inplace=True)
        (aa): Identity()
        (conv3): Conv2d(64, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (bn3): FrozenBatchNorm2d(num_features=256, eps=1e-05)
        (act3): ReLU(inplace=True)
      )
      (2): Bottleneck(
        (conv1): Conv2d(256, 64, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (bn1): FrozenBatchNorm2d(num_features=64, eps=1e-05)
        (act1): ReLU(inplace=True)
        (conv2): Conv2d(64, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
        (bn2): FrozenBatchNorm2d(num_features=64, eps=1e-05)
        (drop_block): Identity()
        (act2): ReLU(inplace=True)
        (aa): Identity()
        (conv3): Conv2d(64, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (bn3): FrozenBatchNorm2d(num_features=256, eps=1e-05)
        (act3): ReLU(inplace=True)
      )
    )
    (layer2): Sequential(
      (0): Bottleneck(
        (conv1): Conv2d(256, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (bn1): FrozenBatchNorm2d(num_features=128, eps=1e-05)
        (act1): ReLU(inplace=True)
        (conv2): Conv2d(128, 128, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)
        (bn2): FrozenBatchNorm2d(num_features=128, eps=1e-05)
        (drop_block): Identity()
        (act2): ReLU(inplace=True)
        (aa): Identity()
        (conv3): Conv2d(128, 512, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (bn3): FrozenBatchNorm2d(num_features=512, eps=1e-05)
        (act3): ReLU(inplace=True)
        (downsample): Sequential(
          (0): AvgPool2d(kernel_size=2, stride=2, padding=0)
          (1): Conv2d(256, 512, kernel_size=(1, 1), stride=(1, 1), bias=False)
          (2): FrozenBatchNorm2d(num_features=512, eps=1e-05)
        )
      )
      (1): Bottleneck(
        (conv1): Conv2d(512, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (bn1): FrozenBatchNorm2d(num_features=128, eps=1e-05)
        (act1): ReLU(inplace=True)
        (conv2): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
        (bn2): FrozenBatchNorm2d(num_features=128, eps=1e-05)
        (drop_block): Identity()
        (act2): ReLU(inplace=True)
        (aa): Identity()
        (conv3): Conv2d(128, 512, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (bn3): FrozenBatchNorm2d(num_features=512, eps=1e-05)
        (act3): ReLU(inplace=True)
      )
      (2): Bottleneck(
        (conv1): Conv2d(512, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (bn1): FrozenBatchNorm2d(num_features=128, eps=1e-05)
        (act1): ReLU(inplace=True)
        (conv2): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
        (bn2): FrozenBatchNorm2d(num_features=128, eps=1e-05)
        (drop_block): Identity()
        (act2): ReLU(inplace=True)
        (aa): Identity()
        (conv3): Conv2d(128, 512, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (bn3): FrozenBatchNorm2d(num_features=512, eps=1e-05)
        (act3): ReLU(inplace=True)
      )
      (3): Bottleneck(
        (conv1): Conv2d(512, 128, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (bn1): FrozenBatchNorm2d(num_features=128, eps=1e-05)
        (act1): ReLU(inplace=True)
        (conv2): Conv2d(128, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), bias=False)
        (bn2): FrozenBatchNorm2d(num_features=128, eps=1e-05)
        (drop_block): Identity()
        (act2): ReLU(inplace=True)
        (aa): Identity()
        (conv3): Conv2d(128, 512, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (bn3): FrozenBatchNorm2d(num_features=512, eps=1e-05)
        (act3): ReLU(inplace=True)
      )
    )
    (layer3): Sequential(
      (0): DeformableBottleneck(
        (conv1): Conv2d(512, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (bn1): FrozenBatchNorm2d(num_features=256, eps=1e-05)
        (act1): ReLU(inplace=True)
        (conv2_offset): Conv2d(256, 18, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1))
        (conv2): DeformConv(in_channels=256, out_channels=256, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), dilation=(1, 1), groups=1, deformable_groups=1, bias=False)
        (bn2): FrozenBatchNorm2d(num_features=256, eps=1e-05)
        (act2): ReLU(inplace=True)
        (conv3): Conv2d(256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (bn3): FrozenBatchNorm2d(num_features=1024, eps=1e-05)
        (act3): ReLU(inplace=True)
        (downsample): Sequential(
          (0): AvgPool2d(kernel_size=2, stride=2, padding=0)
          (1): Conv2d(512, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False)
          (2): FrozenBatchNorm2d(num_features=1024, eps=1e-05)
        )
      )
      (1): DeformableBottleneck(
        (conv1): Conv2d(1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (bn1): FrozenBatchNorm2d(num_features=256, eps=1e-05)
        (act1): ReLU(inplace=True)
        (conv2_offset): Conv2d(256, 18, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
        (conv2): DeformConv(in_channels=256, out_channels=256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), dilation=(1, 1), groups=1, deformable_groups=1, bias=False)
        (bn2): FrozenBatchNorm2d(num_features=256, eps=1e-05)
        (act2): ReLU(inplace=True)
        (conv3): Conv2d(256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (bn3): FrozenBatchNorm2d(num_features=1024, eps=1e-05)
        (act3): ReLU(inplace=True)
      )
      (2): DeformableBottleneck(
        (conv1): Conv2d(1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (bn1): FrozenBatchNorm2d(num_features=256, eps=1e-05)
        (act1): ReLU(inplace=True)
        (conv2_offset): Conv2d(256, 18, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
        (conv2): DeformConv(in_channels=256, out_channels=256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), dilation=(1, 1), groups=1, deformable_groups=1, bias=False)
        (bn2): FrozenBatchNorm2d(num_features=256, eps=1e-05)
        (act2): ReLU(inplace=True)
        (conv3): Conv2d(256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (bn3): FrozenBatchNorm2d(num_features=1024, eps=1e-05)
        (act3): ReLU(inplace=True)
      )
      (3): DeformableBottleneck(
        (conv1): Conv2d(1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (bn1): FrozenBatchNorm2d(num_features=256, eps=1e-05)
        (act1): ReLU(inplace=True)
        (conv2_offset): Conv2d(256, 18, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
        (conv2): DeformConv(in_channels=256, out_channels=256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), dilation=(1, 1), groups=1, deformable_groups=1, bias=False)
        (bn2): FrozenBatchNorm2d(num_features=256, eps=1e-05)
        (act2): ReLU(inplace=True)
        (conv3): Conv2d(256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (bn3): FrozenBatchNorm2d(num_features=1024, eps=1e-05)
        (act3): ReLU(inplace=True)
      )
      (4): DeformableBottleneck(
        (conv1): Conv2d(1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (bn1): FrozenBatchNorm2d(num_features=256, eps=1e-05)
        (act1): ReLU(inplace=True)
        (conv2_offset): Conv2d(256, 18, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
        (conv2): DeformConv(in_channels=256, out_channels=256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), dilation=(1, 1), groups=1, deformable_groups=1, bias=False)
        (bn2): FrozenBatchNorm2d(num_features=256, eps=1e-05)
        (act2): ReLU(inplace=True)
        (conv3): Conv2d(256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (bn3): FrozenBatchNorm2d(num_features=1024, eps=1e-05)
        (act3): ReLU(inplace=True)
      )
      (5): DeformableBottleneck(
        (conv1): Conv2d(1024, 256, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (bn1): FrozenBatchNorm2d(num_features=256, eps=1e-05)
        (act1): ReLU(inplace=True)
        (conv2_offset): Conv2d(256, 18, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
        (conv2): DeformConv(in_channels=256, out_channels=256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), dilation=(1, 1), groups=1, deformable_groups=1, bias=False)
        (bn2): FrozenBatchNorm2d(num_features=256, eps=1e-05)
        (act2): ReLU(inplace=True)
        (conv3): Conv2d(256, 1024, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (bn3): FrozenBatchNorm2d(num_features=1024, eps=1e-05)
        (act3): ReLU(inplace=True)
      )
    )
    (layer4): Sequential(
      (0): DeformableBottleneck(
        (conv1): Conv2d(1024, 512, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (bn1): FrozenBatchNorm2d(num_features=512, eps=1e-05)
        (act1): ReLU(inplace=True)
        (conv2_offset): Conv2d(512, 18, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1))
        (conv2): DeformConv(in_channels=512, out_channels=512, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), dilation=(1, 1), groups=1, deformable_groups=1, bias=False)
        (bn2): FrozenBatchNorm2d(num_features=512, eps=1e-05)
        (act2): ReLU(inplace=True)
        (conv3): Conv2d(512, 2048, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (bn3): FrozenBatchNorm2d(num_features=2048, eps=1e-05)
        (act3): ReLU(inplace=True)
        (downsample): Sequential(
          (0): AvgPool2d(kernel_size=2, stride=2, padding=0)
          (1): Conv2d(1024, 2048, kernel_size=(1, 1), stride=(1, 1), bias=False)
          (2): FrozenBatchNorm2d(num_features=2048, eps=1e-05)
        )
      )
      (1): DeformableBottleneck(
        (conv1): Conv2d(2048, 512, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (bn1): FrozenBatchNorm2d(num_features=512, eps=1e-05)
        (act1): ReLU(inplace=True)
        (conv2_offset): Conv2d(512, 18, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
        (conv2): DeformConv(in_channels=512, out_channels=512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), dilation=(1, 1), groups=1, deformable_groups=1, bias=False)
        (bn2): FrozenBatchNorm2d(num_features=512, eps=1e-05)
        (act2): ReLU(inplace=True)
        (conv3): Conv2d(512, 2048, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (bn3): FrozenBatchNorm2d(num_features=2048, eps=1e-05)
        (act3): ReLU(inplace=True)
      )
      (2): DeformableBottleneck(
        (conv1): Conv2d(2048, 512, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (bn1): FrozenBatchNorm2d(num_features=512, eps=1e-05)
        (act1): ReLU(inplace=True)
        (conv2_offset): Conv2d(512, 18, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
        (conv2): DeformConv(in_channels=512, out_channels=512, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), dilation=(1, 1), groups=1, deformable_groups=1, bias=False)
        (bn2): FrozenBatchNorm2d(num_features=512, eps=1e-05)
        (act2): ReLU(inplace=True)
        (conv3): Conv2d(512, 2048, kernel_size=(1, 1), stride=(1, 1), bias=False)
        (bn3): FrozenBatchNorm2d(num_features=2048, eps=1e-05)
        (act3): ReLU(inplace=True)
      )
    )
  )
  (encoder): InstanceContextEncoder(
    (fpn_laterals): ModuleList(
      (0): Conv2d(2048, 256, kernel_size=(1, 1), stride=(1, 1))
      (1): Conv2d(1024, 256, kernel_size=(1, 1), stride=(1, 1))
      (2): Conv2d(512, 256, kernel_size=(1, 1), stride=(1, 1))
    )
    (fpn_outputs): ModuleList(
      (0): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
      (1): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
      (2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
    )
    (ppm): PyramidPoolingModule(
      (stages): ModuleList(
        (0): Sequential(
          (0): AdaptiveAvgPool2d(output_size=(1, 1))
          (1): Conv2d(256, 64, kernel_size=(1, 1), stride=(1, 1))
        )
        (1): Sequential(
          (0): AdaptiveAvgPool2d(output_size=(2, 2))
          (1): Conv2d(256, 64, kernel_size=(1, 1), stride=(1, 1))
        )
        (2): Sequential(
          (0): AdaptiveAvgPool2d(output_size=(3, 3))
          (1): Conv2d(256, 64, kernel_size=(1, 1), stride=(1, 1))
        )
        (3): Sequential(
          (0): AdaptiveAvgPool2d(output_size=(6, 6))
          (1): Conv2d(256, 64, kernel_size=(1, 1), stride=(1, 1))
        )
      )
      (bottleneck): Conv2d(512, 256, kernel_size=(1, 1), stride=(1, 1))
    )
    (fusion): Conv2d(768, 256, kernel_size=(1, 1), stride=(1, 1))
  )
  (decoder): GroupIAMDecoder(
    (inst_branch): GroupInstanceBranch(
      (inst_convs): Sequential(
        (0): Conv2d(258, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
        (1): ReLU(inplace=True)
        (2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
        (3): ReLU(inplace=True)
        (4): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
        (5): ReLU(inplace=True)
        (6): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
        (7): ReLU(inplace=True)
      )
      (iam_conv): Conv2d(256, 400, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1), groups=4)
      (fc): Linear(in_features=1024, out_features=1024, bias=True)
      (cls_score): Linear(in_features=1024, out_features=80, bias=True)
      (mask_kernel): Linear(in_features=1024, out_features=128, bias=True)
      (objectness): Linear(in_features=1024, out_features=1, bias=True)
    )
    (mask_branch): MaskBranch(
      (mask_convs): Sequential(
        (0): Conv2d(258, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
        (1): ReLU(inplace=True)
        (2): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
        (3): ReLU(inplace=True)
        (4): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
        (5): ReLU(inplace=True)
        (6): Conv2d(256, 256, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
        (7): ReLU(inplace=True)
      )
      (projection): Conv2d(256, 128, kernel_size=(1, 1), stride=(1, 1))
    )
  )
  (criterion): SparseInstCriterion(
    (matcher): SparseInstMatcher()
  )
)
[02/20 01:25:20] sparseinst.dataset_mapper INFO: [DatasetMapper] Augmentations used in training: [RandomFlip(), ResizeShortestEdge(short_edge_length=[400, 500, 600], sample_style='choice'), RandomCrop(crop_type='absolute_range', crop_size=[384, 600]), ResizeShortestEdge(short_edge_length=(416, 448, 480, 512, 544, 576, 608, 640), max_size=853, sample_style='choice')]
[02/20 01:25:36] d2.data.datasets.coco INFO: Loading datasets/coco/annotations/instances_train2017.json takes 16.17 seconds.
[02/20 01:25:37] d2.data.datasets.coco INFO: Loaded 118287 images in COCO format from datasets/coco/annotations/instances_train2017.json
[02/20 01:25:45] d2.data.build INFO: Removed 1021 images with no usable annotations. 117266 images left.
[02/20 01:25:51] d2.data.build INFO: Distribution of instances among all 80 categories:
| category | #instances | category | #instances | category | #instances |
|:-------------:|:-------------|:------------:|:-------------|:-------------:|:-------------|
| person | 257253 | bicycle | 7056 | car | 43533 |
| motorcycle | 8654 | airplane | 5129 | bus | 6061 |
| train | 4570 | truck | 9970 | boat | 10576 |
| traffic light | 12842 | fire hydrant | 1865 | stop sign | 1983 |
| parking meter | 1283 | bench | 9820 | bird | 10542 |
| cat | 4766 | dog | 5500 | horse | 6567 |
| sheep | 9223 | cow | 8014 | elephant | 5484 |
| bear | 1294 | zebra | 5269 | giraffe | 5128 |
| backpack | 8714 | umbrella | 11265 | handbag | 12342 |
| tie | 6448 | suitcase | 6112 | frisbee | 2681 |
| skis | 6623 | snowboard | 2681 | sports ball | 6299 |
| kite | 8802 | baseball bat | 3273 | baseball gl.. | 3747 |
| skateboard | 5536 | surfboard | 6095 | tennis racket | 4807 |
| bottle | 24070 | wine glass | 7839 | cup | 20574 |
| fork | 5474 | knife | 7760 | spoon | 6159 |
| bowl | 14323 | banana | 9195 | apple | 5776 |
| sandwich | 4356 | orange | 6302 | broccoli | 7261 |
| carrot | 7758 | hot dog | 2884 | pizza | 5807 |
| donut | 7005 | cake | 6296 | chair | 38073 |
| couch | 5779 | potted plant | 8631 | bed | 4192 |
| dining table | 15695 | toilet | 4149 | tv | 5803 |
| laptop | 4960 | mouse | 2261 | remote | 5700 |
| keyboard | 2854 | cell phone | 6422 | microwave | 1672 |
| oven | 3334 | toaster | 225 | sink | 5609 |
| refrigerator | 2634 | book | 24077 | clock | 6320 |
| vase | 6577 | scissors | 1464 | teddy bear | 4729 |
| hair drier | 198 | toothbrush | 1945 | | |
| total | 849949 | | | | |
[02/20 01:25:51] d2.data.build INFO: Using training sampler TrainingSampler
[02/20 01:25:51] d2.data.common INFO: Serializing 117266 elements to byte tensors and concatenating them all ...
[02/20 01:25:55] d2.data.common INFO: Serialized dataset takes 451.21 MiB
[02/20 01:25:58] fvcore.common.checkpoint INFO: [Checkpointer] Loading from pretrained_models/resnet50d_ra2-464e36ba.pth ...
[02/20 01:25:59] fvcore.common.checkpoint WARNING: Some model parameters or buffers are not found in the checkpoint:
backbone.bn1.{bias, weight}
backbone.conv1.0.weight
backbone.conv1.1.{bias, weight}
backbone.conv1.3.weight
backbone.conv1.4.{bias, weight}
backbone.conv1.6.weight
backbone.layer1.0.bn1.{bias, weight}
backbone.layer1.0.bn2.{bias, weight}
backbone.layer1.0.bn3.{bias, weight}
backbone.layer1.0.conv1.weight
backbone.layer1.0.conv2.weight
backbone.layer1.0.conv3.weight
backbone.layer1.0.downsample.1.weight
backbone.layer1.0.downsample.2.{bias, weight}
backbone.layer1.1.bn1.{bias, weight}
backbone.layer1.1.bn2.{bias, weight}
backbone.layer1.1.bn3.{bias, weight}
backbone.layer1.1.conv1.weight
backbone.layer1.1.conv2.weight
backbone.layer1.1.conv3.weight
backbone.layer1.2.bn1.{bias, weight}
backbone.layer1.2.bn2.{bias, weight}
backbone.layer1.2.bn3.{bias, weight}
backbone.layer1.2.conv1.weight
backbone.layer1.2.conv2.weight
backbone.layer1.2.conv3.weight
backbone.layer2.0.bn1.{bias, weight}
backbone.layer2.0.bn2.{bias, weight}
backbone.layer2.0.bn3.{bias, weight}
backbone.layer2.0.conv1.weight
backbone.layer2.0.conv2.weight
backbone.layer2.0.conv3.weight
backbone.layer2.0.downsample.1.weight
backbone.layer2.0.downsample.2.{bias, weight}
backbone.layer2.1.bn1.{bias, weight}
backbone.layer2.1.bn2.{bias, weight}
backbone.layer2.1.bn3.{bias, weight}
backbone.layer2.1.conv1.weight
backbone.layer2.1.conv2.weight
backbone.layer2.1.conv3.weight
backbone.layer2.2.bn1.{bias, weight}
backbone.layer2.2.bn2.{bias, weight}
backbone.layer2.2.bn3.{bias, weight}
backbone.layer2.2.conv1.weight
backbone.layer2.2.conv2.weight
backbone.layer2.2.conv3.weight
backbone.layer2.3.bn1.{bias, weight}
backbone.layer2.3.bn2.{bias, weight}
backbone.layer2.3.bn3.{bias, weight}
backbone.layer2.3.conv1.weight
backbone.layer2.3.conv2.weight
backbone.layer2.3.conv3.weight
backbone.layer3.0.bn1.{bias, weight}
backbone.layer3.0.bn2.{bias, weight}
backbone.layer3.0.bn3.{bias, weight}
backbone.layer3.0.conv1.weight
backbone.layer3.0.conv2.weight
backbone.layer3.0.conv2_offset.{bias, weight}
backbone.layer3.0.conv3.weight
backbone.layer3.0.downsample.1.weight
backbone.layer3.0.downsample.2.{bias, weight}
backbone.layer3.1.bn1.{bias, weight}
backbone.layer3.1.bn2.{bias, weight}
backbone.layer3.1.bn3.{bias, weight}
backbone.layer3.1.conv1.weight
backbone.layer3.1.conv2.weight
backbone.layer3.1.conv2_offset.{bias, weight}
backbone.layer3.1.conv3.weight
backbone.layer3.2.bn1.{bias, weight}
backbone.layer3.2.bn2.{bias, weight}
backbone.layer3.2.bn3.{bias, weight}
backbone.layer3.2.conv1.weight
backbone.layer3.2.conv2.weight
backbone.layer3.2.conv2_offset.{bias, weight}
backbone.layer3.2.conv3.weight
backbone.layer3.3.bn1.{bias, weight}
backbone.layer3.3.bn2.{bias, weight}
backbone.layer3.3.bn3.{bias, weight}
backbone.layer3.3.conv1.weight
backbone.layer3.3.conv2.weight
backbone.layer3.3.conv2_offset.{bias, weight}
backbone.layer3.3.conv3.weight
backbone.layer3.4.bn1.{bias, weight}
backbone.layer3.4.bn2.{bias, weight}
backbone.layer3.4.bn3.{bias, weight}
backbone.layer3.4.conv1.weight
backbone.layer3.4.conv2.weight
backbone.layer3.4.conv2_offset.{bias, weight}
backbone.layer3.4.conv3.weight
backbone.layer3.5.bn1.{bias, weight}
backbone.layer3.5.bn2.{bias, weight}
backbone.layer3.5.bn3.{bias, weight}
backbone.layer3.5.conv1.weight
backbone.layer3.5.conv2.weight
backbone.layer3.5.conv2_offset.{bias, weight}
backbone.layer3.5.conv3.weight
backbone.layer4.0.bn1.{bias, weight}
backbone.layer4.0.bn2.{bias, weight}
backbone.layer4.0.bn3.{bias, weight}
backbone.layer4.0.conv1.weight
backbone.layer4.0.conv2.weight
backbone.layer4.0.conv2_offset.{bias, weight}
backbone.layer4.0.conv3.weight
backbone.layer4.0.downsample.1.weight
backbone.layer4.0.downsample.2.{bias, weight}
backbone.layer4.1.bn1.{bias, weight}
backbone.layer4.1.bn2.{bias, weight}
backbone.layer4.1.bn3.{bias, weight}
backbone.layer4.1.conv1.weight
backbone.layer4.1.conv2.weight
backbone.layer4.1.conv2_offset.{bias, weight}
backbone.layer4.1.conv3.weight
backbone.layer4.2.bn1.{bias, weight}
backbone.layer4.2.bn2.{bias, weight}
backbone.layer4.2.bn3.{bias, weight}
backbone.layer4.2.conv1.weight
backbone.layer4.2.conv2.weight
backbone.layer4.2.conv2_offset.{bias, weight}
backbone.layer4.2.conv3.weight
decoder.inst_branch.cls_score.{bias, weight}
decoder.inst_branch.fc.{bias, weight}
decoder.inst_branch.iam_conv.{bias, weight}
decoder.inst_branch.inst_convs.0.{bias, weight}
decoder.inst_branch.inst_convs.2.{bias, weight}
decoder.inst_branch.inst_convs.4.{bias, weight}
decoder.inst_branch.inst_convs.6.{bias, weight}
decoder.inst_branch.mask_kernel.{bias, weight}
decoder.inst_branch.objectness.{bias, weight}
�[34mdecoder.mask_branch.mask_convs.0.{bias, weight}�[0m
�[34mdecoder.mask_branch.mask_convs.2.{bias, weight}�[0m
�[34mdecoder.mask_branch.mask_convs.4.{bias, weight}�[0m
�[34mdecoder.mask_branch.mask_convs.6.{bias, weight}�[0m
�[34mdecoder.mask_branch.projection.{bias, weight}�[0m
�[34mencoder.fpn_laterals.0.{bias, weight}�[0m
�[34mencoder.fpn_laterals.1.{bias, weight}�[0m
�[34mencoder.fpn_laterals.2.{bias, weight}�[0m
�[34mencoder.fpn_outputs.0.{bias, weight}�[0m
�[34mencoder.fpn_outputs.1.{bias, weight}�[0m
�[34mencoder.fpn_outputs.2.{bias, weight}�[0m
�[34mencoder.fusion.{bias, weight}�[0m
�[34mencoder.ppm.bottleneck.{bias, weight}�[0m
�[34mencoder.ppm.stages.0.1.{bias, weight}�[0m
�[34mencoder.ppm.stages.1.1.{bias, weight}�[0m
�[34mencoder.ppm.stages.2.1.{bias, weight}�[0m
�[34mencoder.ppm.stages.3.1.{bias, weight}�[0m
[02/20 01:25:59] fvcore.common.checkpoint WARNING: The checkpoint state_dict contains keys that are not used by the model:
conv1.0.weight
conv1.1.{bias, num_batches_tracked, running_mean, running_var, weight}
conv1.3.weight
conv1.4.{bias, num_batches_tracked, running_mean, running_var, weight}
conv1.6.weight
bn1.{bias, num_batches_tracked, running_mean, running_var, weight}
layer1.0.conv1.weight
layer1.0.bn1.{bias, num_batches_tracked, running_mean, running_var, weight}
layer1.0.conv2.weight
layer1.0.bn2.{bias, num_batches_tracked, running_mean, running_var, weight}
layer1.0.conv3.weight
layer1.0.bn3.{bias, num_batches_tracked, running_mean, running_var, weight}
layer1.0.downsample.1.weight
layer1.0.downsample.2.{bias, num_batches_tracked, running_mean, running_var, weight}
layer1.1.conv1.weight
layer1.1.bn1.{bias, num_batches_tracked, running_mean, running_var, weight}
layer1.1.conv2.weight
layer1.1.bn2.{bias, num_batches_tracked, running_mean, running_var, weight}
layer1.1.conv3.weight
layer1.1.bn3.{bias, num_batches_tracked, running_mean, running_var, weight}
layer1.2.conv1.weight
layer1.2.bn1.{bias, num_batches_tracked, running_mean, running_var, weight}
layer1.2.conv2.weight
layer1.2.bn2.{bias, num_batches_tracked, running_mean, running_var, weight}
layer1.2.conv3.weight
layer1.2.bn3.{bias, num_batches_tracked, running_mean, running_var, weight}
layer2.0.conv1.weight
layer2.0.bn1.{bias, num_batches_tracked, running_mean, running_var, weight}
layer2.0.conv2.weight
layer2.0.bn2.{bias, num_batches_tracked, running_mean, running_var, weight}
layer2.0.conv3.weight
layer2.0.bn3.{bias, num_batches_tracked, running_mean, running_var, weight}
layer2.0.downsample.1.weight
layer2.0.downsample.2.{bias, num_batches_tracked, running_mean, running_var, weight}
layer2.1.conv1.weight
layer2.1.bn1.{bias, num_batches_tracked, running_mean, running_var, weight}
layer2.1.conv2.weight
layer2.1.bn2.{bias, num_batches_tracked, running_mean, running_var, weight}
layer2.1.conv3.weight
layer2.1.bn3.{bias, num_batches_tracked, running_mean, running_var, weight}
layer2.2.conv1.weight
layer2.2.bn1.{bias, num_batches_tracked, running_mean, running_var, weight}
layer2.2.conv2.weight
layer2.2.bn2.{bias, num_batches_tracked, running_mean, running_var, weight}
layer2.2.conv3.weight
layer2.2.bn3.{bias, num_batches_tracked, running_mean, running_var, weight}
layer2.3.conv1.weight
layer2.3.bn1.{bias, num_batches_tracked, running_mean, running_var, weight}
layer2.3.conv2.weight
layer2.3.bn2.{bias, num_batches_tracked, running_mean, running_var, weight}
layer2.3.conv3.weight
layer2.3.bn3.{bias, num_batches_tracked, running_mean, running_var, weight}
layer3.0.conv1.weight
layer3.0.bn1.{bias, num_batches_tracked, running_mean, running_var, weight}
layer3.0.conv2.weight
layer3.0.bn2.{bias, num_batches_tracked, running_mean, running_var, weight}
layer3.0.conv3.weight
layer3.0.bn3.{bias, num_batches_tracked, running_mean, running_var, weight}
layer3.0.downsample.1.weight
layer3.0.downsample.2.{bias, num_batches_tracked, running_mean, running_var, weight}
layer3.1.conv1.weight
layer3.1.bn1.{bias, num_batches_tracked, running_mean, running_var, weight}
layer3.1.conv2.weight
layer3.1.bn2.{bias, num_batches_tracked, running_mean, running_var, weight}
layer3.1.conv3.weight
layer3.1.bn3.{bias, num_batches_tracked, running_mean, running_var, weight}
layer3.2.conv1.weight
layer3.2.bn1.{bias, num_batches_tracked, running_mean, running_var, weight}
layer3.2.conv2.weight
layer3.2.bn2.{bias, num_batches_tracked, running_mean, running_var, weight}
layer3.2.conv3.weight
layer3.2.bn3.{bias, num_batches_tracked, running_mean, running_var, weight}
layer3.3.conv1.weight
layer3.3.bn1.{bias, num_batches_tracked, running_mean, running_var, weight}
layer3.3.conv2.weight
layer3.3.bn2.{bias, num_batches_tracked, running_mean, running_var, weight}
layer3.3.conv3.weight
layer3.3.bn3.{bias, num_batches_tracked, running_mean, running_var, weight}
layer3.4.conv1.weight
layer3.4.bn1.{bias, num_batches_tracked, running_mean, running_var, weight}
layer3.4.conv2.weight
layer3.4.bn2.{bias, num_batches_tracked, running_mean, running_var, weight}
layer3.4.conv3.weight
layer3.4.bn3.{bias, num_batches_tracked, running_mean, running_var, weight}
layer3.5.conv1.weight
layer3.5.bn1.{bias, num_batches_tracked, running_mean, running_var, weight}
layer3.5.conv2.weight
layer3.5.bn2.{bias, num_batches_tracked, running_mean, running_var, weight}
layer3.5.conv3.weight
layer3.5.bn3.{bias, num_batches_tracked, running_mean, running_var, weight}
layer4.0.conv1.weight
layer4.0.bn1.{bias, num_batches_tracked, running_mean, running_var, weight}
layer4.0.conv2.weight
layer4.0.bn2.{bias, num_batches_tracked, running_mean, running_var, weight}
layer4.0.conv3.weight
layer4.0.bn3.{bias, num_batches_tracked, running_mean, running_var, weight}
layer4.0.downsample.1.weight
layer4.0.downsample.2.{bias, num_batches_tracked, running_mean, running_var, weight}
layer4.1.conv1.weight
layer4.1.bn1.{bias, num_batches_tracked, running_mean, running_var, weight}
layer4.1.conv2.weight
layer4.1.bn2.{bias, num_batches_tracked, running_mean, running_var, weight}
layer4.1.conv3.weight
layer4.1.bn3.{bias, num_batches_tracked, running_mean, running_var, weight}
layer4.2.conv1.weight
layer4.2.bn1.{bias, num_batches_tracked, running_mean, running_var, weight}
layer4.2.conv2.weight
layer4.2.bn2.{bias, num_batches_tracked, running_mean, running_var, weight}
layer4.2.conv3.weight
layer4.2.bn3.{bias, num_batches_tracked, running_mean, running_var, weight}
fc.{bias, weight}
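Reading the two lists together suggests a key-naming mismatch rather than a corrupt checkpoint: the model expects keys prefixed with `backbone.` (e.g. `backbone.layer3.0.conv1.weight`), while the timm-style `resnet50d` checkpoint stores them without that prefix (`layer3.0.conv1.weight`). The `conv2_offset` layers and the `decoder.*`/`encoder.*` heads have no pretrained counterpart at all, so warnings for those are expected and harmless. A minimal sketch of a key remap that might resolve the backbone part of the warning; `remap_backbone_keys` is a hypothetical helper, not part of detectron2 or SparseInst, and the wrapper-key names it probes for are an assumption about how the checkpoint was saved:

```python
def remap_backbone_keys(state):
    """Prefix timm-style ResNet keys (e.g. 'layer1.0.conv1.weight') with
    'backbone.' so they match the detector's state_dict naming.

    `state` is the dict returned by torch.load(...) on the .pth file.
    """
    # Some checkpoints nest the weights under a "model" or "state_dict" key.
    for wrapper in ("model", "state_dict"):
        if isinstance(state.get(wrapper), dict):
            state = state[wrapper]
            break
    return {
        "backbone." + k: v
        for k, v in state.items()
        # Drop the ImageNet classifier head; the detector has no 'fc'.
        if not k.startswith("fc.")
    }
```

Applied once offline, e.g. `torch.save(remap_backbone_keys(torch.load(path, map_location="cpu")), remapped_path)`, and pointing `MODEL.WEIGHTS` at the remapped file, the backbone entries should disappear from both warning lists; the `conv2_offset`, `decoder.*`, and `encoder.*` entries will still be reported missing, which is correct since they are trained from scratch.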
