Sibaja 2 #3697

Open - wants to merge 44 commits into main

Changes from all commits (44 commits)
f95fabc  DeepGlobePy (A01781042, May 31, 2024)
34cd949  Resize images (AnYelg, May 31, 2024)
61a68ce  Upload of right size (AnYelg, May 31, 2024)
7aab358  Merge pull request #1 from A01781042/yela (A01781042, May 31, 2024)
7d21d81  Adding unet (AnYelg, Jun 1, 2024)
7d5dfdd  Merge pull request #2 from A01781042/yela (AnYelg, Jun 1, 2024)
5d42080  Mean and Std added (EmiSib, Jun 1, 2024)
07e20e3  rm work-dir (EmiSib, Jun 1, 2024)
89148d5  Merge pull request #3 from A01781042/sibaja (A01781042, Jun 3, 2024)
e5688a5  Merge pull request #4 from A01781042/main (A01781042, Jun 3, 2024)
a30aea1  CCNET (EmiSib, Jun 3, 2024)
0d69683  Merge branch 'main' into sibaja (EmiSib, Jun 3, 2024)
cd8f9d7  Merge remote-tracking branch 'origin/main' into main (EmiSib, Jun 3, 2024)
230379d  FCN Model (AnYelg, Jun 3, 2024)
eeaee77  Merge pull request #5 from A01781042/yela (AnYelg, Jun 3, 2024)
3340e2c  hr18 (A01781042, Jun 3, 2024)
281593a  Merge pull request #6 from A01781042/octa (A01781042, Jun 3, 2024)
9d33f4d  CCNET config (EmiSib, Jun 4, 2024)
160704f  CCNET config (EmiSib, Jun 4, 2024)
fb7a5b9  Merge remote-tracking branch 'origin/sibaja' into sibaja (EmiSib, Jun 4, 2024)
0080718  Merge branch 'sibaja' into main (EmiSib, Jun 4, 2024)
7b800d4  CCNET v.2 (EmiSib, Jun 4, 2024)
5c23ab1  class (A01781042, Jun 4, 2024)
ec1d8dc  Merge pull request #8 from A01781042/octa (A01781042, Jun 4, 2024)
a9597f3  dataconfig (A01781042, Jun 4, 2024)
55a4e69  Merge pull request #9 from A01781042/octa (A01781042, Jun 4, 2024)
caf83e3  Mean and STD change, dataset added, scale resized (EmiSib, Jun 4, 2024)
39f019f  tensorboard (A01781042, Jun 5, 2024)
347dca0  batch (A01781042, Jun 5, 2024)
f4f3331  GcNet-DeepLab (A01781042, Jun 5, 2024)
97091a5  modelfix (A01781042, Jun 5, 2024)
66d3569  model_test (A01781042, Jun 5, 2024)
26064f8  del (A01781042, Jun 5, 2024)
9c28a6b  Predicciones1 (AnYelg, Jun 5, 2024)
2960dad  Merge pull request #11 from A01781042/main (A01781042, Jun 5, 2024)
b533298  Final Predictions FCN (AnYelg, Jun 5, 2024)
dccc8ae  Delete colorfile (AnYelg, Jun 5, 2024)
7abe519  Merge pull request #12 from A01781042/yela (A01781042, Jun 5, 2024)
696ea80  PredictionsHRDeepFCN (A01781042, Jun 5, 2024)
0d30a3e  Predictions (A01781042, Jun 5, 2024)
eff3831  Merge pull request #14 from A01781042/Octavio (A01781042, Jun 5, 2024)
539d1b6  PrediccionesCCNET (EmiSib, Jun 5, 2024)
b8bcb34  work-dir -> Deeplabplus and CCNET (EmiSib, Jun 5, 2024)
41aecd0  DeepLabPlus fix (EmiSib, Jun 5, 2024)
Binary file added PrediccionesCCNET/201966_sat.jpg
Binary file added PrediccionesCCNET/298983_sat.jpg
Binary file added PrediccionesCCNET/650673_sat.jpg
Binary file added PrediccionesCCNET/802551_sat.jpg
Binary file added PrediccionesCCNET/896504_sat.jpg
Binary file added PrediccionesDeepLab/201966_sat.jpg
Binary file added PrediccionesDeepLab/298983_sat.jpg
Binary file added PrediccionesDeepLab/650673_sat.jpg
Binary file added PrediccionesDeepLab/802551_sat.jpg
Binary file added PrediccionesDeepLab/896504_sat.jpg
Binary file added PrediccionesFCN/201966_sat.jpg
Binary file added PrediccionesFCN/298983_sat.jpg
Binary file added PrediccionesFCN/650673_sat.jpg
Binary file added PrediccionesFCN/802551_sat.jpg
Binary file added PrediccionesFCN/896504_sat.jpg
Binary file added PrediccionesHRNet/201966_sat.jpg
Binary file added PrediccionesHRNet/298983_sat.jpg
Binary file added PrediccionesHRNet/650673_sat.jpg
Binary file added PrediccionesHRNet/802551_sat.jpg
Binary file added PrediccionesHRNet/896504_sat.jpg
Binary file added Test_model.zip
Binary file added Test_model/201966_sat.jpg
Binary file added Test_model/298983_sat.jpg
Binary file added Test_model/650673_sat.jpg
Binary file added Test_model/802551_sat.jpg
Binary file added Test_model/896504_sat.jpg
69 changes: 69 additions & 0 deletions configs/_base_/datasets/deepGlobe.py
@@ -0,0 +1,69 @@
#configs/_base_/datasets/deepGlobe.py
#mmseg/configs/_base_/datasets/deepGlobe.py
# dataset settings
dataset_type = 'DeepGlobeDataset'
data_root = 'data/deepglobe_ds/'
crop_size = (256, 256)
train_pipeline = [
dict(type='LoadImageFromFile'),
dict(type='LoadAnnotations'),
dict(
type='RandomResize',
scale=(512, 512),
ratio_range=(0.5, 2.0),
keep_ratio=True),
dict(type='RandomCrop', crop_size=crop_size, cat_max_ratio=0.75),
dict(type='RandomFlip', prob=0.5),
dict(type='PhotoMetricDistortion'),
dict(type='PackSegInputs')
]
test_pipeline = [
dict(type='LoadImageFromFile'),
dict(type='Resize', scale=(512, 512), keep_ratio=True),
# add loading annotation after ``Resize`` because ground truth
# does not need to do resize data transform
dict(type='LoadAnnotations'),
dict(type='PackSegInputs')
]
img_ratios = [0.5, 0.75, 1.0, 1.25, 1.5, 1.75]
tta_pipeline = [
dict(type='LoadImageFromFile', backend_args=None),
dict(
type='TestTimeAug',
transforms=[
[
dict(type='Resize', scale_factor=r, keep_ratio=True)
for r in img_ratios
],
[
dict(type='RandomFlip', prob=0., direction='horizontal'),
dict(type='RandomFlip', prob=1., direction='horizontal')
], [dict(type='LoadAnnotations')], [dict(type='PackSegInputs')]
])
]
train_dataloader = dict(
batch_size=32,
num_workers=4,
persistent_workers=True,
sampler=dict(type='InfiniteSampler', shuffle=True),
dataset=dict(
type=dataset_type,
data_root=data_root,
data_prefix=dict(
img_path='img_dir/train_sat', seg_map_path='ann_dir/train_mask_grayscale'),
pipeline=train_pipeline))
val_dataloader = dict(
batch_size=16,
num_workers=4,
persistent_workers=True,
sampler=dict(type='DefaultSampler', shuffle=False),
dataset=dict(
type=dataset_type,
data_root=data_root,
data_prefix=dict(
img_path='img_dir/val_sat', seg_map_path='ann_dir/val_mask_grayscale'),
pipeline=test_pipeline))
test_dataloader = val_dataloader

val_evaluator = dict(type='IoUMetric', iou_metrics=['mIoU'])
test_evaluator = val_evaluator
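
The dataset config above references type='DeepGlobeDataset', which has to be registered with MMSegmentation before the dataloaders can build it (presumably what the "class" commit adds). Below is a minimal sketch of such a class under the MMSeg 1.x API, not the PR's actual implementation: the module path, file suffixes, and palette ordering are assumptions, while the seven class names follow the standard DeepGlobe land-cover labels.

# Hypothetical mmseg/datasets/deepglobe.py -- a minimal sketch only.
from mmseg.datasets import BaseSegDataset
from mmseg.registry import DATASETS


@DATASETS.register_module()
class DeepGlobeDataset(BaseSegDataset):
    """DeepGlobe land-cover segmentation dataset (7 classes)."""

    METAINFO = dict(
        classes=('urban', 'agriculture', 'rangeland', 'forest',
                 'water', 'barren', 'unknown'),
        palette=[[0, 255, 255], [255, 255, 0], [255, 0, 255],
                 [0, 255, 0], [0, 0, 255], [255, 255, 255], [0, 0, 0]])

    def __init__(self, **kwargs) -> None:
        # Suffixes are assumptions based on the *_sat.jpg images and the
        # grayscale masks implied by the img_dir/ann_dir layout above.
        super().__init__(
            img_suffix='_sat.jpg', seg_map_suffix='_mask.png', **kwargs)

Registering the class (for example by importing the module in mmseg/datasets/__init__.py, or via custom_imports in the config) is what lets dataset_type = 'DeepGlobeDataset' resolve at build time.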
4 changes: 3 additions & 1 deletion configs/_base_/default_runtime.py
@@ -4,7 +4,9 @@
     mp_cfg=dict(mp_start_method='fork', opencv_num_threads=0),
     dist_cfg=dict(backend='nccl'),
 )
-vis_backends = [dict(type='LocalVisBackend')]
+vis_backends = [dict(type='LocalVisBackend'),
+                dict(type='TensorboardVisBackend')]
+
 visualizer = dict(
     type='SegLocalVisualizer', vis_backends=vis_backends, name='visualizer')
 log_processor = dict(by_epoch=False)
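
With TensorboardVisBackend added alongside the local backend, scalar logs (loss, mIoU, learning rate) are also written as TensorBoard event files, typically under the run's work_dir; they can then be inspected with `tensorboard --logdir <work_dir>`, where the exact directory depends on how training is launched.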
@@ -2,8 +2,8 @@
 norm_cfg = dict(type='SyncBN', requires_grad=True)
 data_preprocessor = dict(
     type='SegDataPreProcessor',
-    mean=[123.675, 116.28, 103.53],
-    std=[58.395, 57.12, 57.375],
+    mean=[0.4082, 0.3791, 0.2815],
+    std=[0.1451, 0.1116, 0.1013],
     bgr_to_rgb=True,
     pad_val=0,
     seg_pad_val=255)
@@ -29,7 +29,7 @@
         channels=512,
         recurrence=2,
         dropout_ratio=0.1,
-        num_classes=19,
+        num_classes=7,
         norm_cfg=norm_cfg,
         align_corners=False,
         loss_decode=dict(
@@ -42,7 +42,7 @@
         num_convs=1,
         concat_input=False,
         dropout_ratio=0.1,
-        num_classes=19,
+        num_classes=7,
         norm_cfg=norm_cfg,
         align_corners=False,
         loss_decode=dict(
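
The hunks above (apparently from the CCNet base model config, given the recurrence option) and the configs below replace the ImageNet normalization defaults with DeepGlobe statistics on a 0-1 scale and drop the class count from the Cityscapes default of 19 to DeepGlobe's 7. For reference, a rough sketch of how such per-channel statistics could be computed from the training images; the glob pattern is an assumption based on this PR's data layout, and if SegDataPreProcessor receives raw 0-255 images the resulting values would normally be multiplied by 255 before being written into the config.

# Sketch: estimate per-channel mean/std over the DeepGlobe training split.
import glob

import numpy as np
from PIL import Image

paths = glob.glob('data/deepglobe_ds/img_dir/train_sat/*_sat.jpg')  # assumed layout
pixel_sum = np.zeros(3)
pixel_sq_sum = np.zeros(3)
n_pixels = 0
for p in paths:
    img = np.asarray(Image.open(p).convert('RGB'), dtype=np.float64) / 255.0
    flat = img.reshape(-1, 3)
    pixel_sum += flat.sum(axis=0)
    pixel_sq_sum += (flat ** 2).sum(axis=0)
    n_pixels += flat.shape[0]

mean = pixel_sum / n_pixels
std = np.sqrt(pixel_sq_sum / n_pixels - mean ** 2)
print('mean (0-1 scale):', mean)
print('std  (0-1 scale):', std)
# Multiply both by 255 if the data preprocessor normalizes raw 0-255 images.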
9 changes: 5 additions & 4 deletions configs/_base_/models/deeplabv3_r50-d8.py
@@ -1,9 +1,10 @@
+#configs/_base_/models/deeplabv3_r50-d8.py
 # model settings
 norm_cfg = dict(type='SyncBN', requires_grad=True)
 data_preprocessor = dict(
     type='SegDataPreProcessor',
-    mean=[123.675, 116.28, 103.53],
-    std=[58.395, 57.12, 57.375],
+    mean=[0.4082, 0.3791, 0.2815],
+    std=[0.1451, 0.1116, 0.1013],
     bgr_to_rgb=True,
     pad_val=0,
     seg_pad_val=255)
@@ -29,7 +30,7 @@
         channels=512,
         dilations=(1, 12, 24, 36),
         dropout_ratio=0.1,
-        num_classes=19,
+        num_classes=7,
         norm_cfg=norm_cfg,
         align_corners=False,
         loss_decode=dict(
@@ -42,7 +43,7 @@
         num_convs=1,
         concat_input=False,
         dropout_ratio=0.1,
-        num_classes=19,
+        num_classes=7,
         norm_cfg=norm_cfg,
         align_corners=False,
         loss_decode=dict(
53 changes: 53 additions & 0 deletions configs/_base_/models/deeplabv3_r50-d8_deepGlobe.py
@@ -0,0 +1,53 @@
#configs/_base_/models/deeplabv3_r50-d8_deepGlobe.py
# model settings
norm_cfg = dict(type='SyncBN', requires_grad=True)
data_preprocessor = dict(
type='SegDataPreProcessor',
mean=[0.4082, 0.3791, 0.2815],
std=[ 0.1451, 0.1116, 0.1013],
bgr_to_rgb=True,
pad_val=0,
seg_pad_val=255)
model = dict(
type='EncoderDecoder',
data_preprocessor=data_preprocessor,
pretrained='open-mmlab://resnet50_v1c',
backbone=dict(
type='ResNetV1c',
depth=50,
num_stages=4,
out_indices=(0, 1, 2, 3),
dilations=(1, 1, 2, 4),
strides=(1, 2, 1, 1),
norm_cfg=norm_cfg,
norm_eval=False,
style='pytorch',
contract_dilation=True),
decode_head=dict(
type='ASPPHead',
in_channels=2048,
in_index=3,
channels=512,
dilations=(1, 12, 24, 36),
dropout_ratio=0.1,
num_classes=7,
norm_cfg=norm_cfg,
align_corners=False,
loss_decode=dict(
type='CrossEntropyLoss', use_sigmoid=False, loss_weight=1.0)),
auxiliary_head=dict(
type='FCNHead',
in_channels=1024,
in_index=2,
channels=256,
num_convs=1,
concat_input=False,
dropout_ratio=0.1,
num_classes=7,
norm_cfg=norm_cfg,
align_corners=False,
loss_decode=dict(
type='CrossEntropyLoss', use_sigmoid=False, loss_weight=0.4)),
# model training and testing settings
train_cfg=dict(),
test_cfg=dict(mode='whole'))
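
These _base_ fragments are normally tied together by a short top-level config that also pins the crop size on the data preprocessor. A sketch of what that composition could look like; the file name and the choice of schedule are assumptions, not files included in this PR.

# Hypothetical configs/deeplabv3/deeplabv3_r50-d8_40k_deepglobe-256x256.py
_base_ = [
    '../_base_/models/deeplabv3_r50-d8_deepGlobe.py',
    '../_base_/datasets/deepGlobe.py',
    '../_base_/default_runtime.py',
    '../_base_/schedules/schedule_40k.py',
]
crop_size = (256, 256)
data_preprocessor = dict(size=crop_size)
model = dict(data_preprocessor=data_preprocessor)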
54 changes: 54 additions & 0 deletions configs/_base_/models/deeplabv3plus_r50-d8_deepglobe.py
@@ -0,0 +1,54 @@
# model settings
norm_cfg = dict(type='SyncBN', requires_grad=True)
data_preprocessor = dict(
type='SegDataPreProcessor',
mean=[0.4082, 0.3791, 0.2815],
std=[ 0.1451, 0.1116, 0.1013],
bgr_to_rgb=True,
pad_val=0,
seg_pad_val=255)
model = dict(
type='EncoderDecoder',
data_preprocessor=data_preprocessor,
pretrained='open-mmlab://resnet50_v1c',
backbone=dict(
type='ResNetV1c',
depth=50,
num_stages=4,
out_indices=(0, 1, 2, 3),
dilations=(1, 1, 2, 4),
strides=(1, 2, 1, 1),
norm_cfg=norm_cfg,
norm_eval=False,
style='pytorch',
contract_dilation=True),
decode_head=dict(
type='DepthwiseSeparableASPPHead',
in_channels=2048,
in_index=3,
channels=512,
dilations=(1, 12, 24, 36),
c1_in_channels=256,
c1_channels=48,
dropout_ratio=0.1,
num_classes=7,
norm_cfg=norm_cfg,
align_corners=False,
loss_decode=dict(
type='CrossEntropyLoss', use_sigmoid=False, loss_weight=1.0)),
auxiliary_head=dict(
type='FCNHead',
in_channels=1024,
in_index=2,
channels=256,
num_convs=1,
concat_input=False,
dropout_ratio=0.1,
num_classes=7,
norm_cfg=norm_cfg,
align_corners=False,
loss_decode=dict(
type='CrossEntropyLoss', use_sigmoid=False, loss_weight=0.4)),
# model training and testing settings
train_cfg=dict(),
test_cfg=dict(mode='whole'))
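
For reference, one way to launch a run against such a composed config from Python with the MMEngine runner (equivalent to tools/train.py); the config path and work_dir below are placeholders, not artifacts of this PR.

from mmengine.config import Config
from mmengine.runner import Runner

# Placeholder paths; point these at the composed config and desired output dir.
cfg = Config.fromfile(
    'configs/deeplabv3plus/deeplabv3plus_r50-d8_40k_deepglobe-256x256.py')
cfg.work_dir = './work_dirs/deeplabv3plus_deepglobe'

runner = Runner.from_cfg(cfg)
runner.train()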
60 changes: 60 additions & 0 deletions configs/_base_/models/fcn_hr18_deepGlobe.py
@@ -0,0 +1,60 @@
# model settings
norm_cfg = dict(type='SyncBN', requires_grad=True)
data_preprocessor = dict(
type='SegDataPreProcessor',
mean=[0.4082, 0.3791, 0.2815],
std=[ 0.1351, 0.1022, 0.0931],
bgr_to_rgb=True,
pad_val=0,
seg_pad_val=255)
model = dict(
type='EncoderDecoder',
data_preprocessor=data_preprocessor,
pretrained='open-mmlab://msra/hrnetv2_w18',
backbone=dict(
type='HRNet',
norm_cfg=norm_cfg,
norm_eval=False,
extra=dict(
stage1=dict(
num_modules=1,
num_branches=1,
block='BOTTLENECK',
num_blocks=(4, ),
num_channels=(64, )),
stage2=dict(
num_modules=1,
num_branches=2,
block='BASIC',
num_blocks=(4, 4),
num_channels=(18, 36)),
stage3=dict(
num_modules=4,
num_branches=3,
block='BASIC',
num_blocks=(4, 4, 4),
num_channels=(18, 36, 72)),
stage4=dict(
num_modules=3,
num_branches=4,
block='BASIC',
num_blocks=(4, 4, 4, 4),
num_channels=(18, 36, 72, 144)))),
decode_head=dict(
type='FCNHead',
in_channels=[18, 36, 72, 144],
in_index=(0, 1, 2, 3),
channels=sum([18, 36, 72, 144]),
input_transform='resize_concat',
kernel_size=1,
num_convs=1,
concat_input=False,
dropout_ratio=-1,
num_classes=7,
norm_cfg=norm_cfg,
align_corners=False,
loss_decode=dict(
type='CrossEntropyLoss', use_sigmoid=False, loss_weight=1.0)),
# model training and testing settings
train_cfg=dict(),
test_cfg=dict(mode='whole'))
53 changes: 53 additions & 0 deletions configs/_base_/models/fcn_r50-d8deepglobe.py
@@ -0,0 +1,53 @@
# model settings
norm_cfg = dict(type='SyncBN', requires_grad=True)
data_preprocessor = dict(
type='SegDataPreProcessor',
mean=[0.4082, 0.3791, 0.2815],
std=[0.1351, 0.1022, 0.0931],
bgr_to_rgb=True,
pad_val=0,
seg_pad_val=255)
model = dict(
type='EncoderDecoder',
data_preprocessor=data_preprocessor,
pretrained='open-mmlab://resnet50_v1c',
backbone=dict(
type='ResNetV1c',
depth=50,
num_stages=4,
out_indices=(0, 1, 2, 3),
dilations=(1, 1, 2, 4),
strides=(1, 2, 1, 1),
norm_cfg=norm_cfg,
norm_eval=False,
style='pytorch',
contract_dilation=True),
decode_head=dict(
type='FCNHead',
in_channels=2048,
in_index=3,
channels=512,
num_convs=2,
concat_input=True,
dropout_ratio=0.1,
num_classes=7,
norm_cfg=norm_cfg,
align_corners=False,
loss_decode=dict(
type='CrossEntropyLoss', use_sigmoid=False, loss_weight=1.0)),
auxiliary_head=dict(
type='FCNHead',
in_channels=1024,
in_index=2,
channels=256,
num_convs=1,
concat_input=False,
dropout_ratio=0.1,
num_classes=7,
norm_cfg=norm_cfg,
align_corners=False,
loss_decode=dict(
type='CrossEntropyLoss', use_sigmoid=False, loss_weight=0.4)),
# model training and testing settings
train_cfg=dict(),
test_cfg=dict(mode='whole'))
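
Finally, predictions like the ones committed under the Predicciones* folders can be produced with the MMSeg inference API; a sketch under the assumption that a trained checkpoint exists at the placeholder path.

from mmseg.apis import inference_model, init_model, show_result_pyplot

# Placeholder config/checkpoint; substitute the trained run's files.
config_file = 'configs/deeplabv3plus/deeplabv3plus_r50-d8_40k_deepglobe-256x256.py'
checkpoint_file = 'work_dirs/deeplabv3plus_deepglobe/iter_40000.pth'

model = init_model(config_file, checkpoint_file, device='cuda:0')
result = inference_model(model, 'Test_model/201966_sat.jpg')
show_result_pyplot(
    model, 'Test_model/201966_sat.jpg', result,
    out_file='PrediccionesDeepLab/201966_sat.jpg', show=False)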