
Gan implementation first pass#160

Draft
ibai-0 wants to merge 1 commit into BiaPyX:master from ibai-0:Gan_workflow

Conversation

Contributor

@ibai-0 ibai-0 commented Mar 23, 2026

No description provided.

_C.MODEL.ARCHITECTURE = "unet"
# Architecture of the network. Possible values are:
# * 'patchgan'
_C.MODEL.ARCHITECTURE_D = "patchgan"
Collaborator

This is a feature only of nafnet, so introduce it inside MODEL.NAFNET.
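The suggested move could look like this (a sketch in the yacs `CfgNode` style the BiaPy config uses; the nested key name is an assumption, not the final API):

```python
from yacs.config import CfgNode as CN

_C = CN()
_C.MODEL = CN()
_C.MODEL.ARCHITECTURE = "unet"
# NAFNet-specific options, including the GAN discriminator choice,
# live under MODEL.NAFNET rather than at the MODEL top level.
_C.MODEL.NAFNET = CN()
_C.MODEL.NAFNET.ARCHITECTURE_D = "patchgan"  # assumed key name
```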

_C.MODEL.NAFNET.FFN_EXPAND = 2

# Discriminator PATCHGAN
_C.MODEL.PATCHGAN = CN()
Collaborator

Move MODEL.PATCHGAN inside MODEL.NAFNET
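Nesting the discriminator block as suggested might look like this (a yacs-style sketch; only the nesting is the point, the surrounding keys are illustrative):

```python
from yacs.config import CfgNode as CN

_C = CN()
_C.MODEL = CN()
_C.MODEL.NAFNET = CN()
# PATCHGAN options nested under NAFNET, since only the NAFNet-based
# GAN workflow uses a PatchGAN discriminator.
_C.MODEL.NAFNET.PATCHGAN = CN()
```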

_C.LOSS.COMPOSED_GAN.GAMMA_SSIM = 1.0

# Backward-compatible alias for previous naming.
_C.LOSS.GAN = CN()
Collaborator

What's the purpose of this? It shouldn't be necessary

# Optimizer to use. Possible values: "SGD", "ADAM" or "ADAMW"
_C.TRAIN.OPTIMIZER = "SGD"
# Optimizer to use. Possible values: "SGD", "ADAM" or "ADAMW" for GAN discriminator
_C.TRAIN.OPTIMIZER_D = "SGD"
Collaborator

Now that more than one optimizer is used, change TRAIN.OPTIMIZER to be a list of str. For a regular model only one optimizer should be required, but the GAN at hand needs a second one for the discriminator.
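One way to express this (a sketch, not the final BiaPy API): TRAIN.OPTIMIZER becomes a list with one entry for a plain model and two for a GAN, generator first:

```python
from yacs.config import CfgNode as CN

_C = CN()
_C.TRAIN = CN()
# One optimizer name per model part. Most workflows need a single
# entry; a GAN adds a second one for the discriminator.
_C.TRAIN.OPTIMIZER = ["SGD"]  # e.g. ["ADAMW", "ADAM"] for a GAN
```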

# Learning rate
_C.TRAIN.LR = 1.0e-4
# Learning rate for GAN discriminator
_C.TRAIN.LR_D = 1.0e-4
Collaborator

Same as with the optimizers: TRAIN.LR should be converted to a list now. Check in check_configuration.py that the optimizers and learning rates have the same length.
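The length check in check_configuration.py could be sketched like this (function name and error message are assumptions):

```python
def check_optimizer_config(optimizers, lrs):
    # TRAIN.OPTIMIZER and TRAIN.LR must pair up one-to-one, so their
    # lengths have to match (e.g. two entries each for a GAN).
    if len(optimizers) != len(lrs):
        raise ValueError(
            f"TRAIN.OPTIMIZER has {len(optimizers)} entries but "
            f"TRAIN.LR has {len(lrs)}; they must have the same length"
        )
```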


return model, str(callable_model.__name__), collected_sources, all_import_lines, scanned_files, args, network_stride # type: ignore

def build_discriminator(cfg: CN, device: torch.device):
Collaborator

this should be inside nafnet

@@ -0,0 +1,165 @@
import torch
Collaborator

Comment the functions and extend the descriptions. Check other models to see how we do it and try to do it in the same way.

@@ -0,0 +1,23 @@
import torch.nn as nn
Collaborator

same as with nafnet.py

jobname,
epoch,
model_without_ddp,
optimizer,
Collaborator

This should be a list now. Loop over the optimizers to store them, and remove optimizer_d.
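A sketch of the checkpoint assembly with a list of optimizer states (the function name and dict keys are illustrative, not BiaPy's actual signature):

```python
def build_checkpoint(epoch, model_state, optimizer_states):
    # Store one state per optimizer (generator first, then the
    # discriminator for GAN training), replacing the separate
    # "optimizer_d" checkpoint entry.
    return {
        "epoch": epoch,
        "model": model_state,
        "optimizers": list(optimizer_states),
    }
```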

Optional discriminator model to include in checkpoints for GAN training.
optimizer_d : Optional[torch.optim.Optimizer], optional
Optional discriminator optimizer state to include in checkpoints for GAN training.
extra_checkpoint_items : Optional[dict], optional
Collaborator

What is this? Is it necessary? If not, please remove it.

