This module builds a dynamic U-Net from any backbone pretrained on ImageNet, automatically inferring the intermediate feature sizes.
from fastai.gen_doc.nbdoc import *
from fastai.vision.models.unet import *
The architecture follows the original U-Net; the difference here is that the left (downsampling) half is a pretrained model.
show_doc(DynamicUnet)
class DynamicUnet[source][test]

DynamicUnet(encoder:Module, n_classes:int, blur:bool=False, blur_final=True, self_attention:bool=False, y_range:OptRange=None, last_cross:bool=True, bottle:bool=False, **kwargs) :: SequentialEx
Create a U-Net from a given architecture.
This U-Net sits on top of an `encoder` (which can be a pretrained model) and has a final output of `n_classes`. During initialization, it uses `Hooks` to determine the intermediate feature sizes by passing a dummy input through the model, and builds the upward path automatically. `blur` is used to avoid checkerboard artifacts at each layer, while `blur_final` applies only to the last layer. `self_attention` determines whether we use a self-attention layer at the third block before the end. If `y_range` is passed, the last activations go through a sigmoid rescaled to that range. `last_cross` determines whether we use a cross-connection with the direct input of the model, in which case `bottle` flags whether we use a bottleneck for that skip connection.
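For illustration, here is a minimal sketch of building a `DynamicUnet` by hand (in practice `unet_learner` wraps these steps for you). It assumes fastai v1 with torchvision available; the choice of `resnet34` and the input size are arbitrary:

```python
import torch
from fastai.vision import models
from fastai.vision.learner import create_body
from fastai.vision.models.unet import DynamicUnet

# Cut the classification head off a pretrained resnet34 to get the encoder.
encoder = create_body(models.resnet34, pretrained=True)

# The upward path is inferred automatically via a dummy forward pass.
model = DynamicUnet(encoder, n_classes=2)

x = torch.randn(1, 3, 224, 224)   # dummy 3-channel image batch
y = model(x)
print(y.shape)                    # expected: torch.Size([1, 2, 224, 224])
```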
show_doc(UnetBlock)
class UnetBlock[source][test]

UnetBlock(up_in_c:int, x_in_c:int, hook:Hook, final_div:bool=True, blur:bool=False, leaky:float=None, self_attention:bool=False, **kwargs) :: Module
A quasi-U-Net block, using `PixelShuffle_ICNR` upsampling.
This block receives the output of the last block to be upsampled (size `up_in_c`) and the activations from an intermediate layer of the `encoder` (size `x_in_c`; this is the lateral connection). The `hook` is set on this intermediate layer to store the output needed for this block. `final_div` determines whether we halve the number of features during the upsampling, and `blur` is used to avoid checkerboard artifacts. If `leaky` is set, we use a leaky ReLU with a slope equal to that parameter instead of a ReLU, and `self_attention` determines whether we use a self-attention layer. `kwargs` are passed to `conv_layer`.
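As a sketch of how a single block is wired up outside `DynamicUnet` (which normally creates these for you), we can hook an intermediate layer of a resnet body, run a dummy input to populate the hook, then pass the deepest features through the block. This assumes fastai v1; the layer index and channel counts below are specific to resnet34 with a 256×256 input:

```python
import torch
from torchvision.models import resnet34
from fastai.callbacks.hooks import hook_output
from fastai.vision.models.unet import UnetBlock

# Encoder = resnet34 body (everything before the pooling/classifier head).
encoder = torch.nn.Sequential(*list(resnet34(pretrained=False).children())[:8])
hook = hook_output(encoder[5])   # lateral connection on layer2 (128 channels, 32x32)

x = torch.randn(1, 3, 256, 256)
feats = encoder(x)               # deepest features: 512 channels, 8x8; hook now stores layer2's output

block = UnetBlock(up_in_c=512, x_in_c=128, hook=hook)
out = block(feats)               # upsample, concat the hooked activations, then two convs
print(out.shape)                 # torch.Size([1, 384, 32, 32]) since final_div=True
```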
show_doc(UnetBlock.forward)
forward[source][test]

forward(up_in:Tensor) → Tensor
Defines the computation performed at every call. Should be overridden by all subclasses.

Note: although the recipe for the forward pass needs to be defined within this function, one should call the `Module` instance afterwards instead of this, since the former takes care of running the registered hooks while the latter silently ignores them.
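A tiny plain-PyTorch sketch of the point made in the note: calling the module runs registered hooks, while calling `forward` directly bypasses them (this matters for `UnetBlock`, whose lateral connection depends on a hook having run):

```python
import torch
import torch.nn as nn

layer = nn.Linear(4, 2)
seen = []
layer.register_forward_hook(lambda m, inp, out: seen.append(out.shape))

x = torch.randn(3, 4)
layer(x)           # __call__ runs the hook: seen == [torch.Size([3, 2])]
layer.forward(x)   # bypasses hooks: seen is unchanged
print(seen)
```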