TorchUtils is a PyTorch library with several useful tools and training tricks. (Work In Progress)
```bash
git clone https://github.com/seefun/TorchUtils.git
cd TorchUtils
```

```bash
pip install -r requirements.txt
pip install .
```
```
import torch_utils as tu

SEED = 42
tu.tools.seed_everything(SEED)
```
```
import albumentations
from albumentations import pytorch as AT

train_transform = albumentations.Compose([
    albumentations.Resize(IMAGE_SIZE, IMAGE_SIZE),
    albumentations.HorizontalFlip(p=0.5),
    tu.dataset.randAugment(image_size=IMAGE_SIZE, N=2, M=12, p=0.9, mode='all', cut_out=False),
    albumentations.Normalize(),
    albumentations.CoarseDropout(max_holes=8, max_height=IMAGE_SIZE // 8, max_width=IMAGE_SIZE // 8, fill_value=0, p=0.25),
    AT.ToTensorV2(),
])

mixup_dataset = tu.dataset.MixupDataset(dataset, alpha=0.2, prob=0.2, mixup_to_cutmix=0.25)
```
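A minimal usage sketch (hypothetical batch size and loader settings), assuming `MixupDataset` behaves like a standard `torch.utils.data.Dataset` yielding image/target pairs:

```
from torch.utils.data import DataLoader

# MixupDataset wraps an existing dataset, so it can be fed to a DataLoader as usual
train_loader = DataLoader(mixup_dataset, batch_size=32, shuffle=True,
                          num_workers=4, drop_last=True)

for images, targets in train_loader:
    # images: augmented (and possibly mixed) tensors; targets: corresponding (soft) labels
    break
```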
Quickly build models with torch_utils:
```
model = tu.ImageModel(name='resnest50d', pretrained=True,
                      pooling='concat', fc='multi-dropout',
                      num_feature=2048, classes=1)
model.cuda()
```
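A quick sanity-check sketch on a dummy batch (the 224x224 input size is an assumption; with `classes=1` the head should emit one logit per image):

```
import torch

dummy = torch.randn(2, 3, 224, 224).cuda()  # batch of 2 RGB images
with torch.no_grad():
    logits = model(dummy)
print(logits.shape)  # expected: torch.Size([2, 1])
```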
Using other libraries together with torch_utils:

```
import timm

model = timm.create_model('tresnet_m', pretrained=True)
model.global_pool = tu.layers.FastGlobalConcatPool2d(flatten=True)
model.head = tu.layers.get_attention_fc(2048*2, 1)
model.cuda()
```
```
from pytorchcv.model_provider import get_model as ptcv_get_model
model = ptcv_get_model('seresnext50_32x4d', pretrained=True)
model.features.final_pool = tu.layers.GeM()
model.output = tu.layers.get_simple_fc(2048, 1)
model.cuda()
```
Segmentation models:
```
hrnet = tu.get_hrnet(name='hrnet_w18', out_channel=1, pretrained=True).cuda()
unet = tu.get_unet(name='resnest50d', out_channel=1, aspp=False, pretrained=True).cuda()
```
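A minimal forward-pass sketch for the segmentation models (the input size is an assumption; with `out_channel=1` the output should be a single-channel logit map):

```
import torch

x = torch.randn(1, 3, 256, 256).cuda()  # dummy RGB image
with torch.no_grad():
    mask_logits = unet(x)
print(mask_logits.shape)  # e.g. torch.Size([1, 1, 256, 256]) for the U-Net
```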
Recommended pretrained models:
Recommended GitHub repos:
Model utils:

```
tu.summary(model, input_size=(batch_size, 3, 224, 224))
tu.profile(model, input_shape=(batch_size, 3, 224, 224))

# adapt a pretrained RGB conv stem to single-channel (grayscale) input
weight_rgb = model.conv1.weight.data
weight_grey = weight_rgb.sum(dim=1, keepdim=True)
model.conv1 = nn.Conv2d(1, 64, kernel_size=xxx, stride=xxx, padding=xxx, bias=False)  # match the original conv1 hyperparameters
model.conv1.weight.data = weight_grey

# adapt a pretrained RGB conv stem to 4-channel (RGB + extra channel) input
weight_rgb = model.conv1.weight.data
weight_y = weight_rgb.mean(dim=1, keepdim=True)
weight_rgby = torch.cat([weight_rgb, weight_y], dim=1) * 3 / 4
model.conv1 = nn.Conv2d(4, 64, kernel_size=xxx, stride=xxx, padding=xxx, bias=False)  # match the original conv1 hyperparameters
model.conv1.weight.data = weight_rgby
```
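A concrete, self-contained sketch of the grayscale adaptation above using torchvision's ResNet-18 (whose `conv1` is a 7x7, stride-2, padding-3 convolution); this is an illustrative example, not a torch_utils API:

```
import torch
import torch.nn as nn
from torchvision.models import resnet18

model = resnet18(pretrained=True)

# sum the pretrained RGB filters so the learned features transfer to 1-channel input
weight_grey = model.conv1.weight.data.sum(dim=1, keepdim=True)
model.conv1 = nn.Conv2d(1, 64, kernel_size=7, stride=2, padding=3, bias=False)
model.conv1.weight.data = weight_grey

out = model(torch.randn(1, 1, 224, 224))  # grayscale input now works
print(out.shape)  # torch.Size([1, 1000])
```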
```
optimizer_ranger = tu.Ranger(model_conv.parameters(), lr=LR)
```
```
criterion = tu.SmoothBCEwLogits(smoothing=0.02)
```
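A hedged usage sketch, assuming `SmoothBCEwLogits` follows the `(input, target)` call convention of `nn.BCEWithLogitsLoss`:

```
import torch

logits = torch.randn(4, 1, requires_grad=True)  # e.g. raw model outputs for a batch of 4
targets = torch.randint(0, 2, (4, 1)).float()   # binary labels in {0, 1}
loss = criterion(logits, targets)               # label smoothing is applied internally
loss.backward()
```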
```
lr_finder = tu.LRFinder(model, optimizer, criterion, device="cuda")
lr_finder.range_test(train_loader, end_lr=10, num_iter=500, accumulation_steps=1)
lr_finder.plot()   # to inspect the loss-learning rate graph
lr_finder.reset()  # to reset the model and optimizer to their initial state
```
```
scheduler = tu.get_flat_anneal_scheduler(optimizer, max_iter, warmup_iter=0, decay_start=0.5, anneal='cos', gamma=0.05)
```
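A minimal training-loop sketch; stepping the scheduler once per iteration is an assumption based on it being defined over `max_iter` iterations:

```
for images, targets in train_loader:
    optimizer.zero_grad()
    loss = criterion(model(images.cuda()), targets.cuda())
    loss.backward()
    optimizer.step()
    scheduler.step()  # advance the flat-anneal schedule once per iteration
```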
Automatic mixed precision (AMP) reference: https://pytorch.org/docs/master/notes/amp_examples.html
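A minimal mixed-precision training sketch following that documentation (standard `torch.cuda.amp` usage, not a torch_utils API):

```
import torch

scaler = torch.cuda.amp.GradScaler()

for images, targets in train_loader:
    optimizer.zero_grad()
    with torch.cuda.amp.autocast():
        loss = criterion(model(images.cuda()), targets.cuda())
    scaler.scale(loss).backward()  # scale the loss to avoid fp16 gradient underflow
    scaler.step(optimizer)         # unscales gradients, then calls optimizer.step()
    scaler.update()
```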