This is an open solution to the Airbus Ship Detection Challenge.
We are building an entirely open solution to this competition. Specifically:

1. Learning from the process: updates about new ideas, code and experiments are the best way to learn data science. Our activity is especially useful for people who want to enter the competition but lack the appropriate experience.
1. Encourage more Kagglers to start working on this competition.
1. Deliver an open source solution with no strings attached. The code is available in our GitHub repository :computer:. This solution should establish a solid benchmark, as well as provide a good base for your custom ideas and experiments. We care about clean code :smiley:
1. We are opening our experiments as well: everybody can have a live preview of our experiments, parameters, code, etc. Check: Airbus Ship Detection Challenge :chart_with_upwards_trend: or the screen below.
|Train and validation monitor :bar_chart:|
|:---:|
||
In this open source solution you will find references to neptune.ml. It is a free platform for community users, which we use daily to keep track of our experiments. Please note that using neptune.ml is not necessary to proceed with this solution. You may run it as a plain Python script :snake:.
| link to code | CV | LB |
|:---:|:---:|:---:|
|solution 1|0.541|0.573|
|solution 2|0.661|0.679|
|solution 3|0.694|0.696|
|solution 4|0.722|0.703|
|solution 5|0.719|0.725|
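The CV and LB columns report scores under the competition metric, which is built on the intersection-over-union (IoU) of predicted and ground-truth ship masks evaluated at a sweep of thresholds. A minimal, illustrative IoU computation for binary masks (not the repository's evaluation code):

```python
import numpy as np

def iou(pred: np.ndarray, true: np.ndarray) -> float:
    """Intersection-over-union of two boolean masks."""
    inter = np.logical_and(pred, true).sum()
    union = np.logical_or(pred, true).sum()
    return inter / union if union else 1.0  # two empty masks match perfectly

# Toy example: a 2x2 prediction vs. a 2x3 ground-truth patch on a 4x4 grid
pred = np.zeros((4, 4), dtype=bool); pred[:2, :2] = True
true = np.zeros((4, 4), dtype=bool); true[:2, :3] = True
print(round(iou(pred, true), 3))  # 4 / 6 -> 0.667
```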
You can jump start your participation in the competition by using our starter pack. The installation instructions below will guide you through the setup.
```bash
pip3 install -r requirements.txt
neptune account login
```
Create a project, say Ships (SHIP). Open `neptune.yaml` and change the project entry to your username and project name.
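A minimal sketch of the relevant `neptune.yaml` entry; the exact key name may differ between neptune versions, and `USERNAME` is a placeholder:

```yaml
project: USERNAME/Ships
```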
Prepare the metadata and overlayed target masks. This only needs to be done once:
```bash
neptune send --worker xs \
--environment base-cpu-py3 \
--config neptune.yaml \
prepare_metadata.py
```
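Under the hood, the target masks come from the run-length-encoded ship annotations shipped with the competition data (`train_ship_segmentations.csv`). A minimal sketch of such a decoder, illustrative only and not the repository's actual implementation (Kaggle's RLE is 1-indexed and column-major):

```python
import numpy as np

def rle_decode(rle: str, shape=(768, 768)) -> np.ndarray:
    """Decode a Kaggle-style run-length encoding into a binary mask.

    Pixels are numbered from 1, top-to-bottom then left-to-right
    (column-major order); Airbus images are 768x768.
    """
    mask = np.zeros(shape[0] * shape[1], dtype=np.uint8)
    if rle:
        nums = np.asarray(rle.split(), dtype=int)
        starts, lengths = nums[0::2] - 1, nums[1::2]
        for start, length in zip(starts, lengths):
            mask[start:start + length] = 1
    return mask.reshape(shape, order="F")  # column-major unflattening

# Toy example on a 4x4 grid: runs "1 2" and "7 1" light up 3 pixels
print(rle_decode("1 2 7 1", shape=(4, 4)).sum())  # -> 3
```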
They will be saved in the
From now on we will load the metadata by adding the path to the experiment that generated it, say SHIP-1, as an `--input` to every command.
Let's train the model by running `main.py`:
```bash
neptune send --worker m-2p100 \
--environment pytorch-0.3.1-gpu-py3 \
--config neptune.yaml \
--input /SHIP-1/output/metadata.csv \
--input /SHIP-1/output/masks_overlayed \
main.py
```
The model will be saved in the:
submission.csv will be saved in
You can easily use models trained during one experiment in other experiments. For example, when running evaluation we need to use the model folder from a previous experiment. We do that by setting:
```
CLONE_EXPERIMENT_DIR_FROM = '/SHIP-2/output/experiment'
```
and running the following command:
```bash
neptune send --worker m-2p100 \
--environment pytorch-0.3.1-gpu-py3 \
--config neptune.yaml \
--input /SHIP-1/output/metadata.csv \
--input /SHIP-1/output/masks_overlayed \
--input /SHIP-2 \
main.py
```
Log in to Neptune if you want to use it:
```bash
neptune account login
```
Prepare metadata by running:
```bash
neptune run --config neptune.yaml prepare_metadata.py
```
Run training and inference with:

```bash
neptune run --config neptune.yaml main.py
```
You can always run it with pure python :snake:
You are welcome to contribute your code and ideas to this open solution. To get started:

1. Check the competition project on GitHub to see what we are working on right now.
1. Express your interest in a particular task by writing a comment in this task, or by creating a new one with your fresh idea.
1. We will get back to you quickly so that we can start working together.
1. Check CONTRIBUTING for more information.
There are several ways to seek help:

1. Kaggle discussion is our primary way of communication.
1. Submit an issue directly in this repo.
data-science machine-learning deep-learning deep-neural-networks unet unet-image-segmentation python python3 pytorch pytorch-implmention neptune neptune-framework python35 open-science kaggle kaggle-competition