Preprocessing data based on https://marinecadastre.gov/ais/
This repository is dedicated to downloading, extracting, and preprocessing the Coast Guard AIS dataset for use in machine learning algorithms. Data are available for all months of the years 2015 - 2017, and for UTM zones 1 - 20.
This guide assumes that you have Python 3.x installed on your machine, preferably Python 3.7 or above.
This repository is intended to be run using `pipenv` to manage package dependencies. If you do not have `pipenv` installed, try calling

```shell
pip install pipenv
```

if you have `pip` installed, or install it using your OS package manager (details: https://github.com/pypa/pipenv). Once `pipenv` has been installed, run the following command from the repository home directory. If you already have an environment that runs Python 3.7, just activate that environment and skip this command.
```shell
pipenv --python 3.7
```
Once an environment running Python 3.7 is available and `pipenv` is installed in it, run this command to install all dependencies:
```shell
pipenv install
```
If the above command runs successfully, you should be ready to run the programs in the workflow.
Alternatively, if you have `pip` installed and can easily run Python 3, installing the required packages with `requirements.txt` should also work:
```shell
pip install -r requirements.txt
```
If the above command runs successfully, you should be ready to run the programs in the workflow.
### get_raw.sh
Run this file first to download and unzip the dataset. Command line arguments may be used to specify the year, month, and zone; without them, all files within the year, month, and zone ranges specified in the file will be downloaded. Optionally, the output directory path can be given as another command line argument, either by itself or preceding the year, month, and zone arguments. Read the top of the file for more details on what each parameter does.
### process_ais_data.py
Once `get_raw.sh` has finished downloading and unzipping the desired files, `process_ais_data.py` processes all the desired csv files and condenses them into a final sequence file that can be used as input to algorithms. `process_ais_data.py` has flexibility in how it preprocesses the data, which is configured in `config.yaml`: the coordinate grid can be bounded, certain time and zone ranges can be specified, and more. See `config.yaml` and `process_ais_data.py` for details on all of the options and functionality.
### AIS_demo_data.ipynb
Once `get_raw.sh` and `process_ais_data.py` have been run as desired, this Jupyter notebook uses pandas and plotly to draw trajectories on an interactive map, demonstrating how the pipeline has processed the data. If you are unfamiliar with Jupyter notebooks, run `jupyter notebook` and select this file to run it interactively.
### get_raw.sh
- `DELETE_ZIPPED` - specify whether or not to delete zipped files that are downloaded during execution.
- `CLEAR_BEFORE_DOWNLOAD` - specify whether or not to clear files already extracted before downloading new ones.
- `YEAR_BEGIN`, `MONTH_BEGIN` - the earliest data to download, inclusive, e.g. `YEAR_BEGIN=2016, MONTH_BEGIN=3` would download files beginning with March 2016.
- `YEAR_END`, `MONTH_END` - the latest data to download, inclusive, e.g. `YEAR_END=2016, MONTH_END=3` would download files ending with March 2016.
- `ZONE_BEGIN`, `ZONE_END` - the zone range to download (inclusive). Zone description: 1 - 9 -> Alaska, 4 - 5 -> Hawaii, 9 - 20 -> continental US. See `get_raw.sh` for more information.
- `OUTPUT_DIR` - the directory where the downloaded files should live once the script completes. The output files will be in the folder `AIS_ASCII_by_UTM_Month` in the specified directory.

### process_ais_data.py

All options for this file are specified in `config.yaml`, not the file itself.
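Since all configuration lives in `config.yaml`, the script presumably loads it with a YAML parser. As a minimal sketch (assuming PyYAML is available; the exact loading code lives in `process_ais_data.py`):

```python
import yaml  # PyYAML, assumed to be among the installed dependencies

# A small excerpt of config.yaml, parsed the way the script presumably does it.
snippet = """
options:
    limit_rows : False
    max_rows : 100000
"""
cfg = yaml.safe_load(snippet)
assert cfg["options"]["limit_rows"] is False
assert cfg["options"]["max_rows"] == 100000
```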
#### options

The main options that can be specified for the script.
- `limit_rows` - specifies whether to only read the first `max_rows` of each csv.
- `max_rows` - when `limit_rows` is true, specifies the number of rows to read of each CSV file.
- `bound_lon` - specifies whether to use specified hard longitude boundaries instead of inferring them. Boundaries are read from `grid_params`.
- `bound_lat` - specifies whether to use specified hard latitude boundaries instead of inferring them. Boundaries are read from `grid_params`.
- `bound_time` - specifies whether to bound the times considered. Boundaries are read from `meta_params`.
- `bound_zone` - specifies whether to bound the zones being read. Boundaries are read from `meta_params`.
- `interp_actions` - specifies whether to interpolate actions when state transitions are not adjacent; otherwise actions can be arbitrarily large.
- `allow_diag` - when `interp_actions` is true, specifies whether to allow diagonal grid actions.
- `append_coords` - specifies whether to add raw latitude and longitude values as columns to the output csv. This also adds an extra row to each trajectory containing just the final state and its original coordinates. When interpolating, the coordinates of interpolated states are simply the centers of the grid squares they represent.
- `prec_coords` - specifies the precision of coordinates in the output, as the number of decimal places each coordinate is rounded to.
- `min_states` - specifies the minimum number of sequentially unique discretized states needed to qualify a trajectory for final output, e.g. if `min_states=3`, then the state trajectory `1, 2, 1` would qualify whereas `1, 1, 2` would not. In other words, the minimum number of states in a trajectory that were not reached via self-transition.
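The `min_states` rule can be expressed as a simple count of non-self-transitions; a minimal sketch (the authoritative check lives in `process_ais_data.py`):

```python
def num_seq_unique(states):
    """Count states that were not reached via a self-transition."""
    count = 0
    prev = None
    for s in states:
        if s != prev:
            count += 1
        prev = s
    return count

# Matches the example above for min_states=3:
assert num_seq_unique([1, 2, 1]) == 3  # qualifies
assert num_seq_unique([1, 1, 2]) == 2  # does not
```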
#### directories

The input and output directory specification for the script.

- `in_dir_path` - specifies the directory where the input data is located.
- `in_dir_data` - specifies the folder name containing all the data in the input directory.
- `out_dir_path` - specifies the directory where the output data files should be written.
- `out_dir_file` - specifies the output file name (should be `.csv`).
#### meta_params

Specifies the same time and zone boundary controls available in `get_raw.sh`, except that here they determine which already-downloaded files are considered. Data are available between 2015 - 2017, January - December, zones 1 - 20.
#### grid_params

When hard boundaries are set in `options`, those grid boundaries are specified here, except `grid_len`, which matters regardless.

- `grid_len` - the length of one side of one grid square, in degrees.
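For intuition, a grid square can be identified from a coordinate pair by flooring against `grid_len`. This is an illustrative sketch only: the real discretization is in `process_ais_data.py`, and the row-major numbering and example boundaries here are assumptions.

```python
import math

def coord_to_state(lat, lon, min_lat, min_lon, max_lon, grid_len):
    """Map a (lat, lon) pair to a row-major grid state id (illustrative)."""
    row = math.floor((lat - min_lat) / grid_len)
    col = math.floor((lon - min_lon) / grid_len)
    num_cols = math.ceil((max_lon - min_lon) / grid_len)
    return row * num_cols + col

# A 0.5-degree grid over a 6-degree-wide zone has 12 columns,
# so moving up one row advances the state id by 12:
assert coord_to_state(35.0, -126.0, 35.0, -126.0, -120.0, 0.5) == 0
assert coord_to_state(35.6, -125.8, 35.0, -126.0, -120.0, 0.5) == 12
```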
To get the data, we use `get_raw.sh`. Let's assume that we want all the data in zone 10 between November 2016 and January 2017, placed in a folder that already exists in our working directory called `data`. We have four options for how to do this:
Specify the output directory, times, and zones sequentially on the command line:

```shell
./get_raw.sh data 2016 11 10
./get_raw.sh data 2016 12 10
./get_raw.sh data 2017 1 10
```
Specify the output directory in `get_raw.sh` by setting:

```shell
OUTPUT_DIR=data
```
Then specify which times and zones to download on the command line:

```shell
./get_raw.sh 2016 11 10
./get_raw.sh 2016 12 10
./get_raw.sh 2017 1 10
```
Specify which times and zones to download in `get_raw.sh`:

```shell
YEAR_BEGIN=2016
MONTH_BEGIN=11
ZONE_BEGIN=10
YEAR_END=2017
MONTH_END=1
ZONE_END=10
```
Then specify the output directory on the command line:

```shell
./get_raw.sh data
```
Specify the times, zones, and output directory all in `get_raw.sh`, combining options 2 and 3:

```shell
./get_raw.sh
```
Once the data finishes downloading and inflating, we are ready to preprocess the data.
To preprocess the data, we use `process_ais_data.py`. Before running the script, `config.yaml` should be configured.

First, because we put our data in the `data` folder in our current directory, we need to set `directories: in_dir_path` to `data/` to reflect this.
We'd like to make use of all the data we just downloaded, so we should read the whole file for each csv by setting `options: limit_rows` to `False`.
Next, because one longitudinal zone covers a much smaller longitude range than the latitude range it spans, we would like to bound the latitude we consider by setting `options: bound_lat` to `True` and then setting `grid_params: min_lat` and `max_lat` to our desired values. Values between 35 N and 45 N latitude seem reasonable, so we'll set `min_lat : 35.0` and `max_lat : 45.0`.
We'd also like to make use of all the data we just downloaded, so we need to set the `meta_params` accordingly to consider the right data when preprocessing. Assuming `options: bound_time` is `True`, we'll need to set `meta_params: min_year=2016, min_month=11, max_year=2017, max_month=1` to consider data in the time period we just downloaded. Assuming `options: bound_zone` is `True`, we'll need to set `meta_params: min_zone: 10, max_zone: 10` to consider data in the zone we just downloaded. If either of `options: bound_time, bound_zone` is `False`, then the corresponding `meta_params` won't be evaluated.
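Note that a time bound spanning a year boundary works because (year, month) pairs compare lexicographically; a quick sketch of the kind of check involved (illustrative, not the script's actual code):

```python
def in_time_bounds(year, month, min_year, min_month, max_year, max_month):
    # Tuple comparison handles ranges that cross a year boundary.
    return (min_year, min_month) <= (year, month) <= (max_year, max_month)

# Nov 2016 - Jan 2017, as configured above:
assert in_time_bounds(2016, 12, 2016, 11, 2017, 1)
assert in_time_bounds(2017, 1, 2016, 11, 2017, 1)
assert not in_time_bounds(2017, 2, 2016, 11, 2017, 1)
```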
For interpolating actions, we'd like to limit the actions to just up, down, left, right, and none, so we will set `options: interp_actions` to `True` and `options: allow_diag` to `False`.
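With diagonal actions disabled, interpolation restricts each step to a 4-neighbour move. A hypothetical sketch of filling in a path between two non-adjacent grid squares (the actual interpolation logic is in `process_ais_data.py`; the rows-then-columns ordering here is an assumption):

```python
def interp_path(r0, c0, r1, c1):
    """Greedy 4-neighbour path between grid squares (rows first, then columns)."""
    path = [(r0, c0)]
    r, c = r0, c0
    while (r, c) != (r1, c1):
        if r != r1:
            r += 1 if r1 > r else -1
        else:
            c += 1 if c1 > c else -1
        path.append((r, c))
    return path

assert interp_path(0, 0, 2, 1) == [(0, 0), (1, 0), (2, 0), (2, 1)]
```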
Since we'd like to get a csv with just the `id-state-action-state` entries for each row, we will set `options: append_coords` to `False`. If we had instead set this to `True`, the output would have two extra columns specifying the longitude and latitude of the first state in each row, and an extra row would be appended to each trajectory with just the last state and its coordinates, for easier plotting. If actions are being interpolated, the latitude and longitude of an interpolated state are written as the center coordinates of its grid square.
For our purposes, let's assume that 3 decimal places of precision are sufficient for our coordinates, so we'll set `options: prec_coords=3`.
Lastly, since we'd like the final grid to have a manageable number of states (< 1000), we will size the grid accordingly. The one zone we downloaded (10) has a width of 6 degrees longitude, and we specified the latitude boundaries to be between 35 and 45 degrees north, so if we choose `grid_params: grid_len=0.5`, then we will have `6/0.5 = 12` columns and `10/0.5 = 20` rows in our final grid, for a grand total of 240 states.
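The arithmetic above, spelled out:

```python
zone_width_deg = 6.0          # one UTM zone spans 6 degrees of longitude
lat_range_deg = 45.0 - 35.0   # our chosen latitude bounds
grid_len = 0.5

cols = int(zone_width_deg / grid_len)
rows = int(lat_range_deg / grid_len)
assert (cols, rows, cols * rows) == (12, 20, 240)
```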
Our final `config.yaml` should look like this:
```yaml
options:
    limit_rows : False
    max_rows : 100000
    bound_lon : False
    bound_lat : True
    bound_time : True
    bound_zone : True
    interp_actions : True
    allow_diag : False
    append_coords : False
    prec_coords : 3
    min_states : 2
directories:
    in_dir_path : data/
    in_dir_data : AIS_ASCII_by_UTM_Month/
    out_dir_path : ./
    out_dir_file : ais_data_output.csv
meta_params:
    min_year : 2016
    min_month : 11
    max_year : 2017
    max_month : 1
    min_zone : 10
    max_zone : 10
grid_params:
    min_lon : -78.0
    max_lon : -72.0
    min_lat : 35.0
    max_lat : 45.0
    num_cols : 0
    grid_len : 0.5
```
Now that our script is configured to our liking, we are ready to run it:

```shell
python process_ais_data.py
```

Once the script finishes (hopefully without error), the output csv will be in the current directory as `ais_data_output.csv`, ready for further processing.
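The output can then be consumed downstream, e.g. grouped into per-ship trajectories. A minimal stdlib sketch, using a tiny stand-in for `ais_data_output.csv` (the values are made up, and a header row is assumed; the column names follow the output format described below):

```python
import csv
import io

# Stand-in for ais_data_output.csv; values are illustrative.
sample = io.StringIO(
    "sequence_id,from_state_id,action_id,to_state_id\n"
    "0,14,1,15\n"
    "0,15,2,27\n"
    "1,100,0,100\n"
)

# Group rows into one transition list per ship.
trajectories = {}
for row in csv.DictReader(sample):
    trajectories.setdefault(int(row["sequence_id"]), []).append(
        (int(row["from_state_id"]), int(row["action_id"]), int(row["to_state_id"]))
    )

assert trajectories[0] == [(14, 1, 15), (15, 2, 27)]
assert trajectories[1] == [(100, 0, 100)]
```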
The csv files obtained have the following data available:

- **MMSI** - Maritime Mobile Service Identity value (integer as text)
- **BaseDateTime** - Full UTC date and time (YYYY-MM-DDTHH:MM:SS)
- **LAT** - Latitude (decimal degrees as double)
- **LON** - Longitude (decimal degrees as double)
- SOG - Speed Over Ground (knots as float)
- COG - Course Over Ground (degrees as float)
- Heading - True heading angle (degrees as float)
- VesselName - Name as shown on the station radio license (text)
- IMO - International Maritime Organization Vessel number (text)
- CallSign - Call sign as assigned by FCC (text)
- VesselType - Vessel type as defined in NAIS specifications (int)
- Status - Navigation status as defined by the COLREGS (text)
- Length - Length of vessel (see NAIS specifications) (meters as float)
- Width - Width of vessel (see NAIS specifications) (meters as float)
- Draft - Draft depth of vessel (see NAIS specification and codes) (meters as float)
- Cargo - Cargo type (see NAIS specification and codes) (text)
- TransceiverClass - Class of AIS transceiver (text) (unavailable in 2017 dataset)
This preprocessing program only makes use of the bolded columns above. MMSI maps to `sequence_id`, and (LAT, LON) tuples map to `state_id`s. `action_id` is inferred by looking at sequential `state_id`s when the data are sorted by BaseDateTime.
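Since the ordering hinges on BaseDateTime, note that its format parses directly with the standard library:

```python
from datetime import datetime

# BaseDateTime follows YYYY-MM-DDTHH:MM:SS (UTC); the timestamp is illustrative.
ts = datetime.strptime("2017-01-01T00:00:03", "%Y-%m-%dT%H:%M:%S")
assert (ts.year, ts.month, ts.second) == (2017, 1, 3)
```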
- `sequence_id` - the unique identifier of the ship, represented as integers ascending from 0; an alias for MMSI.
- `from_state_id` - the coordinate grid square the ship started in for a given transition, represented as an integer.
- `action_id` - the direction and length a ship went to transition between states, represented as an integer. See `process_ais_data.py` for more detail.
- `to_state_id` - the coordinate grid square the ship ended in for a given transition, represented as an integer.
- `lon` (optional) - the original longitude of the `from_state_id` in the row when `options: append_coords` is set to `True`, unless the `from_state_id` is an inferred state resulting from interpolation during preprocessing, in which case the longitude will be the middle of the grid square corresponding to `from_state_id`.
- `lat` (optional) - the original latitude of the `from_state_id` in the row when `options: append_coords` is set to `True`, unless the `from_state_id` is an inferred state resulting from interpolation during preprocessing, in which case the latitude will be the middle of the grid square corresponding to `from_state_id`.
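The "middle of the grid square" for an interpolated state can be recovered by inverting the discretization. An illustrative sketch, assuming states are numbered row-major over the grid (the real mapping is in `process_ais_data.py`):

```python
def state_center(state_id, min_lat, min_lon, num_cols, grid_len):
    """Center coordinates of a grid square, assuming row-major state numbering."""
    row, col = divmod(state_id, num_cols)
    lat = min_lat + (row + 0.5) * grid_len
    lon = min_lon + (col + 0.5) * grid_len
    return lat, lon

# 12-column grid of 0.5-degree squares anchored at (35.0, -126.0):
assert state_center(0, 35.0, -126.0, 12, 0.5) == (35.25, -125.75)
assert state_center(12, 35.0, -126.0, 12, 0.5) == (35.75, -125.75)
```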
Note: this will not work on Windows.

To build the docs, go from the main project directory to the `docs` directory:

```shell
cd docs
```
Then use `make` to completely rebuild the docs:

```shell
make clean
make build
```
Given that the docs build correctly, they can be viewed locally by changing to the build output directory and starting a local HTTP server in Python:

```shell
cd build/html
python -m http.server
```