DI-smartcross is an open-source Decision Intelligence platform for traffic crossing signal control tasks. It provides training and evaluation of several Reinforcement Learning policies for traffic signal control on the provided road networks. DI-smartcross is an application platform under OpenDILab.
DI-smartcross uses DI-engine, a Reinforcement Learning platform, to build RL experiments, and uses the SUMO (Simulation of Urban MObility) and CityFlow traffic simulators to run signal control simulations.
DI-smartcross supports:
DI-smartcross supports SUMO version >= 1.6.0. You can refer to the SUMO documentation or follow the installation guidance in our documents.
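As a hedged sketch only (assuming an Ubuntu host; this is an illustration, not the project's official instructions), a recent SUMO release can be installed from the SUMO PPA:

```bash
# Assumption: Ubuntu with the official SUMO PPA; adjust for your platform.
sudo add-apt-repository ppa:sumo/stable
sudo apt-get update
sudo apt-get install sumo sumo-tools sumo-doc
sumo --version                      # should report >= 1.6.0
export SUMO_HOME=/usr/share/sumo    # typical path for an apt-based install
```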
CityFlow can be compiled and installed from source code. You can clone its repo and run `pip install .`
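For example, a minimal sketch assuming the upstream cityflow-project repository URL and a working C++ toolchain with CMake available for the build:

```bash
# Clone CityFlow and build/install it from source.
git clone https://github.com/cityflow-project/CityFlow.git
cd CityFlow
pip install .
```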
Then DI-smartcross can be installed from source. Simply run `pip install .` in the root folder of this repository. This will automatically install DI-engine as well.
```bash
pip install -e . --user
```
DI-smartcross provides simple entry scripts for RL training and evaluation. It supports DQN, off-policy PPO and Rainbow DQN RL methods with multi-discrete actions for each crossing, as well as multi-agent RL policies in which each crossing is handled by an individual agent. A set of default DI-engine configs is provided for each policy. You can check the DI-engine documentation for detailed instructions on these configs.
Here we show the RL training scripts for SUMO envs; usage is the same for CityFlow envs.
Example of running DQN in the SUMO wj3 env with the default config:
```bash
sumo_train -e smartcross/envs/sumo_wj3_default_config.yaml -d entry/config/sumo_wj3_dqn_default_config.py
```
Example of running PPO in the CityFlow grid env with the default config:
```bash
cityflow_train -e ./smartcross/envs/cityflow_grid/cityflow_grid_config.json -d entry/cityflow_config/cityflow_grid_ppo_default_config.py
```
Example of running a random policy in the wj3 env:
```bash
sumo_eval -p random -e smartcross/envs/sumo_wj3_default_config.yaml
```
Example of running the fix policy in the CityFlow grid env:
```bash
cityflow_eval -e smartcross/envs/cityflow_grid/cityflow_auto_grid_config.json -d entry/cityflow_config/cityflow_eval_default_config.py -p fix
```
It is recommended to refer to the documentation for detailed information.
```
DI-smartcross
|-- .flake8
|-- .gitignore
|-- .style.yapf
|-- LICENSE
|-- README.md
|-- format.sh
|-- modify_traci_connect_timeout.sh
|-- setup.py
|-- docs
| |-- .gitignore
| |-- Makefile
| |-- figs
| |-- source
|-- entry
| |-- cityflow_eval
| |-- cityflow_train
| |-- sumo_eval
| |-- sumo_train
| |-- cityflow_config
| |-- sumo_config
|-- smartcross
| |-- __init__.py
| |-- envs
| | |-- __init__.py
| | |-- cityflow_env.py
| | |-- crossing.py
| | |-- sumo_arterial7_default_config.yaml
| | |-- sumo_arterial7_multi_agent_config.yaml
| | |-- sumo_env.py
| | |-- sumo_wj3_default_config.yaml
| | |-- sumo_wj3_multi_agent_config.yaml
| | |-- action
| | |-- cityflow_grid
| | |-- obs
| | |-- reward
| | |-- sumo_arterial_7roads
| | |-- sumo_wj3
| | |-- tests
| | | |-- test_cityflow_env.py
| | | |-- test_sumo_env.py
| |-- policy
| | |-- __init__.py
| | |-- default_policy.py
| | |-- tests
| | | |-- test_policy.py
| |-- utils
| | |-- config_utils.py
| | |-- env_utils.py
```
We appreciate all contributions to improve DI-smartcross, in both algorithms and system design. Welcome to the OpenDILab community! Scan the QR code and add us on WeChat:
Or you can contact us via Slack or email ([email protected]).
DI-smartcross is released under the Apache 2.0 license.
```latex
@misc{smartcross,
    title={{DI-smartcross: OpenDILab} Decision Intelligence platform for Traffic Crossing Signal Control},
    author={DI-smartcross Contributors},
    publisher={GitHub},
    howpublished={\url{https://github.com/opendilab/DI-smartcross}},
    year={2021},
}
```
Initial release of DI-smartcross with SUMO and CityFlow environments supported.