
simglucose


A Type-1 Diabetes simulator implemented in Python for Reinforcement Learning purposes

This simulator is a python implementation of the FDA-approved UVa/Padova Simulator (2008 version), for research purposes only. The simulator includes 30 virtual patients: 10 adolescents, 10 adults, and 10 children.
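For reference, the virtual patients are addressed by names of the form group#NNN (the same pattern used in the gym examples below); a small sketch to enumerate all 30:

```python
# Enumerate all 30 virtual patient names: 'adolescent#001' ... 'child#010'.
patient_names = ['{}#{:03d}'.format(group, i)
                 for group in ('adolescent', 'adult', 'child')
                 for i in range(1, 11)]
print(patient_names)
```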

HOW TO CITE: Jinyu Xie. Simglucose v0.2.1 (2018) [Online]. Available: https://github.com/jxx123/simglucose. Accessed on: Month-Date-Year.

  • Note: simglucose only supports Python 3.

| Animation | CVGA Plot | BG Trace Plot | Risk Index Stats |
|:---------:|:---------:|:-------------:|:----------------:|
| animation screenshot | CVGA | BG Trace Plot | Risk Index Stats |

Main Features

  • The simulation environment follows OpenAI gym and rllab APIs. It returns observation, reward, done, info at each step, which means the simulator is "reinforcement-learning-ready".
  • Supports customized reward functions. The reward function is a function of the blood glucose measurements in the last hour. By default, the reward at each step is risk[t-1] - risk[t], where risk[t] is the risk index at time t defined in this paper (see the sketch after this list).
  • Supports parallel computing. The simulator simulates multiple patients in parallel using the pathos multiprocessing package (you are free to turn parallelism off by setting parallel=False).
  • The simulator provides a random scenario generator (from simglucose.simulation.scenario_gen import RandomScenario) and a customized scenario generator (from simglucose.simulation.scenario import CustomScenario). The command-line user interface will guide you through the scenario settings.
  • The simulator provides the most basic basal-bolus controller for now. It also provides a simple interface for implementing your own controller, such as Model Predictive Control, PID control, or reinforcement learning control.
  • You can specify a random seed in case you want to repeat your experiments.
  • The simulator generates several plots for performance analysis after simulation: a blood glucose trace plot, a Control Variability Grid Analysis (CVGA) plot, a statistics plot of blood glucose in different zones, and a risk index statistics plot.
  • NOTE: animate and parallel cannot both be set to True on macOS, since most matplotlib backends on macOS are not thread-safe. Windows has not been tested; let me know the results if anybody has tried it.
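For reference, here is a minimal sketch of the default risk[t-1] - risk[t] reward written as a standalone reward function. It assumes risk_index from simglucose.analysis.risk returns an (LBGI, HBGI, RI) tuple, as in recent versions of the package; treat it as illustrative rather than the exact internals:

```python
from simglucose.analysis.risk import risk_index


def risk_diff_reward(BG_last_hour):
    # Not enough history on the first step; return a neutral reward.
    if len(BG_last_hour) < 2:
        return 0
    # risk_index(BG, horizon) computes (LBGI, HBGI, RI) over the last
    # `horizon` samples; horizon=1 evaluates a single measurement.
    _, _, risk_prev = risk_index([BG_last_hour[-2]], 1)
    _, _, risk_now = risk_index([BG_last_hour[-1]], 1)
    return risk_prev - risk_now
```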

Installation

It is highly recommended to use pip to install simglucose; follow this link to install pip.

Auto installation:

```bash
pip install simglucose
```

Manual installation:

```bash
git clone https://github.com/jxx123/simglucose.git
cd simglucose
```

If you have pip installed, then

```bash
pip install -e .
```

If you do not have pip, then

```bash
python setup.py install
```
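Either way, a quick smoke test (not part of the original instructions) confirms the package is importable:

```python
# Quick smoke test: the import should succeed without errors.
import simglucose
print("simglucose imported OK")
```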

If rllab (optional) is installed, the package will utilize some of its functionality.

Note: there might be some minor differences between the auto-installed version and the manually installed version. Use git clone and manual installation to get the latest version.

Quick Start

Use simglucose as a simulator and test controllers

Run the simulator user interface:

```python
from simglucose.simulation.user_interface import simulate
simulate()
```

You are free to implement your own controller and test it in the simulator. For example:

```python
from simglucose.simulation.user_interface import simulate
from simglucose.controller.base import Controller, Action


class MyController(Controller):
    def __init__(self, init_state):
        self.init_state = init_state
        self.state = init_state

    def policy(self, observation, reward, done, **info):
        '''
        Every controller must have this implementation!
        ----
        Inputs:
        observation - a namedtuple defined in simglucose.simulation.env. For
                      now, it only has one entry: blood glucose level measured
                      by CGM sensor.
        reward      - current reward returned by environment
        done        - True, game over. False, game continues
        info        - additional information as key word arguments,
                      simglucose.simulation.env.T1DSimEnv returns patient_name
                      and sample_time
        ----
        Output:
        action - a namedtuple defined at the beginning of this file. The
                 controller action contains two entries: basal, bolus
        '''
        self.state = observation
        action = Action(basal=0, bolus=0)
        return action

    def reset(self):
        '''
        Reset the controller state to initial state, must be implemented
        '''
        self.state = self.init_state


ctrller = MyController(0)
simulate(controller=ctrller)
```

These two examples can also be found in the examples/ folder.

In fact, you can specify many more simulation parameters through simulate:

```python
simulate(sim_time=my_sim_time,
         scenario=my_scenario,
         controller=my_controller,
         start_time=my_start_time,
         save_path=my_save_path,
         animate=False,
         parallel=True)
```
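To make those placeholders concrete, here is one way to fill them in (a sketch reusing only objects already shown in this README; ctrller is the MyController instance from the example above):

```python
from datetime import datetime, timedelta
from simglucose.simulation.scenario_gen import RandomScenario

my_start_time = datetime(2018, 1, 1, 0, 0, 0)
my_scenario = RandomScenario(start_time=my_start_time, seed=1)

simulate(sim_time=timedelta(days=1),
         scenario=my_scenario,
         controller=ctrller,   # the MyController instance defined above
         start_time=my_start_time,
         save_path='./results',
         animate=False,
         parallel=True)
```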

OpenAI Gym usage

  • Using default reward

```python
import gym
from gym.envs.registration import register

# Register gym environment. By specifying kwargs,
# you are able to choose which patient to simulate.
# patient_name must be 'adolescent#001' to 'adolescent#010',
# or 'adult#001' to 'adult#010', or 'child#001' to 'child#010'.
register(
    id='simglucose-adolescent2-v0',
    entry_point='simglucose.envs:T1DSimEnv',
    kwargs={'patient_name': 'adolescent#002'}
)

env = gym.make('simglucose-adolescent2-v0')

observation = env.reset()
for t in range(100):
    env.render(mode='human')
    print(observation)
    # Action in the gym environment is a scalar
    # representing the basal insulin, which differs from
    # the regular controller action outside the gym
    # environment (a tuple (basal, bolus)).
    # In the perfect situation, the agent should be able
    # to control the glucose only through basal instead
    # of asking the patient to take a bolus.
    action = env.action_space.sample()
    observation, reward, done, info = env.step(action)
    if done:
        print("Episode finished after {} timesteps".format(t + 1))
        break
```

  • Customized reward function

```python
import gym
from gym.envs.registration import register


def custom_reward(BG_last_hour):
    if BG_last_hour[-1] > 180:
        return -1
    elif BG_last_hour[-1] < 70:
        return -2
    else:
        return 1


register(
    id='simglucose-adolescent2-v0',
    entry_point='simglucose.envs:T1DSimEnv',
    kwargs={'patient_name': 'adolescent#002',
            'reward_fun': custom_reward}
)

env = gym.make('simglucose-adolescent2-v0')

reward = 1
done = False

observation = env.reset()
for t in range(200):
    env.render(mode='human')
    action = env.action_space.sample()
    observation, reward, done, info = env.step(action)
    print(observation)
    print("Reward = {}".format(reward))
    if done:
        print("Episode finished after {} timesteps".format(t + 1))
        break
```

rllab usage

```python
from rllab.algos.ddpg import DDPG
from rllab.envs.normalized_env import normalize
from rllab.exploration_strategies.ou_strategy import OUStrategy
from rllab.policies.deterministic_mlp_policy import DeterministicMLPPolicy
from rllab.q_functions.continuous_mlp_q_function import ContinuousMLPQFunction
from rllab.envs.gym_env import GymEnv
from gym.envs.registration import register

register(
    id='simglucose-adolescent2-v0',
    entry_point='simglucose.envs:T1DSimEnv',
    kwargs={'patient_name': 'adolescent#002'}
)

env = GymEnv('simglucose-adolescent2-v0')
env = normalize(env)

policy = DeterministicMLPPolicy(
    env_spec=env.spec,
    # The neural network policy should have two hidden layers,
    # each with 32 hidden units.
    hidden_sizes=(32, 32)
)

es = OUStrategy(env_spec=env.spec)

qf = ContinuousMLPQFunction(env_spec=env.spec)

algo = DDPG(
    env=env,
    policy=policy,
    es=es,
    qf=qf,
    batch_size=32,
    max_path_length=100,
    epoch_length=1000,
    min_pool_size=10000,
    n_epochs=1000,
    discount=0.99,
    scale_reward=0.01,
    qf_learning_rate=1e-3,
    policy_learning_rate=1e-4
)
algo.train()
```

Advanced Usage

You can create the simulation objects and run batch simulations. For example:

```python
from simglucose.simulation.env import T1DSimEnv
from simglucose.controller.basal_bolus_ctrller import BBController
from simglucose.sensor.cgm import CGMSensor
from simglucose.actuator.pump import InsulinPump
from simglucose.patient.t1dpatient import T1DPatient
from simglucose.simulation.scenario_gen import RandomScenario
from simglucose.simulation.scenario import CustomScenario
from simglucose.simulation.sim_engine import SimObj, sim, batch_sim
from datetime import timedelta
from datetime import datetime

# specify start_time as the beginning of today
now = datetime.now()
start_time = datetime.combine(now.date(), datetime.min.time())

# --------- Create Random Scenario --------------
# Specify results saving path
path = './results'

# Create a simulation environment
patient = T1DPatient.withName('adolescent#001')
sensor = CGMSensor.withName('Dexcom', seed=1)
pump = InsulinPump.withName('Insulet')
scenario = RandomScenario(start_time=start_time, seed=1)
env = T1DSimEnv(patient, sensor, pump, scenario)

# Create a controller
controller = BBController()

# Put them together to create a simulation object
s1 = SimObj(env, controller, timedelta(days=1), animate=False, path=path)
results1 = sim(s1)
print(results1)

# --------- Create Custom Scenario --------------
# Create a simulation environment
patient = T1DPatient.withName('adolescent#001')
sensor = CGMSensor.withName('Dexcom', seed=1)
pump = InsulinPump.withName('Insulet')

# custom scenario is a list of tuples (time, meal_size)
scen = [(7, 45), (12, 70), (16, 15), (18, 80), (23, 10)]
scenario = CustomScenario(start_time=start_time, scenario=scen)
env = T1DSimEnv(patient, sensor, pump, scenario)

# Create a controller
controller = BBController()

# Put them together to create a simulation object
s2 = SimObj(env, controller, timedelta(days=1), animate=False, path=path)
results2 = sim(s2)
print(results2)

# --------- batch simulation --------------
# Re-initialize simulation objects
s1.reset()
s2.reset()

# create a list of SimObj, and call batch_sim
s = [s1, s2]
results = batch_sim(s, parallel=True)
print(results)
```
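Since batch_sim takes a plain list of SimObj, it scales to many patients; below is a sketch, using only the classes shown above and continuing the same script, that runs all 10 adolescents in one batch:

```python
# Sketch: one SimObj per adolescent patient, then a single parallel batch.
sim_objects = []
for i in range(1, 11):
    name = 'adolescent#{:03d}'.format(i)
    patient = T1DPatient.withName(name)
    sensor = CGMSensor.withName('Dexcom', seed=1)
    pump = InsulinPump.withName('Insulet')
    scenario = RandomScenario(start_time=start_time, seed=i)
    env = T1DSimEnv(patient, sensor, pump, scenario)
    sim_objects.append(
        SimObj(env, BBController(), timedelta(days=1),
               animate=False, path=path))

results = batch_sim(sim_objects, parallel=True)
print(results)
```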

Run analysis offline (example/offline_analysis.py):

```python
from simglucose.analysis.report import report
import pandas as pd
from pathlib import Path

# get the path to the example folder
example_pth = Path(__file__).parent

# find all csv files with pattern *#*.csv, e.g. adolescent#001.csv
result_filenames = list(example_pth.glob(
    'results/2017-12-31_17-46-32/*#*.csv'))
patient_names = [f.stem for f in result_filenames]
df = pd.concat(
    [pd.read_csv(str(f), index_col=0) for f in result_filenames],
    keys=patient_names)
report(df)
```
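Once df is loaded, you can also compute quick summaries yourself; the sketch below assumes the result CSVs contain a BG column (as the simulator writes in recent versions), so check your output files first:

```python
# Percent time-in-range (70-180 mg/dL) per patient from the
# concatenated dataframe built above. Assumes a 'BG' column exists.
tir = df.groupby(level=0)['BG'].apply(
    lambda bg: ((bg >= 70) & (bg <= 180)).mean() * 100)
print(tir.round(1))
```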

Release Notes

03/10/2021

  • Fixed some random seed issues.

5/27/2020

  • Added PIDController at simglucose/controller/pid_ctrller. There is an example at examples/run_pid_controller.py showing how to use it.
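A hedged usage sketch (the constructor arguments below are assumptions, not confirmed by this README; see examples/run_pid_controller.py in the repo for the authoritative signature):

```python
from simglucose.simulation.user_interface import simulate
from simglucose.controller.pid_ctrller import PIDController

# P, I, D gains and target glucose are assumed parameter names;
# consult examples/run_pid_controller.py before relying on them.
pid = PIDController(P=0.001, I=0.00001, D=0.001, target=140)
simulate(controller=pid)
```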

9/10/2018

  • Controller policy method gets access to all the current patient state through info['patient_state'].

2/26/2018

  • Support customized reward function.

1/10/2018

  • Added a workaround to select the patient when making the gym environment: register the gym environment by passing patient_name through kwargs.

1/7/2018

  • Added OpenAI gym support, use gym.make('simglucose-v0') to make the environment.
  • Noticed issue: the patient name selection is not available in gym.make for now. The patient name has to be hard-coded in the constructor of simglucose.envs.T1DSimEnv.

Reporting issues

Shoot me any bugs, enhancement ideas, or discussion topics by creating issues.

How to contribute

The following instructions are originally from the contribution guidelines of sklearn.

The preferred workflow for contributing to simglucose is to fork the main repository on GitHub, clone it, and develop on a branch. Steps:

  1. Fork the project repository by clicking on the 'Fork' button near the top right of the page. This creates a copy of the code under your GitHub user account. For more details on how to fork a repository see this guide.

  2. Clone your fork of the simglucose repo from your GitHub account to your local disk:

```bash
$ git clone git@github.com:YourLogin/simglucose.git
$ cd simglucose
```

  3. Create a feature branch to hold your development changes:

```bash
$ git checkout -b my-feature
```

Always use a feature branch. It's good practice to never work on the master branch!

  4. Develop the feature on your feature branch. Add changed files using git add and then git commit files:

```bash
$ git add modified_files
$ git commit
```

to record your changes in Git, then push the changes to your GitHub account with:

```bash
$ git push -u origin my-feature
```

  5. Follow these instructions to create a pull request from your fork. This will email the committers.

(If any of the above seems like magic to you, please look up the Git documentation on the web, or ask a friend or another contributor for help.)

Issues

Init gym env with list of patients/meals

opened on 2022-12-28 09:33:09 by Shurikal

I added the possibility to create a gym env with multiple patients and meals. Every time the env is reset, a new patient/meal is randomly sampled. I also changed the _create_env_from_random_state(custom_scenario) name and parameter to _create_env() because the custom_scenario is an instance variable.

```python
from datetime import datetime
import gym
import simglucose
from simglucose.simulation.scenario import CustomScenario
import numpy as np

start_time = datetime(2018, 1, 1, 0, 0, 0)
meal_scenario_1 = CustomScenario(start_time=start_time, scenario=[(1, 20)])
meal_scenario_2 = CustomScenario(start_time=start_time, scenario=[(3, 15)])

patient_name = ['adult#001', 'adult#002', 'adult#003', 'adult#004',
                'adult#005', 'adult#006', 'adult#007', 'adult#008',
                'adult#009', 'adult#010']

gym.envs.register(
    id='env-v0',
    entry_point="simglucose.envs:T1DSimEnv",
    kwargs={'patient_name': patient_name,
            'custom_scenario': [meal_scenario_1, meal_scenario_2]}
)

env = gym.make('env-v0')

env.reset()

min_insulin = env.action_space.low
max_insulin = env.action_space.high

observation = env.reset()
for t in range(100):
    env.render(mode='human')
    # action = np.random.uniform(min_insulin, max_insulin)
    action = observation.CGM * 0.0005
    if observation.CGM < 120:
        action = 0

    # print(action)
    observation, reward, done, info = env.step(action)
    if done:
        print("Episode finished after {} timesteps".format(t + 1))
        break
```

Is there a paper related to this project?

opened on 2022-12-28 07:17:28 by v3551G

If yes, please point me to the related paper.

Basic test for custom gym scenario

opened on 2022-12-27 22:21:01 by Shurikal

Here's a very basic test for the custom meals in the gym environment. If you have any ideas on how to improve it, let me know :) .

Bump certifi from 2021.5.30 to 2022.12.7

opened on 2022-12-09 05:18:08 by dependabot[bot]

Bumps certifi from 2021.5.30 to 2022.12.7.


Bump pillow from 8.2.0 to 9.3.0

opened on 2022-11-22 08:16:40 by dependabot[bot]

Bumps pillow from 8.2.0 to 9.3.0.

Release notes: https://pillow.readthedocs.io/en/stable/releasenotes/9.3.0.html

Scenario with no meal in gym env

opened on 2022-01-18 10:43:48 by maxime-louis

Hey,

Thank you for this awesome library.

I'm trying to run a gym env without any meals (ideally I would like a controller which can also suggest carbs, but from what I understood that's not supported).

I did the following:

```python
from datetime import datetime
import gym
from gym.envs.registration import register
import numpy as np
from simglucose.simulation.scenario import CustomScenario

start_time = datetime.now()
no_meal_scenario = CustomScenario(start_time=start_time, scenario=[])

register(
    id='env-v0',
    entry_point='simglucose.envs:T1DSimEnv',
    kwargs={'patient_name': 'adult#001',
            'custom_scenario': no_meal_scenario}
)

env = gym.make('env-v0')

min_insulin = env.action_space.low
max_insulin = env.action_space.high

observation = env.reset()
for t in range(100):
    env.render(mode='human')
    print(observation)
    action = np.random.uniform(min_insulin, max_insulin)
    print(action)
    observation, reward, done, info = env.step(action)
    if done:
        print("Episode finished after {} timesteps".format(t + 1))
        break
```

But I still get carbohydrate inputs sometimes.

What is the correct way to do that? Thank you!

Jinyu Xie

From control systems to machine learning to Artificial Intelligence.


simulator-controls reinforcement-learning diabetes simulator artificial-pancreas glucose-monitoring rllab simulation python openai-gym