A Type-1 Diabetes simulator implemented in Python for Reinforcement Learning purposes
This simulator is a Python implementation of the FDA-approved UVa/Padova Simulator (2008 version), for research purposes only. The simulator includes 30 virtual patients: 10 adolescents, 10 adults, and 10 children.
HOW TO CITE: Jinyu Xie. Simglucose v0.2.1 (2018) [Online]. Available: https://github.com/jxx123/simglucose. Accessed on: Month-Date-Year.
Example outputs: Animation, CVGA Plot, BG Trace Plot, and Risk Index Stats (images available in the repository).
By default, the reward at each simulation step is `risk[t-1] - risk[t]`, where `risk[t]` is the risk index at time `t` defined in this paper. Parallel simulation is supported and is off by default (`parallel=False`).
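For reference, here is a minimal sketch of that risk computation, assuming the standard Kovatchev blood-glucose risk formulation (the constants below come from that formulation and are not verified against this repository's source):

```python
import math

def risk(bg):
    # Kovatchev symmetrization of the BG scale (bg in mg/dL),
    # followed by the quadratic risk transform; risk is ~0 near 112.5 mg/dL
    f = 1.509 * (math.log(bg) ** 1.084 - 5.381)
    return 10.0 * f * f

def default_reward(bg_prev, bg_now):
    # The documented default reward: reduction in risk over one step
    return risk(bg_prev) - risk(bg_now)
```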
The simulator provides a random scenario generator (`from simglucose.simulation.scenario_gen import RandomScenario`) and a customized scenario generator (`from simglucose.simulation.scenario import CustomScenario`). The command-line user interface will guide you through the scenario settings.

Note: `animate` and `parallel` cannot both be set to `True` on macOS, since most matplotlib backends on macOS are not thread-safe. Windows has not been tested; let me know the results if anybody has tried it.

It is highly recommended to use `pip` to install `simglucose`; follow this link to install `pip`.
Auto installation:
```bash
pip install simglucose
```
Manual installation:
```bash
git clone https://github.com/jxx123/simglucose.git
cd simglucose
```
If you have `pip` installed, then

```bash
pip install -e .
```
If you do not have `pip`, then

```bash
python setup.py install
```
If rllab (optional) is installed, the package will utilize some of its functionalities.
Note: there might be some minor differences between the auto-installed and the manually installed versions. Use `git clone` and manual installation to get the latest version.
Run the simulator user interface
```python
from simglucose.simulation.user_interface import simulate
simulate()
```
You are free to implement your own controller and test it in the simulator. For example,

```python
from simglucose.simulation.user_interface import simulate
from simglucose.controller.base import Controller, Action


class MyController(Controller):
    def __init__(self, init_state):
        self.init_state = init_state
        self.state = init_state

    def policy(self, observation, reward, done, **info):
        '''
        Every controller must have this implementation!
        ----
        Inputs:
        observation - a namedtuple defined in simglucose.simulation.env. For
                      now, it only has one entry: blood glucose level measured
                      by CGM sensor.
        reward      - current reward returned by environment
        done        - True, game over. False, game continues
        info        - additional information as key word arguments,
                      simglucose.simulation.env.T1DSimEnv returns patient_name
                      and sample_time
        ----
        Output:
        action - a namedtuple defined at the beginning of this file. The
                 controller action contains two entries: basal, bolus
        '''
        self.state = observation
        action = Action(basal=0, bolus=0)
        return action

    def reset(self):
        '''
        Reset the controller state to initial state, must be implemented
        '''
        self.state = self.init_state


ctrller = MyController(0)
simulate(controller=ctrller)
```
These two examples can also be found in the `examples/` folder.
In fact, you can specify a lot more simulation parameters through `simulate`:
```python
simulate(sim_time=my_sim_time,
         scenario=my_scenario,
         controller=my_controller,
         start_time=my_start_time,
         save_path=my_save_path,
         animate=False,
         parallel=True)
```
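The `my_*` names above are placeholders. One way to fill them in, reusing the building blocks shown elsewhere in this README:

```python
from datetime import datetime, timedelta
from simglucose.simulation.scenario_gen import RandomScenario
from simglucose.controller.basal_bolus_ctrller import BBController

my_sim_time = timedelta(days=1)                 # total simulated time
my_start_time = datetime(2018, 1, 1, 0, 0, 0)   # simulation clock start
my_scenario = RandomScenario(start_time=my_start_time, seed=1)
my_controller = BBController()                  # built-in basal-bolus controller
my_save_path = './results'
```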
The simulator also implements the OpenAI Gym interface. Register a patient-specific environment, then interact with it as with any other Gym environment:

```python
import gym
from gym.envs.registration import register

# Register a gym environment. By specifying kwargs,
# you choose which patient to simulate.
register(
    id='simglucose-adolescent2-v0',
    entry_point='simglucose.envs:T1DSimEnv',
    kwargs={'patient_name': 'adolescent#002'}
)

env = gym.make('simglucose-adolescent2-v0')

observation = env.reset()
for t in range(100):
    env.render(mode='human')
    print(observation)
    # Action in the gym environment is a scalar
    # representing the basal insulin, which differs from
    # the regular controller action outside the gym
    # environment (a tuple (basal, bolus)).
    # In the perfect situation, the agent should be able
    # to control the glucose only through basal instead
    # of asking the patient to take bolus
    action = env.action_space.sample()
    observation, reward, done, info = env.step(action)
    if done:
        print("Episode finished after {} timesteps".format(t + 1))
        break
```
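For now, the observation namedtuple has a single field, `CGM`. A quick sketch of a hand-written policy against the environment registered above (illustrative only, not a safe insulin dosing strategy):

```python
# Continues from the gym example above: drive basal insulin off CGM alone.
observation = env.reset()
for t in range(20):
    basal = 0.01 if observation.CGM > 150 else 0.0  # crude threshold policy
    observation, reward, done, info = env.step(basal)
    if done:
        break
```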
- Customized reward function
```python
import gym
from gym.envs.registration import register


def custom_reward(BG_last_hour):
    if BG_last_hour[-1] > 180:
        return -1
    elif BG_last_hour[-1] < 70:
        return -2
    else:
        return 1


register(
    id='simglucose-adolescent2-v0',
    entry_point='simglucose.envs:T1DSimEnv',
    kwargs={'patient_name': 'adolescent#002',
            'reward_fun': custom_reward}
)

env = gym.make('simglucose-adolescent2-v0')

reward = 1
done = False

observation = env.reset()
for t in range(200):
    env.render(mode='human')
    action = env.action_space.sample()
    observation, reward, done, info = env.step(action)
    print(observation)
    print("Reward = {}".format(reward))
    if done:
        print("Episode finished after {} timesteps".format(t + 1))
        break
```
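You can also recover the default risk-difference reward through `reward_fun` by reusing the package's own risk computation. A sketch, assuming the `risk_index` helper in `simglucose.analysis.risk` returns `(LBGI, HBGI, RI)` for the last `horizon` samples:

```python
from simglucose.analysis.risk import risk_index

def risk_diff_reward(BG_last_hour):
    # Need at least two samples to form a one-step risk difference
    if len(BG_last_hour) < 2:
        return 0
    _, _, risk_prev = risk_index([BG_last_hour[-2]], 1)
    _, _, risk_now = risk_index([BG_last_hour[-1]], 1)
    return risk_prev - risk_now
```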
If rllab is installed, you can train an agent with its batch RL algorithms; for example, DDPG:

```python
from rllab.algos.ddpg import DDPG
from rllab.envs.normalized_env import normalize
from rllab.exploration_strategies.ou_strategy import OUStrategy
from rllab.policies.deterministic_mlp_policy import DeterministicMLPPolicy
from rllab.q_functions.continuous_mlp_q_function import ContinuousMLPQFunction
from rllab.envs.gym_env import GymEnv
from gym.envs.registration import register

register(
    id='simglucose-adolescent2-v0',
    entry_point='simglucose.envs:T1DSimEnv',
    kwargs={'patient_name': 'adolescent#002'}
)

env = GymEnv('simglucose-adolescent2-v0')
env = normalize(env)

policy = DeterministicMLPPolicy(
    env_spec=env.spec,
    # The neural network policy should have two hidden layers,
    # each with 32 hidden units.
    hidden_sizes=(32, 32)
)

es = OUStrategy(env_spec=env.spec)

qf = ContinuousMLPQFunction(env_spec=env.spec)

algo = DDPG(
    env=env,
    policy=policy,
    es=es,
    qf=qf,
    batch_size=32,
    max_path_length=100,
    epoch_length=1000,
    min_pool_size=10000,
    n_epochs=1000,
    discount=0.99,
    scale_reward=0.01,
    qf_learning_rate=1e-3,
    policy_learning_rate=1e-4
)
algo.train()
```
You can create the simulation objects and run batch simulation. For example,

```python
from simglucose.simulation.env import T1DSimEnv
from simglucose.controller.basal_bolus_ctrller import BBController
from simglucose.sensor.cgm import CGMSensor
from simglucose.actuator.pump import InsulinPump
from simglucose.patient.t1dpatient import T1DPatient
from simglucose.simulation.scenario_gen import RandomScenario
from simglucose.simulation.scenario import CustomScenario
from simglucose.simulation.sim_engine import SimObj, sim, batch_sim
from datetime import timedelta
from datetime import datetime

# Start the simulation at the beginning of today
now = datetime.now()
start_time = datetime.combine(now.date(), datetime.min.time())

# Path where the simulation results are saved
path = './results'

# Build a simulation environment with a random meal scenario
patient = T1DPatient.withName('adolescent#001')
sensor = CGMSensor.withName('Dexcom', seed=1)
pump = InsulinPump.withName('Insulet')
scenario = RandomScenario(start_time=start_time, seed=1)
env = T1DSimEnv(patient, sensor, pump, scenario)

# Create a controller
controller = BBController()

# Put them together to create a simulation object
s1 = SimObj(env, controller, timedelta(days=1), animate=False, path=path)
results1 = sim(s1)
print(results1)

# Build a second environment with a custom meal scenario
patient = T1DPatient.withName('adolescent#001')
sensor = CGMSensor.withName('Dexcom', seed=1)
pump = InsulinPump.withName('Insulet')
# A custom scenario is a list of (meal_time, meal_size) tuples
scen = [(7, 45), (12, 70), (16, 15), (18, 80), (23, 10)]
scenario = CustomScenario(start_time=start_time, scenario=scen)
env = T1DSimEnv(patient, sensor, pump, scenario)

controller = BBController()

s2 = SimObj(env, controller, timedelta(days=1), animate=False, path=path)
results2 = sim(s2)
print(results2)

# Re-initialize the simulation objects, then run them as a batch
s1.reset()
s2.reset()

s = [s1, s2]
results = batch_sim(s, parallel=True)
print(results)
```
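The same pattern scales to larger batches; for example, running the basal-bolus controller on all ten adult patients (a sketch reusing the imports and the `start_time`/`path` defined above):

```python
sim_objects = []
for i in range(1, 11):
    # Patient names follow the 'adult#001' ... 'adult#010' convention
    patient = T1DPatient.withName('adult#{:03d}'.format(i))
    sensor = CGMSensor.withName('Dexcom', seed=1)
    pump = InsulinPump.withName('Insulet')
    scenario = RandomScenario(start_time=start_time, seed=i)
    env = T1DSimEnv(patient, sensor, pump, scenario)
    sim_objects.append(
        SimObj(env, BBController(), timedelta(days=1), animate=False, path=path))

results = batch_sim(sim_objects, parallel=True)
print(results)
```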
Run the analysis offline (example/offline_analysis.py):

```python
from simglucose.analysis.report import report
import pandas as pd
from pathlib import Path

example_pth = Path(__file__).parent

# Collect the per-patient result CSVs from a previous simulation run
result_filenames = list(example_pth.glob(
    'results/2017-12-31_17-46-32/*#*.csv'))
patient_names = [f.stem for f in result_filenames]
df = pd.concat(
    [pd.read_csv(str(f), index_col=0) for f in result_filenames],
    keys=patient_names)
report(df)
```
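Once the per-patient results are concatenated into one DataFrame keyed by patient name, simple statistics are easy to compute yourself; for example, percent time-in-range per patient (a sketch assuming the result CSVs contain a `CGM` column, as the simulator writes):

```python
# Fraction of samples with CGM in [70, 180] mg/dL, per patient
in_range = (df['CGM'] >= 70) & (df['CGM'] <= 180)
time_in_range = in_range.groupby(level=0).mean() * 100
print(time_in_range)
```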
The `policy` method gets access to the full current patient state through `info['patient_state']`.
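A minimal sketch of a controller that reads this state (whether `patient_state` appears in `info` depends on the simglucose version; the dosing logic here is a placeholder):

```python
from simglucose.controller.base import Controller, Action


class StateAwareController(Controller):
    def __init__(self, init_state):
        self.init_state = init_state
        self.state = init_state

    def policy(self, observation, reward, done, **info):
        # Full internal patient state, when the environment provides it
        patient_state = info.get('patient_state')
        self.state = observation
        # A model-based policy could condition on patient_state here;
        # keep zero dosing as a placeholder
        return Action(basal=0, bolus=0)

    def reset(self):
        self.state = self.init_state
```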
You can use `gym.make('simglucose-v0')` to make the environment; the entry point is `simglucose.envs.T1DSimEnv`.
Shoot me any bugs, enhancement requests, or even discussion topics by creating issues.
The following instructions are originally from the contribution guidelines of scikit-learn.
The preferred workflow for contributing to simglucose is to fork the main repository on GitHub, clone, and develop on a branch. Steps:
Fork the project repository by clicking on the 'Fork' button near the top right of the page. This creates a copy of the code under your GitHub user account. For more details on how to fork a repository see this guide.
Clone your fork of the simglucose repo from your GitHub account to your local disk:
```bash
$ git clone git@github.com:YourLogin/simglucose.git
$ cd simglucose
```
Create a `feature` branch to hold your development changes:

```bash
$ git checkout -b my-feature
```
Always use a `feature` branch. It's good practice to never work on the `master` branch!
Develop the feature on your feature branch. Add changed files using `git add` and then `git commit` files:

```bash
$ git add modified_files
$ git commit
```
to record your changes in Git, then push the changes to your GitHub account with:

```bash
$ git push -u origin my-feature
```
(If any of the above seems like magic to you, please look up the Git documentation on the web, or ask a friend or another contributor for help.)
I added the possibility to create a gym env with multiple patients and meal scenarios. Every time the env is reset, a new patient/meal scenario is randomly sampled. I also renamed `_create_env_from_random_state(custom_scenario)` to `_create_env()` and dropped the parameter, because `custom_scenario` is now an instance variable.
```python
from datetime import datetime

import gym
import numpy as np

import simglucose
from simglucose.simulation.scenario import CustomScenario

start_time = datetime(2018, 1, 1, 0, 0, 0)
meal_scenario_1 = CustomScenario(start_time=start_time, scenario=[(1, 20)])
meal_scenario_2 = CustomScenario(start_time=start_time, scenario=[(3, 15)])

patient_name = ['adult#001', 'adult#002', 'adult#003', 'adult#004',
                'adult#005', 'adult#006', 'adult#007', 'adult#008',
                'adult#009', 'adult#010']

gym.envs.register(
    id='env-v0',
    entry_point="simglucose.envs:T1DSimEnv",
    kwargs={'patient_name': patient_name,
            'custom_scenario': [meal_scenario_1, meal_scenario_2]}
)

env = gym.make('env-v0')
env.reset()

min_insulin = env.action_space.low
max_insulin = env.action_space.high

observation = env.reset()
for t in range(100):
    env.render(mode='human')
    # action = np.random.uniform(min_insulin, max_insulin)
    action = observation.CGM * 0.0005
    if observation.CGM < 120:
        action = 0
    # print(action)
    observation, reward, done, info = env.step(action)
    if done:
        print("Episode finished after {} timesteps".format(t + 1))
        break
```
If so, please point me to the related paper.
Here's a very basic test for the custom meals in the gym environment. If you have any ideas on how to improve it, let me know :)
Bumps certifi from 2021.5.30 to 2022.12.7.

Commits:
- `9e9e840` 2022.12.07
- `b81bdb2` 2022.09.24
- `939a28f` 2022.09.14
- `aca828a` 2022.06.15.2
- `de0eae1` Only use importlib.resources's new files() / Traversable API on Python ≥3.11 ...
- `b8eb5e9` 2022.06.15.1
- `47fb7ab` Fix deprecation warning on Python 3.11 (#199)
- `b0b48e0` fixes #198 -- update link in license
- `9d514b4` 2022.06.15
- `4151e88` Add py.typed to MANIFEST.in to package in sdist (#196)

Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`.
Bumps pillow from 8.2.0 to 9.3.0.
Sourced from pillow's releases.
9.3.0
https://pillow.readthedocs.io/en/stable/releasenotes/9.3.0.html
Changes
- Initialize libtiff buffer when saving #6699 [@radarhere]
- Limit SAMPLESPERPIXEL to avoid runtime DOS #6700 [@wiredfool]
- Inline fname2char to fix memory leak #6329 [@nulano]
- Fix memory leaks related to text features #6330 [@nulano]
- Use double quotes for version check on old CPython on Windows #6695 [@hugovk]
- GHA: replace deprecated set-output command with GITHUB_OUTPUT file #6697 [@nulano]
- Remove backup implementation of Round for Windows platforms #6693 [@cgohlke]
- Upload fribidi.dll to GitHub Actions #6532 [@nulano]
- Fixed set_variation_by_name offset #6445 [@radarhere]
- Windows build improvements #6562 [@nulano]
- Fix malloc in _imagingft.c:font_setvaraxes #6690 [@cgohlke]
- Only use ASCII characters in C source file #6691 [@cgohlke]
- Release Python GIL when converting images using matrix operations #6418 [@hmaarrfk]
- Added ExifTags enums #6630 [@radarhere]
- Do not modify previous frame when calculating delta in PNG #6683 [@radarhere]
- Added support for reading BMP images with RLE4 compression #6674 [@npjg]
- Decode JPEG compressed BLP1 data in original mode #6678 [@radarhere]
- pylint warnings #6659 [@marksmayo]
- Added GPS TIFF tag info #6661 [@radarhere]
- Added conversion between RGB/RGBA/RGBX and LAB #6647 [@radarhere]
- Do not attempt normalization if mode is already normal #6644 [@radarhere]
- Fixed seeking to an L frame in a GIF #6576 [@radarhere]
- Consider all frames when selecting mode for PNG save_all #6610 [@radarhere]
- Don't reassign crc on ChunkStream close #6627 [@radarhere]
- Raise a warning if NumPy failed to raise an error during conversion #6594 [@radarhere]
- Only read a maximum of 100 bytes at a time in IMT header #6623 [@radarhere]
- Show all frames in ImageShow #6611 [@radarhere]
- Allow FLI palette chunk to not be first #6626 [@radarhere]
- If first GIF frame has transparency for RGB_ALWAYS loading strategy, use RGBA mode #6592 [@radarhere]
- Round box position to integer when pasting embedded color #6517 [@radarhere]
- Removed EXIF prefix when saving WebP #6582 [@radarhere]
- Pad IM palette to 768 bytes when saving #6579 [@radarhere]
- Added DDS BC6H reading #6449 [@ShadelessFox]
- Added support for opening WhiteIsZero 16-bit integer TIFF images #6642 [@JayWiz]
- Raise an error when allocating translucent color to RGB palette #6654 [@jsbueno]
- Moved mode check outside of loops #6650 [@radarhere]
- Added reading of TIFF child images #6569 [@radarhere]
- Improved ImageOps palette handling #6596 [@PososikTeam]
- Defer parsing of palette into colors #6567 [@radarhere]
- Apply transparency to P images in ImageTk.PhotoImage #6559 [@radarhere]
- Use rounding in ImageOps contain() and pad() #6522 [@bibinhashley]
- Fixed GIF remapping to palette with duplicate entries #6548 [@radarhere]
- Allow remap_palette() to return an image with less than 256 palette entries #6543 [@radarhere]
- Corrected BMP and TGA palette size when saving #6500 [@radarhere]
... (truncated)
Sourced from pillow's changelog.
9.3.0 (2022-10-29)
- Limit SAMPLESPERPIXEL to avoid runtime DOS #6700 [wiredfool]
- Initialize libtiff buffer when saving #6699 [radarhere]
- Inline fname2char to fix memory leak #6329 [nulano]
- Fix memory leaks related to text features #6330 [nulano]
- Use double quotes for version check on old CPython on Windows #6695 [hugovk]
- Remove backup implementation of Round for Windows platforms #6693 [cgohlke]
- Fixed set_variation_by_name offset #6445 [radarhere]
- Fix malloc in _imagingft.c:font_setvaraxes #6690 [cgohlke]
- Release Python GIL when converting images using matrix operations #6418 [hmaarrfk]
- Added ExifTags enums #6630 [radarhere]
- Do not modify previous frame when calculating delta in PNG #6683 [radarhere]
- Added support for reading BMP images with RLE4 compression #6674 [npjg, radarhere]
- Decode JPEG compressed BLP1 data in original mode #6678 [radarhere]
- Added GPS TIFF tag info #6661 [radarhere]
- Added conversion between RGB/RGBA/RGBX and LAB #6647 [radarhere]
- Do not attempt normalization if mode is already normal #6644 [radarhere]
... (truncated)
Commits:
- `d594f4c` Update CHANGES.rst [ci skip]
- `909dc64` 9.3.0 version bump
- `1a51ce7` Merge pull request #6699 from hugovk/security-libtiff_buffer
- `2444cdd` Merge pull request #6700 from hugovk/security-samples_per_pixel-sec
- `744f455` Added release notes
- `0846bfa` Add to release notes
- `799a6a0` Fix linting
- `00b25fd` Hide UserWarning in logs
- `05b175e` Tighter test case
- `13f2c5a` Prevent DOS with large SAMPLESPERPIXEL in Tiff IFD

Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting `@dependabot rebase`.
Hey,
Thank you for this awesome library.
I'm trying to run a gym env without any meals (ideally, I would like a controller that can also suggest carbs, but from what I understand that isn't supported).
I did the following:

```python
from datetime import datetime

import gym
import numpy as np
from gym.envs.registration import register

from simglucose.simulation.scenario import CustomScenario

start_time = datetime.now()
no_meal_scenario = CustomScenario(start_time=start_time, scenario=[])

register(
    id='env-v0',
    entry_point='simglucose.envs:T1DSimEnv',
    kwargs={'patient_name': 'adult#001',
            'custom_scenario': no_meal_scenario}
)

env = gym.make('env-v0')

min_insulin = env.action_space.low
max_insulin = env.action_space.high

observation = env.reset()
for t in range(100):
    env.render(mode='human')
    print(observation)
    action = np.random.uniform(min_insulin, max_insulin)
    print(action)
    observation, reward, done, info = env.step(action)
    if done:
        print("Episode finished after {} timesteps".format(t + 1))
        break
```
But I still get carbohydrate inputs sometimes.

What is the correct way to do that? Thank you!