Simple library for interactive autograding.
Previous names include gradememaybe and okgrade.
See the Gofer Grader documentation for more information.
This library can be used to autograde Jupyter Notebooks and Python files.
Instructors can write tests in a subset of the okpy test format (other formats coming soon), and students can dynamically check whether their code is correct. These notebooks / .py files can later be collected and graded automatically.
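For orientation, a test file in the okpy format is a Python file defining a `test` dictionary of doctest-style cases. A minimal sketch (field names follow the common okpy doctest format; gofer reads a subset of it, and `add_two` is just an example function name):

```python
# q1.py -- a minimal okpy-style test file (sketch; gofer supports a subset
# of the okpy format, so not every okpy field is meaningful here).
test = {
    'name': 'q1',
    'points': 1,
    'suites': [
        {
            'type': 'doctest',
            'cases': [
                {
                    'code': r"""
                    >>> add_two(3)
                    5
                    """,
                    'hidden': False,
                    'locked': False,
                },
            ],
        },
    ],
}
```

In a notebook or script, a student can then run the checks against their current environment with something like `from gofer.ok import check; check('q1.py')`; when no environment is passed, `check` falls back to the caller's globals.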
As an effort to help autograding with Berkeley's offering of Data 8 online, Gofer also works with two other components that could be useful for other courses. The primary one, Gofer service, is a Tornado service that receives notebook submissions and runs/grades them in Docker containers. The second piece, Gofer submit, is a Jupyter notebook extension that submits the current notebook to the service. Though they could be modified to work on your own setup, these are meant to play particularly nicely with JupyterHub.
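To make the flow concrete: the extension sends the current notebook to the service over HTTP, and the service grades it inside a container. A purely hypothetical client-side sketch follows; the URL and payload field below are placeholders, not the service's actual API.

```python
# Hypothetical illustration only: the real Gofer service defines its own
# endpoint and payload, so the URL and field name below are placeholders.
import json

import requests

with open("lab01.ipynb") as f:
    notebook = json.load(f)

resp = requests.post(
    "https://gofer.example.edu/submit",  # placeholder URL, not the real endpoint
    json={"nb": notebook},               # placeholder payload shape
)
print(resp.status_code)
```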
okpy is used at Berkeley for a number of large classes (CS61A, Data 8, etc.). It has a lot of features that are very useful for large and diverse classes. However, this comes with a complexity cost for instructors who need only a subset of these features, and for sysadmins operating an okpy server installation.
This project is tightly scoped to only do automatic grading, and nothing else.
Gofer executes arbitrary user code within the testing environment rather than parsing standard output. While some measures make it harder for users to maliciously modify the tests, it is not possible to fully secure against such attacks, since Python exposes all objects at runtime.
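As a concrete, hypothetical illustration of why this cannot be fully locked down: submissions run in the same interpreter as the grader, so student code can rebind the objects the grader relies on before any check runs. The stand-in function below is invented for illustration.

```python
# Hypothetical illustration of the caveat above, not a recommendation:
# submissions run in the same Python process as the grader, so a cell can
# simply rebind the grading entry point before any check is executed.
import gofer.ok


def always_pass(test_file_path, global_env=None):
    """Stand-in that ignores the real tests entirely."""
    return None


gofer.ok.check = always_pass  # subsequent check() calls no longer test anything
```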
Lots of credit to the amazing teams that have worked on okpy over the years.
Bumps ipython from 6.4.0 to 8.10.0.
Sourced from ipython's releases.
See https://pypi.org/project/ipython/
We do not use GitHub release anymore. Please see PyPI https://pypi.org/project/ipython/
7.9.0: No release notes provided.
7.8.0: No release notes provided.
7.7.0: No release notes provided.
7.6.1: No release notes provided.
7.6.0: No release notes provided.
7.5.0: No release notes provided.
7.4.0: No release notes provided.
7.3.0: No release notes provided.
7.2.0: No release notes provided.
7.1.1: No release notes provided.
7.1.0: No release notes provided.
7.0.1: No release notes provided.
7.0.0: No release notes provided.
7.0.0-doc: No release notes provided.
7.0.0rc1: No release notes provided.
7.0.0b1: No release notes provided.
15ea1ed release 8.10.0
560ad10 DOC: Update what's new for 8.10 (#13939)
7557ade DOC: Update what's new for 8.10
385d693 Merge pull request from GHSA-29gw-9793-fvw7
e548ee2 Swallow potential exceptions from showtraceback() (#13934)
0694b08 MAINT: mock slowest test. (#13885)
8655912 MAINT: mock slowest test.
a011765 Isolate the attack tests with setUp and tearDown methods
c7a9470 Add some regression tests for this change
fd34cf5 Swallow potential exceptions from showtraceback()
Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting @dependabot rebase.
Bumps codecov from 2.0.15 to 2.0.16.
3a8b06b Version 2.0.16
b2951c0 Merge pull request #231 from codecov/ce-1380
2a80aa4 CE-1380_sanitize_args
73b1b13 Merge pull request #218 from rly/patch-1
80a3fcc Fix broken bitly link in help
ba51a78 Merge pull request #106 from blueyed/verbose
1d5d288 Merge branch 'master' into verbose
3502fdd Merge pull request #201 from takluyver/patch-1
de92d4f Merge pull request #202 from takluyver/patch-2
54033d8 Merge pull request #192 from hugovk/add-3.7
Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting @dependabot rebase.
Add setup and teardown by prefixing and suffixing them to the test code.
Closes #25.
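A minimal sketch of the prefix/suffix idea described above (the helper name is hypothetical, and the suite-level `setup`/`teardown` fields of the okpy format are assumed):

```python
# Sketch of the prefix/suffix idea (hypothetical helper, not gofer's actual
# code): wrap each case's doctest string with the suite-level setup/teardown.
def build_case_source(suite, case):
    parts = [
        suite.get('setup', ''),     # runs before the case's own code
        case['code'],
        suite.get('teardown', ''),  # appended after the case's checks
    ]
    return '\n'.join(p for p in parts if p.strip())
```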
Break up testing function to allow return of individual test results.
Is this a reasonable way to go?
I'm making a command line interface, and I want to give a report for all the tests, with their test failures.
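One possible shape for that, as a sketch only (the class and field names are illustrative, not gofer's actual API): have the runner collect a result record per test instead of only an aggregate, so the CLI can print each failure.

```python
# Illustrative sketch of per-test results for a CLI report; the class and
# field names are hypothetical, not gofer's actual API.
from dataclasses import dataclass
from typing import List, Optional


@dataclass
class TestResult:
    name: str
    passed: bool
    hint: Optional[str] = None  # failure hint when passed is False


def summarize(results: List[TestResult]) -> str:
    lines = []
    for r in results:
        status = "PASS" if r.passed else "FAIL"
        lines.append(f"{status} {r.name}")
        if not r.passed and r.hint:
            lines.append(f"  {r.hint}")
    return "\n".join(lines)
```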
I have some existing assessments that use test setup, but I notice that the grading code disallows these at the moment:
https://github.com/data-8/Gofer-Grader/blob/master/gofer/ok.py#L126
Is there some structural reason for this? How difficult would it be to add support for these? (I'm happy to work on it, if it's feasible).
I have a strange error that I've been able to reproduce on two machines but that doesn't occur on others; for example, it happens on my local laptop but not in other environments.
Gofer Grader runs fine until pandas is imported; after that, the error below occurs every time grading is run.
```
KeyError                                  Traceback (most recent call last)
~/anaconda3/envs/auto/lib/python3.6/site-packages/client/api/notebook.py in grade(self, question, global_env)
     56         # inspect trick to pass in its parents' global env.
     57         global_env = inspect.currentframe().f_back.f_globals
---> 58         result = check(path, global_env)
     59         # We display the output if we're in IPython.
     60         # This keeps backwards compatibility with okpy's grade method

~/anaconda3/envs/auto/lib/python3.6/site-packages/gofer/ok.py in check(test_file_path, global_env)
    294         # inspect trick to pass in its parents' global env.
    295         global_env = inspect.currentframe().f_back.f_globals
--> 296     return tests.run(global_env, include_grade=False)

~/anaconda3/envs/auto/lib/python3.6/site-packages/gofer/ok.py in run(self, global_environment, include_grade)
    143         failed_tests = []
    144         for t in self.tests:
--> 145             passed, hint = t.run(global_environment)
    146             if passed:
    147                 passed_tests.append(t)

~/anaconda3/envs/auto/lib/python3.6/site-packages/gofer/ok.py in run(self, global_environment)
     85     def run(self, global_environment):
     86         for i, t in enumerate(self.tests):
---> 87             passed, result = run_doctest(self.name + ' ' + str(i), t, global_environment)
     88             if not passed:
     89                 return False, OKTest.result_fail_template.render(

~/anaconda3/envs/auto/lib/python3.6/site-packages/gofer/ok.py in run_doctest(name, doctest_string, global_environment)
     43     runresults = io.StringIO()
     44     with redirect_stdout(runresults), redirect_stderr(runresults), hide_outputs():
---> 45         doctestrunner.run(test, clear_globs=False)
     46     with open('/dev/null', 'w') as f, redirect_stderr(f), redirect_stdout(f):
     47         result = doctestrunner.summarize(verbose=True)

~/anaconda3/envs/auto/lib/python3.6/contextlib.py in __exit__(self, type, value, traceback)
     86         if type is None:
     87             try:
---> 88                 next(self.gen)
     89             except StopIteration:
     90                 return False

~/anaconda3/envs/auto/lib/python3.6/site-packages/gofer/utils.py in hide_outputs()
     46         yield
     47     finally:
---> 48         flush_inline_matplotlib_plots()
     49         ipy.display_formatter.formatters = old_formatters

~/anaconda3/envs/auto/lib/python3.6/site-packages/gofer/utils.py in flush_inline_matplotlib_plots()
     21     try:
     22         import matplotlib as mpl
---> 23         from ipykernel.pylab.backend_inline import flush_figures
     24     except ImportError:
     25         return

~/anaconda3/envs/auto/lib/python3.6/site-packages/ipykernel/pylab/backend_inline.py in <module>

~/anaconda3/envs/auto/lib/python3.6/site-packages/ipykernel/pylab/backend_inline.py in _enable_matplotlib_integration()
    158     try:
    159         activate_matplotlib(backend)
--> 160         configure_inline_support(ip, backend)
    161     except (ImportError, AttributeError):
    162         # bugs may cause a circular import on Python 2

~/anaconda3/envs/auto/lib/python3.6/site-packages/IPython/core/pylabtools.py in configure_inline_support(shell, backend)
    409     if new_backend_name != cur_backend:
    410         # Setup the default figure format
--> 411         select_figure_formats(shell, cfg.figure_formats, **cfg.print_figure_kwargs)
    412         configure_inline_support.current_backend = new_backend_name

~/anaconda3/envs/auto/lib/python3.6/site-packages/IPython/core/pylabtools.py in select_figure_formats(shell, formats, **kwargs)
    215     from matplotlib.figure import Figure
    216 
--> 217     svg_formatter = shell.display_formatter.formatters['image/svg+xml']
    218     png_formatter = shell.display_formatter.formatters['image/png']
    219     jpg_formatter = shell.display_formatter.formatters['image/jpeg']

KeyError: 'image/svg+xml'
```
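The failing lookup is `shell.display_formatter.formatters['image/svg+xml']`, which runs while `hide_outputs()` has the formatter dict swapped out for the duration of the test. One possible defensive change, as a sketch only (the function name and the exact body differ from gofer's real `hide_outputs`, and it assumes skipping the matplotlib flush is acceptable when inline-backend setup fails mid-swap):

```python
# Sketch of a more defensive hide_outputs(); gofer's real implementation differs.
from contextlib import contextmanager

from IPython import get_ipython

from gofer.utils import flush_inline_matplotlib_plots


@contextmanager
def hide_outputs_defensive():
    ipy = get_ipython()
    if ipy is None:
        # Not running under IPython; nothing to hide.
        yield
        return
    old_formatters = ipy.display_formatter.formatters
    ipy.display_formatter.formatters = {}  # assumption: suppress rich output by emptying the dict
    try:
        yield
    finally:
        try:
            flush_inline_matplotlib_plots()
        except KeyError:
            # e.g. 'image/svg+xml' missing while the formatters are swapped out
            pass
        ipy.display_formatter.formatters = old_formatters
```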