Small autograding library

Gofer Grader

Simple library for interactive autograding.

Previous names include gradememaybe and okgrade.

See the Gofer Grader documentation for more information.

What?

This library can be used to autograde Jupyter Notebooks and Python files.

Instructors can write tests in a subset of the okpy test format (other formats coming soon), and students can interactively check whether their code is correct. The notebooks / .py files can later be collected and graded automatically.
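To make the workflow concrete, here is a rough sketch. The file name `tests/q1.py`, the question name, and the expected value are made up for illustration; the dictionary keys follow the okpy test format, and the `check` call matches the `gofer.ok.check(test_file_path, global_env)` signature visible in the traceback further down this page.

```python
# tests/q1.py -- a minimal test file in (a subset of) the okpy format.
# The question name 'q1' and the expected value 42 are hypothetical.
test = {
    'name': 'q1',
    'points': 1,
    'suites': [{
        'type': 'doctest',
        'cases': [{
            # Doctest-style case evaluated against the student's globals
            'code': '>>> x\n42',
            'hidden': False,
        }],
        'scored': True,
    }],
}
```

Students can then check their work from the notebook with something like `from gofer.ok import check; check('tests/q1.py')`, which (per the same traceback) falls back to the caller's globals when `global_env` is not passed. The same test files can be run again later when submissions are collected for grading.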

Integrating Gofer into your course

As part of the effort to autograde Berkeley's online offering of Data 8, Gofer also works with two other components that could be useful for other courses. The primary one, Gofer service, is a tornado service that receives notebook submissions and runs/grades them in Docker containers. The second piece, Gofer submit, is a Jupyter notebook extension that submits the current notebook to the service. Though they could be adapted to your own setup, these are meant to play particularly nicely with JupyterHub.
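The service itself lives in its own repository; purely as a hypothetical sketch of the flow described above (the `/submit` path, port, and `grade_in_container` helper are invented for illustration and are not the real Gofer service API), a minimal tornado submission handler might look like this:

```python
# Hypothetical sketch of a notebook-submission endpoint; NOT the actual
# Gofer service code. The /submit path, port, and grade_in_container()
# are made up for illustration.
import json

import tornado.ioloop
import tornado.web


def grade_in_container(notebook_json):
    # The real service runs and grades the submission inside a Docker
    # container; this placeholder returns a dummy score so the sketch
    # stays self-contained and runnable.
    return 0.0


class SubmitHandler(tornado.web.RequestHandler):
    def post(self):
        # Notebook JSON posted by something like the Gofer submit extension
        notebook = json.loads(self.request.body)
        score = grade_in_container(notebook)
        self.write({'score': score})


if __name__ == '__main__':
    app = tornado.web.Application([(r'/submit', SubmitHandler)])
    app.listen(8888)
    tornado.ioloop.IOLoop.current().start()
```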

Why?

okpy is used at Berkeley for a number of large classes (CS61A, data8, etc.). It has many features that are very useful for large and diverse classes. However, this comes at a complexity cost, both for instructors who need only a subset of those features and for sysadmins operating an okpy server installation.

This project is tightly scoped to only do automatic grading, and nothing else.

Caveats

Gofer executes arbitrary user code within the testing environment rather than parsing standard output. While some measures make it harder for users to maliciously modify the tests, it is not possible to fully secure against such attacks, since Python exposes every object in the process to the code being run.
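As a concrete illustration of why this is hard to lock down, any code running in the same interpreter can walk the live object graph and reach (and mutate) objects it was never handed a reference to. The `Secret` class below is hypothetical; Gofer's internals differ, but the mechanism is the same:

```python
# Why in-process grading can't be fully secured: student code shares the
# interpreter with the grader, so the grader's objects are discoverable.
import gc


class Secret:
    """Stand-in for an object the grader considers private."""
    def __init__(self):
        self.expected = 42


holder = Secret()

# "Student" code running later in the same process can locate the instance
# via the garbage collector, without ever being given a reference to it:
for obj in gc.get_objects():
    if isinstance(obj, Secret):
        obj.expected = None   # tamper with the grader's state
        break

print(holder.expected)  # prints None -- the object was modified from outside
```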

Credit

Lots of credit to the amazing teams that have worked on okpy over the years.

  1. Academic Publications
  2. GitHub Organization
  3. ok-client GitHub repository

Issues

Bump ipython from 6.4.0 to 8.10.0

opened on 2023-02-10 21:45:54 by dependabot[bot]

Bumps ipython from 6.4.0 to 8.10.0.

Release notes

Sourced from ipython's releases.

See https://pypi.org/project/ipython/

We do not use GitHub release anymore. Please see PyPI https://pypi.org/project/ipython/

7.9.0 through 7.0.0b1

No release notes provided for any of these versions (7.9.0, 7.8.0, 7.7.0, 7.6.1, 7.6.0, 7.5.0, 7.4.0, 7.3.0, 7.2.0, 7.1.1, 7.1.0, 7.0.1, 7.0.0, 7.0.0-doc, 7.0.0rc1, 7.0.0b1).

Commits
  • 15ea1ed release 8.10.0
  • 560ad10 DOC: Update what's new for 8.10 (#13939)
  • 7557ade DOC: Update what's new for 8.10
  • 385d693 Merge pull request from GHSA-29gw-9793-fvw7
  • e548ee2 Swallow potential exceptions from showtraceback() (#13934)
  • 0694b08 MAINT: mock slowest test. (#13885)
  • 8655912 MAINT: mock slowest test.
  • a011765 Isolate the attack tests with setUp and tearDown methods
  • c7a9470 Add some regression tests for this change
  • fd34cf5 Swallow potential exceptions from showtraceback()
  • Additional commits viewable in compare view


Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting @dependabot rebase.


Dependabot commands and options
You can trigger Dependabot actions by commenting on this PR:

  • `@dependabot rebase` will rebase this PR
  • `@dependabot recreate` will recreate this PR, overwriting any edits that have been made to it
  • `@dependabot merge` will merge this PR after your CI passes on it
  • `@dependabot squash and merge` will squash and merge this PR after your CI passes on it
  • `@dependabot cancel merge` will cancel a previously requested merge and block automerging
  • `@dependabot reopen` will reopen this PR if it is closed
  • `@dependabot close` will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
  • `@dependabot ignore this major version` will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
  • `@dependabot ignore this minor version` will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
  • `@dependabot ignore this dependency` will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
  • `@dependabot use these labels` will set the current labels as the default for future PRs for this repo and language
  • `@dependabot use these reviewers` will set the current reviewers as the default for future PRs for this repo and language
  • `@dependabot use these assignees` will set the current assignees as the default for future PRs for this repo and language
  • `@dependabot use this milestone` will set the current milestone as the default for future PRs for this repo and language

You can disable automated security fix PRs for this repo from the [Security Alerts page](https://github.com/data-8/Gofer-Grader/network/alerts).

Bump codecov from 2.0.15 to 2.0.16

opened on 2022-07-15 18:40:54 by dependabot[bot]

Bumps codecov from 2.0.15 to 2.0.16.

Changelog

Sourced from codecov's changelog.

2.0.16

  • fixed reported command injection vulnerability.

Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting @dependabot rebase.


Dependabot commands and options
The same Dependabot commands and options listed for the PR above apply to this PR as well. You can disable automated security fix PRs for this repo from the [Security Alerts page](https://github.com/data-8/Gofer-Grader/network/alerts).

NF+TST: add setup and teardown

opened on 2019-10-26 15:34:32 by matthew-brett

Add setup and teardown by prefixing and suffixing them to the test code.

Closes #25.

MRG: refactor to allow return of test results

opened on 2019-10-24 19:36:34 by matthew-brett

Break up testing function to allow return of individual test results.

Is this a reasonable way to go?

I'm making a command line interface, and I want to give a report for all the tests, with their test failures.

Test setup / teardown - easy to add?

opened on 2019-10-24 14:24:38 by matthew-brett

I have some existing assessments that use test setup, but I notice that the grading code disallows these at the moment:

https://github.com/data-8/Gofer-Grader/blob/master/gofer/ok.py#L126

Is there some structural reason for this? How difficult would it be to add support for these? (I'm happy to work on it, if it's feasible).
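For context, in the okpy test format each suite can carry `setup` and `teardown` strings that run before and after its cases. A sketch of what such a test might look like (question name, setup code, and expected value are hypothetical, and as noted above Gofer currently rejects non-empty setup/teardown):

```python
# Sketch of an okpy-format test with suite-level setup/teardown.
# Question name and code are hypothetical; Gofer currently requires
# these fields to be empty.
test = {
    'name': 'q2',
    'points': 1,
    'suites': [{
        'type': 'doctest',
        'setup': '>>> data = [1, 2, 3]',   # would run before the cases
        'teardown': '>>> del data',        # would run after the cases
        'cases': [{
            'code': '>>> sum(data)\n6',
            'hidden': False,
        }],
        'scored': True,
    }],
}
```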

Conflict with Pandas

opened on 2019-09-23 13:29:59 by jkuruzovich

I have a strange error that I've been able to reproduce on two machines, but it doesn't occur on others. For example, I hit the issue on my local laptop but not in other environments.

The behavior is that Gofer Grader runs fine until pandas is imported; after that, the error below occurs every time the grading runs.

```
KeyError                                  Traceback (most recent call last)
<ipython-input-...> in <module>()
----> 1 _ = ok.grade('q21')

~/anaconda3/envs/auto/lib/python3.6/site-packages/client/api/notebook.py in grade(self, question, global_env)
     56         # inspect trick to pass in its parents' global env.
     57         global_env = inspect.currentframe().f_back.f_globals
---> 58         result = check(path, global_env)
     59         # We display the output if we're in IPython.
     60         # This keeps backwards compatibility with okpy's grade method

~/anaconda3/envs/auto/lib/python3.6/site-packages/gofer/ok.py in check(test_file_path, global_env)
    294         # inspect trick to pass in its parents' global env.
    295         global_env = inspect.currentframe().f_back.f_globals
--> 296     return tests.run(global_env, include_grade=False)

~/anaconda3/envs/auto/lib/python3.6/site-packages/gofer/ok.py in run(self, global_environment, include_grade)
    143         failed_tests = []
    144         for t in self.tests:
--> 145             passed, hint = t.run(global_environment)
    146             if passed:
    147                 passed_tests.append(t)

~/anaconda3/envs/auto/lib/python3.6/site-packages/gofer/ok.py in run(self, global_environment)
     85     def run(self, global_environment):
     86         for i, t in enumerate(self.tests):
---> 87             passed, result = run_doctest(self.name + ' ' + str(i), t, global_environment)
     88             if not passed:
     89                 return False, OKTest.result_fail_template.render(

~/anaconda3/envs/auto/lib/python3.6/site-packages/gofer/ok.py in run_doctest(name, doctest_string, global_environment)
     43     runresults = io.StringIO()
     44     with redirect_stdout(runresults), redirect_stderr(runresults), hide_outputs():
---> 45         doctestrunner.run(test, clear_globs=False)
     46     with open('/dev/null', 'w') as f, redirect_stderr(f), redirect_stdout(f):
     47         result = doctestrunner.summarize(verbose=True)

~/anaconda3/envs/auto/lib/python3.6/contextlib.py in __exit__(self, type, value, traceback)
     86         if type is None:
     87             try:
---> 88                 next(self.gen)
     89             except StopIteration:
     90                 return False

~/anaconda3/envs/auto/lib/python3.6/site-packages/gofer/utils.py in hide_outputs()
     46         yield
     47     finally:
---> 48         flush_inline_matplotlib_plots()
     49         ipy.display_formatter.formatters = old_formatters

~/anaconda3/envs/auto/lib/python3.6/site-packages/gofer/utils.py in flush_inline_matplotlib_plots()
     21     try:
     22         import matplotlib as mpl
---> 23         from ipykernel.pylab.backend_inline import flush_figures
     24     except ImportError:
     25         return

~/anaconda3/envs/auto/lib/python3.6/site-packages/ipykernel/pylab/backend_inline.py in <module>()
    167     ip.events.register('post_run_cell', configure_once)
    168
--> 169 _enable_matplotlib_integration()
    170
    171 def _fetch_figure_metadata(fig):

~/anaconda3/envs/auto/lib/python3.6/site-packages/ipykernel/pylab/backend_inline.py in _enable_matplotlib_integration()
    158     try:
    159         activate_matplotlib(backend)
--> 160         configure_inline_support(ip, backend)
    161     except (ImportError, AttributeError):
    162         # bugs may cause a circular import on Python 2

~/anaconda3/envs/auto/lib/python3.6/site-packages/IPython/core/pylabtools.py in configure_inline_support(shell, backend)
    409     if new_backend_name != cur_backend:
    410         # Setup the default figure format
--> 411         select_figure_formats(shell, cfg.figure_formats, **cfg.print_figure_kwargs)
    412     configure_inline_support.current_backend = new_backend_name

~/anaconda3/envs/auto/lib/python3.6/site-packages/IPython/core/pylabtools.py in select_figure_formats(shell, formats, **kwargs)
    215     from matplotlib.figure import Figure
    216
--> 217     svg_formatter = shell.display_formatter.formatters['image/svg+xml']
    218     png_formatter = shell.display_formatter.formatters['image/png']
    219     jpg_formatter = shell.display_formatter.formatters['image/jpeg']

KeyError: 'image/svg+xml'
```

Data Science 8

The Foundations of Data Science course at UC Berkeley
