
Add support for a remote evaluator #345

Merged
merged 30 commits into from
Apr 13, 2020

Conversation

trestletech
Contributor

@trestletech trestletech commented Mar 31, 2020

Note that this PR is based on the save-setup branch, not master. So we'll either need to be considerate of the merge order or rebase this one once #346 lands on master.

This change adds a new evaluator that supports running exercises remotely. It presumes a remote HTTP server that can receive R exercises and return appropriately formatted responses.

This breaks expectations a bit: our current evaluators always inherit state from the parent RMD doc, meaning that if you call library() somewhere in the initialization of your learnr doc, that package is already loaded when your user goes to run an exercise. That will no longer be the case in this model. Instead, we rely on explicit setup chunks for each exercise, which must be used to provision context for an exercise.
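As a sketch of what that looks like (chunk labels are illustrative), each exercise provisions its own context via learnr's exercise.setup chunk option rather than relying on state inherited from the document:

````rmd
```{r prepare-data}
# Explicit per-exercise setup: the remote session inherits nothing from
# the parent doc, so load packages and define data here.
library(dplyr)
cars <- mtcars
```

```{r filter-cars, exercise=TRUE, exercise.setup="prepare-data"}
# User-facing exercise; runs with the setup chunk's context available.
cars %>% filter(cyl == 6)
```
````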

Changes

  • Added a remote evaluator. This can be opted into by setting the tutorial.remote.host option, setting the TUTORIAL_REMOTE_EVALUATOR_HOST environment variable, or using new_remote_evaluator() and assigning the result to the tutorial.exercise.evaluator option. The last option gives you the most control over how the remote evaluator is constructed.
  • Adds two new arguments that get passed into the evaluator constructor; suggests that new evaluators accept ... for future-proofing.
  • Allows evaluate-exercise to opt-in to running the global setup chunk that's attached to the exercise prior to running exercise checkers. This allows remote sessions to have calls like library(gradethis) in the global setup and then leverage gradethis functions inside of their checkers.
  • Updates docs and adds the OpenAPI schema for the remote evaluator.
  • Adds an example called evaluator-tests.Rmd that exercises all the different configurations that I had in mind when working on this feature. This points to a server running on localhost; if you want to run against a remote server, ask me and I can send you the URL privately (avoiding posting the server URL publicly for now).
  • Adds tests for the remote evaluator that run an httpuv server in the background to more easily mock the responses, then measure the interactions the remote evaluator has with the server.
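For reference, the three opt-in routes described above look roughly like this (the host URL is a placeholder, and I'm assuming new_remote_evaluator() takes the host as its first argument):

```r
# 1. Set the option, e.g. in the tutorial's global setup chunk:
options(tutorial.remote.host = "http://localhost:8080")

# 2. Or set the environment variable before launching the tutorial
#    (typically in .Renviron rather than via Sys.setenv):
Sys.setenv(TUTORIAL_REMOTE_EVALUATOR_HOST = "http://localhost:8080")

# 3. Or construct the evaluator yourself for the most control:
options(tutorial.exercise.evaluator = new_remote_evaluator("http://localhost:8080"))
```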

Testing & Validation

Ask me for the URL of the server that can be used for testing.

Install this branch (if it hasn't already been merged): remotes::install_github("rstudio/learnr@remote")

I think the bulk of the testing is just going to be trying a variety of inputs on the exercises with different configurations -- with and without setup chunks, with different time limits, and with various errors like invalid syntax, referencing variables that don't exist, etc. I've tried to capture all of the scenarios that I had in mind and tested in an RMD included in this PR (evaluator-tests.Rmd). If you adjust the host to point to the remote server, you should be able to run through those and confirm that the output is what you expected.

It might be interesting to compare the behavior/output of that RMD when running locally (i.e. comment out the options(tutorial.remote.host line at the top of the test RMD) versus running remotely. The only known difference I've seen is that invalid syntax produces a bit more text in the error message remotely: Error in parse(text = x, keep.source = TRUE): <text>:4:8: unexpected input 3: 4: asdf + _ ^ versus just <text>:4:8: unexpected input 3: 4: asdf + _ ^ locally.

I don't expect much variation on different browsers, as there's not any new front-end code here. I'd like to test on different versions of R, but unfortunately the remote evaluator currently only has one version of R, so we're only going to be able to change the version of R on the learnr side, not the evaluation side. Probably still worth testing, though.

@trestletech trestletech changed the base branch from master to save-setup April 3, 2020 16:06
@trestletech trestletech requested a review from schloerke April 8, 2020 14:31
@trestletech trestletech changed the title [WIP] Add support for a remote evaluator Add support for a remote evaluator Apr 10, 2020
@schloerke schloerke self-requested a review April 13, 2020 20:06
@trestletech trestletech merged commit 34fbe10 into save-setup Apr 13, 2020
@trestletech trestletech deleted the remote branch April 13, 2020 20:09
trestletech added a commit that referenced this pull request Apr 13, 2020
* Save setup chunks into learnr object.

Would allow us to later retrieve these setup chunks without having to write them to the client.

* Stash setup chunks

* Avoid storing the setup chunk if it's unused

We don't want to risk exposing a setup chunk, which might be sensitive, in memory to users running exercises. Here, we just clobber the same value with the appropriate setup chunk so we don't risk exposing anything inadvertently.

* Rearrange

* Enable remote evaluators that choose to use this function to include global_setup in the `evaluate_exercise` expression

* Uninstall our knitr source hook

* Add tests for new functions

* Add htmltools to remotes

* Test the source knitr hook

* Try Rcpp step

* Review fixes

* Add a clear function

* Ensure that empty setup-global-exercise chunks still get acknowledged

* Update tests

* Add support for a remote evaluator (#345)

* Pass in the raw exercise to evaluators

Previously, we only got the expression associated with the exercise to evaluate, but couldn't easily access the exercise or its metadata.

* Add remote evaluator

* More consistent error handling and env var vs option integration

* Rearrange

* Enable remote evaluators that choose to use this function to include global_setup in the `evaluate_exercise` expression

* Uninstall our knitr source hook

* Add a clear function

* Test blocking session initiation

* Make initiate more robust, more tests

* Refactor remote to make async

* Add testing for remote evaluator

* Add NEWS

* Regen roxygen

* Add docs for remote evaluator.

* Fix null encoding in JSON and misnamed callback.

* Work around some edge cases when serializing, update swagger

* Run global setup prior to the checker.

* Include an RMD that has a series of tests that can vet the remote evaluator

* Update R/evaluators.R

Co-Authored-By: Barret Schloerke <barret@rstudio.com>

* new_remote_evaluator -> remote_evaluator

* Remove JSON OpenAPI spec

* remote_evaluator -> external_evaluator

* Note that external evaluators are experimental

* Added usethis lifecycle dependencies.

Co-authored-by: Barret Schloerke <barret@rstudio.com>

* Add dev dependencies

* Fix build NOTE by fulfilling missing usethis step

* Remove lifecycle to get R 3.2 working again

* Remove remaining lifecycle reference

Co-authored-by: Barret Schloerke <barret@rstudio.com>