Add support for a remote evaluator #345
Merged
Conversation
trestletech changed the title from "[WIP] Add support for a remote evaluator" to "Add support for a remote evaluator" (Apr 10, 2020)
schloerke reviewed ×5 (Apr 10, 2020)
Co-Authored-By: Barret Schloerke <barret@rstudio.com>
schloerke approved these changes (Apr 13, 2020)
trestletech added a commit that referenced this pull request (Apr 13, 2020)
* Save setup chunks into learnr object. Would allow us to later retrieve these setup chunks without having to write them to the client.
* Stash setup chunks
* Avoid storing the setup chunk if it's unused. We don't want to risk exposing a setup chunk which might be sensitive in memory to users running exercises. Here, we just clobber the same value with the appropriate setup chunk so we don't risk exposing anything inadvertently.
* Rearrange
* Enable remote evaluators that choose to use this function to include global_setup in the `evaluate_exercise` expression
* Uninstall our knitr source hook
* Add tests for new functions
* Add htmltools to remotes
* Test the source knitr hook
* Try Rcpp step
* Review fixes
* Add a clear function
* Ensure that empty setup-global-exercise chunks still get acknowledged
* Update tests
* Add support for a remote evaluator (#345)
  * Pass in the raw exercise to evaluators. Previously, we only got the expression associated with the exercise to evaluate, but couldn't easily access the exercise or its metadata.
  * Add remote evaluator
  * More consistent error handling and env var vs option integration
  * Rearrange
  * Enable remote evaluators that choose to use this function to include global_setup in the `evaluate_exercise` expression
  * Uninstall our knitr source hook
  * Add a clear function
  * Test blocking session initiation
  * Make initiate more robust, more tests
  * Refactor remote to make async
  * Add testing for remote evaluator
  * Add NEWS
  * Regen roxygen
  * Add docs for remote evaluator.
  * Fix null encoding in JSON and misnamed callback.
  * Work around some edge cases when serializing, update swagger
  * Run global setup prior to the checker.
  * Include an RMD that has a series of tests that can vet the remote evaluator
  * Update R/evaluators.R (Co-Authored-By: Barret Schloerke <barret@rstudio.com>)
  * new_remote_evaluator -> remote_evaluator
  * Remove JSON OpenAPI spec
  * remote_evaluator -> external_evaluator
  * Note that external evaluators are experimental
  * Added usethis lifecycle dependencies.
* Add dev dependencies
* Fix build NOTE by fulfilling missing usethis step
* Remove lifecycle to get R 3.2 working again
* Remove remaining lifecycle reference

Co-authored-by: Barret Schloerke <barret@rstudio.com>
Note that this PR is based on the `save-setup` branch, not master, so we'll either need to be considerate of the merge order or rebase this one once #346 lands on master.

This change adds a new evaluator that supports running exercises remotely. It presumes a remote HTTP server that can receive R exercises and return appropriately formatted responses.
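To make the architecture concrete, here is a hypothetical sketch of the kind of HTTP service this evaluator presumes, written with the plumber package. The route (`/evaluate`) and the payload/response field names are illustrative assumptions, not the actual learnr wire protocol.

```r
# sketch-evaluator.R -- a hypothetical plumber endpoint; field and route
# names are assumptions, not learnr's actual protocol.
# Run with: plumber::plumb("sketch-evaluator.R")$run(port = 8080)

#* Receive an exercise payload and return a learnr-style result
#* @post /evaluate
function(req, res) {
  payload <- jsonlite::fromJSON(req$postBody)
  # A real service would evaluate payload$code in an isolated R session,
  # applying the exercise's setup chunk(s) first; this stub just echoes.
  list(
    error_message = NULL,
    html_output = paste0("<pre>", payload$code, "</pre>")
  )
}
```

The key design point the description raises still applies to any such service: the remote session shares no state with the authoring document, so everything the exercise needs must travel in the payload.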
This breaks expectations a bit, in that our current evaluators always inherit state from the parent RMD doc, meaning that if you run `library()` somewhere in the initialization of your learnr doc, that package would already be loaded when your user goes to run an exercise. That will no longer be the case in this model. Instead, we rely on explicit use of the `setup` chunks for each exercise, which must be used in order to provision context for an exercise.

Changes
- Adds a remote evaluator, which can be enabled by setting the `tutorial.remote.host` option, setting the `TUTORIAL_REMOTE_EVALUATOR_HOST` environment variable, or using `new_remote_evaluator()` and assigning the result to the `tutorial.exercise.evaluator` option. The last option gives you the most control over how the remote evaluator is constructed.
- … for future-proofing.
- Allows `evaluate-exercise` to opt in to running the global setup chunk that's attached to the exercise prior to running exercise checkers. This allows remote sessions to have calls like `library(gradethis)` in the global setup and then leverage `gradethis` functions inside of their checkers.
- Adds `evaluator-tests.Rmd`, which exercises all the different configurations that I had in mind when working on this feature. It points to a server running on localhost; if you want to run against a remote server, ask me and I can send you the URL privately (avoiding posting the server publicly for now).

Testing & Validation
Ask me for the URL of the server that can be used for testing.

Install this branch, if it hasn't already been merged:

`remotes::install_github("rstudio/learnr@remote")`
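Once the branch is installed, the evaluator can be enabled in any of the three ways described under Changes. A minimal sketch (the host URL is a placeholder, and the `new_remote_evaluator()` arguments are elided because its signature isn't shown in this PR):

```r
# Option 1: set the option (placeholder URL):
options(tutorial.remote.host = "http://localhost:8080")

# Option 2: set the environment variable instead:
Sys.setenv(TUTORIAL_REMOTE_EVALUATOR_HOST = "http://localhost:8080")

# Option 3: for the most control, construct the evaluator directly:
# options(tutorial.exercise.evaluator = new_remote_evaluator(...))
```

Only one of these is needed; the third form is the one the description recommends when you need control over how the evaluator is constructed.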
I think the bulk of the testing is just going to be trying a variety of inputs on the exercises with different configurations -- with and without setup chunks, with different time limits, and with various errors like invalid syntax, referencing variables that don't exist, etc. I've tried to capture all of the scenarios that I had in mind and that I tested in an RMD which I've included in this PR (`evaluator-tests.Rmd`). If you adjust the host to point to the remote server, you should be able to run through those and confirm that the output is what you expect. It might be interesting to compare the behavior/output of that RMD when running locally (i.e. comment out the `options(tutorial.remote.host` line at the top of the test RMD) versus running remotely. The only known difference I've seen is that invalid syntax has a bit more text in the error message:

Error in parse(text = x, keep.source = TRUE): <text>:4:8: unexpected input 3: 4: asdf + _ ^

vs

<text>:4:8: unexpected input 3: 4: asdf + _ ^

I don't expect much variation on different browsers, as there's not any new front-end code here. I'd like to test on different versions of R, but unfortunately the remote evaluator currently only has one version of R, so we're only going to be able to change the version of R on the learnr side, not the evaluation side. Probably still worth testing, though.
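The difference above comes down to whether the full `Error in parse(...)` prefix is included alongside the condition message. A quick local sketch of triggering the same kind of parse error (the exact `<text>:line:col` details vary with the input, so no specific message is asserted here):

```r
# "asdf + _" is invalid R syntax and produces an "unexpected input" error.
res <- try(parse(text = "asdf + _", keep.source = TRUE), silent = TRUE)
inherits(res, "try-error")  # TRUE
# The bare condition message, without the "Error in parse(...)" prefix:
cat(conditionMessage(attr(res, "condition")))
```

Comparing `cat` of the bare condition message against what `try`/`print` would show is a reasonable way to sanity-check that the local/remote difference is only this prefix.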