Persist setup chunks #346


Merged
19 commits merged into master on Apr 13, 2020

Conversation

Contributor

@trestletech trestletech commented Apr 3, 2020

This stashes the setup chunks in an environment called setup_chunks. This will enable two things:

  1. It will allow remote evaluators to access the setup chunk so that they can run the relevant setup code before running user exercises. e.g. imagine that all of your exercises presume some package is loaded.
  2. It will later allow us to keep the exercise-specific -setup chunks private instead of serializing them through the client. This work has not been taken on yet, though.

You'll find the following changes:

  • The cache gets cleared every time a tutorial is initialized, so there's no risk of inheriting setup chunks from a previous rmarkdown::run.
  • Exercise-specific chunks are saved under the name of the corresponding exercise chunk (i.e. the -setup suffix is trimmed).
  • The global setup chunk is stored under a special name __setup__ (to avoid conflicts with a chunk called setup-setup).
  • This PR introduces a new special chunk named setup-global-exercise which -- if it exists -- will take the place of the setup chunk in this setup_chunks environment. We do this because a.) the global setup chunk may include initialization code that takes time and isn't relevant to the exercises, and b.) the global setup chunk may include sensitive information that we don't want to leak into the user's exercise environment. (I presume that by sending a setup chunk to run with arbitrary user code, you're effectively giving access to that code.) Therefore users can opt to define a setup-global-exercise chunk which only prepares the specific environment that their exercises will need. Exercise-specific -setup chunks continue to behave as they did in the past.
  • We add a new knitr hook on source (see the knitr documentation on hooks). See below for a description.
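The stash-and-trim behavior described above can be sketched as follows. This is a minimal sketch with hypothetical helper names (store_setup_chunk and clear_setup_chunks are illustrative, not learnr's API):

```r
# Setup chunks are stashed in a package-level environment keyed by
# exercise name.
setup_chunks <- new.env(parent = emptyenv())

store_setup_chunk <- function(label, code) {
  if (label %in% c("setup", "setup-global-exercise")) {
    # The global setup chunk gets a reserved name so it can't collide
    # with a user chunk that happens to be named "setup-setup".
    label <- "__setup__"
  } else {
    # An exercise-specific chunk like "two-plus-two-setup" is stored
    # under the exercise's own name, "two-plus-two".
    label <- sub("-setup$", "", label)
  }
  assign(label, code, envir = setup_chunks)
}

clear_setup_chunks <- function() {
  # Called on tutorial initialization so chunks from a previous
  # rmarkdown::run can't leak into this session.
  rm(list = ls(setup_chunks), envir = setup_chunks)
}

store_setup_chunk("setup", "library(learnr)")
store_setup_chunk("two-plus-two-setup", "x <- 42")
ls(setup_chunks)  # "__setup__" "two-plus-two"
```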

TODO

  • tests
  • NEWS (no user-visible changes)
  • remove hook
  • testing & validation notes
  • docs (no user-visible changes, the corresponding docs will be folded into the next PR)

knitr hooks

Currently, we set a global knitr option to set tutorial = TRUE: https://github.com/rstudio/learnr/blob/master/R/knitr-hooks.R#L7

This has the same effect as writing tutorial=TRUE as an option at the top of each code chunk in the learnr document. Then, when we want to act on the chunks in the document, we can just use the tutorial hook, since every chunk has that option set to TRUE. https://github.com/rstudio/learnr/blob/master/R/knitr-hooks.R#L118
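A minimal sketch of this existing mechanism (simplified from R/knitr-hooks.R; the hook body here is a stand-in for learnr's actual logic):

```r
# One global chunk option...
knitr::opts_chunk$set(tutorial = TRUE)  # as if every following chunk declared tutorial=TRUE

# ...plus an option hook keyed on it. It fires for each chunk whose
# tutorial option is TRUE, letting learnr inspect and rewrite the
# chunk's options before the chunk is processed.
knitr::opts_hooks$set(tutorial = function(options) {
  options
})
```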

Unfortunately, setting tutorial to TRUE only affects later chunks, not the current one. From ?knitr::opts_chunk:

Normally we set up the global options once in the first code chunk in a document using opts_chunk$set(), so that all latter chunks will use these options. Note the global options set in one chunk will not affect the options in this chunk itself, and that is why we often need to set global options in a separate chunk.

Since the knitr hook gets defined in .onAttach, whatever chunk calls library(learnr) will invoke this code, and it will affect all the chunks below that one -- but not the current one. That's a problem for us, since library(learnr) usually gets called in the setup chunk.

Therefore we can't just hook into the tutorial option like we've done in the past -- instead we have to hook into something that has access to all chunks, including the setup chunk. So we add a new hook on source to handle that. Unfortunately, these hooks are all singletons, so assigning ours would clobber any existing hook. To avoid that, we take the precaution of preserving the existing hook and invoking it at the end of ours. Our hook's only side effect is stashing setup chunks; its return value continues to come from whatever hook was defined before us.
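The hook-chaining precaution can be sketched like this (simplified; not learnr's exact code, and install_setup_hook is a hypothetical name):

```r
setup_chunks <- new.env(parent = emptyenv())  # assumed stash environment

install_setup_hook <- function() {
  # Capture whatever source hook already exists so we can delegate to it.
  default_hook <- knitr::knit_hooks$get("source")
  if (is.null(default_hook)) {
    # Outside of an active knit there may be no hook installed yet; fall
    # back to returning the source unchanged.
    default_hook <- function(x, options) x
  }
  knitr::knit_hooks$set(source = function(x, options) {
    label <- options$label
    if (!is.null(label) && (identical(label, "setup") || grepl("-setup$", label))) {
      assign(label, x, envir = setup_chunks)  # side effect: stash the chunk
    }
    # The return value still comes from whichever hook was installed before us.
    default_hook(x, options)
  })
}
```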

Testing & Validation

There are no user-visible changes in this PR, but we can still test a few things.

Run remotes::install_github("rstudio/learnr@save-setup")

Confirm we didn't break exercise-specific setup chunks

  • In the RStudio IDE, click New > R Markdown > From Template > Interactive Tutorial to get a learnr template that you can play with

  • Above the chunk named two-plus-two, add a new chunk named two-plus-two-setup. This is exercise-specific setup code that runs ahead of whatever code the user provides in the exercise. In this -setup chunk, define a variable named x.

  • Click Run Document and, in the first exercise, run x. You should see it print whatever value you assigned in the -setup chunk.

  • In another exercise chunk, set the code to x and observe the error: object 'x' not found.

This confirms that the exercise-specific setup chunks still work. They run only for the exercise whose name they match, but not for any others.

Test that setup becomes the global setup chunk

The desired behavior is that the global exercise setup would come from a chunk named setup-global-exercise. If no chunk by that name exists, then it would use the chunk named setup.
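The fallback rule can be sketched as follows (resolve_global_setup is a hypothetical name; the key point is that existence of setup-global-exercise is what matters, not whether it contains code):

```r
resolve_global_setup <- function(chunks) {
  if ("setup-global-exercise" %in% names(chunks)) {
    chunks[["setup-global-exercise"]]  # preferred even when empty
  } else {
    chunks[["setup"]]
  }
}

resolve_global_setup(list(setup = "library(learnr)"))
# "library(learnr)"
resolve_global_setup(list(setup = "secret <- read_key()", `setup-global-exercise` = ""))
# "" -- the sensitive setup chunk is never handed to exercises
```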

  1. In the modified learnr tutorial from above, with the tutorial running, run the following code in one of the exercise blocks:

```r
ls(learnr:::setup_chunks) # print all known setup chunks
get("__setup__", envir = learnr:::setup_chunks) # print the contents of the global setup chunk
```

  2. You should see two values printed for the first call ([1] "__setup__" "two-plus-two"), and the second call should print whatever is in the setup chunk at the top of your document. You can modify that chunk and rerun the tutorial to confirm that it updates.
  3. Add a chunk named setup-global-exercise with some code inside it. When you rerun the tutorial, the code above, when run in an exercise, should print the contents of this new chunk instead of the original setup chunk.
  4. Confirm that even if the setup-global-exercise chunk is empty, it is still preferred as the global setup chunk (this exercises a different code path than when the chunk has content).

Would allow us to later retrieve these setup chunks without having to write them to the client.
We don't want to risk exposing a setup chunk which might be sensitive in memory to users running exercises. Here, we just clobber the same value with the appropriate setup chunk so we don't risk exposing anything inadvertently.
@trestletech trestletech requested a review from schloerke April 3, 2020 14:05
@trestletech trestletech changed the title [WIP] Persist setup chunks Persist setup chunks Apr 3, 2020
@trestletech
Contributor Author

@schloerke just pushed a commit that addresses the feedback. Thanks for catching those!

Collaborator

@schloerke schloerke left a comment


If we could add the "reset setup chunks" as a function, then it looks good to me.

trestletech and others added 5 commits April 13, 2020 20:09
* Pass in the raw exercise to evaluators

Previously, we only got the expression associated with the exercise to evaluate, but couldn't easily access the exercise or its metadata.

* Add remote evaluator

* More consistent error handling and env var vs option integration

* Rearrange

* Enable remote evaluators that choose to use this function to include global_setup in the `evaluate_exercise` expression

* Uninstall our knitr source hook

* Add a clear function

* Test blocking session initiation

* Make initiate more robust, more tests

* Refactor remote to make async

* Add testing for remote evaluator

* Add NEWS

* Regen roxygen

* Add docs for remote evaluator.

* Fix null encoding in JSON and misnamed callback.

* Work around some edge cases when serializing, update swagger

* Run global setup prior to the checker.

* Include an RMD that has a series of tests that can vet the remote evaluator

* Update R/evaluators.R

Co-Authored-By: Barret Schloerke <barret@rstudio.com>

* new_remote_evaluator -> remote_evaluator

* Remove JSON OpenAPI spec

* remote_evaluator -> external_evaluator

* Note that external evaluators are experimental

* Added usethis lifecycle dependencies.

Co-authored-by: Barret Schloerke <barret@rstudio.com>
@trestletech trestletech merged commit 5190453 into master Apr 13, 2020
@trestletech trestletech deleted the save-setup branch April 13, 2020 21:55