Shiny meets Pavlovia Part Two

Using a Shiny-App to monitor your Pavlovia-Project

Luke Bölling (FernUniversität Hagen (Department of General Psychology))


Welcome to part two of this “Shiny meets Pavlovia” series. After using an API-based approach to download data directly from the Pavlovia GitLab repository, we can now put this connection to use by incorporating it in a hosted Shiny app. At the end, we will have an experiment dashboard that gives us basic information about the progress of the experiment and, for example, allows us to download pre-processed experiment data.

This tutorial isn’t an introduction to Shiny and its capabilities. You can find more information about Shiny in general here.

Let’s dive in.

The Shiny-App

Tl;dr: Here is a live demo of the whole app, hosted on shinyapps.io: Live-Demo (PW: example)

Our goal is a Shiny app that allows us to:

- check the progress of the experiment (which data files exist and how large they are),
- view a basic summary plot of the current data,
- download the merged, pre-processed experiment data as a single CSV,
- protect all of this with a simple password.

I have used this kind of Shiny dashboard to give students access to their current data in pre-processed form (merging and wrangling can be difficult with, e.g., SPSS).

All you need is about 150 lines of code, and you have already seen everything inside datasetInput <- reactiveVal({...}) in my first blog post.

Here is the app:


# library(rsconnect)
# deployApp()

library(shiny)
library(shinydashboard)
library(shinyjs)         # show()/hide()/enable()/disable()
library(shinycssloaders) # withSpinner()
library(httr)
library(tidyverse)       # readr, dplyr, purrr, tidyr, stringr, ggplot2
library(DT)

ui <- dashboardPage(
  # Application title
  dashboardHeader(title = "pavloviaShinyApp"),

  # Sidebar with password input and download button ----
  dashboardSidebar(
    useShinyjs(),
    column(
      width = 12,
      align = "center", offset = 0,
      tags$style(".skin-blue .sidebar a { color: #444; }"),
      textInput("password", label = h3("Password"), placeholder = "Enter Password to get access to Data..."),

      # Button
      conditionalPanel(
        condition = "input.password == 'example'",
        downloadButton("downloadData", "Download") %>% withSpinner(color = "#0dc5c1")
      )
    )
  ),

  # Main panel for displaying outputs ----
  dashboardBody(
    conditionalPanel(
      condition = "input.password == 'example'",
      plotOutput("plotTest") %>% withSpinner(color = "#0dc5c1")
    ),
    conditionalPanel(
      condition = "input.password == 'example'",
      div(id = "table", DT::dataTableOutput("dataOverview"))
    )
  )
)

# Define server logic
server <- function(input, output) {
  datasetInput <- reactiveVal({
    token <- read_file("token") # Personal Access Token for the Project
    project_id <- 149 # Project ID
    gitlabPavloviaURL <- paste0("https://gitlab.pavlovia.org/api/v4/projects/", project_id, "/repository/archive.zip") # API - URL to download whole repository
    r <- GET(gitlabPavloviaURL, add_headers("PRIVATE-TOKEN" = token)) # Getting Archive

    bin <- content(r, "raw") # Getting Binary content

    temp <- tempfile() # Init Tempfile

    writeBin(bin, temp) # Write Binary of Archive to Tempfile

    listofFiles <- unzip(
      zipfile = temp, overwrite = T,
      junkpaths = T, list = T
    ) # List all files in the archive without extracting

    csvFiles <- grep("\\.csv$", x = listofFiles$Name, value = T) # Grep only the csv files (pattern can be extended to get only data-csv files)

    unzip(
      zipfile = temp, overwrite = T,
      junkpaths = T, files = csvFiles[1:100], exdir = "temp"
    ) # Unzip the csv files into a temp directory

    csvFilesPaths <- list.files("temp/", full.names = T) # Get the unzipped csv files in the temp directory

    # To get only valid CSV files, and to enable filtering by date-time, we can parse the standard date-time string in the Pavlovia default file names
    dateTimeOfFiles <- tibble(filepaths = csvFilesPaths) %>%
      mutate(dateTime = str_extract(filepaths, "[0-9]{4}-[0-9]{2}-[0-9]{2}_[0-9]{2}h[0-9]{2}")) %>%
      filter(! %>%
      mutate(dateTime = parse_datetime(dateTime, "%Y-%m-%d_%Hh%M"))
    # %>%  filter(dateTime > parse_datetime("2019-02-01_15h00", "%Y-%m-%d_%Hh%M")) # This can be used to filter by a specific time

    # Purrr Magic - Thanks to

    # Now read the desired data files with purrr:
    data <- tibble(filename = dateTimeOfFiles$filepaths) %>% # create a data frame
      # holding the file names
      mutate(
        file_contents = map(
          filename, # read files into
          ~ read_csv(file.path(.))
        ) # a new data column
      )

    # Unlink temp because we don't need it anymore
    unlink("temp", recursive = T)

    data
  })

  output$plotTest <- renderPlot({
    ggplot(dataMerged(), aes(y = resp.rt, x = congruent, color = participant)) +
      stat_summary(geom = "point", fun = "mean") +
      stat_summary(geom = "errorbar", = mean_se)
  })

  # Table of selected dataset ----
  output$dataOverview <- DT::renderDataTable({
    DT::datatable(
      datasetInput() %>%
        rowwise() %>%
        mutate(
          participant = list(file_contents$participant[1]),
          fileDim = paste0("Rows:", dim(file_contents)[1], " Vars:", dim(file_contents)[2])[1]
        ) %>%
        select(-file_contents), # drop the nested list column before rendering
      options = list(scrollX = TRUE)
    )
  })

  observeEvent(input$password, {
    if (input$password != "example") hide("table") else show("table")
  })
  observeEvent(input$password, {
    if (input$password != "example") disable("downloadData") else enable("downloadData")
  })

  dataMerged <- reactive({
    # Read in all available data in a single tibble
    datasetInput() %>%
      select(file_contents) %>% # remove filenames, not needed anymore
      unnest(cols = c(file_contents))
  })

  output$downloadData <- downloadHandler(
    filename = function() {
      paste("Test_Data", ".csv", sep = "")
    },
    content = function(file) {
      write.csv(dataMerged(), file, row.names = FALSE)
    }
  )
}
# Run the application
shinyApp(ui = ui, server = server)

You can get the whole app by forking the GitHub repository: Github-Repository.

I encourage you to try out the demo code yourself.

ATTENTION: You have to place a file named “token”, containing your own access token from gitlab.pavlovia.org, into the app directory. See my previous blog post for more information.
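As a minimal sketch of what that means in practice (the token string below is a placeholder, not a real credential), you can create the file once from the R console:

```r
# Write your Pavlovia GitLab Personal Access Token (placeholder shown)
# into a plain-text file named "token" in the app directory:
writeLines("YOUR-PERSONAL-ACCESS-TOKEN", "token")
```

The app's `read_file("token")` call then picks it up. Make sure you never commit this file to a public repository.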

Here is a live demo of the whole app, hosted on shinyapps.io:

Live-Demo (PW: example)

Blog post 3 will dive into the data-processing and interactive options you will get by using this Shiny-App.

Get the demo running

For this post, my goal is to enable you to get this example app running on your own:

Clone the Repository

Open app.R File

Use RStudio and any R version around 4.x to publish the Shiny app. Once you have opened the app.R file, you are ready to publish the example app to your shinyapps.io account.

Publish on shinyapps.io

Just use the Publish button in RStudio to get the app to shinyapps.io.

You will need to create a shinyapps.io account and generate an access token. RStudio provides a very detailed explanation of the process.
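As a rough sketch of the console side of this process (the name, token, and secret values are placeholders you copy from your own shinyapps.io dashboard):

```r
# install.packages("rsconnect")  # once
library(rsconnect)

# Paste the values shown in your shinyapps.io dashboard (Account -> Tokens):
setAccountInfo(
  name   = "your-account-name",
  token  = "YOUR-TOKEN",
  secret = "YOUR-SECRET"
)

# Deploy the app from the directory containing app.R (and the "token" file):
deployApp()
```

The Publish button in RStudio does essentially the same thing through a dialog.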

At the end, make sure your “token” file is correctly prepared and uploaded.

You’re done! You have created a simple dashboard for the Stroop demo, with a simple chart and in-depth information about the data files. (Note: with ...junkpaths = T, files = csvFiles[1:100], exdir = "temp"... I only unpack the first 100 CSV files, because this repo is huge.)

Please feel free to contact me if you have any questions.

For Part 3, I am still looking for a collaborator to build a flexible and interactive dashboard-app starter kit for Shiny beginners.

Next Part