Universal-Pavlovia-Shiny-Dashboard and Shiny-Study-Portal

Exploring your Pavlovia Projects with a Universal Shiny Dashboard App and a Shiny Clone of the VESPR-Study-Portal

Luke Bölling (https://psychfactors.org), FernUniversität Hagen, Department of General Psychology (https://www.fernuni-hagen.de/psychologie-urteilen-entscheiden/team/luke.boelling.shtml)
08-19-2021

Part 3 of the Blog-Series

For the last part of the blog series, I want to show you two practical examples of using R Shiny in combination with the Pavlovia GitLab infrastructure. The main idea is to extend the Pavlovia ecosystem with tools that improve the data analysis and monitoring workflow.

R and R Shiny are great tools for building complex solutions with minimal programming effort. You can use my templates as working solutions for your own experiments. In particular, the Part 2 app is a good starting point for your own Shiny dashboards.

In this part of the blog series I have tried to build a universal Pavlovia dashboard app (UPS-Dashboard) so you can explore the data in all your different Pavlovia projects. Although universality comes with limitations, I wanted to offer you an easy starting point for basic exploratory data analysis without downloading data to your computer. This comes in handy if you work in education and research and do collaborative data analysis with students and project members.

The bonus app is my approach to mimicking the VESPR-Study-Portal with a Shiny app. At the moment you need to host your own server to use the app, but I plan to make it work with Shinyapps.io in the next weeks.

Let’s dive in.

The Universal-Pavlovia-Shiny-Dashboard (UPS-Dashboard)

See the demo here: UPS-Dashboard

Because of the relatively high complexity of the app, I have chosen to share the code only via the GitHub repository.

The app workflow can be described like this:
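
Since the details are easiest to see in the repository, here is only a rough sketch of the idea behind the workflow, assuming a personal access token from gitlab.pavlovia.org; the project index, the data/ folder and the `ref=master` branch name are illustrative, and the actual app in the repository is of course more involved (caching, error handling, the Shiny UI):

```r
library(httr)   # talk to the GitLab REST API behind Pavlovia
library(readr)  # parse the CSV result files

# Assumption: a personal access token created on gitlab.pavlovia.org,
# stored in an environment variable.
token <- Sys.getenv("PAVLOVIA_TOKEN")
api   <- "https://gitlab.pavlovia.org/api/v4"

# 1. List the projects your token has access to
projects <- content(GET(
  paste0(api, "/projects?membership=true&per_page=100"),
  add_headers("PRIVATE-TOKEN" = token)
))

# 2. For the selected project, list the files in its data/ folder
project_id <- projects[[1]]$id   # in the app this comes from a dropdown
files <- content(GET(
  paste0(api, "/projects/", project_id, "/repository/tree?path=data&per_page=100"),
  add_headers("PRIVATE-TOKEN" = token)
))

# 3. Download one result file as raw text and parse it
file_path <- utils::URLencode(files[[1]]$path, reserved = TRUE)
csv_text  <- content(GET(
  paste0(api, "/projects/", project_id, "/repository/files/",
         file_path, "/raw?ref=master"),
  add_headers("PRIVATE-TOKEN" = token)
), as = "text", encoding = "UTF-8")

results <- read_csv(csv_text, show_col_types = FALSE)

# 4. The Shiny dashboard then summarises and plots `results` reactively
```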

You should not use the app during a running experiment. It does no harm on a technical level, but in terms of scientific method it is problematic to get insights into your incoming data during a phase in which you could still interfere with the data acquisition process. You might, for example, be tempted to stop data collection early because your data already “looks good”.

Main Use-Cases for the UPS-Dashboard

Use the app, for example, for the following purposes:

Just think of it: the UPS-Dashboard could be extended to export APA-style graphics and tables for all your experiments without any hassle. Nevertheless, I am a fan of doing the analysis in the same repository as the experiment itself: keep your experiment, your data, your analysis and your documentation in the same place! But the app gives you a way to check multiple running experiments at the same time without jumping through all your repository folders, and that is all in all a nice thing to have.
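
To illustrate what such an export could look like, here is a hypothetical sketch of a Shiny downloadHandler that saves the currently selected data as an APA-ish plot; `current_data()`, the column names and the theme choice are assumptions for illustration, not part of the actual app (a dedicated APA theme, e.g. from the papaja package, could be swapped in):

```r
library(shiny)
library(ggplot2)

# Inside the server function; `current_data()` is an assumed reactive
# holding the data frame of the currently selected experiment.
output$download_plot <- downloadHandler(
  filename = function() paste0("experiment-plot-", Sys.Date(), ".png"),
  content  = function(file) {
    p <- ggplot(current_data(), aes(x = condition, y = rt)) +
      stat_summary(fun = mean, geom = "col", fill = "grey70") +
      labs(x = "Condition", y = "Mean reaction time (ms)") +
      theme_classic(base_size = 12)  # stand-in for a dedicated APA theme
    ggsave(file, plot = p, width = 6, height = 4, dpi = 300)
  }
)
```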

The UPS-Dashboard will be public on GitHub, and I encourage everyone to join in extending its features by opening new branches and pull requests, or just by writing an issue with a feature request.

And what about the VESPR-Clone?

Shiny-Study-Portal

I loved the idea of the VESPR-Study-Portal by Wakefield Morys-Carter, which enables Pavlovia users to run complex designs online (e.g. by handing out UIDs). Unfortunately, I ran into a problem that VESPR could not solve.

I was running an experiment with a complex mixed design online. We wanted a fixed number of participants in each of the eight between-subjects conditions. Two of the conditions were relatively hard at the beginning, so the dropout in these conditions was huge.

We planned to have around 20 participants per condition. Using Unipark (Qualtrics) and a running ID for the participants, we distributed the participants across the conditions (no. 1 to condition 1, 2 to 2, …, 8 to 8, 9 to 1 and so on).
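
In code, that distribution scheme is just a modulo assignment on the running ID (a small sketch with the eight conditions of our design):

```r
n_conditions <- 8
running_id   <- 1:24                                   # IDs handed out by the survey tool
condition    <- ((running_id - 1) %% n_conditions) + 1
# ID 1 -> condition 1, ..., ID 8 -> condition 8, ID 9 -> condition 1, ...
```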

After some time we ended up with around 180 participants, but in the two “hard” conditions there were only around 5-10 complete data sets, and checking the data revealed that many participants hadn’t really “participated” (high error rate, no reactions) in those conditions.

So we ended up with too many participants in the first six conditions and almost no data in the last two, which required us to restrict the experiment to the last two conditions and rerun it for a few weeks…

Wouldn’t it be nice to monitor which participant enters which condition, completes it and produces “good enough” data?

The VESPR-Study-Portal covers the first two aspects, but the “good enough” data part cannot be addressed with it.

With my GitLab data code and a few Shiny hacks I was able to produce 300 lines of code that do exactly this. You can not only redirect participants to the desired conditions (and monitor which conditions are currently running or were left incomplete) but also run a check on the just-generated data to verify that the participant was, e.g., not “only passive”.
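
To make the “good enough” check concrete, here is a rough sketch of the kind of test I mean; the column names (response, correct) and thresholds are made up for illustration, and the freshly uploaded result file would be fetched via the GitLab API as in the dashboard sketch above:

```r
library(readr)

# Hypothetical engagement check on a just-uploaded result file.
check_participation <- function(csv_text,
                                min_response_rate = 0.9,
                                max_error_rate    = 0.5) {
  d <- read_csv(csv_text, show_col_types = FALSE)
  response_rate <- mean(!is.na(d$response))             # assumed column name
  error_rate    <- mean(d$correct == 0, na.rm = TRUE)   # assumed column name
  response_rate >= min_response_rate && error_rate <= max_error_rate
}

# Only participants who pass the check count towards their condition's quota;
# otherwise the slot stays open for the next participant.
```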

You can see a dynamically changeable demo of the app on my server.

Demo-Shiny-Study-Portal

Please note: Although you could adapt the app on my server to work for your experiment, do not use it “in production”. I have added a background script that resets the app every 24 hours.

Right now I am working on using Google Spreadsheets to store the information you see. Currently I use a local Excel file as a database, and this will not work on Shinyapps.io: Shinyapps.io recreates the Excel file for each user, so we need a remote database to make all participants share the same state.
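
If you want to experiment with the remote-database route yourself, a minimal sketch with the googlesheets4 package could look like this; the sheet URL, worksheet name and column layout are placeholders, not the portal’s actual schema:

```r
library(googlesheets4)

# Authenticate once (interactive the first time, cached afterwards)
gs4_auth()

# Placeholder sheet; one row per participant with their assigned condition
sheet_url <- "https://docs.google.com/spreadsheets/d/<your-sheet-id>"

# Read the shared state that all app instances see
assignments <- read_sheet(sheet_url, sheet = "assignments")

# Append a new participant once they have been routed to a condition
sheet_append(sheet_url,
             data.frame(participant = "p_0181", condition = 7, completed = FALSE),
             sheet = "assignments")
```

Because every Shiny session reads and writes the same sheet, the counts stay consistent across users, unlike a local Excel file that each Shinyapps.io instance recreates for itself.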

This project will be public on GitHub as well. I could use some help rewriting the app so that it is useful for non-advanced users, too (Google Sheets support).

Conclusion

Considering how widely Pavlovia is used by researchers in psychology, R is probably a common next step for Pavlovia-generated data, so integrating Pavlovia and R (Shiny) was a logical move. I would love to see more open-source repositories on GitHub combining psychological experiments with data visualization, giving new users and students a feeling for, e.g., trial-by-trial data. This could be a great symbiosis for teaching experiment design and data analysis. In particular, distributing data to multiple non-Pavlovia users is a good use case for Shiny dashboards at the moment. I plan to further improve the Pavlovia/PsychoPy workflow with R/Shiny and make my scripts, tips and tricks public to help the PsychoPy community.

My next projects are:

Thank you for reading the blog series. I hope I have enough time to keep writing smaller posts about a few simple scripts and designs.

Feel free to contact me on PsychoPy-Discourse or by E-Mail.

Best,

Luke