
Bayesian data analysis using JASP

Dani Navarro

compcogscisydney.com/jasp-tute.html
Part 1: Theory
• Philosophy of probability
• Introducing Bayes’ rule
• Bayesian reasoning
• A simple example
• Bayesian hypothesis testing

Part 2: Practice
• Introducing JASP
• Bayesian ANOVA
• Bayesian t-test
• Bayesian regression
• Bayesian contingency tables
• Bayesian binomial test
1.1 Philosophy of probability
Idea #1: “Aleatory” processes

Probability is an objective characteristic associated with physical processes, defined by counting the relative frequencies of different kinds of events when that process is invoked.
“Aleatory” processes: frequentist statistics

Coin flipping is an aleatory process, and can be repeated as many times as you like. The probability of a head is defined as the long-run frequency of heads.
Frequentist statistics

A particle physics experiment is a repeatable procedure, and thus a frequentist probability can be constructed to describe its outcomes. A scientific theory is not a repeatable procedure, and cannot be assigned a probability: there is no such thing as “the probability that my theory is true”.
Idea #2: “Epistemic” uncertainty

Probability is a subjective characteristic associated with rational agents, defined by assessing the strength of belief that the agent holds in different propositions.
“Bayesian” statistics

Probabilities can be attached to any proposition that an agent can believe. A particle physics experiment generates observable events about which a rational agent might hold beliefs; a scientific theory contains a set of propositions about which a rational agent might hold beliefs.
1.2 Introducing Bayes’ rule
Roll two dice… there are thirty-six possible cases, and three of them have the dice adding up to 4.

Organising all 36 cases by outcome:
Roll   2     3     4     5     6     7     8     9     10    11    12
N      1     2     3     4     5     6     5     4     3     2     1
Prob   .028  .056  .083  .111  .139  .167  .139  .111  .083  .056  .028

Probability that the total is 4 = 3/36 = .083
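A quick way to verify these numbers (my addition, not from the slides) is to enumerate all 36 rolls in a few lines of Python:

```python
from collections import Counter
from itertools import product

# All 36 equally likely outcomes of rolling two dice
rolls = list(product(range(1, 7), repeat=2))
counts = Counter(a + b for a, b in rolls)

for total in range(2, 13):
    print(f"Roll={total:2d}  N={counts[total]}  Prob={counts[total] / 36:.3f}")
# Roll= 4  N=3  Prob=0.083, matching the table above
```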


A: “at least one die has a value of 2”

P(A) = 11/36 = .31

B: “the total is at least six”

P(B) = 26/36 = .72
Bayes’ rule relates these quantities:

P(B|A) = P(B) × P(A|B) / P(A)

where:
• P(B) = 26/36 — probability that the total is at least 6
• P(A|B) = 6/26 — probability that at least one die has a 2, given that the total is at least 6
• P(A) = 11/36 — probability that at least one die has a 2
• P(B|A) = 6/11 — probability that the total is at least 6, given that at least one die has a 2
Let’s check that:

P(B|A) = P(A|B) × P(B) ÷ P(A)
       = (6/26) × (26/36) ÷ (11/36)
       = (26/36) × (6/26) × (36/11)
       = 6/11
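The same check can be done by counting cases directly; a minimal Python sketch:

```python
from itertools import product

rolls = list(product(range(1, 7), repeat=2))

A = [r for r in rolls if 2 in r]       # at least one die has a value of 2
B = [r for r in rolls if sum(r) >= 6]  # the total is at least six

p_A = len(A) / 36                                   # 11/36
p_B = len(B) / 36                                   # 26/36
p_A_given_B = sum(1 for r in B if 2 in r) / len(B)  # 6/26

# Bayes' rule: P(B|A) = P(B) * P(A|B) / P(A)
p_B_given_A = p_B * p_A_given_B / p_A
print(p_B_given_A, 6 / 11)  # both 0.5454...
```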
1.3 Bayesian reasoning
Bayes’ rule is a mathematical fact that probabilities must obey:

P(B|A) = P(B) × P(A|B) / P(A)
 6/11  = (26/36 × 6/26) / (11/36)
Bayesian reasoning happens when we combine this mathematical rule with epistemic probability:

P(B|A) = P(B) × P(A|B) / P(A)
For example…

h = a hypothesis about the world
d = some observable data

How strongly should I believe in this hypothesis, given that I have observed these data?

P(h|d) = P(d|h) × P(h) / P(d)

P(h|d): the posterior probability that my hypothesis is true, given that I have observed these data.
P(h): the prior probability that I assigned to this hypothesis before observing the data.

P(d|h): the likelihood that I would have observed these data if the hypothesis is true.

P(d): the “marginal” probability of observing these particular data (more on this shortly).
Belief revision! Data turn prior beliefs into posterior beliefs:

• P(h): the prior probability that h is true
• P(d|h): the likelihood of observing d if h is true
• P(h|d): the posterior probability that h is true
• P(d): discussed later
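A minimal sketch of this update in Python, using made-up numbers for two hypotheses, with P(d) computed as the normalising sum:

```python
# Belief revision for two hypotheses (numbers are invented for illustration)
prior = {"h1": 0.5, "h2": 0.5}        # P(h): beliefs before seeing the data
likelihood = {"h1": 0.8, "h2": 0.2}   # P(d|h): probability of the data under each h

# P(d): the marginal probability of the data, summed over hypotheses
p_d = sum(likelihood[h] * prior[h] for h in prior)

# P(h|d) = P(d|h) * P(h) / P(d)
posterior = {h: likelihood[h] * prior[h] / p_d for h in prior}
print(posterior)  # {'h1': 0.8, 'h2': 0.2}
```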


1.4 Example of Bayesian reasoning
Many possibilities: dropped a wine glass, broke a window, psychic explosion, earthquake, a wizard did it, etc…

Let’s compare two of them:
• h1: kids broke the window
• h2: I dropped a wine glass
“Prior odds”

P(h1) / P(h2) = 0.1

Before learning anything else I think “wine glass dropping” is 10 times more plausible than “window breaking”.
Some data: there is a cricket ball next to the broken glass.
Likelihood of the data

When I drop a wine glass… it’s very unlikely that I just happen to do so right next to a cricket ball:

P(d|h2) = 0.001

When the kids break a window… it’s not at all uncommon for a cricket ball to end up near the glass:

P(d|h1) = 0.15
Bayes factor (a.k.a. likelihood ratio)

P(d|h1) / P(d|h2) = 0.15 / 0.001 = 150

I think it is 150 times more likely that I would find a cricket ball when a window breaks than when a wine glass is broken.
Posterior odds

P(h1|d) / P(h2|d) = [P(d|h1) / P(d|h2)] × [P(h1) / P(h2)]

Posterior odds = Likelihood ratio × Prior odds
      15       =       150        ×    .1

In light of the evidence, I now think the window-breaking hypothesis is 15 times more likely than the wine-glass hypothesis.
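The whole calculation fits in a few lines; a sketch in Python using the numbers above:

```python
# h1 = "kids broke the window", h2 = "I dropped a wine glass"
prior_odds = 0.1             # P(h1) / P(h2): wine-glass dropping is 10x more plausible
bayes_factor = 0.15 / 0.001  # P(d|h1) / P(d|h2) = 150

posterior_odds = bayes_factor * prior_odds
print(posterior_odds)        # ~15: window-breaking is now 15x more likely
```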
1.5 Bayesian hypothesis testing
Is this roulette wheel unbalanced? We observe 8 red and 2 black. (We’re ignoring the zero.)
Null model, h0: the roulette wheel has an equal probability of producing red and black.

Alternative model, h1: the roulette wheel has a bias, but we don’t know what it is.
Null hypothesis

P(θ|h0): the null model places all its prior belief on P(red) = .5. We think of each hypothesis as a Bayesian who holds prior beliefs that map onto the hypothesis.

[Figure: prior distribution over P(red) under h0, a single spike at 0.5]

Let’s pretend that there’s no such thing as “continuous numbers”, and act as if the only possible values for P(red) are 0, 0.1, 0.2, …, 1.0 🙂
Alternative hypothesis

P(θ|h1): the alternative model spreads its prior belief equally across all possibilities.

[Figure: priors over P(red): a spike at 0.5 for h0, a uniform distribution for h1]
Likelihoods: P(d|θ), the probability of the data given every possible value of P(red).

[Figure: likelihood function over P(red), peaking at 0.8]
For every value of θ, multiply the prior by the likelihood: P(θ|h) × P(d|θ).

The null hypothesis assigns prior probability 0 to the possibility that P(red) = 0.8, so even though that value assigns the highest likelihood to the observed data, it contributes nothing to the a priori “prediction” made by the null. Conversely, the null assigns prior probability 1 to the possibility that P(red) = 0.5, so even though that value assigns a pretty small likelihood to the observed data, it is the only contributor to the prediction made by this model.
Summing these prior × likelihood values gives the marginal probability of the data under the null hypothesis: how likely did the null model “think” we were to observe this specific pattern of data?
Doing this for both models gives the marginal probability of the data according to each: P(d|h0) for the null and P(d|h1) for the alternative.
Data: 8 red, 2 black.

Null model h0: the roulette wheel has an equal probability of producing red and black → P(d|h0).
Alternative model h1: the roulette wheel has a bias, but we don’t know what it is → P(d|h1).

The Bayes factor gives evidence of about 2:1 in favour of the alternative:
BF10 = P(d|h1) / P(d|h0) = [Σθ P(d|θ) × P(θ|h1)] / [Σθ P(d|θ) × P(θ|h0)] = 1.87
2.1 Just another stats package
https://jasp-stats.org
Illustrating the JASP workflow

What?                                Where?
open a CSV file                      File > Open
descriptive statistics               Common > Descriptives
run a frequentist ANOVA              Common > ANOVA > ANOVA
save data and results to JASP file   File > Save As
Here’s a real data set with many variables: tutedataall.xlsx. JASP isn’t (currently?) good for computing new variables, so it’s best to do that in Excel or whatever you prefer. For simplicity I’ll use small CSV files with only the relevant variables, e.g. tutedata1.csv.
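For instance, a pandas sketch of that preprocessing step; the column names (“rt”, “condition”) and the derived variable are hypothetical, since the file’s contents aren’t shown here:

```python
import numpy as np
import pandas as pd

# Hypothetical example: derive a new variable outside JASP, then export a
# small CSV containing only the columns the analysis needs.
df = pd.read_excel("tutedataall.xlsx")
df["log_rt"] = np.log(df["rt"])  # invented column names for illustration
df[["condition", "log_rt"]].to_csv("tutedata_small.csv", index=False)
```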
File > Open
Common > Descriptives
Common > ANOVA > ANOVA
Common > ANOVA > ANOVA > Descriptive Plots
File > Save As
File > Export Results
2.2 Bayesian ANOVA
Common > ANOVA > Bayesian ANOVA
2.3 Bayesian t-test
Planned analysis #1: null effect under category sampling? (tutedata2.csv)

Common > T-Test > Bayesian Independent Samples T-Test

Planned analysis #2: large < small under property sampling (tutedata2.csv)

Common > T-Test > Bayesian Independent Samples T-Test
2.4 Bayesian regression
tutedata5.csv

Common > Regression > Bayesian Linear Regression
2.5 Bayesian contingency tables
tutedata5.csv

Common > Frequencies > Bayesian Contingency Tables
2.6 Bayesian binomial test
Common > Frequencies > Bayesian Binomial Test
[Figure: prior × likelihood panels for the null and alternative models, as in Section 1.5]

Wait… we got 1.87 for this Bayes factor and JASP says 2.07.
It’s just an approximation error… if we use a finer-grained approximation to “continuous numbers” we get 2.05.
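A sketch of that refinement, using the same binomial setup as in Section 1.5; finer grids approach what JASP reports with a flat prior over P(red):

```python
from math import comb

def grid_bayes_factor(red=8, spins=10, grid_points=11):
    # BF10 with P(red) restricted to a uniform grid of values
    thetas = [i / (grid_points - 1) for i in range(grid_points)]
    likelihood = [comb(spins, red) * t**red * (1 - t)**(spins - red) for t in thetas]
    return (sum(likelihood) / grid_points) / (comb(spins, red) * 0.5**spins)

for k in (11, 101, 1001):
    print(k, round(grid_bayes_factor(grid_points=k), 2))
# 11 -> 1.87, 101 -> 2.05, 1001 -> 2.07 (continuous answer: 1024/495 ≈ 2.07)
```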
2.7 Beyond basics

JASP → Stan → R … to be added at a later stage!

Done!
