Version: 🚧 Alpha 🚧

4. Stochastic Analysis

Before applying any sensitivity analysis method, there is a fundamental question to answer: how much of the variability in nb_recovered is due to the random seed, and how many replications are needed to produce robust results?

If stochasticity dominates, then a single run per parameter configuration is meaningless — the output reflects the seed more than the parameters. The stochanalyse method answers this question systematically before committing to the more expensive Morris or Sobol analyses.

In particular, this step presents:

  • What the stochanalyse method measures and how to interpret its report
  • How repeat and sample interact in this context
  • How to use the permanent block to visualise stochastic spread directly in GAMA
  • How the result informs the repeat: value to use in subsequent batch experiments

The model file for this step can be found in: Library models/Tutorials/SIRAnalysis/models/SIR_Stochanalysis.gaml


How stochanalyse works

For each of the sample randomly drawn parameter points in the space, the method runs the simulation repeat times with different seeds. It then computes three indices by progressively increasing the number of replications from 2 up to repeat:

  • Correlation: average Pearson correlation between pairs of replicate outputs; close to 1 means replicates agree and few replications are needed.
  • CV: coefficient of variation across replicates; close to 0 means low stochastic variability.
  • Neyman-Pearson: estimated minimum number of replications needed to avoid Type I and Type II errors at standard confidence levels.

The key quantity is the Neyman-Pearson estimate — it directly tells you what value to use for repeat: in the Morris, Sobol and Beta^d experiments that follow.

Total runs = sample × repeat = 100 × 25 = 2500 runs.


The method stochanalyse statement

The method is declared exactly like the others. The outputs: facet lists the global variables to analyse, the report: facet writes the human-readable indices to a .txt file, the results: facet saves the raw replicate values to a .csv file, the sampling: facet selects the sampling strategy, and the sample: facet controls how many distinct parameter points are probed.

method stochanalyse
    outputs: ["nb_recovered"]
    report: "../results/stoch_report.txt"
    results: "../results/stoch_raw.csv"
    sampling: uniform
    sample: 100;

Note: The default sampling strategy is factorial; specify the sampling: facet if you want to use another strategy.
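The step overview also mentions visualising the stochastic spread directly in GAMA. Although stochanalyse does not require it, a permanent block in the batch experiment can chart nb_recovered across simulations. The sketch below uses standard GAMA batch syntax, but the display and chart names are illustrative:

```
permanent {
    display "Stochastic spread" {
        chart "nb_recovered across replications" type: box_whisker {
            // One box per cycle of the batch, summarising the replicate outputs
            data "nb_recovered" value: simulations collect each.nb_recovered;
        }
    }
}
```

A wide box at a given parameter point is a visual counterpart of a high CV in the report.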


Reading the report

Below is the report generated after our experiment:

Stochanalysis report

The report is structured around three complementary measures, each giving a recommended minimum number of replications for the output nb_recovered:

Coefficient of variation and Standard error — both report the minimum, maximum and average number of replications needed across the 100 sampled parameter points, for three decreasing thresholds (0.05, 0.01, 0.001). In our case both measures agree: an average of 5–7 replications is sufficient at the 0.05 threshold, rising to 10 at the stricter 0.001 threshold. The spread (min = 2, max = 24) shows that some parameter configurations are considerably more stochastic than others.
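As a rough intuition for the standard-error figures, the smallest number of replications that brings the relative standard error of the mean below a threshold t can be approximated from a pilot set of replicates. This is a textbook sketch, not GAMA's exact procedure:

```python
import math
import statistics

def replications_for_se(outputs, threshold):
    """Smallest n with relative standard error sd / (mean * sqrt(n)) <= threshold,
    estimated from pilot replicate outputs. Textbook sketch only."""
    m = statistics.mean(outputs)
    s = statistics.stdev(outputs)
    return max(2, math.ceil((s / (m * threshold)) ** 2))

# Pilot replicates with mean 100 and standard deviation 10 (CV = 0.1):
print(replications_for_se([90, 100, 110], 0.05))  # -> 4
print(replications_for_se([90, 100, 110], 0.01))  # -> 100
```

Because the requirement grows with the square of the inverse threshold, tightening the threshold by an order of magnitude multiplies the needed replications by a hundred, which is why the report's figures rise so quickly between 0.05 and 0.001.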

Critical effect size — this test estimates the number of replications needed to reliably detect an effect of a given standardised size, controlling for both Type I (false positive, α = 0.01) and Type II (false negative, β = 0.05) errors. The results are reported for six conventional effect sizes:

  Effect size   Label         Replications needed (average)
  0.01          ultra-micro   25 (requires more than the current repeat)
  0.05          micro         23
  0.1           small         23
  0.2           medium        22
  0.4           large         22
  0.8           huge          21

The theoretical minimum for ultra-micro effects is 173 replications, far beyond what is practical. For effects of small size or larger, around 22–23 replications are needed on average, so the current repeat: 25 budget is sufficient on average but is slightly exceeded by the most demanding configurations.
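For orientation, the classic normal-approximation power calculation gives the flavour of the critical effect size test: detecting a standardised effect d while controlling the Type I rate α and Type II rate β requires roughly n ≥ ((z₁₋α + z₁₋β) / d)² replications. The sketch below implements that textbook one-sided formula; GAMA's stochanalyse uses its own estimator, so the report's figures will not match it exactly:

```python
import math
from statistics import NormalDist

def replications_for_effect(d, alpha=0.01, beta=0.05):
    """Normal-approximation sample size: n >= ((z_{1-alpha} + z_{1-beta}) / d)^2.
    Textbook one-sided formula, not GAMA's internal estimator."""
    z = NormalDist()
    return math.ceil(((z.inv_cdf(1 - alpha) + z.inv_cdf(1 - beta)) / d) ** 2)

print(replications_for_effect(0.8))  # huge effects need relatively few replications
print(replications_for_effect(0.1))  # small effects need far more
```

Whatever the exact estimator, the 1/d² scaling explains why ultra-micro effects are out of reach while medium-to-large effects stay affordable.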

Practical conclusion for this model: the CV and standard error tests suggest that 10 replications provide robust aggregated outputs at strict thresholds. The critical effect size analysis confirms that 22–25 replications are needed to reliably detect even small effects. For the Morris, Sobol and Beta^d experiments that follow, setting repeat: 10 is a reasonable compromise between robustness and computational cost for medium to large effects.
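Concretely, the following batch experiments simply carry this value in their header. A partial sketch (the Morris declaration itself is introduced in Step 5; the experiment name is illustrative):

```
experiment SIR_Morris type: batch repeat: 10 keep_seed: false
    until: (cycle >= max_steps or nb_infected = 0) {
    // parameters and method morris ... (introduced in Step 5)
}
```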


Complete model

model SIR

global {
    int nb_agents <- 200 min: 50 max: 500;
    int nb_infected_init <- 5 min: 1 max: 50;
    float infection_rate <- 0.5 min: 0.0 max: 1.0;
    int infection_distance <- 5 min: 1 max: 20;
    int recovery_time <- 50 min: 10 max: 200;
    int max_steps <- 1000;

    int nb_susceptible <- 0;
    int nb_infected <- 0;
    int nb_recovered <- 0;

    init {
        create person number: nb_agents;
        ask nb_infected_init among (person as list) {
            status <- "infected";
            infection_timer <- recovery_time;
        }
    }

    reflex update_counts {
        nb_susceptible <- person count (each.status = "susceptible");
        nb_infected <- person count (each.status = "infected");
        nb_recovered <- person count (each.status = "recovered");
    }

    reflex stop when: cycle >= max_steps or nb_infected = 0 {
        do halt;
    }
}

species person skills: [moving] {
    string status <- "susceptible";
    int infection_timer <- 0;
    float speed <- 1.0;

    reflex move {
        do wander;
    }

    reflex infect when: status = "infected" {
        ask (person at_distance infection_distance) where (each.status = "susceptible") {
            if flip(infection_rate) {
                status <- "infected";
                infection_timer <- recovery_time;
            }
        }
        infection_timer <- infection_timer - 1;
        if infection_timer <= 0 {
            status <- "recovered";
        }
    }

    aspect base {
        draw circle(1) at: location color:
            (status = "infected") ? #red : ((status = "recovered") ? #blue : #green);
    }
}

// ── GUI experiment ─────────────────────────────────────────────────────────────
experiment SIR_gui type: gui {
    parameter "Number of agents" var: nb_agents;
    parameter "Initially infected" var: nb_infected_init;
    parameter "Infection rate" var: infection_rate;
    parameter "Infection distance" var: infection_distance;
    parameter "Recovery time" var: recovery_time;

    output {
        display "Population" type: java2D {
            species person aspect: base;
        }
        display "SIR Chart" type: java2D {
            chart "SIR dynamics" type: series {
                data "Susceptible" value: nb_susceptible color: #green;
                data "Infected" value: nb_infected color: #red;
                data "Recovered" value: nb_recovered color: #blue;
            }
        }
        monitor "Susceptible" value: nb_susceptible;
        monitor "Infected" value: nb_infected;
        monitor "Recovered" value: nb_recovered;
        monitor "Cycle" value: cycle;
    }
}

// ── Stochastic analysis experiment ────────────────────────────────────────────
// Total runs = repeat × sample = 25 × 100 = 2500
experiment SIR_Stochanalyse type: batch
    repeat: 25
    keep_seed: false
    until: (cycle >= max_steps or nb_infected = 0) {

    parameter "Number of agents" var: nb_agents min: 50 max: 500;
    parameter "Initially infected" var: nb_infected_init min: 1 max: 50;
    parameter "Infection rate" var: infection_rate min: 0.0 max: 1.0;
    parameter "Infection distance" var: infection_distance min: 1 max: 20;
    parameter "Recovery time" var: recovery_time min: 10 max: 200;

    method stochanalyse
        outputs: ["nb_recovered"]
        report: "../results/stoch_report.txt"
        results: "../results/stoch_raw.csv"
        sampling: uniform
        sample: 100;
}

What's next?

Now that we know the stochastic contribution of the model, we can move on to sensitivity analysis. Step 5 introduces the Morris method, which screens parameters by their influence on nb_recovered at a low computational cost.