Analysing code performance
A common mistake when trying to optimize a piece of code is to dive in headfirst and try to optimize everything possible. The problem with this approach is that almost everything in a computer program could be optimized, and the work of optimizing is itself very time-consuming. So before optimizing a model, it is important to analyse it, in order to know which parts have the most impact on the simulation duration and to record precise data on your initial execution time so that you can assess your progress.
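For instance, here is a minimal sketch of how a baseline execution time could be recorded directly in the model, using the built-in machine_time value (the attribute name and the 100-cycle checkpoint are arbitrary choices for illustration):

global {
    // time (in milliseconds) at which the simulation started
    float start_time <- machine_time;

    // arbitrary checkpoint: report the elapsed time after 100 cycles
    reflex report_baseline when: cycle = 100 {
        write "Elapsed time after " + cycle + " cycles: " + (machine_time - start_time) + " ms";
    }
}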
Some general concepts and tips before starting
Randomness
In GAML, many operators make (implicit) use of randomness (for example, the one_of operator), and this can, of course, impact the simulation duration. Depending on your case, you may need to fix the random generator's seed during the analysis phase, so that every run of the simulation performs exactly the same operations on the same data, making comparisons of execution time fairer and more stable. You can do so by adding this line of code to your experiment or your global:
float seed <- 1.0; // put any number you want apart from 0, which would mean "pick a random (different) seed for every simulation"
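For example, a minimal sketch of an experiment that fixes the seed this way (the experiment name is an arbitrary choice):

experiment analyse_performance type: gui {
    // redefining the built-in seed attribute: every run will use the same random draws
    float seed <- 1.0;
}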
In other cases, randomness plays a "real" role in your model, and you may want to take it into account in your optimization. In that case, the right approach is to repeat the tests a number of times that you consider sufficient to "neutralize" the effect of randomness, so as to get an idea of the "average" behaviour as well as the possible extremes.
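One possible way to do this in GAMA is a batch experiment that repeats the simulation with a different seed each time; below is a minimal sketch (the experiment name, the number of repetitions and the stop condition are arbitrary choices):

experiment average_over_randomness type: batch repeat: 10 keep_seed: false until: cycle >= 500 {
    // the 10 repetitions each run with a different seed (keep_seed: false),
    // which gives an idea of the average behaviour and of the extremes
}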
In any case, it is important to keep in mind that randomness exists in GAMA and to account for it when analysing execution times, as it can affect them.