ID:
509981
Duration (hours):
36
CFU:
6
SSD:
GENERAL PSYCHOLOGY
Approval status:
Draft
Year:
2024
General Information
Teaching period
Second Semester (12/02/2025 - 31/05/2025)
Syllabus
Learning Objectives
This course aims to develop knowledge and understanding in several key areas of applied research methods:
a) advanced comprehension of statistical models and of the interpretation of experimental data;
b) a profound understanding of the methods and experimental techniques employed in the field;
c) the ethical and deontological awareness necessary to conduct experimental procedures responsibly.
Furthermore, the course aims to cultivate the ability to apply this knowledge and understanding effectively by:
a) enhancing proficiency in executing and assessing applications within experimental contexts;
b) advancing students' competence in executing and evaluating applications within clinical environments;
c) promoting critical thinking, analytical skill, and the synthesis of ideas;
d) applying ethical principles in practical applications and research endeavours.
Prerequisites
The course has no prerequisites, but students will benefit from some familiarity with the rudiments of research methods and statistics.
Teaching Methods
The course will adopt a variety of teaching methods to provide a comprehensive and engaging learning experience.
Each lecture will combine a lecturing component and a practical/hands-on component involving data analyses.
During the lecture component, the lecturer will introduce the reasoning and computations involved in widely used data analysis approaches, especially in psychological and experimental science. All lecture components will be highly interactive, using digital tools to actively and simultaneously engage all students.
In the practical component, students will learn how to implement each of those methods in R, a freely available and widely used software environment for basic and advanced statistical computing and effective data visualisation. Students will be asked to install R on their machines before the course begins. The practical component will involve some initial R guidance from the lecturer but will especially focus on developing students' ability to adapt code and eventually use it for their own purposes. During these practical components, students will work in small groups on applied research methods problems, and solutions will then be provided the following day.
There will be no differentiation of the curriculum between attending and non-attending students. All students will have the opportunity to access the course content. However, especially because of the practical component of the course, students are strongly encouraged to attend.
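To give a flavour of the commented R code used in the practical sessions, here is a minimal illustrative sketch (the actual class materials are the ones provided on Kiro; the package and dataset below are just examples):
# Illustrative only: install a plotting package once, then load and use it.
# install.packages("ggplot2")   # run once, before the course begins
library(ggplot2)
# 'sleep' is a built-in R dataset; students adapt templates like this one.
ggplot(sleep, aes(x = group, y = extra)) +
  geom_boxplot() +
  labs(x = "Drug", y = "Extra sleep (hours)")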
Assessment
The exam will be written and held in person. It will involve writing about a given research question, provided on the day of the exam.
Critically, the lecturer will also provide data related to that question. Students will have to decide on a data analysis approach, justify it, and conduct it, preferably (though not necessarily) in R. Students will then report and interpret the results, as one would in a scientific paper.
Texts
Slides and commented R code. A link to this material is provided on Kiro.
Field, A., Miles, J., & Field, Z. (2012). Discovering statistics using R. Sage Publications.
Contents
- Introductory principles of inferential statistics
Three intuitions about samples and populations:
1. We only see samples, but we are (spontaneously) interested in the inferences we can draw from them.
2. Being interested in populations involves (mentally) filling the gaps in our data, using a probability distribution.
3. Among the many probability distributions, the normal distribution is special thanks to the central limit theorem (CLT). The CLT ultimately allows us to quantify the error in our estimates, using the standard error (SE).
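A minimal base-R sketch of intuition 3 (illustrative, not course material): even when the population is skewed, sample means are approximately normal, with spread equal to the standard error.
set.seed(1)
n <- 30
# 10,000 samples of size n from a skewed (exponential) population with sd = 1
means <- replicate(10000, mean(rexp(n, rate = 1)))
hist(means, breaks = 50)  # roughly bell-shaped, as the CLT predicts
sd(means)                 # empirical standard error of the mean
1 / sqrt(n)               # theoretical SE = sigma / sqrt(n)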
- Fisher’s approach to inferential tests
We put the CLT and SE to use in our first inferential tests. We will introduce Fisher's approach to null-hypothesis significance testing, starting from the z-test: a test that compares a sample mean to a population mean with known standard deviation. This approach emphasizes p-values: the probability of obtaining data at least as extreme as ours if the null hypothesis is true.
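A minimal base-R sketch of the z-test (the sample values and population parameters below are purely illustrative):
x <- c(102, 98, 110, 105, 97, 104, 101, 108)    # hypothetical sample
mu <- 100; sigma <- 15                          # known population parameters
z <- (mean(x) - mu) / (sigma / sqrt(length(x))) # standardised sample mean
p <- 2 * pnorm(-abs(z))                         # two-sided p-value
c(z = z, p = p)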
- Neyman and Pearson's approach to inferential tests
We will integrate Fisher's approach with Neyman and Pearson's by introducing the reasoning and computations behind confidence intervals and effect sizes. We'll focus on one effect size you are probably already familiar with but may not have known was a measure of effect size: Pearson's correlation coefficient, and we will take the opportunity to go over the notion of covariance. We'll also cover the confidence interval and effect size of a z-test (e.g., Cohen's d). Finally, we'll see how that reasoning and those computations extend easily to the statistical tests we can use when we do not know the standard deviation of the population. We'll go over the one-sample t-test, the independent-samples t-test (or Welch test), and the paired t-test, as well as their assumptions.
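A base-R sketch using built-in datasets (illustrative):
cor.test(mtcars$wt, mtcars$mpg)       # Pearson's r with a 95% confidence interval
t.test(extra ~ group, data = sleep)   # independent-samples (Welch) t-test
with(sleep, t.test(extra[group == 1], extra[group == 2], paired = TRUE))  # paired
# Cohen's d for the paired case, computed by hand:
with(sleep, {
  diffs <- extra[group == 1] - extra[group == 2]
  mean(diffs) / sd(diffs)
})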
- Omnibus tests
The tests we have covered so far involve a single comparison (e.g., a difference between a sample mean and a population mean, or between two sample means). This approach cannot work when we aim to compare more than two groups. Nevertheless, we will learn that a broad, common logic underlies both types of test: comparing signal to noise, or systematic to unsystematic variance. By partitioning variance in particular ways, the analysis of variance (ANOVA) is a hypothesis test that compares the means of any number of groups simultaneously. Because it is one test for comparing many means, it is called an omnibus test. The test hinges on the F ratio, a value calculated from sums of squared deviations divided by their degrees of freedom (i.e., mean squares). We'll learn about independent-measures and repeated-measures ANOVAs, about their assumptions, and about how to test those assumptions.
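A base-R sketch of a one-way ANOVA and some assumption checks, using a built-in three-group dataset (illustrative):
fit <- aov(weight ~ group, data = PlantGrowth)      # three groups
summary(fit)              # sums of squares, mean squares, F ratio, p-value
shapiro.test(residuals(fit))                        # normality of residuals
bartlett.test(weight ~ group, data = PlantGrowth)   # homogeneity of variance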
- Post-hoc tests
The ANOVA F-test is an omnibus test: one test to compare many means. As a result, if our F-test is significant, we will want to know which means differ from which. If we have no clear a priori hypothesis, we can conduct several types of post-hoc tests to address this. We will introduce the problem of alpha inflation and see how post-hoc tests allow us to conduct multiple comparisons whilst protecting against it (e.g., the Bonferroni correction, Holm's method, and the Benjamini-Hochberg procedure). Once we know which means differ from which, we'll learn about the reasoning and computations behind confidence intervals and effect sizes in the ANOVA framework (e.g., η²). Ultimately, we are interested not only in which means differed, but also in obtaining a measure of the precision of our estimates and of the magnitude of the effect.
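A base-R sketch of pairwise post-hoc comparisons and of η² (illustrative):
pairwise.t.test(PlantGrowth$weight, PlantGrowth$group,
                p.adjust.method = "bonferroni")   # also "holm" or "BH"
# eta squared = SS_effect / SS_total, read off the ANOVA table:
tab <- summary(aov(weight ~ group, data = PlantGrowth))[[1]]
tab[1, "Sum Sq"] / sum(tab[, "Sum Sq"])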
- Factorial ANOVA
We learned earlier that ANOVA can deal with one predictor involving as many groups as we like. But what if we have two or more predictors? We'll extend the ANOVA logic and approach to cases involving multiple predictors, each with as many groups as we like. A main effect is the effect of one predictor when averaging over the levels of the others. An interaction addresses the question of whether the role of one predictor depends on the levels of the others. With factorial ANOVA, ANOVAs become a much more general approach to comparing means and assessing the contributions of separate explanatory variables and their interactions.
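A base-R sketch of a 2 x 3 factorial ANOVA on a built-in dataset (illustrative):
ToothGrowth$dose <- factor(ToothGrowth$dose)   # treat dose as categorical
fit <- aov(len ~ supp * dose, data = ToothGrowth)
summary(fit)   # main effects of supp and dose, plus their interaction
interaction.plot(ToothGrowth$dose, ToothGrowth$supp, ToothGrowth$len)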
- Introducing regression analysis
So far, we have mainly focused on categorical/nominal explanatory variables, or groups. We will now learn how to handle continuous predictors. Critically, we will learn how regression allows us to make predictions about yet-unseen data (see intuition 1), and how to compute and visualize those predictions in R. We'll also learn how easily regression analysis extends to multiple predictors.
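A base-R sketch of simple and multiple regression, including predictions for unseen data (illustrative):
fit <- lm(mpg ~ wt, data = mtcars)
summary(fit)                                   # intercept, slope, R^2
predict(fit, newdata = data.frame(wt = c(2.5, 3.5)),
        interval = "prediction")               # predictions for unseen cars
plot(mpg ~ wt, data = mtcars); abline(fit)     # data plus fitted line
summary(lm(mpg ~ wt + hp, data = mtcars))      # extending to two predictors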
- Everything is regression
We will learn how regression can also deal with categorical variables by "dummy-coding" them. In fact, we'll see that t-tests and ANOVAs are just different types of regression. As a result, linear regression can also easily subsume types of ANOVA that we did not cover, such as ANOVAs involving both categorical and continuous predictors (and their interactions).
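A base-R sketch showing that a (Student's) t-test is a regression with one dummy-coded predictor (illustrative):
t.test(extra ~ group, data = sleep, var.equal = TRUE)  # classic t-test
summary(lm(extra ~ group, data = sleep))               # the same test
# lm() dummy-codes the factor: the intercept is group 1's mean, and the
# 'group2' coefficient is the difference between the two group means;
# its p-value matches the t-test's, and its t equals it up to sign.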
- Logistic regression and mixed effect models
We extend the logic of regression to non-normally distributed data. As an important example, we will focus on logistic regression, a method that allows us to make predictions about binary outcomes (e.g., outcomes that are either yes or no, black or white, pass or fail, alive or deceased). We will also learn how linear regression can be extended to a number of other non-normally distributed outcome variables using the generalized linear model framework. Finally, we will briefly introduce a particularly broad and increasingly popular extension of this approach, one that can additionally account for hierarchically structured data, such as data coming from the same participants (e.g., repeated measures), who might be nested in different classrooms, which in turn could be nested within different schools, and so on. Each of these factors could play a role in explaining the outcome variables of interest. By distinguishing between fixed and random effects, mixed models allow us to model these complex inter-dependencies.
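A sketch of logistic regression in base R, plus a mixed-model example that assumes the lme4 package is installed (both illustrative):
fit <- glm(am ~ wt, data = mtcars, family = binomial)  # binary outcome (0/1)
summary(fit)                                           # log-odds coefficients
predict(fit, newdata = data.frame(wt = 3), type = "response")  # P(am = 1)
# Random intercepts and slopes per subject for repeated measures:
# library(lme4)
# lmer(Reaction ~ Days + (1 + Days | Subject), data = sleepstudy)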
- Pitfalls of null-hypothesis significance testing and possible remedies
The methods we learn about during the course are powerful but need to be used ethically and responsibly. We will touch on various methodological issues that have contributed to the replication crisis in psychological science, including under-powered studies and different forms of p-hacking and HARKing. We will go over currently recommended best practices in research methods.
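A base-R sketch of one recommended remedy, an a priori power analysis (the effect size below is illustrative):
# Sample size per group needed to detect a standardised effect of d = 0.5
# with 80% power at alpha = .05 (roughly 64 participants per group):
power.t.test(delta = 0.5, sd = 1, power = 0.8, sig.level = 0.05)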
Language of Instruction
ENGLISH
Degree Programmes
PSYCHOLOGY, NEUROSCIENCE AND HUMAN SCIENCES
Laurea Magistrale
2 years