

9.1 Overview of Survey Research

Learning Objectives

  • Define what survey research is, including its two important characteristics.
  • Describe several different ways that survey research can be used and give some examples.

What Is Survey Research?

Survey research is a quantitative approach that has two important characteristics. First, the variables of interest are measured using self-reports. In essence, survey researchers ask their participants (who are often called respondents in survey research) to report directly on their own thoughts, feelings, and behaviors. Second, considerable attention is paid to the issue of sampling. In particular, survey researchers have a strong preference for large random samples because they provide the most accurate estimates of what is true in the population. In fact, survey research may be the only approach in psychology in which random sampling is routinely used. Beyond these two characteristics, almost anything goes in survey research. Surveys can be long or short. They can be conducted in person, by telephone, through the mail, or over the Internet. They can be about voting intentions, consumer preferences, social attitudes, health, or anything else that it is possible to ask people about and receive meaningful answers.
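The value of large random samples can be made concrete with a short simulation. The sketch below is purely illustrative and is not part of the original text: it assumes a hypothetical population in which 52% of people hold a given opinion and shows how estimates from simple random samples settle around the true value as the sample grows.

```python
import random

random.seed(1)

# Hypothetical population of 100,000 people; 52% hold the opinion of interest.
population = [True] * 52_000 + [False] * 48_000
random.shuffle(population)

for n in (50, 500, 5_000):
    sample = random.sample(population, n)   # simple random sample without replacement
    estimate = sum(sample) / n              # proportion holding the opinion in the sample
    print(f"n = {n:>5}: estimated proportion = {estimate:.3f} (true value 0.520)")
```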

Most survey research is nonexperimental. It is used to describe single variables (e.g., the percentage of voters who prefer one presidential candidate or another, the prevalence of schizophrenia in the general population) and also to assess statistical relationships between variables (e.g., the relationship between income and health). But surveys can also be experimental. The study by Lerner and her colleagues is a good example. Their use of self-report measures and a large national sample identifies their work as survey research. But their manipulation of an independent variable (anger vs. fear) to assess its effect on a dependent variable (risk judgments) also identifies their work as experimental.

History and Uses of Survey Research

Survey research may have its roots in English and American “social surveys” conducted around the turn of the 20th century by researchers and reformers who wanted to document the extent of social problems such as poverty (Converse, 1987). By the 1930s, the US government was conducting surveys to document economic and social conditions in the country. The need to draw conclusions about the entire population helped spur advances in sampling procedures. At about the same time, several researchers who had already made a name for themselves in market research, studying consumer preferences for American businesses, turned their attention to election polling. A watershed event was the presidential election of 1936 between Alf Landon and Franklin Roosevelt. A magazine called Literary Digest conducted a survey by sending ballots (which were also subscription requests) to millions of Americans. Based on this “straw poll,” the editors predicted that Landon would win in a landslide. At the same time, the new pollsters were using scientific methods with much smaller samples to predict just the opposite—that Roosevelt would win in a landslide. In fact, one of them, George Gallup, publicly criticized the methods of Literary Digest before the election and all but guaranteed that his prediction would be correct. And of course it was. (We will consider the reasons that Gallup was right later in this chapter.)

From market research and election polling, survey research made its way into several academic fields, including political science, sociology, and public health—where it continues to be one of the primary approaches to collecting new data. Beginning in the 1930s, psychologists made important advances in questionnaire design, including techniques that are still used today, such as the Likert scale. (See “What Is a Likert Scale?” in Section 9.2 “Constructing Survey Questionnaires”.) Survey research has a strong historical association with the social psychological study of attitudes, stereotypes, and prejudice. Early attitude researchers were also among the first psychologists to seek larger and more diverse samples than the convenience samples of college students that were routinely used in psychology (and still are).

Survey research continues to be important in psychology today. For example, survey data have been instrumental in estimating the prevalence of various mental disorders and identifying statistical relationships among those disorders and with various other factors. The National Comorbidity Survey is a large-scale mental health survey conducted in the United States (see http://www.hcp.med.harvard.edu/ncs). In just one part of this survey, nearly 10,000 adults were given a structured mental health interview in their homes in 2002 and 2003. Table 9.1 “Some Lifetime Prevalence Results From the National Comorbidity Survey” presents results on the lifetime prevalence of some anxiety, mood, and substance use disorders. (Lifetime prevalence is the percentage of the population that develops the problem sometime in their lifetime.) Obviously, this kind of information can be of great use both to basic researchers seeking to understand the causes and correlates of mental disorders and also to clinicians and policymakers who need to understand exactly how common these disorders are.

Table 9.1 Some Lifetime Prevalence Results From the National Comorbidity Survey
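As a hypothetical illustration of the lifetime prevalence calculation defined above (the counts below are invented and are not results from the National Comorbidity Survey), the figure is simply the number of respondents who have ever experienced the problem divided by the number of respondents interviewed:

```python
# Invented counts for illustration only; not National Comorbidity Survey data.
respondents_interviewed = 9_282        # adults who completed the interview
ever_had_disorder = 1_857              # reported the disorder at some point in their lives

lifetime_prevalence = ever_had_disorder / respondents_interviewed * 100
print(f"Lifetime prevalence: {lifetime_prevalence:.1f}%")   # prints 20.0%
```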

And as the opening example makes clear, survey research can even be used to conduct experiments to test specific hypotheses about causal relationships between variables. Such studies, when conducted on large and diverse samples, can be a useful supplement to laboratory studies conducted on college students. Although this is not a typical use of survey research, it certainly illustrates the flexibility of this approach.

Key Takeaways

  • Survey research is a quantitative approach that features the use of self-report measures on carefully selected samples. It is a flexible approach that can be used to study a wide variety of basic and applied research questions.
  • Survey research has its roots in applied social research, market research, and election polling. It has since become an important approach in many academic disciplines, including political science, sociology, public health, and, of course, psychology.

Discussion: Think of a question that each of the following professionals might try to answer using survey research.

  • a social psychologist
  • an educational researcher
  • a market researcher who works for a supermarket chain
  • the mayor of a large city
  • the head of a university police force

Converse, J. M. (1987). Survey research in the United States: Roots and emergence, 1890–1960. Berkeley, CA: University of California Press.

Research Methods in Psychology Copyright © 2016 by University of Minnesota is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License, except where otherwise noted.



Survey Research

34 Overview of Survey Research

Learning Objectives

  • Define what survey research is, including its two important characteristics.
  • Describe several different ways that survey research can be used and give some examples.

What Is Survey Research?

Survey research is a quantitative and qualitative method with two important characteristics. First, the variables of interest are measured using self-reports (using questionnaires or interviews). In essence, survey researchers ask their participants (who are often called respondents in survey research) to report directly on their own thoughts, feelings, and behaviors. Second, considerable attention is paid to the issue of sampling. In particular, survey researchers have a strong preference for large random samples because they provide the most accurate estimates of what is true in the population. In fact, survey research may be the only approach in psychology in which random sampling is routinely used. Beyond these two characteristics, almost anything goes in survey research. Surveys can be long or short. They can be conducted in person, by telephone, through the mail, or over the Internet. They can be about voting intentions, consumer preferences, social attitudes, health, or anything else that it is possible to ask people about and receive meaningful answers. Although survey data are often analyzed using statistics, there are many questions that lend themselves to more qualitative analysis.

Most survey research is non-experimental. It is used to describe single variables (e.g., the percentage of voters who prefer one presidential candidate or another, the prevalence of schizophrenia in the general population, etc.) and also to assess statistical relationships between variables (e.g., the relationship between income and health). But surveys can also be used within experimental research. The study by Lerner and her colleagues is a good example. Their use of self-report measures and a large national sample identifies their work as survey research. But their manipulation of an independent variable (anger vs. fear) to assess its effect on a dependent variable (risk judgments) also identifies their work as experimental.

History and Uses of Survey Research

Survey research may have its roots in English and American “social surveys” conducted around the turn of the 20th century by researchers and reformers who wanted to document the extent of social problems such as poverty (Converse, 1987)[1]. By the 1930s, the US government was conducting surveys to document economic and social conditions in the country. The need to draw conclusions about the entire population helped spur advances in sampling procedures. At about the same time, several researchers who had already made a name for themselves in market research, studying consumer preferences for American businesses, turned their attention to election polling. A watershed event was the presidential election of 1936 between Alf Landon and Franklin Roosevelt. A magazine called Literary Digest conducted a survey by sending ballots (which were also subscription requests) to millions of Americans. Based on this “straw poll,” the editors predicted that Landon would win in a landslide. At the same time, the new pollsters were using scientific methods with much smaller samples to predict just the opposite—that Roosevelt would win in a landslide. In fact, one of them, George Gallup, publicly criticized the methods of Literary Digest before the election and all but guaranteed that his prediction would be correct. And of course it was, demonstrating the effectiveness of careful survey methodology. (We will consider the reasons that Gallup was right later in this chapter.) Gallup’s demonstration of the power of careful survey methods led later researchers to conduct local election surveys and, in 1948, the first national election survey by the Survey Research Center at the University of Michigan. This work eventually became the American National Election Studies (https://electionstudies.org/), a collaboration of Stanford University and the University of Michigan, and these studies continue today.

From market research and election polling, survey research made its way into several academic fields, including political science, sociology, and public health—where it continues to be one of the primary approaches to collecting new data. Beginning in the 1930s, psychologists made important advances in questionnaire design, including techniques that are still used today, such as the Likert scale. (See “What Is a Likert Scale?” in Section 7.2 “Constructing Survey Questionnaires”.) Survey research has a strong historical association with the social psychological study of attitudes, stereotypes, and prejudice. Early attitude researchers were also among the first psychologists to seek larger and more diverse samples than the convenience samples of university students that were routinely used in psychology (and still are).

Survey research continues to be important in psychology today. For example, survey data have been instrumental in estimating the prevalence of various mental disorders and identifying statistical relationships among those disorders and with various other factors. The National Comorbidity Survey is a large-scale mental health survey conducted in the United States (see http://www.hcp.med.harvard.edu/ncs). In just one part of this survey, nearly 10,000 adults were given a structured mental health interview in their homes in 2002 and 2003. Table 7.1 presents results on the lifetime prevalence of some anxiety, mood, and substance use disorders. (Lifetime prevalence is the percentage of the population that develops the problem sometime in their lifetime.) Obviously, this kind of information can be of great use both to basic researchers seeking to understand the causes and correlates of mental disorders as well as to clinicians and policymakers who need to understand exactly how common these disorders are.

And as the opening example makes clear, survey research can even be used as a data collection method within experimental research to test specific hypotheses about causal relationships between variables. Such studies, when conducted on large and diverse samples, can be a useful supplement to laboratory studies conducted on university students. Survey research is thus a flexible approach that can be used to study a variety of basic and applied research questions.

  • Converse, J. M. (1987). Survey research in the United States: Roots and emergence, 1890–1960. Berkeley, CA: University of California Press.

Survey research: A quantitative and qualitative method with two important characteristics: variables are measured using self-reports, and considerable attention is paid to the issue of sampling.

Respondents: Participants in a survey or study.

Research Methods in Psychology Copyright © 2019 by Rajiv S. Jhangiani, I-Chant A. Chiang, Carrie Cuttler, & Dana C. Leighton is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License, except where otherwise noted.



The Use of Research Methods in Psychological Research: A Systematised Review

Salomé Elizabeth Scholtz

1 Community Psychosocial Research (COMPRES), School of Psychosocial Health, North-West University, Potchefstroom, South Africa

Werner de Klerk

Leon T. de Beer

2 WorkWell Research Institute, North-West University, Potchefstroom, South Africa

Research methods play an imperative role in research quality as well as in educating young researchers; however, how they are applied is unclear, which can be detrimental to the field of psychology. This systematised review therefore aimed to determine which research methods are being used, how these methods are being used, and for what topics in the field. Our review of 999 articles from five journals over a period of 5 years indicated that psychology research is conducted in 10 topics, predominantly via quantitative research methods. Of these 10 topics, social psychology was the most popular. The remainder of the methodology used is also described. It was also found that articles lacked rigour and transparency in the methodology used, which has implications for replicability. In conclusion, this article provides an overview of all reported methodologies used in a sample of psychology journals. It highlights the popularity and application of methods and designs throughout the article sample as well as an unexpected lack of rigour with regard to most aspects of methodology. Possible sample bias should be considered when interpreting the results of this study. It is recommended that future research utilise the results of this study to determine the possible impact on the field of psychology as a science and to prompt further investigation into the use of research methods. The results should prompt future research into a lack of rigour and its implications for replication, the use of certain methods above others, publication bias, and the choice of sampling method.

Introduction

Psychology is an ever-growing and popular field (Gough and Lyons, 2016; Clay, 2017). Due to this growth and the need for science-based research on which to base health decisions (Perestelo-Pérez, 2013), the use of research methods in the broad field of psychology is an essential point of investigation (Stangor, 2011; Aanstoos, 2014). Research methods are therefore viewed as important tools used by researchers to collect data (Nieuwenhuis, 2016) and include the following: quantitative, qualitative, mixed method and multi method (Maree, 2016). Additionally, researchers also employ various types of literature reviews to address research questions (Grant and Booth, 2009). According to the literature, which research method is used and why is complex, as it depends on various factors that may include paradigm (O'Neil and Koekemoer, 2016), research question (Grix, 2002), or the skill and exposure of the researcher (Nind et al., 2015). How these research methods are employed is also difficult to discern, as research methods are often depicted as having fixed boundaries that are continuously crossed in research (Johnson et al., 2001; Sandelowski, 2011). Examples of this crossing include adding quantitative aspects to qualitative studies (Sandelowski et al., 2009), or stating that a study used a mixed-method design without the study having any characteristics of this design (Truscott et al., 2010).

The inappropriate use of research methods affects how students and researchers improve and utilise their research skills (Scott Jones and Goldring, 2015), how theories are developed (Ngulube, 2013), and the credibility of research results (Levitt et al., 2017). This, in turn, can be detrimental to the field (Nind et al., 2015), journal publication (Ketchen et al., 2008; Ezeh et al., 2010), and attempts to address public social issues through psychological research (Dweck, 2017). This is especially important given the now well-known replication crisis the field is facing (Earp and Trafimow, 2015; Hengartner, 2018).

Due to this lack of clarity on method use and the potential impact of inept use of research methods, the aim of this study was to explore the use of research methods in the field of psychology through a review of journal publications. Chaichanasakul et al. (2011) identify reviewing articles as an opportunity to examine the development, growth and progress of a research area and the overall quality of a journal. Studies such as Lee et al. (1999) as well as Bluhm et al. (2011), which reviewed qualitative methods, have attempted to synthesise the use of research methods and indicated the growth of qualitative research in American and European journals. Research has also focused on the use of research methods in specific sub-disciplines of psychology; for example, in the field of industrial and organisational psychology, Coetzee and Van Zyl (2014) found that South African publications tend to consist of cross-sectional quantitative research methods with underrepresented longitudinal studies. Qualitative studies were found to make up 21% of the articles published from 1995 to 2015 in a similar study by O'Neil and Koekemoer (2016). Other methods, such as mixed methods research in health psychology, have also reportedly been growing in popularity (O'Cathain, 2009).

A broad overview of the use of research methods in the field of psychology as a whole is, however, not available in the literature. Therefore, our research focused on answering what research methods are being used, how these methods are being used, and for what topics in practice (i.e., journal publications) in order to provide a general perspective of method use in psychology publications. We synthesised the collected data into the following format: research topic [areas of scientific discourse in a field or the current needs of a population (Bittermann and Fischer, 2018)], method [data-gathering tools (Nieuwenhuis, 2016)], sampling [elements chosen from a population to partake in research (Ritchie et al., 2009)], data collection [techniques and research strategy (Maree, 2016)], and data analysis [discovering information by examining bodies of data (Ktepi, 2016)]. A systematised review of recent articles (2013 to 2017) collected from five different journals in the field of psychological research was conducted.

Grant and Booth (2009) describe systematised reviews as the review of choice for post-graduate studies, which is employed using some elements of a systematic review and seldom more than one or two databases to catalogue studies after a comprehensive literature search. The aspects used in this systematised review that are similar to those of a systematic review were a full search within the chosen database and data produced in tabular form (Grant and Booth, 2009).

Sample sizes and timelines vary in systematised reviews (see Lowe and Moore, 2014; Pericall and Taylor, 2014; Barr-Walker, 2017). With no clear parameters identified in the literature (see Grant and Booth, 2009), the sample size of this study was determined by the purpose of the sample (Strydom, 2011), and time and cost constraints (Maree and Pietersen, 2016). Thus, a non-probability purposive sample (Ritchie et al., 2009) of the top five psychology journals from 2013 to 2017 was included in this research study. Per Lee (2015), the American Psychological Association (APA) recommends the use of the most up-to-date sources for data collection with consideration of the context of the research study. As this research study focused on the most recent trends in research methods used in the broad field of psychology, the identified time frame was deemed appropriate.

Psychology journals were only included if they formed part of the top five English journals in the miscellaneous psychology domain of the Scimago Journal and Country Rank (Scimago Journal & Country Rank, 2017). The Scimago Journal and Country Rank provides a yearly updated list of publicly accessible journal and country-specific indicators derived from the Scopus® database (Scopus, 2017b) by means of the Scimago Journal Rank (SJR) indicator, developed by Scimago from the Google PageRank™ algorithm (Scimago Journal & Country Rank, 2017). Scopus is the largest global database of abstracts and citations from peer-reviewed journals (Scopus, 2017a). The Scimago Journal and Country Rank list was developed to allow researchers to assess scientific domains, compare country rankings, and compare and analyse journals (Scimago Journal & Country Rank, 2017), which supported the aim of this research study. Additionally, the goals of the journals had to focus on topics in psychology in general with no preference for specific research methods, and the journals had to provide full-text access to articles.

The following top five journals in 2018 fell within the abovementioned inclusion criteria: (1) Australian Journal of Psychology, (2) British Journal of Psychology, (3) Europe's Journal of Psychology, (4) International Journal of Psychology and (5) Journal of Psychology Applied and Interdisciplinary.

Journals were excluded from this systematised review if no full-text versions of their articles were available, if journals explicitly stated a publication preference for certain research methods, or if the journal only published articles in a specific discipline of psychological research (for example, industrial psychology, clinical psychology etc.).

The researchers followed a procedure (see Figure 1) adapted from that of Ferreira et al. (2016) for systematised reviews. Data collection and categorisation commenced on 4 December 2017 and continued until 30 June 2019. All the data was systematically collected and coded manually (Grant and Booth, 2009) with an independent person acting as co-coder. Codes of interest included the research topic, method used, the design used, sampling method, and methodology (the method used for data collection and data analysis). These codes were derived from the wording in each article. Themes were created based on the derived codes and checked by the co-coder. Lastly, these themes were catalogued into a table as per the systematised review design.

Figure 1. Systematised review procedure.
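As a rough sketch of the coding and cataloguing step described above (the review itself was coded manually in a spreadsheet with a co-coder; the entries, column names, and theme mapping below are invented for illustration), articles could be logged with their derived codes and then grouped into themes for the tabular synthesis:

```python
import pandas as pd

# Invented coding log; the actual review manually coded 999 articles in Excel.
articles = pd.DataFrame([
    {"article_id": 1, "journal": "Journal A", "code": "attitudes", "method": "quantitative"},
    {"article_id": 2, "journal": "Journal B", "code": "memory",    "method": "quantitative"},
    {"article_id": 3, "journal": "Journal A", "code": "prejudice", "method": "qualitative"},
])

# Hypothetical mapping of derived codes onto broader themes (research topics).
code_to_theme = {
    "attitudes": "social psychology",
    "prejudice": "social psychology",
    "memory":    "cognitive psychology",
}
articles["theme"] = articles["code"].map(code_to_theme)

# Tabulate the number of articles per theme, as in the review's tabular synthesis.
print(articles.groupby("theme").size())
```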

According to Johnston et al. (2019), “literature screening, selection, and data extraction/analyses” (p. 7) are specifically tailored to the aim of a review. Therefore, the steps followed in a systematic review must be reported in a comprehensive and transparent manner. The chosen systematised design adhered to the rigour expected from systematic reviews with regard to a full search and data produced in tabular form (Grant and Booth, 2009). The rigorous application of the systematic review is therefore discussed in relation to these two elements.

Firstly, to ensure a comprehensive search, this research study promoted review transparency by following a clear protocol outlined according to each review stage before collecting data (Johnston et al., 2019). This protocol was similar to that of Ferreira et al. (2016) and was approved by three research committees/stakeholders and the researchers (Johnston et al., 2019). The eligibility criteria for article inclusion were based on the research question and clearly stated, and the process of inclusion was recorded on an electronic spreadsheet to create an evidence trail (Bandara et al., 2015; Johnston et al., 2019). Microsoft Excel spreadsheets are a popular tool for review studies and can increase the rigour of the review process (Bandara et al., 2015). Screening for appropriate articles for inclusion forms an integral part of a systematic review process (Johnston et al., 2019). This step was applied to two aspects of this research study: the choice of eligible journals and the articles to be included. Suitable journals were selected by the first author and reviewed by the second and third authors. Initially, all articles from the chosen journals were included. Then, by process of elimination, those irrelevant to the research aim, i.e., interview articles or discussions etc., were excluded.

To ensure rigorous data extraction, data was first extracted by one reviewer, and an independent person verified the results for completeness and accuracy (Johnston et al., 2019). The research question served as a guide for efficient, organised data extraction (Johnston et al., 2019). Data was categorised according to the codes of interest, along with article identifiers for audit trails such as authors, title and aims of articles. The categorised data was based on the aim of the review (Johnston et al., 2019) and synthesised in tabular form under methods used, how these methods were used, and for what topics in the field of psychology.

The initial search produced a total of 1,145 articles from the 5 journals identified. Inclusion and exclusion criteria resulted in a final sample of 999 articles (Figure 2). Articles were co-coded into 84 codes, from which 10 themes were derived (Table 1).

Figure 2. Journal article frequency.

Table 1. Codes used to form themes (research topics).

These 10 themes represent the topic section of our research question (Figure 3). All these topics, except for the final one, psychological practice, were found to concur with the research areas in psychology identified by Weiten (2010). These research areas were chosen to represent the derived codes as they provided broad definitions that allowed for clear, concise categorisation of the vast amount of data. Article codes were categorised under particular themes/topics if they adhered to the research area definitions created by Weiten (2010). It is important to note that these areas of research do not refer to specific disciplines in psychology, such as industrial psychology, but to broader fields that may encompass sub-interests of these disciplines.

Figure 3. Topic frequency (international sample).

In the case of developmental psychology, researchers conduct research into human development from childhood to old age. Social psychology includes research on behaviour governed by social drivers. Researchers in the field of educational psychology study how people learn and the best way to teach them. Health psychology aims to determine the effect of psychological factors on physiological health. Physiological psychology, on the other hand, looks at the influence of physiological aspects on behaviour. Experimental psychology, which is not the only theme that uses experimental research, focuses on the traditional core topics of psychology (for example, sensation). Cognitive psychology studies the higher mental processes. Psychometrics is concerned with measuring capacity or behaviour. Personality research aims to assess and describe consistency in human behaviour (Weiten, 2010). The final theme of psychological practice refers to the experiences, techniques, and interventions employed by practitioners, researchers, and academia in the field of psychology.

Articles under these themes were further subdivided into methodologies: method, sampling, design, data collection, and data analysis. The categorisation was based on information stated in the articles and not inferred by the researchers. Data were compiled into two sets of results presented in this article. The first set addresses the aim of this study from the perspective of the topics identified. The second set of results represents a broad overview of the results from the perspective of the methodology employed. The second set of results is discussed in this article, while the first set is presented in table format. The discussion thus provides a broad overview of method use in psychology (across all themes), while the table format provides readers with in-depth insight into the methods used in the individual themes identified. We believe that presenting the data from both perspectives allows readers a broad understanding of the results. Due to the large amount of information that made up our results, we followed Cichocka and Jost (2014) in simplifying our results. Please note that the numbers indicated in the tables in terms of methodology differ from the total number of articles: some articles employed more than one method/sampling technique/design/data collection method/data analysis in their studies.

What follows are the results for what methods are used, how these methods are used, and which topics in psychology they are applied to. Percentages are reported to the second decimal in order to highlight small differences in the occurrence of methodology.

Firstly, with regard to the research methods used, our results show that researchers are more likely to use quantitative research methods (90.22%) compared to all other research methods. Qualitative research was the second most common research method but only made up about 4.79% of the general method usage. Reviews occurred almost as much as qualitative studies (3.91%), as the third most popular method. Mixed-methods research studies (0.98%) occurred across most themes, whereas multi-method research was indicated in only one study and amounted to 0.10% of the methods identified. The specific use of each method in the topics identified is shown in Table 2 and Figure 4.

Table 2. Research methods in psychology.

Figure 4. Research method frequency in topics.
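One way percentages like those above could be computed from a coded spreadsheet is sketched below; the counts are invented toy numbers, not the review's data, and serve only to show the calculation of relative frequencies.

```python
import pandas as pd

# Invented counts of coded research methods (toy numbers, not the review's data).
method_counts = pd.Series({
    "quantitative":  90,
    "qualitative":    5,
    "review":         4,
    "mixed methods":  1,
})

# Relative frequency of each method, expressed as a percentage of all coded methods.
percentages = method_counts / method_counts.sum() * 100
print(percentages.round(2))
```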

Secondly, in the case of how these research methods are employed, our study indicated the following.

Sampling: 78.34% of the studies in the collected articles did not specify a sampling method. From the remainder of the studies, 13 types of sampling methods were identified. These sampling methods included broad categorisations of a sample as, for example, a probability or non-probability sample. General samples of convenience were the methods most likely to be applied (10.34%), followed by random sampling (3.51%), snowball sampling (2.73%), and purposive (1.37%) and cluster sampling (1.27%). The remainder of the sampling methods occurred to a more limited extent (0–1.0%). See Table 3 and Figure 5 for the sampling methods employed in each topic.

Table 3. Sampling use in the field of psychology.

Figure 5. Sampling method frequency in topics.

Designs were categorised based on the articles' statement thereof. Therefore, it is important to note that, in the case of quantitative studies, non-experimental designs (25.55%) were often indicated due to a lack of experiments and any other indication of design, which, according to Laher (2016), is a reasonable categorisation. Non-experimental designs should thus be compared with experimental designs only in the description of data, as it could include the use of correlational/cross-sectional designs, which were not overtly stated by the authors. For the remainder of the research methods, “not stated” (7.12%) was assigned to articles without design types indicated.

From the 36 identified designs, the most popular designs were cross-sectional (23.17%) and experimental (25.64%), which concurred with the high number of quantitative studies. Longitudinal studies (3.80%), the third most popular design, were used in both quantitative and qualitative studies. Qualitative designs consisted of ethnography (0.38%), interpretative phenomenological designs/phenomenology (0.28%), as well as narrative designs (0.28%). Studies that employed the review method were mostly categorised as “not stated,” with the most often stated review design being systematic reviews (0.57%). The few mixed method studies employed exploratory, explanatory (0.09%), and concurrent designs (0.19%), with some studies referring to separate designs for the qualitative and quantitative methods. The one study that identified itself as a multi-method study used a longitudinal design. Please see Table 4 and Figure 6 for how these designs were employed in each specific topic.

Table 4. Design use in the field of psychology.

Figure 6. Design frequency in topics.

Data collection and analysis: data collection included 30 methods, with the method most often employed being questionnaires (57.84%). The experimental task (16.56%) was the second most preferred collection method, which included established or unique tasks designed by the researchers. Cognitive ability tests (6.84%) were also regularly used, along with various forms of interviewing (7.66%). Table 5 and Figure 7 represent data collection use in the various topics. Data analysis consisted of 3,857 occurrences of data analysis categorised into ±188 data analysis techniques, shown in Table 6 and Figures 1–7. Descriptive statistics were the most commonly used (23.49%), along with correlational analysis (17.19%). When using a qualitative method, researchers generally employed thematic analysis (0.52%) or different forms of analysis that led to coding and the creation of themes. Review studies presented few data analysis methods, with most studies categorising their results. Mixed method and multi-method studies followed the analysis methods identified for the qualitative and quantitative studies included.

Table 5. Data collection in the field of psychology.

Figure 7. Data collection frequency in topics.

Table 6. Data analysis in the field of psychology.
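The most commonly reported analysis pairing in the sample, descriptive statistics followed by correlational analysis, can be illustrated with a minimal, hypothetical example; the questionnaire scores below are randomly generated and do not come from any reviewed study.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)

# Randomly generated scores for 100 hypothetical respondents on two questionnaire scales.
scores = pd.DataFrame({
    "wellbeing": rng.normal(loc=3.5, scale=0.6, size=100),
    "stress":    rng.normal(loc=2.8, scale=0.7, size=100),
})

print(scores.describe())                                  # descriptive statistics
print(scores["wellbeing"].corr(scores["stress"]))         # Pearson correlation coefficient
```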

Results of the topics researched in psychology can be seen in the tables, as previously stated in this article. It is noteworthy that, of the 10 topics, social psychology accounted for 43.54% of the studies, with cognitive psychology the second most popular research topic at 16.92%. The remainder of the topics only occurred in 4.0–7.0% of the articles considered. A list of the included 999 articles is available under the section “View Articles” on the following website: https://methodgarden.xtrapolate.io/. This website was created by Scholtz et al. (2019) to visually present a research framework based on this article's results.

This systematised review categorised full-length articles from five international journals across a span of 5 years to provide insight into the use of research methods in the field of psychology. Results indicated what methods are used, how these methods are being used, and for what topics (why) in the included sample of articles. The results should be seen as providing insight into method use and by no means a comprehensive representation of the aforementioned aim, due to the limited sample. To our knowledge, this is the first research study to address this topic in this manner. Our discussion attempts to promote a productive way forward in terms of the key results for method use in psychology, especially in the field of academia (Holloway, 2008).

With regard to the methods used, our data stayed true to the literature, finding only common research methods (Grant and Booth, 2009; Maree, 2016) that varied in the degree to which they were employed. Quantitative research was found to be the most popular method, as indicated by the literature (Breen and Darlaston-Jones, 2010; Counsell and Harlow, 2017) and previous studies in specific areas of psychology (see Coetzee and Van Zyl, 2014). Its long history as the first research method (Leech et al., 2007) in the field of psychology as well as researchers' current application of mathematical approaches in their studies (Toomela, 2010) might contribute to its popularity today. Whatever the case may be, our results show that, despite the growth in qualitative research (Demuth, 2015; Smith and McGannon, 2018), quantitative research remains the first choice for article publication in these journals, despite the included journals indicating openness to articles that apply any research method. This finding may be due to qualitative research still being seen as a new method (Burman and Whelan, 2011) or reviewers' standards being higher for qualitative studies (Bluhm et al., 2011). Future research is encouraged into possible bias in the publication of research methods; additionally, further investigation with a different sample into the proclaimed growth of qualitative research may also provide different results.

Review studies were found to surpass multi-method and mixed method studies. To this effect, Grant and Booth (2009) state that increased awareness, journal calls for contributions, as well as its efficiency in procuring research funds all promote the popularity of reviews. The low frequency of mixed method studies contradicts the view in the literature that it is the third most utilised research method (Tashakkori and Teddlie, 2003). Its low occurrence in this sample could be due to opposing views on mixing methods (Gunasekare, 2015), to authors preferring to publish in mixed method journals when using this method, or to its relative novelty (Ivankova et al., 2016). Despite its low occurrence, the application of the mixed methods design in articles was methodologically clear in all cases, which was not the case for the remainder of research methods.

Additionally, a substantial number of studies used a combination of methodologies that are not mixed or multi-method studies. According to the literature, perceived fixed boundaries are often set aside, as confirmed by this result, in order to investigate the aim of a study, which could create a new and helpful way of understanding the world (Gunasekare, 2015). According to Toomela (2010), this is not unheard of and could be considered a form of “structural systemic science,” as in the case of qualitative methodology (observation) applied in quantitative studies (experimental design), for example. Based on this result, further research into this phenomenon as well as its implications for research methods such as multi and mixed methods is recommended.

Discerning how these research methods were applied presented some difficulty. In the case of sampling, most studies—regardless of method—did mention some form of inclusion and exclusion criteria, but no definite sampling method. This result, along with the fact that samples often consisted of students from the researchers' own academic institutions, can contribute to the literature and debates among academics (Peterson and Merunka, 2014; Laher, 2016). Samples of convenience and students as participants especially raise questions about the generalisability and applicability of results (Peterson and Merunka, 2014). Attention to sampling is important, as inappropriate sampling can debilitate the legitimacy of interpretations (Onwuegbuzie and Collins, 2017). Future investigation into the possible implications of this reported popular use of convenience samples for the field of psychology, as well as the reasons for this use, could provide interesting insight and is encouraged by this study.

Additionally, as indicated in Table 6, articles seldom report the research designs used, which highlights the pressing issue of a lack of rigour in the included sample. Rigour with regard to the applied empirical method is imperative in promoting psychology as a science (American Psychological Association, 2020). Omitting parts of the research process in publication, when it could have been used to inform others' research skills, should be questioned, and the influence on the process of replicating results should be considered. Publications are often rejected due to a lack of rigour in the applied method and designs (Fonseca, 2013; Laher, 2016), calling for increased clarity and knowledge of method application. Replication is a critical part of any field of scientific research and requires the “complete articulation” of the study methods used (Drotar, 2010, p. 804). The lack of thorough description could be explained by the requirements of certain journals to only report on certain aspects of a research process, especially with regard to the applied design (Laher, 2016). However, naming aspects such as sampling and designs is a requirement according to the APA's Journal Article Reporting Standards (JARS-Quant) (Appelbaum et al., 2018). With very little information on how a study was conducted, authors lose a valuable opportunity to enhance research validity, enrich the knowledge of others, and contribute to the growth of psychology and methodology as a whole. In the case of this research study, it also restricted our results to only the reported samples and designs, which indicated a preference for certain designs, such as cross-sectional designs for quantitative studies.

Data collection and analysis were for the most part clearly stated. A key result was the versatile use of questionnaires. Researchers would apply a questionnaire in various ways, for example in questionnaire interviews, online surveys, and written questionnaires across most research methods. This may highlight a trend for future research.

With regard to the topics these methods were employed for, our research study found a new field named “psychological practice.” This result may show the growing consciousness of researchers as part of the research process (Denzin and Lincoln, 2003), psychological practice, and knowledge generation. The most popular of these topics was social psychology, which is generously covered in journals and by learned societies, a testament to the institutional support and richness social psychology has in the field of psychology (Chryssochoou, 2015). The APA's overview of 2018 trends in psychology also identifies an increased focus in psychology on how social determinants influence people's health (Deangelis, 2017).

This study was not without limitations, and the following should be taken into account. Firstly, this study used a sample of five specific journals to address the aim of the research study; despite the journals' general aims (as stated on their websites), this selection signified a bias towards the research methods published in these specific journals only and limited generalisability. A broader sample of journals over a different period of time, or a single journal over a longer period of time, might provide different results. A second limitation is the use of Excel spreadsheets and an electronic system to log articles, which was a manual process and therefore left room for error (Bandara et al., 2015). To address this potential issue, co-coding was performed to reduce error. Lastly, this article categorised data based on the information presented in the article sample; there was no interpretation of what methodology could have been applied or whether the methods stated adhered to the criteria for those methods. Thus, the large number of articles that did not clearly indicate a research method or design could influence the results of this review. However, this in itself was also a noteworthy result. Future research could review the research methods of a broader sample of journals with an interpretive review tool that increases rigour. Additionally, the authors encourage the future use of systematised review designs as a way to promote a concise procedure in applying this design.

Our research study presented the use of research methods in published articles in the field of psychology as well as recommendations for future research based on these results. Insight was gained into the complex questions identified in the literature regarding what methods are used, how these methods are being used, and for what topics (why). This sample preferred quantitative methods, used convenience sampling, and presented a lack of rigorous accounts of the remaining methodologies. All methodologies that were clearly indicated in the sample were tabulated to allow researchers insight into the general use of methods and not only the most frequently used methods. The lack of rigorous accounts of research methods in articles was represented in depth for each step in the research process and can be of vital importance in addressing the current replication crisis within the field of psychology. Recommendations for future research aimed to motivate research into the practical implications of the results for psychology, for example, publication bias and the use of convenience samples.

Ethics Statement

This study was cleared by the North-West University Health Research Ethics Committee: NWU-00115-17-S1.

Author Contributions

All authors listed have made a substantial, direct and intellectual contribution to the work, and approved it for publication.

Conflict of Interest

The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.

  • Aanstoos C. M. (2014). Psychology. Available online at: http://eds.a.ebscohost.com.nwulib.nwu.ac.za/eds/detail/detail?sid=18de6c5c-2b03-4eac-94890145eb01bc70%40sessionmgr4006&vid$=$1&hid$=$4113&bdata$=$JnNpdGU9ZWRzL~WxpdmU%3d#AN$=$93871882&db$=$ers
  • American Psychological Association (2020). Science of Psychology. Available online at: https://www.apa.org/action/science/
  • Appelbaum M., Cooper H., Kline R. B., Mayo-Wilson E., Nezu A. M., Rao S. M. (2018). Journal article reporting standards for quantitative research in psychology: the APA Publications and Communications Board task force report. Am. Psychol. 73:3. 10.1037/amp0000191
  • Bandara W., Furtmueller E., Gorbacheva E., Miskon S., Beekhuyzen J. (2015). Achieving rigor in literature reviews: insights from qualitative data analysis and tool-support. Commun. Ass. Inform. Syst. 37, 154–204. 10.17705/1CAIS.03708
  • Barr-Walker J. (2017). Evidence-based information needs of public health workers: a systematized review. J. Med. Libr. Assoc. 105, 69–79. 10.5195/JMLA.2017.109
  • Bittermann A., Fischer A. (2018). How to identify hot topics in psychology using topic modeling. Z. Psychol. 226, 3–13. 10.1027/2151-2604/a000318
  • Bluhm D. J., Harman W., Lee T. W., Mitchell T. R. (2011). Qualitative research in management: a decade of progress. J. Manage. Stud. 48, 1866–1891. 10.1111/j.1467-6486.2010.00972.x
  • Breen L. J., Darlaston-Jones D. (2010). Moving beyond the enduring dominance of positivism in psychological research: implications for psychology in Australia. Aust. Psychol. 45, 67–76. 10.1080/00050060903127481
  • Burman E., Whelan P. (2011). Problems in/of Qualitative Research. Maidenhead: Open University Press/McGraw Hill.
  • Chaichanasakul A., He Y., Chen H., Allen G. E. K., Khairallah T. S., Ramos K. (2011). Journal of Career Development: a 36-year content analysis (1972–2007). J. Career. Dev. 38, 440–455. 10.1177/0894845310380223
  • Chryssochoou X. (2015). Social psychology. Inter. Encycl. Soc. Behav. Sci. 22, 532–537. 10.1016/B978-0-08-097086-8.24095-6
  • Cichocka A., Jost J. T. (2014). Stripped of illusions? Exploring system justification processes in capitalist and post-Communist societies. Inter. J. Psychol. 49, 6–29. 10.1002/ijop.12011
  • Clay R. A. (2017). Psychology is More Popular Than Ever. Monitor on Psychology: Trends Report. Available online at: https://www.apa.org/monitor/2017/11/trends-popular
  • Coetzee M., Van Zyl L. E. (2014). A review of a decade's scholarly publications (2004–2013) in the South African Journal of Industrial Psychology. SA. J. Psychol. 40, 1–16. 10.4102/sajip.v40i1.1227
  • Counsell A., Harlow L. (2017). Reporting practices and use of quantitative methods in Canadian journal articles in psychology. Can. Psychol. 58, 140–147. 10.1037/cap0000074
  • Deangelis T. (2017). Targeting Social Factors That Undermine Health. Monitor on Psychology: Trends Report. Available online at: https://www.apa.org/monitor/2017/11/trend-social-factors
  • Demuth C. (2015). New directions in qualitative research in psychology. Integr. Psychol. Behav. Sci. 49, 125–133. 10.1007/s12124-015-9303-9
  • Denzin N. K., Lincoln Y. (2003). The Landscape of Qualitative Research: Theories and Issues, 2nd Edn. London: Sage.
  • Drotar D. (2010). A call for replications of research in pediatric psychology and guidance for authors. J. Pediatr. Psychol. 35, 801–805. 10.1093/jpepsy/jsq049
  • Dweck C. S. (2017). Is psychology headed in the right direction? Yes, no, and maybe. Perspect. Psychol. Sci. 12, 656–659. 10.1177/1745691616687747
  • Earp B. D., Trafimow D. (2015). Replication, falsification, and the crisis of confidence in social psychology. Front. Psychol. 6:621. 10.3389/fpsyg.2015.00621
  • Ezeh A. C., Izugbara C. O., Kabiru C. W., Fonn S., Kahn K., Manderson L., et al. (2010). Building capacity for public and population health research in Africa: the consortium for advanced research training in Africa (CARTA) model. Glob. Health Action 3:5693. 10.3402/gha.v3i0.5693
  • Ferreira A. L. L., Bessa M. M. M., Drezett J., De Abreu L. C. (2016). Quality of life of the woman carrier of endometriosis: systematized review. Reprod. Clim. 31, 48–54. 10.1016/j.recli.2015.12.002
  • Fonseca M. (2013). Most Common Reasons for Journal Rejections. Available online at: http://www.editage.com/insights/most-common-reasons-for-journal-rejections
  • Gough B., Lyons A. (2016). The future of qualitative research in psychology: accentuating the positive. Integr. Psychol. Behav. Sci. 50, 234–243. 10.1007/s12124-015-9320-8
  • Grant M. J., Booth A. (2009). A typology of reviews: an analysis of 14 review types and associated methodologies. Health Info. Libr. J. 26, 91–108. 10.1111/j.1471-1842.2009.00848.x
  • Grix J. (2002). Introducing students to the generic terminology of social research. Politics 22, 175–186. 10.1111/1467-9256.00173
  • Gunasekare U. L. T. P. (2015). Mixed research method as the third research paradigm: a literature review. Int. J. Sci. Res. 4, 361–368. Available online at: https://ssrn.com/abstract=2735996
  • Hengartner M. P. (2018). Raising awareness for the replication crisis in clinical psychology by focusing on inconsistencies in psychotherapy research: how much can we rely on published findings from efficacy trials? Front. Psychol. 9:256. 10.3389/fpsyg.2018.00256
  • Holloway W. (2008). Doing intellectual disagreement differently. Psychoanal. Cult. Soc. 13, 385–396. 10.1057/pcs.2008.29
  • Ivankova N. V., Creswell J. W., Plano Clark V. L. (2016). Foundations and approaches to mixed methods research, in First Steps in Research, 2nd Edn, ed Maree K. (Pretoria: Van Schaik Publishers), 306–335.
  • Johnson M., Long T., White A. (2001). Arguments for British pluralism in qualitative health research. J. Adv. Nurs. 33, 243–249. 10.1046/j.1365-2648.2001.01659.x
  • Johnston A., Kelly S. E., Hsieh S. C., Skidmore B., Wells G. A. (2019). Systematic reviews of clinical practice guidelines: a methodological guide. J. Clin. Epidemiol. 108, 64–72. 10.1016/j.jclinepi.2018.11.030
  • Ketchen D. J., Jr., Boyd B. K., Bergh D. D. (2008). Research methodology in strategic management: past accomplishments and future challenges. Organ. Res. Methods 11, 643–658. 10.1177/1094428108319843
  • Ktepi B. (2016). Data Analytics (DA). Available online at: https://eds-b-ebscohost-com.nwulib.nwu.ac.za/eds/detail/detail?vid=2&sid=24c978f0-6685-4ed8-ad85-fa5bb04669b9%40sessionmgr101&bdata=JnNpdGU9ZWRzLWxpdmU%3d#AN=113931286&db=ers
  • Laher S. (2016). Ostinato rigore: establishing methodological rigour in quantitative research. S. Afr. J. Psychol. 46, 316–327. 10.1177/0081246316649121
  • Lee C. (2015). The Myth of the Off-Limits Source. Available online at: http://blog.apastyle.org/apastyle/research/
  • Lee T. W., Mitchell T. R., Sablynski C. J. (1999). Qualitative research in organizational and vocational psychology, 1979–1999. J. Vocat. Behav. 55, 161–187. 10.1006/jvbe.1999.1707
  • Leech N. L., Onwuegbuzie A. J. (2007). A typology of mixed methods research designs. Qual. Quant. 43, 265–275. 10.1007/s11135-007-9105-3
  • Levitt H. M., Motulsky S. L., Wertz F. J., Morrow S. L., Ponterotto J. G. (2017). Recommendations for designing and reviewing qualitative research in psychology: promoting methodological integrity. Qual. Psychol. 4, 2–22. 10.1037/qup0000082
  • Lowe S. M., Moore S. (2014). Social networks and female reproductive choices in the developing world: a systematized review. Rep. Health 11:85. 10.1186/1742-4755-11-85
  • Maree K. (2016). Planning a research proposal, in First Steps in Research, 2nd Edn, ed Maree K. (Pretoria: Van Schaik Publishers), 49–70.
  • Maree K., Pietersen J. (2016). Sampling, in First Steps in Research, 2nd Edn, ed Maree K. (Pretoria: Van Schaik Publishers), 191–202.
  • Ngulube P. (2013). Blending qualitative and quantitative research methods in library and information science in sub-Saharan Africa. ESARBICA J. 32, 10–23. Available online at: http://hdl.handle.net/10500/22397
  • Nieuwenhuis J. (2016). Qualitative research designs and data-gathering techniques, in First Steps in Research, 2nd Edn, ed Maree K. (Pretoria: Van Schaik Publishers), 71–102.
  • Nind M., Kilburn D., Wiles R. (2015). Using video and dialogue to generate pedagogic knowledge: teachers, learners and researchers reflecting together on the pedagogy of social research methods. Int. J. Soc. Res. Methodol. 18, 561–576. 10.1080/13645579.2015.1062628
  • O'Cathain A. (2009). Editorial: mixed methods research in the health sciences—a quiet revolution. J. Mix. Methods 3, 1–6. 10.1177/1558689808326272
  • O'Neil S., Koekemoer E. (2016). Two decades of qualitative research in psychology, industrial and organisational psychology and human resource management within South Africa: a critical review. SA J. Indust. Psychol. 42, 1–16. 10.4102/sajip.v42i1.1350
  • Onwuegbuzie A. J., Collins K. M. (2017). The role of sampling in mixed methods research enhancing inference quality. Köln Z Soziol. 2, 133–156. 10.1007/s11577-017-0455-0
  • Perestelo-Pérez L. (2013). Standards on how to develop and report systematic reviews in psychology and health. Int. J. Clin. Health Psychol. 13, 49–57. 10.1016/S1697-2600(13)70007-3
  • Pericall L. M. T., Taylor E. (2014). Family function and its relationship to injury severity and psychiatric outcome in children with acquired brain injury: a systematized review. Dev. Med. Child Neurol. 56, 19–30. 10.1111/dmcn.12237
  • Peterson R. A., Merunka D. R. (2014). Convenience samples of college students and research reproducibility. J. Bus. Res. 67, 1035–1041. 10.1016/j.jbusres.2013.08.010
  • Ritchie J., Lewis J., Elam G. (2009). Designing and selecting samples, in Qualitative Research Practice: A Guide for Social Science Students and Researchers, 2nd Edn, eds Ritchie J., Lewis J. (London: Sage), 1–23.
  • Sandelowski M. (2011). When a cigar is not just a cigar: alternative perspectives on data and data analysis. Res. Nurs. Health 34, 342–352. 10.1002/nur.20437
  • Sandelowski M., Voils C. I., Knafl G. (2009). On quantitizing. J. Mix. Methods Res. 3, 208–222. 10.1177/1558689809334210
  • Scholtz S. E., De Klerk W., De Beer L. T. (2019). A data generated research framework for conducting research methods in psychological research.
  • Scimago Journal & Country Rank (2017). Available online at: http://www.scimagojr.com/journalrank.php?category=3201&year=2015
  • Scopus (2017a). About Scopus . Available online at: https://www.scopus.com/home.uri (accessed February 01, 2017).
  • Scopus (2017b). Document Search . Available online at: https://www.scopus.com/home.uri (accessed February 01, 2017).
  • Scott Jones J., Goldring J. E. (2015). ‘I' m not a quants person'; key strategies in building competence and confidence in staff who teach quantitative research methods . Int. J. Soc. Res. Methodol. 18 , 479–494. 10.1080/13645579.2015.1062623 [ CrossRef ] [ Google Scholar ]
  • Smith B., McGannon K. R. (2018). Developing rigor in quantitative research: problems and opportunities within sport and exercise psychology . Int. Rev. Sport Exerc. Psychol. 11 , 101–121. 10.1080/1750984X.2017.1317357 [ CrossRef ] [ Google Scholar ]
  • Stangor C. (2011). Introduction to Psychology . Available online at: http://www.saylor.org/books/
  • Strydom H. (2011). Sampling in the quantitative paradigm , in Research at Grass Roots; For the Social Sciences and Human Service Professions , 4th Edn, eds de Vos A. S., Strydom H., Fouché C. B., Delport C. S. L. (Pretoria: Van Schaik Publishers; ), 221–234. [ Google Scholar ]
  • Tashakkori A., Teddlie C. (2003). Handbook of Mixed Methods in Social & Behavioural Research . Thousand Oaks, CA: SAGE publications. [ Google Scholar ]
  • Toomela A. (2010). Quantitative methods in psychology: inevitable and useless . Front. Psychol. 1 :29. 10.3389/fpsyg.2010.00029 [ PMC free article ] [ PubMed ] [ CrossRef ] [ Google Scholar ]
  • Truscott D. M., Swars S., Smith S., Thornton-Reid F., Zhao Y., Dooley C., et al.. (2010). A cross-disciplinary examination of the prevalence of mixed methods in educational research: 1995–2005 . Int. J. Soc. Res. Methodol. 13 , 317–328. 10.1080/13645570903097950 [ CrossRef ] [ Google Scholar ]
  • Weiten W. (2010). Psychology Themes and Variations . Belmont, CA: Wadsworth. [ Google Scholar ]


How To Write a Good Survey for Psychological Research

  • By Cliff Stamp, BS Psychology, MS Rehabilitation Counseling

Surveys are a common and powerful tool in psychological research. They are an essential data collection method, gathering self-report data directly from participants. Surveys follow two patterns: the questionnaire method and the structured interview. The questionnaire method is the more common of the two, with participants completing the survey without the researcher present. In a structured interview, the researcher is present and asks the questions.

Using Surveys for Psychological Research

Researchers use surveys to investigate the opinions, behaviors, demographics, and other characteristics of a group of people. The demographic information collected can be brief or exhaustive, with items such as ethnicity, sex, religion, and political affiliation among the most commonly collected. Hypothetical situations are also common survey questions, with researchers asking respondents what they would do in particular situations.

Surveys have many advantages. They can be administered quickly, with information collected over the phone, through the web, in person, or by mail, and they allow a great deal of data to be gathered in a short time. For surveys to work well, however, they must be constructed correctly. Surveys must have high reliability and validity. Reliability refers to a survey's ability to produce the same results across multiple administrations. In general terms, validity refers to a survey's ability to measure what it is supposed to measure. A survey can be reliable yet invalid, producing consistent but incorrect results; a valid survey, by contrast, will also produce reliable results. Reliability and validity are established through intensive testing and statistical analysis. These procedures help ensure that surveys produce data that can properly be used to evaluate hypotheses.
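
As an illustration of how reliability can be checked statistically, the sketch below computes Cronbach's alpha, one common index of internal-consistency reliability, for a matrix of numerically coded survey responses. This is a minimal, hypothetical example rather than part of the original article: the function, the NumPy dependency, and the toy data are all assumptions. Test-retest reliability could be checked in a similar spirit by correlating scores from two administrations of the same survey.

```python
import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    """Internal-consistency reliability for an (n_respondents x n_items) matrix."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]                              # number of items
    item_variances = scores.var(axis=0, ddof=1)      # variance of each item
    total_variance = scores.sum(axis=1).var(ddof=1)  # variance of the summed scores
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

# Toy data: 4 respondents answering 3 items coded on a 1-5 scale.
responses = np.array([[4, 5, 4],
                      [2, 2, 3],
                      [5, 4, 5],
                      [3, 3, 3]])
print(round(cronbach_alpha(responses), 2))  # values near 1 indicate high reliability
```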

Writing Great Psychology Surveys

Surveys, also called questionnaires, have many advantages, but they carry some potential problems too. People may be unduly influenced by the way questions are worded, the order in which response choices are presented, and even the nature of the question itself. These extraneous influences limit the reliability and validity of a survey, which is why the best psychology surveys are crafted to avoid them.

To avoid these issues, make sure you eliminate biased language from your surveys. For example, the phrase "car wreck" may imply a serious collision, whereas "automobile accident" is much more neutral. Be aware of cultural and regional factors if you plan to make wide generalizations from your sampling.

All psychological research relies on researchers having a clear idea of what they are investigating and how those questions will be measured. The process of turning big-picture concepts into testable measures, and into questions that can actually be asked, is called operationalization. Operationalizing the research question and every survey item is an essential part of the process.

When it comes to wording, some essential rules to follow are:

  • Write each question in simple, easy-to-understand language. A good rule of thumb is to write questions at a 6th- to 8th-grade reading level.
  • Ask your question directly, using unambiguous words. Be careful with words like "very," "many," and "a lot"; they can introduce ambiguity.
  • Avoid dual, or double-barreled, questions. A survey item such as "This product worked well and was easy to use" is double-barreled because a respondent may have two very different answers to its two parts.
  • Allow for a "does not apply" or "don't know" response, but be aware that these options aren't always needed. For example, "How easy was it to make your purchase?" wouldn't require a "does not apply" option.

Questionnaire design and content are crucial. Each psychology survey question must focus on the variable or variables you've chosen to study. Survey design needs to be mindful of the following factors:

  • Keep surveys fairly brief, taking no more than 15 minutes to complete. The longer a survey is, the less carefully people think about their responses; they rush through just to get the survey finished.
  • Use a Likert-type scale as often as possible for measures of agreement, satisfaction, and approval. Likert scales typically range from 5 to 7 points, with a middle "neutral" position, and they allow fine-grained statistical analysis of the data. Although there is occasional controversy about allowing middle positions ("slightly agree" or "slightly disagree"), research indicates that middle positions prevent over-polarized choices.
  • Keep coding consistent. All survey responses are coded numerically for analysis, and every question should be coded the same way: for example, the most positive outcome on each question always earns 5 points (or however many points you choose). If "strongly disagree" accrues a score of 1 on one question, all "strongly disagree" responses should be coded as 1. Consistent coding lets statistics be generated from a survey rapidly, which is a big help in drawing conclusions (a short coding sketch follows this list).
  • Closed-ended questions tend to be easier and faster to score and quantify than open-ended questions. Open-ended questions, on the other hand, give more complete information but require more complex analysis.
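
To make the coding rule above concrete, here is a minimal, hypothetical sketch that maps verbal response labels to numbers so that the most positive response is always worth 5 points regardless of how an individual question is worded. The label sets are invented for illustration.

```python
# Each question's labels are listed from least to most positive, so position in
# the list determines the numeric code and the most positive answer is always 5.
SCALES = {
    "agreement":    ["strongly disagree", "disagree", "neither agree nor disagree",
                     "agree", "strongly agree"],
    "satisfaction": ["very dissatisfied", "dissatisfied", "neutral",
                     "satisfied", "very satisfied"],
}

def code_response(scale: str, label: str) -> int:
    """Convert a verbal label to its numeric code (1 = least positive, 5 = most)."""
    return SCALES[scale].index(label.strip().lower()) + 1

print(code_response("agreement", "Strongly agree"))   # 5
print(code_response("satisfaction", "Dissatisfied"))  # 2
```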

Steps in Carrying Out a Psychological Survey

  • Define your variables. This is part of the operationalization of your study: you need a clear, quantifiable definition for each variable in your survey.
  • Develop a general hypothesis. Your hypothesis is a testable statement about the situation you are studying, and it must be paired with a null hypothesis: a prediction that the survey will yield no significant associations between your tested variables. If your study does show significant relationships, you reject the null hypothesis. This becomes crucial in the statistical analysis phase, after all your survey data have been collected.
  • Perform a literature review. A literature review is a thorough investigation of the research about your hypothesis that has been published in scholarly outlets, such as professional journals. A thorough literature review helps researchers avoid pitfalls while pointing out ways to improve their psychology survey topics.
  • Design the survey. A survey design must be tailored to the needs of the study, particularly the nature of the variables.
  • Choose your participants. Whom do you want to study, and what do you want to learn about that population? A portion of a population is called a sample, and choosing a sample is a topic of its own. The larger a sample is, the more accurately it reflects its population and the more readily results can be generalized; however, it is hard, if not impossible, to survey tens of thousands of people at once. Sampling lets researchers gather information that can then be analyzed with statistical methods to make statements about populations.
  • Conduct the survey. Consider how you are going to administer it: phone surveys and internet-based surveys work differently from in-person interviews, and in-person surveys require a precise, standardized way of interacting with the people taking them.
  • Analyze the results. The first pass through a data set is done with statistical software; interesting relationships can then be highlighted and examined. Keep in mind that a single survey does not have a great deal of generalizability. If a study investigates a broad demographic, for example political attitudes among African-American men aged 18 to 25, a sample of 500 makes it problematic to generalize the findings very far (a minimal analysis sketch follows this list).
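
To make the sampling and analysis steps concrete, here is a minimal sketch (not part of the original article) that draws a simple random sample from a hypothetical sampling frame and then tests the null hypothesis that two coded survey variables are unrelated. The variable names, the simulated data, and the SciPy dependency are assumptions for illustration only.

```python
import random
import numpy as np
from scipy import stats

# Hypothetical sampling frame of 100,000 population members, identified by ID.
frame = list(range(100_000))
sample_ids = random.sample(frame, k=500)   # simple random sample of 500 respondents

# Simulated coded responses standing in for the real survey data.
rng = np.random.default_rng(seed=1)
hours_online = rng.normal(loc=3.0, scale=1.0, size=len(sample_ids))
wellbeing = 10 - 0.3 * hours_online + rng.normal(loc=0.0, scale=2.0, size=len(sample_ids))

# Test the null hypothesis that the two variables are unrelated.
r, p = stats.pearsonr(hours_online, wellbeing)
print(f"r = {r:.2f}, p = {p:.4f}")
# If p falls below the chosen alpha level (commonly .05), the null hypothesis
# of no association is rejected.
```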

Surveys work best when they are thoughtfully constructed and given to large sample sizes. By paying attention when the psychology survey topics are chosen and during the survey’s design phase, researchers can build an instrument that will yield data that’s reliable, valid and generalizable.



Chapter 9: Survey Research

Constructing Survey Questionnaires

Learning Objectives

  • Describe the cognitive processes involved in responding to a survey item.
  • Explain what a context effect is and give some examples.
  • Create a simple survey questionnaire based on principles of effective item writing and organization.

The heart of any survey research project is the survey questionnaire itself. Although it is easy to think of interesting questions to ask people, constructing a good survey questionnaire is not easy at all. The problem is that the answers people give can be influenced in unintended ways by the wording of the items, the order of the items, the response options provided, and many other factors. At best, these influences add noise to the data. At worst, they result in systematic biases and misleading results. In this section, therefore, we consider some principles for constructing survey questionnaires to minimize these unintended effects and thereby maximize the reliability and validity of respondents’ answers.

Survey Responding as a Psychological Process

Before looking at specific principles of survey questionnaire construction, it will help to consider survey responding as a psychological process.

A Cognitive Model

Figure 9.1  presents a model of the cognitive processes that people engage in when responding to a survey item (Sudman, Bradburn, & Schwarz, 1996) [1] . Respondents must interpret the question, retrieve relevant information from memory, form a tentative judgment, convert the tentative judgment into one of the response options provided (e.g., a rating on a 1-to-7 scale), and finally edit their response as necessary.

[Figure 9.1: Flowchart of the cognitive processes involved in responding to a survey item. Long description below.]

Consider, for example, the following questionnaire item:

How many alcoholic drinks do you consume in a typical day?

  • _____ a lot more than average
  • _____ somewhat more than average
  • _____ average
  • _____ somewhat fewer than average
  • _____ a lot fewer than average

Although this item at first seems straightforward, it poses several difficulties for respondents. First, they must interpret the question. For example, they must decide whether "alcoholic drinks" include beer and wine (as opposed to just hard liquor) and whether a "typical day" is a typical weekday, typical weekend day, or both. Chang and Krosnick (2003) [2] found that asking about "typical" behaviour is more valid than asking about "past" behaviour, although their study compared a "typical week" to the "past week," and the result may differ when respondents must weigh typical weekdays against weekend days. Once they have interpreted the question, they must retrieve relevant information from memory to answer it. But what information should they retrieve, and how should they go about retrieving it? They might think vaguely about some recent occasions on which they drank alcohol, they might carefully try to recall and count the number of alcoholic drinks they consumed last week, or they might retrieve some existing beliefs that they have about themselves (e.g., "I am not much of a drinker"). Then they must use this information to arrive at a tentative judgment about how many alcoholic drinks they consume in a typical day. For example, this mental calculation might mean dividing the number of alcoholic drinks they consumed last week by seven to come up with an average number per day. Then they must format this tentative answer in terms of the response options actually provided. In this case, the options pose additional problems of interpretation. For example, what does "average" mean, and what would count as "somewhat more" than average? Finally, they must decide whether they want to report the response they have come up with or whether they want to edit it in some way. For example, if they believe that they drink much more than average, they might not want to report the higher number for fear of looking bad in the eyes of the researcher.

From this perspective, what at first appears to be a simple matter of asking people how much they drink (and receiving a straightforward answer from them) turns out to be much more complex.

Context Effects on Questionnaire Responses

Again, this complexity can lead to unintended influences on respondents’ answers. These are often referred to as  context effects  because they are not related to the content of the item but to the context in which the item appears (Schwarz & Strack, 1990) [3] . For example, there is an  item-order effect  when the order in which the items are presented affects people’s responses. One item can change how participants interpret a later item or change the information that they retrieve to respond to later items. For example, researcher Fritz Strack and his colleagues asked college students about both their general life satisfaction and their dating frequency (Strack, Martin, & Schwarz, 1988) [4] . When the life satisfaction item came first, the correlation between the two was only −.12, suggesting that the two variables are only weakly related. But when the dating frequency item came first, the correlation between the two was +.66, suggesting that those who date more have a strong tendency to be more satisfied with their lives. Reporting the dating frequency first made that information more accessible in memory so that they were more likely to base their life satisfaction rating on it.

The response options provided can also have unintended effects on people's responses (Schwarz, 1999) [5]. For example, when people are asked how often they are "really irritated" and given response options ranging from "less than once a year" to "more than once a month," they tend to think of major irritations and report being irritated infrequently. But when they are given response options ranging from "less than once a day" to "several times a month," they tend to think of minor irritations and report being irritated frequently. People also tend to assume that middle response options represent what is normal or typical. So if they think of themselves as normal or typical, they tend to choose middle response options. For example, people are likely to report watching more television when the response options are centred on a middle option of 4 hours than when centred on a middle option of 2 hours. To mitigate order effects, rotate questions and response options when there is no natural order. Counterbalancing is good practice for survey questions and can reduce response-order effects, which can be substantial: among undecided voters, for example, the first candidate listed on a ballot receives a boost of roughly 2.5 percentage points simply by virtue of being listed first [6].
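
As a rough sketch of the rotation and counterbalancing ideas just described (an illustration, not something from the chapter), each respondent can receive a freshly shuffled item order, or respondents can alternate between two fixed orders so that an item-order effect like the one in the Strack et al. study can be detected by comparing the two groups. The item names are invented.

```python
import random

ITEMS = ["life_satisfaction", "dating_frequency", "mood_today"]  # illustrative items

def randomized_order(items):
    """Present the items in a fresh random order for each respondent."""
    order = list(items)
    random.shuffle(order)
    return order

def counterbalanced_order(items, respondent_index):
    """Alternate between two fixed orders so each order is used equally often."""
    return list(items) if respondent_index % 2 == 0 else list(reversed(items))

print(randomized_order(ITEMS))
print(counterbalanced_order(ITEMS, respondent_index=1))  # reversed order
```

Comparing the correlation between the life satisfaction and dating frequency items across the two counterbalanced groups would show whether item order is shaping responses.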

Writing Survey Questionnaire Items

Types of items.

Questionnaire items can be either open-ended or closed-ended.  Open-ended items  simply ask a question and allow participants to answer in whatever way they choose. The following are examples of open-ended questionnaire items.

  • “What is the most important thing to teach children to prepare them for life?”
  • “Please describe a time when you were discriminated against because of your age.”
  • “Is there anything else you would like to tell us about?”

Open-ended items are useful when researchers do not know how participants might respond or want to avoid influencing their responses. They tend to be used when researchers have more vaguely defined research questions—often in the early stages of a research project. Open-ended items are relatively easy to write because there are no response options to worry about. However, they take more time and effort on the part of participants, and they are more difficult for the researcher to analyze because the answers must be transcribed, coded, and submitted to some form of qualitative analysis, such as content analysis. The advantage of open-ended items is that they are unbiased and do not suggest to respondents what the researcher might be looking for; they can also yield more valid and more reliable information. The disadvantage is that respondents are more likely to skip open-ended items because they take longer to answer. It is best to use open-ended questions when the range of possible answers is not known in advance and for quantities that can easily be converted to categories later in the analysis.

Closed-ended items  ask a question and provide a set of response options for participants to choose from. The alcohol item just mentioned is an example, as are the following:

  How old are you?

  • _____ Under 18
  • _____ 18 to 34
  • _____ 35 to 49
  • _____ 50 to 70
  • _____ Over 70

On a scale of 0 (no pain at all) to 10 (worst pain ever experienced), how much pain are you in right now?

Have you ever in your adult life been depressed for a period of 2 weeks or more?

Closed-ended items are used when researchers have a good idea of the different responses that participants might make. They are also used when researchers are interested in a well-defined variable or construct such as participants’ level of agreement with some statement, perceptions of risk, or frequency of a particular behaviour. Closed-ended items are more difficult to write because they must include an appropriate set of response options. However, they are relatively quick and easy for participants to complete. They are also much easier for researchers to analyze because the responses can be easily converted to numbers and entered into a spreadsheet. For these reasons, closed-ended items are much more common.

All closed-ended items include a set of response options from which a participant must choose. For categorical variables like sex, race, or political party preference, the categories are usually listed and participants choose the one (or ones) that they belong to. For quantitative variables, a rating scale is typically provided. A rating scale is an ordered set of responses that participants must choose from. Figure 9.2 shows several examples. The number of response options on a typical rating scale ranges from three to 11—although five and seven are probably most common. Five-point scales are best for unipolar scales where only one construct is tested, such as frequency (Never, Rarely, Sometimes, Often, Always). Seven-point scales are best for bipolar scales where there is a dichotomous spectrum, such as liking (Like very much, Like somewhat, Like slightly, Neither like nor dislike, Dislike slightly, Dislike somewhat, Dislike very much). For bipolar questions, it is useful to offer an earlier question that branches respondents into one side of the scale; if asking about liking ice cream, first ask "Do you generally like or dislike ice cream?" Once the respondent chooses like or dislike, refine the answer by offering them the choices from that side of the seven-point scale (a small sketch of this branching flow appears below). Branching improves both reliability and validity (Krosnick & Berent, 1993) [7]. Although you often see scales with numerical labels, it is best to present only verbal labels to the respondents and convert them to numerical values in the analyses. Avoid partial, lengthy, or overly specific labels. In some cases, the verbal labels can be supplemented with (or even replaced by) meaningful graphics. The last rating scale shown in Figure 9.2 is a visual-analog scale, on which participants make a mark somewhere along the horizontal line to indicate the magnitude of their response.

[Figure 9.2: Three different rating scales for survey questions. Long description below.]
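
The branching procedure described above can be sketched as a simple two-step flow. This is only an illustrative outline, not material from the chapter, and the ask function merely stands in for however responses are actually collected.

```python
def branched_liking_item(ask) -> str:
    """First branch on like/dislike/neither, then refine with a degree question."""
    first = ask("Do you generally like or dislike ice cream? (like / dislike / neither)")
    if first == "neither":
        return "Neither like nor dislike"
    degree = ask(f"Do you {first} it slightly, somewhat, or very much?")
    return f"{first.capitalize()} {degree}"  # e.g. "Like somewhat"

# Example with canned answers standing in for a real respondent.
answers = iter(["like", "somewhat"])
print(branched_liking_item(lambda prompt: next(answers)))  # Like somewhat
```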

What is a Likert Scale?

In reading about psychological research, you are likely to encounter the term  Likert scale . Although this term is sometimes used to refer to almost any rating scale (e.g., a 0-to-10 life satisfaction scale), it has a much more precise meaning.

In the 1930s, researcher Rensis Likert (pronounced LICK-ert) created a new approach for measuring people’s attitudes (Likert, 1932) [8] . It involves presenting people with several statements—including both favourable and unfavourable statements—about some person, group, or idea. Respondents then express their agreement or disagreement with each statement on a 5-point scale:  Strongly Agree ,  Agree ,  Neither Agree nor Disagree ,  Disagree , Strongly Disagree . Numbers are assigned to each response (with reverse coding as necessary) and then summed across all items to produce a score representing the attitude toward the person, group, or idea. The entire set of items came to be called a Likert scale.

Thus unless you are measuring people’s attitude toward something by assessing their level of agreement with several statements about it, it is best to avoid calling it a Likert scale. You are probably just using a “rating scale.”
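
As a small illustration of Likert scoring, the sketch below reverse-keys unfavourably worded items and sums the item codes into a single attitude score, in the spirit of the procedure described above. It is a hypothetical example: the item positions marked as reverse-keyed and the response codes are invented.

```python
def likert_total(coded_responses, reverse_keyed, scale_max=5):
    """Sum item codes after reverse-keying unfavourably worded items."""
    total = 0
    for position, value in enumerate(coded_responses, start=1):
        if position in reverse_keyed:
            value = (scale_max + 1) - value   # 1 <-> 5, 2 <-> 4 on a 5-point scale
        total += value
    return total

# Five items coded 1-5; suppose items 2 and 5 are unfavourable statements.
print(likert_total([4, 2, 5, 4, 1], reverse_keyed={2, 5}))  # 4 + 4 + 5 + 4 + 5 = 22
```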

Writing Effective Items

We can now consider some principles of writing questionnaire items that minimize unintended context effects and maximize the reliability and validity of participants’ responses. A rough guideline for writing questionnaire items is provided by the BRUSO model (Peterson, 2000) [9] . An acronym,  BRUSO  stands for “brief,” “relevant,” “unambiguous,” “specific,” and “objective.” Effective questionnaire items are  brief  and to the point. They avoid long, overly technical, or unnecessary words. This brevity makes them easier for respondents to understand and faster for them to complete. Effective questionnaire items are also  relevant  to the research question. If a respondent’s sexual orientation, marital status, or income is not relevant, then items on them should probably not be included. Again, this makes the questionnaire faster to complete, but it also avoids annoying respondents with what they will rightly perceive as irrelevant or even “nosy” questions. Effective questionnaire items are also unambiguous ; they can be interpreted in only one way. Part of the problem with the alcohol item presented earlier in this section is that different respondents might have different ideas about what constitutes “an alcoholic drink” or “a typical day.” Effective questionnaire items are also  specific ,  so that it is clear to respondents what their response  should  be about and clear to researchers what it  is  about. A common problem here is closed-ended items that are “double barrelled.” They ask about two conceptually separate issues but allow only one response. For example, “Please rate the extent to which you have been feeling anxious and depressed.” This item should probably be split into two separate items—one about anxiety and one about depression. Finally, effective questionnaire items are  objective  in the sense that they do not reveal the researcher’s own opinions or lead participants to answer in a particular way. Table 9.2  shows some examples of poor and effective questionnaire items based on the BRUSO criteria. The best way to know how people interpret the wording of the question is to conduct pre-tests and ask a few people to explain how they interpreted the question.

For closed-ended items, it is also important to create an appropriate response scale. For categorical variables, the categories presented should generally be mutually exclusive and exhaustive. Mutually exclusive categories do not overlap. For a religion item, for example, the categories of  Christian  and Catholic  are not mutually exclusive but  Protestant  and  Catholic are. Exhaustive categories cover all possible responses.

Although  Protestant  and  Catholic  are mutually exclusive, they are not exhaustive because there are many other religious categories that a respondent might select:  Jewish ,  Hindu ,  Buddhist , and so on. In many cases, it is not feasible to include every possible category, in which case an  Other  category, with a space for the respondent to fill in a more specific response, is a good solution. If respondents could belong to more than one category (e.g., race), they should be instructed to choose all categories that apply.

For rating scales, five or seven response options generally allow about as much precision as respondents are capable of. However, numerical scales with more options can sometimes be appropriate. For dimensions such as attractiveness, pain, and likelihood, a 0-to-10 scale will be familiar to many respondents and easy for them to use. Regardless of the number of response options, the most extreme ones should generally be “balanced” around a neutral or modal midpoint. An example of an unbalanced rating scale measuring perceived likelihood might look like this:

Unlikely  |  Somewhat Likely  |  Likely  |  Very Likely  |  Extremely Likely

A balanced version might look like this:

Extremely Unlikely  |  Somewhat Unlikely  |  As Likely as Not  |  Somewhat Likely  | Extremely Likely

Note, however, that a middle or neutral response option does not have to be included. Researchers sometimes choose to leave it out because they want to encourage respondents to think more deeply about their response and not simply choose the middle option by default. Including a middle alternative on bipolar dimensions is useful, however, because it allows people who genuinely favour neither pole to say so.

Formatting the Questionnaire

Writing effective items is only one part of constructing a survey questionnaire. For one thing, every survey questionnaire should have a written or spoken introduction that serves two basic functions (Peterson, 2000) [10] . One is to encourage respondents to participate in the survey. In many types of research, such encouragement is not necessary either because participants do not know they are in a study (as in naturalistic observation) or because they are part of a subject pool and have already shown their willingness to participate by signing up and showing up for the study. Survey research usually catches respondents by surprise when they answer their phone, go to their mailbox, or check their e-mail—and the researcher must make a good case for why they should agree to participate. Thus the introduction should briefly explain the purpose of the survey and its importance, provide information about the sponsor of the survey (university-based surveys tend to generate higher response rates), acknowledge the importance of the respondent’s participation, and describe any incentives for participating.

The second function of the introduction is to establish informed consent. Remember that this aim means describing to respondents everything that might affect their decision to participate. This includes the topics covered by the survey, the amount of time it is likely to take, the respondent’s option to withdraw at any time, confidentiality issues, and so on. Written consent forms are not typically used in survey research, so it is important that this part of the introduction be well documented and presented clearly and in its entirety to every respondent.

The introduction should be followed by the substantive questionnaire items. But first, it is important to present clear instructions for completing the questionnaire, including examples of how to use any unusual response scales. Remember that the introduction is the point at which respondents are usually most interested and least fatigued, so it is good practice to start with the most important items for purposes of the research and proceed to less important items. Items should also be grouped by topic or by type. For example, items using the same rating scale (e.g., a 5-point agreement scale) should be grouped together if possible to make things faster and easier for respondents. Demographic items are often presented last because they are least interesting to participants but also easy to answer in the event respondents have become tired or bored. Of course, any survey should end with an expression of appreciation to the respondent.
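
The ordering principles described above can be summarised in a small data structure. This is only an illustrative sketch of one possible layout; every section name and item in it is invented for the example.

```python
# Sections listed in the order they would be presented to respondents.
questionnaire = {
    "introduction": "Purpose of the survey, sponsor, expected time, incentives.",
    "consent": "Topics covered, right to withdraw, confidentiality.",
    "instructions": "How to use any unusual response scales, with an example.",
    "item_groups": [
        {"topic": "core research items", "scale": "5-point agreement",
         "items": ["core_item_1", "core_item_2", "core_item_3"]},
        {"topic": "secondary items", "scale": "0-to-10 likelihood",
         "items": ["secondary_item_1", "secondary_item_2"]},
    ],
    "demographics": ["age", "gender", "household_income"],  # presented last
    "closing": "Thank the respondent for participating.",
}
```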

Key Takeaways

  • Responding to a survey item is itself a complex cognitive process that involves interpreting the question, retrieving information, making a tentative judgment, putting that judgment into the required response format, and editing the response.
  • Survey questionnaire responses are subject to numerous context effects due to question wording, item order, response options, and other factors. Researchers should be sensitive to such effects when constructing surveys and interpreting survey results.
  • Survey questionnaire items are either open-ended or closed-ended. Open-ended items simply ask a question and allow respondents to answer in whatever way they want. Closed-ended items ask a question and provide several response options that respondents must choose from.
  • Use verbal labels instead of numerical labels although the responses can be converted to numerical data in the analyses.
  • According to the BRUSO model, questionnaire items should be brief, relevant, unambiguous, specific, and objective.
  • Discussion: Write a survey item about one of the topics below (or choose an item from the Rosenberg Self-Esteem Scale), then write a short description of how someone might respond to that item based on the cognitive model of survey responding:
  • How much does the respondent use Facebook?
  • How much exercise does the respondent get?
  • How likely does the respondent think it is that the incumbent will be re-elected in the next presidential election?
  • To what extent does the respondent experience “road rage”?

Long Descriptions

Figure 9.1 long description: Flowchart modelling the cognitive processes involved in responding to a survey item. In order, these processes are:

  • Question Interpretation
  • Information Retrieval
  • Judgment Formation
  • Response Formatting
  • Response Editing

[Return to Figure 9.1]

Figure 9.2 long description: Three different rating scales for survey questions. The first scale provides a choice between “strongly agree,” “agree,” “neither agree nor disagree,” “disagree,” and “strongly disagree.” The second is a scale from 1 to 7, with 1 being “extremely unlikely” and 7 being “extremely likely.” The third is a sliding scale, with one end marked “extremely unfriendly” and the other “extremely friendly.” [Return to Figure 9.2]

Figure 9.3 long description: A note reads, “Dear Isaac. Do you like me?” with two check boxes reading “yes” or “no.” Someone has added a third check box, which they’ve checked, that reads, “There is as yet insufficient data for a meaningful answer.” [Return to Figure 9.3]

Media Attributions

  • Study by XKCD, CC BY-NC (Attribution NonCommercial)

Notes

  • Sudman, S., Bradburn, N. M., & Schwarz, N. (1996). Thinking about answers: The application of cognitive processes to survey methodology. San Francisco, CA: Jossey-Bass. ↵
  • Chang, L., & Krosnick, J.A. (2003). Measuring the frequency of regular behaviors: Comparing the ‘typical week’ to the ‘past week’. Sociological Methodology, 33 , 55-80. ↵
  • Schwarz, N., & Strack, F. (1990). Context effects in attitude surveys: Applying cognitive theory to social research. In W. Stroebe & M. Hewstone (Eds.), European review of social psychology (Vol. 2, pp. 31–50). Chichester, UK: Wiley. ↵
  • Strack, F., Martin, L. L., & Schwarz, N. (1988). Priming and communication: The social determinants of information use in judgments of life satisfaction. European Journal of Social Psychology, 18 , 429–442. ↵
  • Schwarz, N. (1999). Self-reports: How the questions shape the answers. American Psychologist, 54 , 93–105. ↵
  • Miller, J.M. & Krosnick, J.A. (1998). The impact of candidate name order on election outcomes. Public Opinion Quarterly, 62 (3), 291-330. ↵
  • Krosnick, J.A. & Berent, M.K. (1993). Comparisons of party identification and policy preferences: The impact of survey question format. American Journal of Political Science, 27 (3), 941-964. ↵
  • Likert, R. (1932). A technique for the measurement of attitudes. Archives of Psychology,140 , 1–55. ↵
  • Peterson, R. A. (2000). Constructing effective questionnaires . Thousand Oaks, CA: Sage. ↵

Glossary

Context effect: Being tested in one condition can also change how participants perceive stimuli or interpret their task in later conditions.

Item-order effect: The order in which the items are presented affects people’s responses.

Open-ended item: A questionnaire item that allows participants to answer in whatever way they choose.

Closed-ended item: A questionnaire item that asks a question and provides a set of response options for participants to choose from.

Rating scale: An ordered set of responses that participants must choose from.

BRUSO model: A guideline for questionnaire items suggesting that they should be brief, relevant, unambiguous, specific, and objective.

Research Methods in Psychology - 2nd Canadian Edition Copyright © 2015 by Paul C. Price, Rajiv Jhangiani, & I-Chant A. Chiang is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License , except where otherwise noted.


61 intriguing psychology research topics to explore

Last updated: 11 January 2024

Reviewed by: Brittany Ferri, PhD, OTR/L

Psychology is an incredibly diverse, critical, and ever-changing area of study in the medical and health industries. Because of this, it’s a common area of study for students and healthcare professionals.

We’re walking you through picking the perfect topic for your upcoming paper or study. Keep reading for plenty of example topics to pique your interest and curiosity.

  • How to choose a psychology research topic

Exploring a psychology-based topic for your research project? You need to pick a specific area of interest to collect compelling data. 

Use these tips to help you narrow down which psychology topics to research:

Focus on a particular area of psychology

The most effective psychological research focuses on a smaller, niche concept or disorder within the scope of a study. 

Psychology is a broad and fascinating area of science, including everything from diagnosed mental health disorders to sports performance mindset assessments. 

This gives you plenty of different avenues to explore. Having a hard time choosing? Check out our list of 61 ideas further down in this article to get started.

Read the latest clinical studies

Once you’ve picked a more niche topic to explore, you need to do your due diligence and explore other research projects on the same topic. 

This practice will help you learn more about your chosen topic, ask more specific questions, and avoid covering existing projects. 

For the best results, we recommend creating a research folder of associated published papers to reference throughout your project. This makes it much easier to cite direct references and find inspiration down the line.

Find a topic you enjoy and ask questions

Once you’ve spent time researching and collecting references for your study, you finally get to explore. 

Whether this research project is for work, school, or just for fun, having a passion for your research will make the project much more enjoyable. (Trust us, there will be times when that is the only thing that keeps you going.) 

Now that you’ve decided on a topic, ask the more nuanced questions you might want to explore. 

If you can, pick the direction that interests you the most to make the research process much more enjoyable.

  • 61 psychology topics to research in 2024

Need some extra help starting your psychology research project on the right foot? Explore our list of 61 cutting-edge, in-demand psychology research topics to use as a starting point for your research journey.

  • Psychology research topics for university students

As a university student, you may find it hard to pick a research topic that fits the scope of your classes and is still compelling and unique. 

Here are a few exciting topics we recommend exploring for your next assigned research project:

Mental health in post-secondary students

Seeking post-secondary education is a stressful and overwhelming experience for most students, making this topic a great choice to explore for your in-class research paper. 

Examples of post-secondary mental health research topics include:

Student mental health status during exam season

Mental health disorder prevalence based on study major

The impact of chronic school stress on overall quality of life

The impacts of cyberbullying

Cyberbullying can occur at all ages, starting as early as elementary school and carrying through into professional workplaces. 

Examples of cyberbullying-based research topics you can study include:

The impact of cyberbullying on self-esteem

Common reasons people engage in cyberbullying 

Cyberbullying themes and commonly used terms

Cyberbullying habits in children vs. adults

The long-term effects of cyberbullying

  • Clinical psychology research topics

If you’re looking to take a more clinical approach to your next project, here are a few topics that involve direct patient assessment for you to consider:

Chronic pain and mental health

Living with chronic pain dramatically impacts every aspect of a person’s life, including their mental and emotional health. 

Here are a few examples of in-demand pain-related psychology research topics:

The connection between diabetic neuropathy and depression

Neurological pain and its connection to mental health disorders

Efficacy of meditation and mindfulness for pain management

The long-term effects of insomnia

Insomnia means having difficulty falling or staying asleep. It’s a common health concern that impacts millions of people worldwide. 

This is an excellent topic because insomnia can have a variety of causes, offering many research possibilities. 

Here are a few compelling psychology research topics about insomnia you could investigate:

The prevalence of insomnia based on age, gender, and ethnicity

Insomnia and its impact on workplace productivity

The connection between insomnia and mental health disorders

Efficacy and use of melatonin supplements for insomnia

The risks and benefits of prescription insomnia medications

Lifestyle options for managing insomnia symptoms

The efficacy of mental health treatment options

Management and treatment of mental health conditions is an ever-changing area of study. If you can witness or participate in mental health therapies, this can make a great research project. 

Examples of mental health treatment-related psychology research topics include:

The efficacy of cognitive behavioral therapy (CBT) for patients with severe anxiety

The benefits and drawbacks of group vs. individual therapy sessions

Music therapy for mental health disorders

Electroconvulsive therapy (ECT) for patients with depression 

  • Controversial psychology research paper topics

If you are looking to explore a more cutting-edge or modern psychology topic, you can delve into a variety of controversial and topical options:

The impact of social media and digital platforms

Ever since access to internet forums and video games became more commonplace, there’s been growing concern about the impact these digital platforms have on mental health. 

Examples of social media and video game-related psychology research topics include:

The effect of edited images on self-confidence

How social media platforms impact social behavior

Video games and their impact on teenage anger and violence

Digital communication and the rapid spread of misinformation

The development of digital friendships

Psychotropic medications for mental health

In recent years, the interest in using psychoactive medications to treat and manage health conditions has increased despite their inherently controversial nature. 

Examples of psychotropic medication-related research topics include:

The risks and benefits of using psilocybin mushrooms for managing anxiety

The impact of marijuana on early-onset psychosis

Childhood marijuana use and related prevalence of mental health conditions

Ketamine and its use for complex PTSD (C-PTSD) symptom management

The effect of long-term psychedelic use and mental health conditions

  • Mental health disorder research topics

As one of the most popular subsections of psychology, studying mental health disorders and how they impact quality of life is an essential and impactful area of research. 

While studies in these areas are common, there’s always room for additional exploration, including the following hot-button topics:

Anxiety and depression disorders

Anxiety and depression are well-known and heavily researched mental health disorders. 

Despite this, we still don’t know many things about these conditions, making them great candidates for psychology research projects:

Social anxiety and its connection to chronic loneliness

C-PTSD symptoms and causes

The development of phobias

Obsessive-compulsive disorder (OCD) behaviors and symptoms

Depression triggers and causes

Self-care tools and resources for depression

The prevalence of anxiety and depression in particular age groups or geographic areas

Bipolar disorder

Bipolar disorder is a complex and multi-faceted area of psychology research. 

Use your research skills to learn more about this condition and its impact by choosing any of the following topics:

Early signs of bipolar disorder

The incidence of bipolar disorder in young adults

The efficacy of existing bipolar treatment options

Bipolar medication side effects

Cognitive behavioral therapy for people with bipolar 

Schizoaffective disorder

Schizoaffective disorder is often stigmatized, and less common mental health disorders are a hotbed for new and exciting research. 

Here are a few examples of interesting research topics related to this mental health disorder:

The prevalence of schizoaffective disorder by certain age groups or geographic locations

Risk factors for developing schizoaffective disorder

The prevalence and content of auditory and visual hallucinations

Alternative therapies for schizoaffective disorder

  • Societal and systematic psychology research topics

Modern society’s impact is deeply enmeshed in our mental and emotional health on a personal and community level. 

Here are a few examples of societal and systemic psychology research topics to explore in more detail:

Access to mental health services

While mental health awareness has risen over the past few decades, access to quality mental health treatment and resources is still not equitable. 

This can significantly impact the severity of a person’s mental health symptoms, which can result in worse health outcomes if left untreated. 

Explore this crucial issue and provide information about the need for improved mental health resource access by studying any of the following topics:

Rural vs. urban access to mental health resources

Access to crisis lines by location

Wait times for emergency mental health services

Inequities in mental health access based on income and location

Insurance coverage for mental health services

Systemic racism and mental health

Societal systems and the prevalence of systemic racism heavily impact every aspect of a person’s overall health.

Researching these topics draws attention to existing problems and contributes valuable insights into ways to improve access to care moving forward.

Examples of systemic racism-related psychology research topics include: 

Access to mental health resources based on race

The prevalence of BIPOC mental health therapists in a chosen area

The impact of systemic racism on mental health and self-worth

Racism training for mental health workers

The prevalence of mental health disorders in discriminated groups

LGBTQIA+ mental health concerns

Research about LGBTQIA+ people and their mental health needs is a unique area of study to explore for your next research project. It’s a commonly overlooked and underserved community.

Examples of LGBTQIA+ psychology research topics to consider include:

Mental health supports for queer teens and children

The impact of queer safe spaces on mental health

The prevalence of mental health disorders in the LGBTQIA+ community

The benefits of queer mentorship and found family

Substance misuse in LQBTQIA+ youth and adults


Psychology research is an exciting and competitive study area, making it the perfect choice for projects or papers.




Writing Research Papers

Research Paper Structure

Whether you are writing a B.S. Degree Research Paper or completing a research report for a Psychology course, it is highly likely that you will need to organize your research paper in accordance with American Psychological Association (APA) guidelines.  Here we discuss the structure of research papers according to APA style.

Major Sections of a Research Paper in APA Style

A complete research paper in APA style that is reporting on experimental research will typically contain Title page, Abstract, Introduction, Methods, Results, Discussion, and References sections.1 Many will also contain Figures and Tables, and some will have an Appendix or Appendices. These sections are detailed as follows (for a more in-depth guide, please refer to “How to Write a Research Paper in APA Style”, a comprehensive guide developed by Prof. Emma Geller).2

Title Page

What is this paper called and who wrote it? – the first page of the paper; this includes the name of the paper, a “running head”, authors, and institutional affiliation of the authors. The institutional affiliation is usually listed in an Author Note that is placed towards the bottom of the title page. In some cases, the Author Note also contains an acknowledgment of any funding support and of any individuals that assisted with the research project.

Abstract

One-paragraph summary of the entire study – typically no more than 250 words in length (and in many cases it is well shorter than that), the Abstract provides an overview of the study.

Introduction

What is the topic and why is it worth studying? – the first major section of text in the paper, the Introduction commonly describes the topic under investigation, summarizes or discusses relevant prior research (for related details, please see the Writing Literature Reviews section of this website), identifies unresolved issues that the current research will address, and provides an overview of the research that is to be described in greater detail in the sections to follow.

Methods

What did you do? – a section which details how the research was performed. It typically features a description of the participants/subjects that were involved, the study design, the materials that were used, and the study procedure. If there were multiple experiments, then each experiment may require a separate Methods section. A rule of thumb is that the Methods section should be sufficiently detailed for another researcher to duplicate your research.

Results

What did you find? – a section which describes the data that was collected and the results of any statistical tests that were performed. It may also be prefaced by a description of the analysis procedure that was used. If there were multiple experiments, then each experiment may require a separate Results section.

Discussion

What is the significance of your results? – the final major section of text in the paper. The Discussion commonly features a summary of the results that were obtained in the study, describes how those results address the topic under investigation and/or the issues that the research was designed to address, and may expand upon the implications of those findings. Limitations and directions for future research are also commonly addressed.

References

List of articles and any books cited – an alphabetized list of the sources that are cited in the paper (by last name of the first author of each source). Each reference should follow specific APA guidelines regarding author names, dates, article titles, journal titles, journal volume numbers, page numbers, book publishers, publisher locations, websites, and so on (for more information, please see the Citing References in APA Style page of this website).

Tables and Figures

Graphs and data (optional in some cases) – depending on the type of research being performed, there may be Tables and/or Figures (however, in some cases, there may be neither). In APA style, each Table and each Figure is placed on a separate page and all Tables and Figures are included after the References. Tables are included first, followed by Figures. However, for some journals and undergraduate research papers (such as the B.S. Research Paper or Honors Thesis), Tables and Figures may be embedded in the text (depending on the instructor’s or editor’s policies; for more details, see “Departures from APA Style” below).

Appendix

Supplementary information (optional) – in some cases, additional information that is not critical to understanding the research paper, such as a list of experiment stimuli, details of a secondary analysis, or programming code, is provided. This is often placed in an Appendix.

Variations of Research Papers in APA Style

Although the major sections described above are common to most research papers written in APA style, there are variations on that pattern.  These variations include: 

  • Literature reviews – when a paper is reviewing prior published research and not presenting new empirical research itself (such as in a review article, and particularly a qualitative review), then the authors may forgo any Methods and Results sections. Instead, there is a different structure such as an Introduction section followed by sections for each of the different aspects of the body of research being reviewed, and then perhaps a Discussion section. 
  • Multi-experiment papers – when there are multiple experiments, it is common to follow the Introduction with an Experiment 1 section, itself containing Methods, Results, and Discussion subsections. Then there is an Experiment 2 section with a similar structure, an Experiment 3 section with a similar structure, and so on until all experiments are covered.  Towards the end of the paper there is a General Discussion section followed by References.  Additionally, in multi-experiment papers, it is common for the Results and Discussion subsections for individual experiments to be combined into single “Results and Discussion” sections.

Departures from APA Style

In some cases, official APA style might not be followed (however, be sure to check with your editor, instructor, or other sources before deviating from standards of the Publication Manual of the American Psychological Association).  Such deviations may include:

  • Placement of Tables and Figures – in some cases, to make reading through the paper easier, Tables and/or Figures are embedded in the text (for example, having a bar graph placed in the relevant Results section). The embedding of Tables and/or Figures in the text is one of the most common deviations from APA style (and is commonly allowed in B.S. Degree Research Papers and Honors Theses; however, you should check with your instructor, supervisor, or editor first).
  • Incomplete research – sometimes a B.S. Degree Research Paper in this department is written about research that is currently being planned or is in progress. In those circumstances, sometimes only an Introduction and Methods section, followed by References, is included (that is, in cases where the research itself has not formally begun).  In other cases, preliminary results are presented and noted as such in the Results section (such as in cases where the study is underway but not complete), and the Discussion section includes caveats about the in-progress nature of the research.  Again, you should check with your instructor, supervisor, or editor first.
  • Class assignments – in some classes in this department, an assignment must be written in APA style but is not exactly a traditional research paper (for instance, a student asked to write about an article that they read, and to write that report in APA style). In that case, the structure of the paper might approximate the typical sections of a research paper in APA style, but not entirely.  You should check with your instructor for further guidelines.

Workshops and Downloadable Resources

  • For in-person discussion of the process of writing research papers, please consider attending this department’s “Writing Research Papers” workshop (for dates and times, please check the undergraduate workshops calendar).

Downloadable Resources

  • How to Write APA Style Research Papers (a comprehensive guide) [ PDF ]
  • Tips for Writing APA Style Research Papers (a brief summary) [ PDF ]
  • Example APA Style Research Paper (for B.S. Degree – empirical research) [ PDF ]
  • Example APA Style Research Paper (for B.S. Degree – literature review) [ PDF ]

Further Resources

How-To Videos     

  • Writing Research Paper Videos

APA Journal Article Reporting Guidelines

  • Appelbaum, M., Cooper, H., Kline, R. B., Mayo-Wilson, E., Nezu, A. M., & Rao, S. M. (2018). Journal article reporting standards for quantitative research in psychology: The APA Publications and Communications Board task force report. American Psychologist, 73(1), 3.
  • Levitt, H. M., Bamberg, M., Creswell, J. W., Frost, D. M., Josselson, R., & Suárez-Orozco, C. (2018). Journal article reporting standards for qualitative primary, qualitative meta-analytic, and mixed methods research in psychology: The APA Publications and Communications Board task force report. American Psychologist, 73(1), 26.

External Resources

  • Formatting APA Style Papers in Microsoft Word
  • How to Write an APA Style Research Paper from Hamilton University
  • WikiHow Guide to Writing APA Research Papers
  • Sample APA Formatted Paper with Comments
  • Sample APA Formatted Paper
  • Tips for Writing a Paper in APA Style

1 VandenBos, G. R. (Ed.). (2010). Publication manual of the American Psychological Association (6th ed.) (pp. 41-60). Washington, DC: American Psychological Association.

2 Geller, E. (2018). How to write an APA-style research report [Instructional materials]. Prepared by S. C. Pan for UCSD Psychology.


Open access | Published: 10 May 2024

Navigating an unpredictable environment: the moderating role of perceived environmental unpredictability in the effectiveness of ecological resource scarcity information on pro-environmental behavior

Dian Gu & Jiang Jiang

BMC Psychology, volume 12, Article number: 261 (2024)

Abstract

The global issue of ecological resource scarcity, worsened by climate change, necessitates effective methods to promote resource conservation. One commonly used approach is presenting ecological resource scarcity information. However, the effectiveness of this method remains uncertain, particularly in an unpredictable world. This research aims to examine the role of perceived environmental unpredictability in moderating the impact of ecological resource scarcity information on pro-environmental behavior (PEB).

We conducted three studies to test our hypothesis on moderation. Study 1 ( N  = 256) measured perceived general environmental unpredictability, perceived resource scarcity and daily PEB frequencies in a cross-sectional survey. Study 2 ( N  = 107) took it a step further by manipulating resource scarcity. Importantly, to increase ecological validity, Study 3 ( N  = 135) manipulated the information on both ecological resource scarcity and nature-related environmental unpredictability, and measured real water and paper consumption using a newly developed washing-hands paradigm.

In Study 1, we discovered that perceived resource scarcity positively predicted PEB, but only when individuals perceived the environment as less unpredictable (interaction effect: 95% CI = [-0.09, -0.01], ΔR² = 0.018). Furthermore, by manipulating scarcity information, Study 2 revealed that presenting ecological resource scarcity information decreased forest resource consumption intention only for individuals with lower levels of environmental unpredictability (interaction effect: 95% CI = [-0.025, -0.031], ΔR² = .04). Moreover, Study 3 found that the negative effect of water resource scarcity information on actual water consumption (interaction effect: 95% CI = [3.037, 22.097], ηp² = .050) and paper consumption (interaction effect: 95% CI = [0.021, 0.275], ηp² = .040), as well as on hypothetical forest resource consumption (interaction effect: 95% CI = [-0.053, 0.849], ηp² = .023), emerged only for people who received weaker environmental unpredictability information.

Across three studies, we provide evidence to support the moderation hypothesis that environmental unpredictability weakens the positive effect of ecological resource scarcity information on PEB, offering important theoretical and practical implications on the optimal use of resource scarcity to enhance PEB.

Introduction

The scarcity of ecological resources such as water and energy poses significant challenges in our current times. The reduction of renewable freshwater resources per capita by 55% from 1993 to 2014 emphasizes the urgency of addressing this issue [ 1 ]. According to the World Economic Forum (2019), water shortages remain a top concern for policymakers and business leaders worldwide. In response to resource scarcity, various entities, including governments, water utilities, and community-based organizations, have employed different strategies to promote resource conservation [ 2 ]. One of the most common approaches is to raise problem awareness by conveying information about resource scarcity [ 2 ]. For example, the fact that billions of people lack access to safe water was used in the 2023 World Water Day campaign to encourage more people to take action. Additionally, the Hong Kong SAR Government’s “Let’s Save 10L Water 2.0” campaign emphasizes the importance of conserving water by highlighting the limited availability of this resource.

Despite these efforts, it is important to recognize the complexity and interconnectedness of the world we live in, which makes predicting future environmental conditions challenging. Unforeseen events such as pathogen prevalence, natural disasters, wars, and financial crises illustrate the dynamic nature of our environment. In such an unpredictable world, can simply providing information about ecological resource scarcity lead to a significant increase in pro-environmental behaviors?

In the current research, we aimed to explore whether ecological resource scarcity information could promote pro-environmental behaviors effectively in the unpredictable world. We argued that ecological resource scarcity information is not necessarily useful in promoting pro-environmental behaviors and proposed that environmental unpredictability is a vital factor weakening the effect of ecological resource scarcity on resource consumption.

Uncertain association between ecological resource scarcity information and pro-environmental behaviors

Based on the information-motivation-behavioral skills (IMB) model, individuals are more likely to change their behavior when they are informed about a problem, motivated to act, and equipped with the skills to act [ 3 ]. In the environmental protection domain, there is a general lack of problem awareness about ecological resource scarcity [ 4 , 5 ]. This lack of awareness hinders individuals from engaging in pro-environmental behaviors (PEB), which refer to actions that enhance the quality of the environment, regardless of the intent behind them [ 6 ]. Resource conservation campaigns therefore often rely on resource scarcity information to encourage PEB [ 7 ]. In some empirical studies, resource scarcity information was found to be effective. For example, individuals living in regions that experience drought have a higher tendency to make behavioral changes to conserve water [ 8 , 9 ]. People who perceived stronger ecological resource scarcity reported higher resource-saving behavioral frequencies [ 10 ] and a higher frequency of PEB [ 11 ]. Water scarcity information has also been linked to significant decreases in water use [ 12 , 13 , 14 ].

However, we identified some conflicting evidence. Information about resource scarcity is often not sufficient to reduce resource consumption in interventions [ 15 ], and the effectiveness of awareness campaigns is unclear [ 16 ]. For example, laypeople judged presenting water resource scarcity information alone to be ineffective at promoting water-saving behaviors [ 10 ]. Energy scarcity information was not strong enough to affect attitudes, intentions, and behaviors toward saving electricity [ 17 ]. Moreover, resource scarcity information failed to modify resource consumption behaviors in experimental settings [ 2 , 18 ].

The uncertain relationship between resource scarcity and PEB can be understood through an evolutionary psychological approach. According to the life history theory, individuals may adopt various strategies for allocating resources [ 19 , 20 , 21 , 22 , 23 ]. Those who choose a slow life history strategy prioritize long-term benefits and future planning, which leads them to behave in an environmentally friendly manner for the sake of future generations. On the other hand, individuals adopting a fast life history strategy prioritize immediate gains over long-term consequences [ 24 ], resulting in less PEB.

This theory, combined with empirical evidence, suggests that the impact of resource scarcity on PEB may vary depending on the situation, implying that promoting pro-environmental actions may require considering factors beyond simply informing individuals about scarcity. If PEB is seen as an investment in the environment, people engaging in PEB expect long-term benefits from it. However, the environment does not always provide consistent long-term benefits, particularly in today’s unpredictable world. When the expected advantages of environmental protection become uncertain, individuals may prioritize immediate gains, exploit natural resources, and reduce their commitment to PEB. This study hence focuses on the situational factor related to the unpredictable environment, testing its importance in influencing individuals’ PEB under resource scarcity.

Moderating role of environmental unpredictability

Environmental unpredictability is defined as the level of spatial–temporal variation in environmental harshness [ 24 ]. Past empirical studies measured environmental unpredictability in diverse ways [ 25 ]. In the current research, we tried to capture both individual-related and nature-related environmental unpredictability in temporal or spatial dimensions. Individual-related environmental unpredictability is mostly indicated by residential changes, and changes in parental financial status for children [ 19 , 24 , 26 ]. It shows whether the structure of an environment, such as the social or economic environment in which one lives, changes over time. Nature-related environmental unpredictability focuses on the pattern of variation that makes environments unpredictable, such as unpredictability of weather and the unpredictability of natural disasters [ 25 ].

Based on the life history theory, the environment plays a crucial role in shaping individuals’ life history strategies [ 19 , 20 , 21 , 22 , 23 ]. In predictable environments individuals are more likely to adopt a slow life-history strategy, while highly unpredictable environments promote a fast life-history strategy [ 24 ]. Importantly, environmental unpredictability during childhood can influence short-sighted tendencies [ 27 , 28 , 29 , 30 ], and this effect can also be observed in adulthood [ 31 ]. In an unpredictable environment, individuals prioritize immediate desires over future needs because investing in long-term environmental protection may not yield future benefits. This has implications for PEB, as present efforts on environmental protection may not be effective in improving resource scarcity in the future when the environment is unpredictable.

There are two aspects that illustrate the expectation that PEB efforts may not pay off in unpredictable environments. Firstly, in an unpredictable environment, there is a flow of uncontrollable information, which makes it challenging for individuals to maintain strong beliefs that their actions can bring about positive outcomes, such as improving resource scarcity [ 32 ]. According to the theories of reasoned action and planned behavior, the impact of awareness of the problem on behavior is greater when individuals perceive a higher level of control over their actions [ 33 ]. Hence, environmental unpredictability not only reduces the perceived personal control but also creates a barrier between scarcity awareness and PEB.

Secondly, in unpredictable environments, individuals are more likely to fear free riders, which further hinders behavioral change towards environmental protection under resource scarcity. When deciding whether to take action to protect the environment, people consider whether others will cooperate. However, in unpredictable environments, the likelihood of others investing in PEB becomes uncertain as well, which induces a heightened fear of free riders. For instance, experimental games have shown that individuals behave less cooperatively and contribute less to public goods when the probability of benefiting from them is uncertain [ 34 ]. Moreover, studies have demonstrated that individuals are less likely to prioritize the interests of others over their own when environmental unpredictability is primed [ 31 , 35 ]. Due to the fear that others will not take action in an unpredictable environment, individual efforts to protect the environment may appear less effective in solving the issue of resource scarcity.

Taken together, stronger environmental unpredictability is associated with a fast life-history strategy characterized by low self-efficacy and high fear of free riders, which ultimately leads to less PEB performance in the face of resource scarcity. Both multilevel and individual-level studies have indicated that psychological traits similar to the fast life history strategy weaken the association between environmental problem awareness and actual PEB [ 10 , 36 ]. Besides, some indirect evidence revealed that resource scarcity and environmental unpredictability could lead to some psychological outcomes that go against promoting PEB. Specifically, poorer childhood and economic uncertainty jointly increase the present orientation and decrease the sense of control [ 37 , 38 ]. A strong present orientation and low sense of control discourage people from taking actions to save resources [ 39 ]. With the above in mind, the following moderation hypothesis was proposed:

Hypothesis: Environmental unpredictability will moderate the effect of ecological resource scarcity on PEB. Specifically, ecological resource scarcity information would play a less effective role in promoting PEB when environmental unpredictability is stronger.

Current research

In the current research, we conducted three studies to test our hypothesis on moderation. In Study 1, we examined whether perceived general environmental unpredictability would moderate the relationship between perceived resource scarcity and daily PEB frequencies. Study 2 took it a step further by manipulating resource scarcity to test whether the positive effect of ecological resource scarcity information on forest resource consumption intention would be weakened by individual-related environmental unpredictability, specifically the frequency of residential changes. Importantly, to increase ecological validity, Study 3 manipulated the information on both ecological resource scarcity and nature-related environmental unpredictability, and measured real water and paper consumption using a newly developed washing-hands paradigm.

Study 1

To examine the moderating effect of environmental unpredictability on the relationship between ecological resource scarcity and daily PEB frequency, we conducted a cross-sectional survey for Study 1. We hypothesised that ecological resource scarcity would predict higher frequencies of daily PEB for individuals who perceived the environment as predictable. However, we expected this positive association to diminish for individuals who perceived high levels of environmental unpredictability.

Participants

To ensure sufficient statistical power (80% power, α = .05) to detect a small-to-medium-sized effect for our moderation hypothesis, based on previous research in the same domain [ 10 ], we estimated that a sample size of 256 participants would be required using G*Power 3.1 [ 40 ]. Participants were recruited from a Chinese online survey platform ( www.wjx.cn ) and received monetary compensation for their participation. The survey platform utilized a voluntary opt-in panel, inviting users to complete the questionnaire. A total of 263 participants from China completed the survey. It is important to note that data collection was planned to conclude once 256 observations were collected within a three-week period.

The average age of the participants was 32.21 ± 7.11 years (ranging from 18 to 66 years), with 44.1% of them being male ( N  = 116). In terms of educational attainment, 1.9% held a middle-school degree or below, 1.9% had a high school degree, 8.7% held a junior college degree, 79.8% had a bachelor’s degree, and 7.6% had a master’s degree or higher. The average annual family income was 23.65 ± 21.04 ten thousand yuan.

Procedure and measures

To address the potential influence of priming participants’ perceived resource scarcity through items expressing the seriousness of resource scarcity [ 11 , 41 , 42 ], we carefully structured the data collection process. Firstly, we measured the dependent variable, PEB frequencies. Following this, participants completed the measure of perceived environmental unpredictability, and subsequently rated their perceived ecological resource scarcity. Additionally, to account for potential bias in self-reported PEB due to social desirability [ 43 ], we included a measurement of social desirability as a control variable. Finally, participants provided their demographic information, including age, gender, educational attainment, and annual personal income.

Perceived resource scarcity

The measurement of perceived ecological resource scarcity, consisting of 5 items, was adapted from a previous study conducted by Gu and her colleagues [ 10 ] (Cronbach’s α in the current study is 0.79). Participants were asked to indicate their level of agreement with statements such as “There are not enough resources for everyone in the place where I live” and “In the place where I live, I have already noticed some signs of resource scarcity.” Each item was rated on a 7-point Likert scale, ranging from 1 ( strongly disagree ) to 7 ( strongly agree ). The mean score of the entire scale was computed. Higher scores on this scale indicated higher levels of perceived ecological resource scarcity.
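For readers who want to script this scoring step, a minimal Python sketch is shown below. It is an illustration only (not the authors' code): the column names scarcity_1 … scarcity_5 and the simulated responses are assumptions, so the resulting alpha will not match the reported value of 0.79.

```python
import numpy as np
import pandas as pd

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance of the total score)."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Simulated 1-7 ratings for the five scarcity items (hypothetical column names).
rng = np.random.default_rng(0)
items = pd.DataFrame(rng.integers(1, 8, size=(263, 5)),
                     columns=[f"scarcity_{i}" for i in range(1, 6)])

scarcity_mean = items.mean(axis=1)  # scale score: mean of the five items
print(f"Cronbach's alpha = {cronbach_alpha(items):.2f}")
```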

Perceived general environmental unpredictability

The item “For me, the environment we live in is unpredictable” developed by Reynolds and McCrea [ 44 ], was used to measure how participants perceived the general unpredictability of their environment. Participants rated this item on a 7-point Likert scale, ranging from 1 ( strongly disagree ) to 7 ( strongly agree ). Higher score indicated stronger perceived unpredictability.

Daily PEB frequency

Participants were asked to rate the frequency of PEB in their daily lives on a scale from 1 ( never ) to 5 ( always ). They were presented with six common resource conservation actions and asked to consider their behaviors in the year prior to the survey. The items are “do not turn the tap to the maximum when using water”, “switch off the lights when you leave”, “set the air conditioner’s temperature to 26–28 degrees centigrade in summer”, “buy and use energy-efficient appliances”, “avoid using disposable tableware whenever possible”. These six PEB were then converted into a PEB frequency scale, and a mean score was calculated for each participant. Higher scores indicated a higher frequency of PEB. Although the Cronbach’s α for the PEB scale was relatively low at .50, we decided to keep the measure because the items were face-valid. It is worth noting that removing any of the items did not improve the Cronbach’s alpha. Consistent with findings from previous studies, different types of PEB were not completely consistent [ 45 , 46 ]. And importantly, using the common score derived from the six items did not significantly alter the results.

Social desirability

Social desirability was measured using the liar subscale of the Eysenck Personality Questionnaire (EPQ) [ 47 ]. This subscale consists of 12 items, with participants answering each question with a “ Yes ” or “ No ” response. A code of 1 was assigned to “ Yes ” and 0 to “ No ”. Higher scores on this subscale indicated a stronger tendency towards social desirability. The measure demonstrated good internal consistency with a Cronbach’s α of .75.

Correlation analyses

Prior to hypothesis testing, we confirmed that all variables were approximately normally distributed, with skewness values ranging from -0.89 to +0.05 and kurtosis values ranging from -0.72 to +0.77. We computed Pearson’s correlation coefficients to explore the relations among the studied variables (see Table  1 for descriptive statistics and intercorrelation coefficients). We found a marginally significant positive relationship between perceived ecological resource scarcity and PEB frequency ( r  = 0.12, p  = .058), and no correlation between environmental unpredictability and PEB frequency ( r  = -0.07, p  = .29). Importantly, as expected, social desirability was positively associated with PEB ( r  = 0.30, p  < .001), indicating that it should be controlled for in subsequent analyses.
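As an illustration only (not the authors' code), the same screening and correlation steps can be scripted in Python; the variable names and simulated values below are assumptions, so the statistics will not reproduce those reported.

```python
import numpy as np
import pandas as pd
from scipy.stats import pearsonr

# Simulated stand-ins for the study variables (hypothetical column names).
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "scarcity": rng.normal(4.5, 1.0, 263),
    "unpred": rng.normal(4.0, 1.3, 263),
    "social_des": rng.integers(0, 13, 263).astype(float),
    "peb": rng.normal(3.5, 0.5, 263),
})

print(df.skew())            # skewness of each variable
print(df.kurt())            # excess kurtosis of each variable
print(df.corr().round(2))   # Pearson correlation matrix

# p-value for a single pair, e.g., scarcity and PEB frequency.
r, p = pearsonr(df["scarcity"], df["peb"])
print(f"r = {r:.2f}, p = {p:.3f}")
```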

Moderation analyses

To examine the impact of environmental unpredictability on the relationship between perceived ecological resource scarcity and PEB, we used the PROCESS macro for SPSS [ 48 ]. Controlling for social desirability, we found a significant interaction effect between perceived ecological resource scarcity and environmental unpredictability ( b  = -0.05, SE  = 0.02, t  = -2.26, p  = .025, 95% CI  = [-0.09, -0.01], ΔR² = 0.018). To further understand this interaction, we conducted a floodlight analysis [ 49 ]. The results showed that perceived ecological resource scarcity was positively and significantly associated with PEB when environmental unpredictability was below 4.41 ( b  = 0.07, SE  = 0.03, t  = 1.97, p  = .05, 95% CI  = [0.000, 0.136]), but not when it was above 4.41.

Additionally, we performed a simple slope analysis to examine the relationship between perceived ecological resource scarcity and PEB for individuals with different levels of perceived environmental unpredictability, with social desirability controlled (see Fig. 1). The results indicated that perceived ecological resource scarcity positively predicted PEB for individuals with lower levels of environmental unpredictability (-1 SD), b = 0.13, SE = 0.05, t = 2.83, p = .005, 95% CI = [0.039, 0.219]. However, this relationship was not significant for individuals with higher levels of environmental unpredictability (+1 SD), b = -0.02, SE = 0.05, t = -0.35, p = .73, 95% CI = [-0.114, 0.079].
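Conceptually, the PROCESS Model 1 analysis is an ordinary least-squares regression with a product term, followed by simple slopes at ±1 SD of the moderator. Below is a minimal sketch of that model in Python with statsmodels, assuming hypothetical column names (peb, scarcity, unpred, social_des) and simulated data; it is not the authors' analysis code.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated data with hypothetical column names.
rng = np.random.default_rng(0)
n = 263
df = pd.DataFrame({
    "scarcity": rng.normal(4.5, 1.0, n),
    "unpred": rng.normal(4.0, 1.3, n),
    "social_des": rng.integers(0, 13, n).astype(float),
})
df["peb"] = 3 + 0.1 * df["scarcity"] - 0.05 * df["scarcity"] * (df["unpred"] - 4) + rng.normal(0, 0.5, n)

# Mean-center the predictor and moderator so lower-order terms are interpretable.
df["scarcity_c"] = df["scarcity"] - df["scarcity"].mean()
df["unpred_c"] = df["unpred"] - df["unpred"].mean()

model = smf.ols("peb ~ scarcity_c * unpred_c + social_des", data=df).fit()
print(model.summary())

# Simple slopes of scarcity at -1 SD and +1 SD of unpredictability.
b1, b3 = model.params["scarcity_c"], model.params["scarcity_c:unpred_c"]
sd_w = df["unpred_c"].std(ddof=1)
for label, w in [("-1 SD", -sd_w), ("+1 SD", +sd_w)]:
    print(f"slope of scarcity at {label} unpredictability: {b1 + b3 * w:.3f}")
```

Mean-centering leaves the interaction test unchanged but makes the lower-order coefficients interpretable at average levels of the other variable.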

Figure 1. The effect of resource scarcity on PEB at different levels of environmental unpredictability (Study 1)

Furthermore, controlling for demographic variables did not significantly change the results of moderation analysis. In summary, individuals who perceived the environment as more predictable were more likely to engage in PEB when facing ecological resource scarcity.

Brief discussion

Study 1 identified a moderating effect of environmental unpredictability on associations between perceived ecological resource scarcity and daily PEB. Individuals who perceived the environment as less unpredictable were more likely to adopt environmentally friendly ways to respond to ecological resource scarcity. However, it is important to consider the potential influence of responding to the PEB items on participants’ perceptions of ecological resource scarcity. The act of responding to these items may have directed participants’ attention towards environmental issues, potentially leading to an implicit increase in their perceived ecological resource scarcity. Therefore, it is not possible to infer the direction of the causal relationship between perceived ecological resource scarcity and PEB frequencies solely from correlational data. In addition, using a single item for measuring environmental unpredictability may raise concerns about the comprehensiveness of measurement. To address these limitations, we conducted Study 2, where we manipulated perceived ecological resource scarcity in order to demonstrate its causal effect, and further explore the moderating effect of environmental unpredictability by using another measurement.

Furthermore, it is important to note that the observed moderation effect size was small, which could be attributed to the fact that we measured various types of PEB in this study. According to the Goal System Theory, PEB can be motivated by multiple goals. In the context of resource scarcity, individuals who perceive the environment as more predictable are more likely to prioritize environmental protection for the benefit of future generations, especially if they themselves also stand to gain [ 50 ]. For instance, engaging in electricity-saving behaviors not only benefits the environment in the long run but also reduces personal electricity bills. In other words, personal benefits may matter. In our subsequent studies, we will focus on examining PEB that does not involve salient personal benefits in order to highlight the moderating effect of environmental unpredictability.

Study 2

In Study 2, we sought to replicate the moderating effect of environmental unpredictability on the link between ecological resource scarcity and PEB by manipulating resource scarcity information. We proposed that receiving ecological resource scarcity information would increase PEB intention for individuals with lower levels of environmental unpredictability but that the effect would disappear for individuals with higher levels of environmental unpredictability.

Participants

To test our moderation hypothesis, we determined that a sample size of 107 would be necessary to achieve 80% power (α = .05) to detect a small-to-medium-sized effect ( f² = .075), based on previous research [ 10 ], using G*Power 3.1 [ 40 ]. We established the rule for ending data collection prior to gathering data, stipulating that the survey link would be closed after obtaining more than 150 observations. Ultimately, we recruited 155 Chinese adults who completed an anonymous online questionnaire, and all of these responses were valid.
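The G*Power result can be checked by hand from the power of the F test for a single regression term. The sketch below uses SciPy's noncentral F distribution under one common convention (noncentrality λ = f²·N) and assumes a model with three predictors (condition, unpredictability, and their interaction); it is an illustration, not the authors' calculation.

```python
from scipy.stats import f as f_dist, ncf

def power_f2(f2: float, n: int, n_predictors: int, df_test: int = 1, alpha: float = 0.05) -> float:
    """Power of the F test for one regression term, assuming noncentrality lambda = f^2 * N."""
    df_denom = n - n_predictors - 1
    f_crit = f_dist.ppf(1 - alpha, df_test, df_denom)
    return 1 - ncf.cdf(f_crit, df_test, df_denom, f2 * n)

# Smallest N that reaches 80% power for the interaction term (f^2 = .075).
n = 10
while power_f2(0.075, n, n_predictors=3) < 0.80:
    n += 1
print(n)  # lands roughly in the region of the 107 participants reported above
```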

The participants had an average age of 32.91 ± 10.10 years (range = 18–59 years) and 41.90% of them were males ( N  = 65). In terms of educational attainment, 9.70% held a high school degree, 16.1% held a junior college degree, 66.6% held a bachelor’s degree, and 13.5% held a master’s degree or higher. The average annual personal income was 11.18 ± 44.58 ten thousand yuan.

In the present study, participants reported their demographic information first. Then, environmental unpredictability was measured. Next, participants were randomly assigned to one of two experimental conditions to read a news article, where exposure to the information of resource scarcity (vs. control condition) was the manipulated factor. Finally, PEB intention was measured using a forest management task.

Manipulation of ecological resource scarcity information

Participants were assigned at random to read one of two news articles. The articles were created specifically to manipulate perceptions of ecological resource scarcity. In the scarcity group ( n  = 77), participants read an article titled “Interpretation of China’s Resources through Big Data: Invisible Resource Scarcity in China”, which highlighted the severity of natural resource scarcity in China. In the control group ( n  = 78), participants read an article of similar length that aimed to evoke similar levels of negative arousal. This article was titled “Interpretation of Sleep through Big Data: Invisible Sleeping Problems in China” and discussed sleep issues in China. To ensure the credibility of the mock news articles, participants were informed that the articles were sourced from The People’s Daily , a reputable Chinese newspaper.

Immediately after reading their respective article, participants rated their perception of ecological resource scarcity using a 7-point Likert scale ranging from “strongly disagree” (1) to “strongly agree” (7). The item presented was: “Currently, I believe that we live in an environment where natural resources are extremely scarce.” As an additional manipulation check, participants also rated their current mood on one item using a 7-point Likert scale (1 = “very negative” to 7 = “very positive”).

Environmental unpredictability

At the individual level, environmental unpredictability is most often indexed by residential changes [ 24 , 25 ]. The frequency of residential changes reflects whether the structure of the environment one lives in changes over time, which is an important aspect of environmental unpredictability. Therefore, Study 2 used the number of times individuals had moved in the past to represent their environmental unpredictability; higher scores indicate stronger environmental unpredictability ( M  = 3.59, SD  = 2.17, Min  = 0, Max  = 11). The variable was approximately normally distributed, with skewness = 0.64 and kurtosis = 0.65, so the raw moving-frequency scores were used in the analyses.

PEB intention

A forest management task was used to measure PEB intention, specifically forest resource conservation intention [ 51 ]. Participants were asked to imagine that they were the owner of a timber company and had to compete with three other companies to harvest timber in the same forest. They needed to cut down as many trees as possible for their companies to profit and thrive; however, rapid deforestation could lead to forest destruction. Participants then answered one question about deforestation rate on a 7-point Likert scale, ranging from 1 ( very slow ) to 7 ( very fast ): “How fast do you want your company to cut down trees?” They also answered one question about forest resource consumption, ranging from 1 to 100 acres: “How many acres of trees do you expect your company to cut down?” Given that both questions index demand for forest resources, the average of participants’ reversed standardized scores on the two questions was computed to represent PEB intention. Higher scores indicate stronger forest resource conservation intention. We also treated the two items separately to test our hypothesis; those results can be found in the Additional file 1.
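As an illustration of this scoring rule (not the authors' code), the sketch below standardizes the two items, reverses their sign, and averages them; the responses are made up.

```python
import pandas as pd

# Hypothetical responses: deforestation speed (1-7) and acres to cut (1-100).
df = pd.DataFrame({"speed": [2, 5, 7, 3], "acres": [20, 60, 95, 35]})

# z-score each item, reverse the sign (higher raw scores mean more consumption),
# then average the two reversed z-scores into a conservation-intention index.
z = (df - df.mean()) / df.std(ddof=1)
df["conservation_intention"] = (-z).mean(axis=1)
print(df)
```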

Manipulation checks

The manipulation of resource scarcity information was successful. Specifically, participants in the scarcity condition ( M  = 5.17, SD  = 1.25) reported higher awareness of ecological resource scarcity than those in the control condition ( M  = 4.55, SD  = 1.56), t (153) = 2.72, p  = .007, 95% CI = [0.169, 1.066], d  = 0.44. Furthermore, there was no difference in mood between the two conditions ( M scarcity  = 5.06, SD scarcity  = 1.19; M control  = 4.92, SD control  = 1.23), t (153) = 0.73, p  > .05, 95% CI = [-0.526, 0.242].
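These manipulation checks are independent-samples t tests with Cohen's d as the effect size. A minimal sketch in Python on simulated ratings follows; the means and SDs used for simulation only loosely resemble those reported and will not reproduce them.

```python
import numpy as np
from scipy.stats import ttest_ind

def cohens_d(a: np.ndarray, b: np.ndarray) -> float:
    """Cohen's d using a pooled standard deviation."""
    na, nb = len(a), len(b)
    pooled_var = ((na - 1) * a.var(ddof=1) + (nb - 1) * b.var(ddof=1)) / (na + nb - 2)
    return (a.mean() - b.mean()) / np.sqrt(pooled_var)

# Hypothetical manipulation-check ratings (1-7) for the two conditions.
rng = np.random.default_rng(1)
scarcity = rng.normal(5.2, 1.25, 77)
control = rng.normal(4.6, 1.55, 78)

t, p = ttest_ind(scarcity, control)  # Student's t, equal variances assumed
print(f"t = {t:.2f}, p = {p:.3f}, d = {cohens_d(scarcity, control):.2f}")
```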

Hypothesis test

To test the moderating effect of environmental unpredictability, we regressed forest resource conservation intention on ecological resource scarcity information (dummy coded: 1 = scarcity condition, 0 = control condition), environmental unpredictability, and their interaction, using the PROCESS macro (Model 1, 5000 bootstrap samples) for SPSS [ 48 ]. The results showed a significant main effect of ecological resource scarcity information ( b  = 0.63, SE  = 0.23, t  = 2.78, p  = .006, 95% CI  = [0.183, 1.078]), and no main effect of environmental unpredictability ( b  = 0.04, SE  = .03, t  = 1.23, p  > .05, 95% CI  = [-0.026, 0.109]).

Results showed a significant interaction effect ( b  = -0.14, SE  = 0.06, t  = -2.54, p  = .012, 95% CI  = [-0.025, -0.031], ΔR² = .04), meaning that the effect of ecological resource scarcity information on forest resource conservation intention was moderated by environmental unpredictability. Specifically, for individuals with lower levels of environmental unpredictability (below 1 SD ), participants in the scarcity condition exhibited stronger forest resource conservation intention relative to those in the control condition, b  = 0.43, SE  = 0.16, t  = 2.63, p  = .0095, 95% CI = [0.107, 0.755]. In contrast, for individuals with higher levels of environmental unpredictability (above 1 SD ), the ecological resource scarcity manipulation had no effect on forest resource conservation intention, b  = -0.17, SE  = 0.17, t  = -1.04, p  > .05, 95% CI = [-0.512, 0.158] (see Fig.  2 ).

Figure 2. The effect of resource scarcity × environmental unpredictability on forest resource conservation intention (Study 2)

Besides, a floodlight analysis was performed to decompose the interaction [ 49 ]. It revealed that ecological resource scarcity manipulation increased forest resource conservation intention for any value of environmental unpredictability less than 2.78 ( b  = 0.24, SE  = 0.12, t  = 1.98, p  = .05, 95% CI = [0.000, 0.487]), but not for any value greater than 2.78. More importantly, the above findings did not significantly differ after controlling for demographic variables.
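The floodlight (Johnson–Neyman) procedure identifies the moderator values at which the conditional effect of the manipulation crosses the significance threshold. Below is a sketch of that computation from the regression coefficient covariance matrix, assuming hypothetical variable names (intention, cond, moves) and simulated data; it illustrates the technique rather than reproducing the reported boundary of 2.78.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from scipy.stats import t as t_dist

# Hypothetical data: cond (0/1 condition), moves (residential moves), intention (composite).
rng = np.random.default_rng(2)
n = 155
df = pd.DataFrame({"cond": rng.integers(0, 2, n), "moves": rng.poisson(3.6, n)})
df["intention"] = 0.6 * df["cond"] - 0.14 * df["cond"] * df["moves"] + rng.normal(0, 1, n)

fit = smf.ols("intention ~ cond * moves", data=df).fit()
b1, b3 = fit.params["cond"], fit.params["cond:moves"]
V = fit.cov_params()
v11 = V.loc["cond", "cond"]
v33 = V.loc["cond:moves", "cond:moves"]
v13 = V.loc["cond", "cond:moves"]

# Johnson-Neyman points: moderator values W where the conditional effect b1 + b3*W
# is exactly significant at alpha = .05, found by solving a quadratic in W.
t_crit = t_dist.ppf(0.975, fit.df_resid)
a = b3**2 - t_crit**2 * v33
b = 2 * (b1 * b3 - t_crit**2 * v13)
c = b1**2 - t_crit**2 * v11
roots = np.roots([a, b, c])
print("J-N boundaries:", np.sort(roots.real))
```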

Brief discussion

Study 2 replicated the results of Study 1 and showed that environmental unpredictability weakened the positive effect of ecological resource scarcity information on resource conservation. Presenting ecological resource scarcity information effectively increased forest conservation intention, particularly for individuals who had moved less frequently, indicating lower levels of environmental unpredictability.

However, the results of Study 2 were limited in several aspects. First, environmental unpredictability can be caused either by individuals themselves, such as frequent relocation, or by nature, such as unforeseen natural disasters. The present study focused on individual-related environmental unpredictability only. Secondly, the measurement of resource conservation intention instead of actual behaviors may have restricted the ecological validity of the findings. Thirdly, it is possible that the moderation effect was underestimated. In the forest management task, the psychological experience of forest resource scarcity may have been primed in both conditions, as participants were informed about the need to compete with other companies for limited forest resources. Consequently, participants’ decisions may have been heavily influenced by the forest management scenario.

Study 3

Based on the above discussion of Study 2, Study 3 measured actual PEB to increase ecological validity and focused on nature-related environmental unpredictability to improve generalizability. In addition, hypothetical forest resource conservation was also measured to replicate the findings of Study 2. We proposed that receiving ecological resource scarcity information would increase actual resource conservation and forest resource conservation intention under predictable environmental conditions, but that this effect would disappear under unpredictable environmental conditions.

Participants

We conducted a power analysis in G*Power 3.1 using the moderating effect size from Study 2, which suggested that a sample size of 135 would be required to achieve 80% power ( α  = .05) [ 40 ]. A total of 142 college students in Beijing, China were recruited to participate in the experiment in exchange for monetary compensation. Six participants who failed to finish all experimental tasks were excluded from data analysis. It is worth noting that the rule for terminating data collection was decided before data collection began: the experiment was terminated when more than 135 observations had been collected within two weeks.

The average age of the participants was 21.87 ± 2.67 years (range = 17–29 years), and 75.00% of them were female ( N  = 102). The average annual household income was 12.37 ± 17.10 thousand yuan.

Research design and procedure

A 2 (water resource scarcity vs. control) × 2 (unpredictable vs. predictable environment) between-subject design was used.

Before arriving at the lab, participants were asked to fill out their demographic information in an online survey. Upon arrival at the lab, participants were randomly assigned to one of four groups to read a newspaper. These newspapers were designed to look like real Beijing Daily newspapers. Each newspaper contained two pieces of news: one designed to manipulate water resource scarcity information, and another designed to manipulate environmental unpredictability information. Then, actual water and paper consumption data were recorded in a washing-hands paradigm. Finally, forest resource consumption intention was measured.

Manipulation of water resource scarcity information

Similar to Study 2, in the scarcity condition ( n  = 67) the news article described the seriousness of water resource scarcity in Beijing, whereas in the control condition ( n  = 69) the news article described Beijing residents’ sleep problems. After reading the article, participants responded to one item on perceived ecological resource scarcity on a 7-point Likert scale (1 = “strongly disagree” to 7 = “strongly agree”), adapted from the New Ecological Paradigm (NEP) scale: “The earth has plenty of natural resources if we just learn how to develop them” [ 52 ].

Manipulation of environmental unpredictability information

In the unpredictable condition ( n  = 68), the news article was titled “Natural Disasters are Unpredictable and Difficult to Prevent: 9.578 million People were Affected by Various Natural Disasters in January”. The article conveyed that natural disasters happened frequently, that many people had been affected in January, and that there was no way to predict or prevent disasters. By contrast, in the predictable condition ( n  = 68), the news stated that even though natural disasters are frequent in China and many people were affected, devices are now available that can help predict and prevent disasters. The title was “Prediction and Prevention of the Occurrence of Natural Disasters is Possible: 9.578 million People were Affected by Various Natural Disasters in January”.

Manipulation check items were rated right after reading the news article. Participants responded to 2 items about perceived unpredictability on a 7-point Likert scale (1 = “ strongly disagree ” to 7 = “ strongly agree ”): “The environment where I live is unstable”, and “The environment where I live is unpredictable”. The average score of the two items was computed such that a higher score indicated stronger perceived unpredictability.

Actual water and paper resource consumption

To conceal the real purpose of the study, participants were told that the research was about studying palms and that we would therefore collect their fingerprints. In the washing-hands paradigm, participants were asked to use an inkpad and leave their fingerprints on a sheet of white paper, ostensibly so that their palms could be studied. After that, they had to wash their hands in the lab, and the amount of water and paper they used was recorded.

To measure the water consumption, the experimenter placed one measuring cylinder under the washbasin, and the measuring cylinder was linked to the washbasin’s outlet pipe. Importantly, participants could not see the cylinder. To measure their paper consumption, a bag of paper was placed on the washbasin for the participants to use. Besides, to exclude the experimenter effect, participants washed their hands without experimenter observation. Importantly, participants did not know that their behaviors were recorded, and participants were not aware of the real purpose of the study (see Fig.  3 ). All of the participants were debriefed at the end of the study.

Figure 3. Set-up of the washing-hands paradigm

Considering that water and paper consumption for washing ink from hands might be affected by palm size, we recorded the palm area for each participant based on their fingerprints. Then, actual resource consumption was represented by average water consumption and average paper consumption, calculated by water or paper consumption divided by palm area.

Hypothetical forest resource conservation

As in Study 2, the forest management task was used. After reading the scenario, participants answered the question, “How many acres of trees do you expect your company to cut down?”, ranging from 1 to 100 acres. A higher score on this measure indicates a lower intention for forest resource conservation.

Manipulation checks

Perceived resource scarcity was significantly greater in the scarcity condition ( n  = 67, M  = 5.61, SD  = 1.19) than that in the control condition ( n  = 69, M  = 5.13, SD  = 1.38), t (134) = 2.17, p  = .032, 95%CI = [0.043, 0.920], d  = 0.37. Perceived unpredictability was also significantly greater in the unpredictable condition ( n  = 68, M  = 5.13, SD  = 1.28) compared to the predictable condition ( n  = 68, M  = 4.55, SD  = 1.39), t (134) = 2.51, p  = .013, 95%CI = [0.121, 1.026], d  = 0.43. Overall, the manipulations were successful and valid.

To examine the interaction effect between water resource scarcity and environmental unpredictability on resource conservation, two-factor MANOVAs were conducted.

For average water consumption, gender, age, household income, and cleanliness habits were included as control variables. The findings revealed that the main effect of scarcity was significant ( F (1,128) = 5.44, p  = .021, 95% CI = [-14.168, -0.673], ηp² = .041), and the main effect of environmental unpredictability was not significant ( F (1,128) = 0.23, p  > .05, 95% CI = [-7.437, 5.984]). As expected, the interaction was significant ( F (1,128) = 6.81, p  = .01, 95% CI = [3.037, 22.097], ηp² = .050). Simple effect analysis revealed that under the predictable condition, average water consumption was significantly lower in the scarcity condition ( M  = 25.29, SD  = 13.87) than in the control condition ( M  = 37.13, SD  = 13.91), F (1,128) = 12.25, p  < .001, 95% CI = [-18.535, -5.146], ηp² = .087. However, under the unpredictable condition, there was no significant difference between the scarcity condition ( M  = 32.71, SD  = 13.93) and the control condition ( M  = 31.98, SD  = 13.90), F (1,128) = 0.05, p  > .05, 95% CI = [-5.984, 7.437] (see Fig.  4 ).
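The analyses above were run in SPSS; an analogous two-way ANCOVA with partial η² can be sketched in Python with statsmodels. The column names, covariates, and simulated data below are assumptions for illustration, not the study data.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

# Simulated stand-ins: two dummy-coded factors, four covariates, and
# palm-area-normalized water use (hypothetical column names).
rng = np.random.default_rng(3)
n = 136
df = pd.DataFrame({
    "scarcity": rng.integers(0, 2, n),
    "unpredict": rng.integers(0, 2, n),
    "gender": rng.integers(0, 2, n),
    "age": rng.normal(22, 2.7, n),
    "income": rng.normal(12, 17, n),
    "cleanliness": rng.normal(4, 1, n),
})
df["water_per_area"] = 33 - 10 * df["scarcity"] * (1 - df["unpredict"]) + rng.normal(0, 14, n)

# Sum-to-zero contrasts so Type III sums of squares match conventional factorial ANOVA output.
model = smf.ols(
    "water_per_area ~ C(scarcity, Sum) * C(unpredict, Sum)"
    " + gender + age + income + cleanliness",
    data=df,
).fit()
table = anova_lm(model, typ=3)

# Partial eta squared for each effect: SS_effect / (SS_effect + SS_residual).
table["eta_p2"] = table["sum_sq"] / (table["sum_sq"] + table.loc["Residual", "sum_sq"])
print(table.round(3))
```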

Figure 4. Average water consumption as a function of resource scarcity and environmental unpredictability manipulations (Study 3)

Moreover, the results for average paper consumption showed a similar pattern. The main effects of scarcity ( F (1,128) = 0.42, p  > .05, 95% CI = [-0.137, 0.042]) and environmental unpredictability ( F (1,128) = 0.70, p  > .05, 95% CI = [-0.143, 0.036]) were not significant. A significant interaction effect was detected, F (1,128) = 5.30, p  = .023, 95% CI = [0.021, 0.275], ηp² = .040. As predicted, in the predictable condition, paper consumption was significantly lower in the scarcity condition ( M  = 0.28, SD  = 0.18) than in the control condition ( M  = 0.38, SD  = 0.18), F (1,128) = 4.39, p  = .038, 95% CI = [-0.184, -0.005], ηp² = .033. No significant difference in paper consumption was observed between the scarcity condition ( M  = 0.33, SD  = 0.19) and the control condition ( M  = 0.28, SD  = 0.19) in the unpredictable condition, F (1,128) = 1.39, p  > .05, 95% CI = [-0.036, 0.143] (see Fig.  5 ).

Figure 5. Average paper consumption as a function of resource scarcity and environmental unpredictability manipulations (Study 3)

More importantly, the above findings did not change appreciably when the control variables were omitted from the analysis, nor when the raw scores of water and paper consumption were used. Detailed results can be found in the Additional file 1.

Hypothetical forest resource consumption was log-transformed because it was not normally distributed. The findings showed that the main effects of scarcity ( F (1,130) = 1.800, p  > .05, 95% CI = [-0.363, 0.271]) and environmental unpredictability ( F (1,130) = 2.189, p  > .05, 95% CI = [-0.688, 0.049]) were not significant. A marginally significant interaction effect was detected, F (1,130) = 3.04, p  = .084, 95% CI = [-0.053, 0.849], ηp² = .023. As predicted, in the predictable condition, forest resource consumption was significantly lower in the scarcity condition ( M raw  = 30.76, SD raw  = 14.97) than in the control condition ( M raw  = 40.74, SD raw  = 25.41), F (1,130) = 4.71, p  = .032, 95% CI = [-0.672, -0.031], ηp² = .035. No significant difference in forest resource consumption was observed between the scarcity condition ( M raw  = 42.15, SD raw  = 20.41) and the control condition ( M raw  = 44.00, SD raw  = 28.17) in the unpredictable condition, F (1,130) = 0.08, p  > .05, 95% CI = [-0.027, 0.363] (see Fig.  6 ).

Figure 6. Forest resource consumption as a function of resource scarcity and environmental unpredictability manipulations (Study 3)

Brief discussion

As expected, Study 3 replicated the findings of the previous two studies. We identified a moderating effect of nature-related environmental unpredictability on the effect of ecological resource scarcity information on actual PEB. Specifically, individuals who received weaker environmental unpredictability information exhibited more water-saving and paper-saving behaviors, and were inclined to harvest fewer forest resources in the face of water scarcity. Interestingly, even though our manipulation focused solely on water scarcity, both paper consumption and forest resource consumption were affected as well, despite their lack of direct association with water. These results highlight the robust influence of resource scarcity information and environmental unpredictability on PEB, thereby enhancing the ecological validity of our findings.

General discussion

Focusing on the global issue of environmental unpredictability, the current research explored when showing resource scarcity information promotes PEB. In Study 1, a cross-sectional study, we discovered that perceived resource scarcity effectively enhanced PEB, but only when individuals perceived the environment as less unpredictable. Furthermore, by manipulating scarcity information, Study 2 revealed that presenting ecological resource scarcity information decreased forest resource consumption intention only for individuals with lower levels of environmental unpredictability. Moreover, Study 3, an experiment with high ecological validity, found that the negative effect of water resource scarcity information on actual water and paper consumption, as well as on hypothetical forest resource consumption, emerged only for people who received weaker environmental unpredictability information.

Theoretical contribution and practical implication

Environmental unpredictability is an important concept in life history theory. Numerous studies have verified that childhood environmental unpredictability plays a crucial role in shaping life history strategies [ 27 , 28 , 30 , 37 , 53 ]. However, little is known about how adulthood environmental unpredictability functions. The current research provided preliminary evidence that unpredictability in adulthood can also function in shaping behaviors. Adulthood unpredictability, including both individual- and nature-related environmental unpredictability, makes individuals less willing to sacrifice present interests for future environmental benefits when facing scarcity.

Some psychological factors, including those discussed earlier (such as short-sighted tendencies, fear of free riders, and perceived lack of control), as well as self-interest and competitive orientation, can serve as potential mechanisms underlying the moderating effect of environmental unpredictability. Self-interest and competitive orientation are important ways for individuals to survive in a harsh environment. Individuals may adopt a competitive orientation to obtain more benefits for themselves during periods of scarcity, and they may also seek to weaken others’ interests. These factors have been identified as “Stone Age” psychological biases leading to environmental destruction [ 54 ]. To better respond to ecological resource scarcity, the current research demonstrated the importance of creating a predictable and peaceful world, thereby removing psychological barriers to mitigating ecological resource scarcity.

The IMB model provides a comprehensive framework for advancing resource conservation research and intervention implementation [ 3 ]. Even though the IMB model captures three vital components of behavior change, namely information, motivation, and behavioral skills, the psychological barriers caused by environmental unpredictability have been ignored. As illustrated in a recent meta-analysis [ 15 ], across 38 interventions including IMB components, water use was reduced by only 5.9% on average relative to control groups, a small effect size, and the magnitude of the effect varied widely across interventions. According to the findings of the current research, levels of environmental unpredictability may be an underlying reason for this varied efficacy. Therefore, to strengthen resource consumption reduction interventions based on the IMB model, it is necessary to take environmental unpredictability into consideration.

Importantly, the current research developed a new paradigm, the washing-hands paradigm, to measure actual resource consumption in the lab. As illustrated in previous studies, there are gaps between self-reported and objective behaviors [ 43 ], yet over 80% of recent studies relied solely on self-reported data [ 55 ]. The washing-hands paradigm sets up a situation that captures actual water and paper consumption data, and confounding variables, such as habits, individual differences in palm size, and social desirability, can be controlled within it. This paradigm can help establish causality and improve the ecological validity of lab experiments, advancing resource conservation research.

The current research also provides some vital practical implications for both policymakers and environmental organizations. Our data suggest that creating a predictable environment can help promote resource conservation when people face ecological resource scarcity information. Governments should try to eliminate unpredictable factors. However, some unpredictable factors, such as natural disasters and the spread of viruses, are difficult to address; in such conditions, individual-level practices appear to be more important. For countries with a predictable environment, reminding residents of ecological resource scarcity is an effective strategy. For countries with an unpredictable environment, however, governments and organizations can consider using public media to decrease residents’ perceived unpredictability. Moreover, inspired by our Study 2, emphasizing predictable environmental information when reminding residents of scarcity should be encouraged. Environmental organizations should convey that the environment is predictable when calling for resource conservation in response to scarcity.

Limitations and future directions

The current research has limitations: in the correlational study, the moderator was measured with a single item, and the internal consistency (alpha) of the PEB measure was low. Future studies should also explore the mechanisms underlying the moderation hypothesis, which the current research did not examine; identifying these mechanisms would enrich the framework. Another issue concerns the IMB model. We focused on the effectiveness of the scarcity-information component and did not include the motivation and behavioral-skills components, so it is worth testing whether creating a predictable environment can also strengthen the effect of a full IMB intervention. In addition, individuals can engage in many types of resource conservation behavior, and these behaviors are not necessarily highly related; for example, factors predicting shutting down electronics at night may not predict upgrading to energy-efficient appliances, because such behaviors cluster into distinct dimensions [56, 57]. Our findings therefore may not generalize to other types of behavior, and future research is encouraged to test whether the moderation hypothesis holds for other resource conservation behaviors.

Across three studies, we provided evidence to support the moderation hypothesis that environmental unpredictability weakens the positive effect of ecological resource scarcity information on PEB. Moving forward, it would be valuable to delve deeper into the underlying mechanisms, examine the moderation effect across various types of PEB, and investigate its potential application in PEB interventions.

Availability of data and materials

No datasets were generated or analysed during the current study.

References

World Bank. World development indicators 2016. Washington, DC: World Bank; 2016.


Syme GJ, Nancarrow BE, Seligman C. The evaluation of information campaigns to promote voluntary household water conservation. Eval Rev. 2000;24(6):539–78. https://doi.org/10.1177/0193841x0002400601 .


Fisher WA, Fisher JD, Shuper PA. Social psychology and the fight against AIDS: an information-motivation-behavioral skills model for the prediction and promotion of health behavior. Adv Exp Soc Psychol. 2014;50:105–93. https://doi.org/10.1016/B978-0-12-800284-1.00003-5 .

Famiglietti JS. The global groundwater crisis. Nat Clim Chang. 2014;4(11):945–8. https://doi.org/10.1038/nclimate2425 .

Fisher JD, Fisher WA, Amico KR, Harman J. An information-motivation- behavioral skills model of adherence to antiretroviral therapy. Health Psychol. 2006;25(4):462–73. https://doi.org/10.1037/0278-6133.25.4.462 .


Steg L, Perlaviciute G, van der Werff E, Lurvink J. The significance of hedonic values for environmentally relevant attitudes, preferences, and actions. Environ Behav. 2014;46(2):163–92. https://doi.org/10.1177/0013916512454730 .

Ehret PJ, Hodges HE, Kuehl C, Brick C, Mueller S, Anderson SE. Systematic review of household water conservation interventions using the information–motivation–behavioral skills model. Environ Behav. 2020;53(5):001391651989686. https://doi.org/10.1177/0013916519896868 .

Hannibal B, Sansom L, Portney KE. The effect of local water scarcity and drought on water conservation behaviors. Environ Sociol. 2019;5(3):294–307. https://doi.org/10.1080/23251042.2018.1519882 .

Rodriguez-Sanchez C, Sarabia-Sanchez FJ. Does water context matter in water conservation decision behaviour? Sustainability. 2020;12(7):3026. https://doi.org/10.3390/su12073026 .

Gu D, Jiang J, Zhang Y, Sun Y, Jiang W, Du X. Concern for the future and saving the earth: when does ecological resource scarcity promote pro-environmental behavior? J Environ Psychol. 2020;72(101501):101501. https://doi.org/10.1016/j.jenvp.2020.101501 .

Berthold A, Cologna V, Siegrist M. The influence of scarcity perception on people’s pro-environmental behavior and their readiness to accept new sustainable technologies. Ecol Econ. 2022;196(107399):107399. https://doi.org/10.1016/j.ecolecon.2022.107399 .

Katz D, Grinstein A, Kronrod A, Nisan U. Evaluating the effectiveness of a water conservation campaign: combining experimental and field methods. J Environ Manage. 2016;180:335–43. https://doi.org/10.1016/j.jenvman.2016.05.049 .

March H, Domènech L, Saurí D. Water conservation campaigns and citizen perceptions: the drought of 2007–2008 in the Metropolitan Area of Barcelona. Nat Hazards (Dordr). 2013;65(3):1951–66. https://doi.org/10.1007/s11069-012-0456-2 .

Quesnel KJ, Ajami NK. Changes in water consumption linked to heavy news media coverage of extreme climatic events. Sci Adv. 2017;3(10):1–9. https://doi.org/10.1126/sciadv.1700784 .

Ehret PJ, Hodges HE, Kuehl C, Brick C, Mueller S, Anderson SE. Systematic review of household water conservation interventions using the information–motivation–behavioral skills model. Environ Behav. 2021;53(5):485–519. https://doi.org/10.1177/0013916519896868 .

Inman D, Jeffrey P. A review of residential water conservation tool performance and influences on implementation effectiveness. Urban Water J. 2006;3(3):127–43. https://doi.org/10.1080/15730620600961288 .

Fatmawati I, Dharmmesta BS, Purwanto BM, Nugroho SS. Promoting young adults to perform energy saving behavior through message framing: a lesson learned from Indonesia. Acad Strateg Manag J. 2018;17(5):1–282. https://doi.org/10.1016/j.copsyc.2015.08.011 .

Kimbrough EO, Vostroknutov A. The social and ecological determinants of common pool resource sustainability. J Environ Econ Manag. 2015;72(C):38–53. https://doi.org/10.1016/j.jeem.2015.04.004 .

Belsky J, Schlomer GL, Ellis BJ. Beyond cumulative risk: distinguishing harshness and unpredictability as determinants of parenting and early life history strategy. Dev Psychol. 2012;48(3):662–73. https://doi.org/10.1037/a0024454 .

Charnov EL. Life history invariants: some explorations of symmetry in evolutionary ecology. Oxford: Oxford University Press; 1993.

Roff DA. Evolution of life histories: theory and analysis. NY: Chapman and Hall; 1993.

Roff DA. Life history evolution. Oxford: Oxford University Press; 2002.

Stearns SC. The evolution of life histories. Oxford: Oxford University Press; 1992.

Ellis BJ, Figueredo AJ, Brumbach BH, Schlomer GL. Fundamental dimensions of environmental risk: the impact of harsh versus unpredictable environments on the evolution and development of life history strategies. Hum Nat. 2009;20(2):204–68. https://doi.org/10.1007/s12110-009-9063-7 .

Young ES, Frankenhuis WE, Ellis BJ. Theory and measurement of environmental unpredictability. Evol Hum Behav. 2020;41(6):550–6. https://doi.org/10.1016/j.evolhumbehav.2020.08.006 .

Simpson JA, Griskevicius V, Kuo SIC, Sung S, Collins WA. Evolution, stress, and sensitive periods: the influence of unpredictability in early versus late childhood on sex and risky behavior. Dev Psychol. 2012;48(3):674–86. https://doi.org/10.1037/a0027293 .

Chen BB, Qu W. Life history strategies and procrastination: the role of environmental unpredictability. Pers Individ Dif. 2017;117:23–9. https://doi.org/10.1016/j.paid.2017.05.036 .

Chen BB, Shi Z, Sun S. Life history strategy as a mediator between childhood environmental unpredictability and adulthood personality. Pers Individ Dif. 2017;111:215–9. https://doi.org/10.1016/j.paid.2017.02.032 .

Frankenhuis WE, Panchanathan K, Nettle D. Cognition in harsh and unpredictable environments. Curr Opin Psychol. 2016;7:76–80. https://doi.org/10.1016/j.copsyc.2015.08.011 .

Hill EM, Jenkins J, Farmer L. Family unpredictability, future discounting, and risk taking. J Socio Econ. 2008;37(4):1381–96. https://doi.org/10.1016/j.socec.2006.12.081 .

Zhang Y, Gao Y, Jiang J. An unpredictable environment reduces pro-environmental behavior: a dynamic public goods experiment on forest use. J Environ Psychol. 2021;78(101702):101702. https://doi.org/10.1016/j.jenvp.2021.101702 .

Brumbach BH, Figueredo AJ, Ellis BJ. Effects of harsh and unpredictable environments in adolescence on development of life history strategies: a longitudinal test of an evolutionary model. Hum Nat. 2009;20(1):25–51. https://doi.org/10.1007/s12110-009-9059-3 .


Ajzen I, Fishbein M. The influence of attitudes on behavior. In: Albarracín D, ed. The handbook of attitudes, Vol 826. Mahwah: Psychology Press; 2005. p. 173–221.

Wit A, Wilke H. Public good provision under environmental and social uncertainty. Eur J Soc Psychol. 1998;28(2):249–56. https://doi.org/10.1002/(SICI)1099-0992(199803/04)28:2%3C249::AID-EJSP868%3E3.0.CO;2-J .

Zhu N, Hawk ST, Chang L. Unpredictable and competitive cues affect prosocial behaviors and judgments. Pers Individ Dif. 2019;138:203–11. https://doi.org/10.1016/j.paid.2018.10.006 .

Tam KP, Chan HW. Generalized trust narrows the gap between environmental concern and pro-environmental behavior: Multilevel evidence. Glob Environ Change. 2018;48:182–94. https://doi.org/10.1016/j.gloenvcha.2017.12.001 .

Griskevicius V, Ackerman JM, Cantú SM, et al. When the economy falters, do people spend or save? Responses to resource scarcity depend on childhood environments. Psychol Sci. 2013;24(2):197–205. https://doi.org/10.1177/0956797612451471 .

Mittal C, Griskevicius V. Sense of control under uncertainty depends on people’s childhood environment: a life history theory approach. J Pers Soc Psychol. 2014;107(4):621–37. https://doi.org/10.1037/a0037398 .

Gifford R. The dragons of inaction: psychological barriers that limit climate change mitigation and adaptation. Am Psychol. 2011;66(4):290–302. https://doi.org/10.1037/a0023566 .

Faul F, Erdfelder E, Buchner A, Lang AG. Statistical power analyses using G*Power 3.1: tests for correlation and regression analyses. Behav Res Methods. 2009;41(4):1149–60. https://doi.org/10.3758/BRM.41.4.1149 .

Lee AJ, Zietsch BP. Experimental evidence that women’s mate preferences are directly influenced by cues of pathogen prevalence and resource scarcity. Biol Lett. 2011;7(6):892–5. https://doi.org/10.1098/rsbl.2011.0454 .

Watkins CD, DeBruine LM, Little AC, Feinberg DR, Jones BC. Priming concerns about pathogen threat versus resource scarcity: dissociable effects on women’s perceptions of men’s attractiveness and dominance. Behav Ecol Sociobiol. 2012;66(12):1549–56. https://doi.org/10.1007/s00265-012-1408-2 .

Kormos C, Gifford R. The validity of self-report measures of proenvironmental behavior: a meta-analytic review. J Environ Psychol. 2014;40:359–71. https://doi.org/10.1016/j.jenvp.2014.09.003 .

Reynolds JJ, McCrea SM. Life history theory and exploitative strategies. Evol Psychol. 2016;14(3):1–16. https://doi.org/10.1177/1474704916659483 .

Barr S. Factors influencing environmental attitudes and behaviors: a U.k. case study of household waste management. Environ Behav. 2007;39(4):435–73. https://doi.org/10.1177/0013916505283421 .

Bratt C, Stern PC, Matthies E, Nenseth V. Home, car use, and vacation: the structure of environmentally significant individual behavior. Environ Behav. 2015;47(4):436–73. https://doi.org/10.1177/001391651452503 .

Eysenck H, Eysenck SB. Manual of the Eysenck Personality Questionnaire (EPQ-R Adult). CA: EdITS/Educational and Industrial Testing Service; 1994.

Hancock GR, Mueller RO, eds. Structural equation modeling: a second course. 2nd ed. Charlotte: Information Age Publishing; 2013.

Spiller SA, Fitzsimons GJ, Lynch JG Jr, Mcclelland GH. Spotlights, floodlights, and the magic number zero: simple effects tests in moderated regression. J Mark Res. 2013;50(2):277–88. https://doi.org/10.1509/jmr.12.0420 .

Kruglanski AW, Shah JY, Fishbach A, Friedman R, Chun WY, Sleeth-Keppler D. A theory of goal systems. In: Advances in experimental social psychology. NY: Elsevier; 2002. p. 331–378. https://doi.org/10.1016/S0065-2601(02)80008-9 .

Wang L, Gu D, Jiang J, Sun Y. The not-so-dark side of materialism: can public versus private contexts make materialists less Eco-unfriendly? Front Psychol. 2019;10:1–10. https://doi.org/10.3389/fpsyg.2019.00790 .

Dunlap RE, Van Liere KD, Mertig AG, Jones RE. New trends in measuring environmental attitudes: measuring endorsement of the new ecological paradigm: a revised NEP scale. J Soc Issues. 2000;56(3):425–42. https://doi.org/10.1111/0022-4537.00176 .

White AE, Li YJ, Griskevicius V, Neuberg SL, Kenrick DT. Putting all your eggs in one basket: life-history strategies, bet hedging, and diversification. Psychol Sci. 2013;24(5):715–22. https://doi.org/10.1177/0956797612461919 .

van Vugt M, Griskevicius V, Schultz PW. Naturally green: harnessing stone age psychological biases to foster environmental behavior. Soc Issues Policy Rev. 2014;8(1):1–32. https://doi.org/10.1111/sipr.12000 .

Lange F, Steinke A, Dewitte S. The pro-environmental behavior task: a laboratory measure of actual pro-environmental behavior. J Environ Psychol. 2018;56:46–54. https://doi.org/10.1016/j.jenvp.2018.02.007 .

Karlin B, Davis N, Sanguinetti A, Gamble K, Kirkby D, Stokols D. Dimensions of conservation: exploring differences among energy behaviors. Environ Behav. 2014;46(4):423–52. https://doi.org/10.1177/0013916512467532 .

Nair G, Gustavsson L, Mahapatra K. Factors influencing energy efficiency investments in existing Swedish residential buildings. Energy Policy. 2010;38(6):2956–63. https://doi.org/10.1016/j.enpol.2010.01.033 .


Acknowledgements

The authors acknowledge the financial support from the National Natural Science Foundation of China (31871126), Chongqing Normal University (23XWB043), and the Social Science Fund of Chongqing, China (2023BS076).

Author information

Authors and affiliations

Key Laboratory of Applied Psychology, Chongqing Normal University, Chongqing, China

School of Education, Chongqing Normal University, Chongqing, China

Beijing Key Laboratory of Applied Experimental Psychology, National Demonstration Center for Experimental Psychology Education (Beijing Normal University), Faculty of Psychology, Beijing Normal University, No.19 Xinjiekouwai Street, Beijing, 100875, China

Jiang Jiang


Contributions

GD and JJ contributed to the study conception and design. Material preparation, data collection and analysis were performed by GD. GD and JJ contributed to the interpretation of data. The first draft of the manuscript was written by GD. JJ commented on previous versions of the manuscript or revised it critically for important intellectual content. All authors approved the final version of the manuscript.

Corresponding author

Correspondence to Jiang Jiang.

Ethics declarations

Ethics approval and consent to participate

The study was conducted after obtaining approval from the Institutional Review Board of Beijing Normal University and followed the ethical standards of the 1964 Declaration of Helsinki. All methods were carried out in accordance with the relevant guidelines and regulations of Beijing Normal University. Participants were informed that their names and institutional affiliations would be kept confidential and that their privacy rights were protected. Participation was voluntary, and informed consent was obtained from all subjects.

Consent for publication

Not applicable.

Competing interests

The authors declare no competing interests.

Additional information

Publisher's note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary Information

Supplementary Material 1.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/ . The Creative Commons Public Domain Dedication waiver ( http://creativecommons.org/publicdomain/zero/1.0/ ) applies to the data made available in this article, unless otherwise stated in a credit line to the data.


About this article

Cite this article

Gu, D., Jiang, J. Navigating an unpredictable environment: the moderating role of perceived environmental unpredictability in the effectiveness of ecological resource scarcity information on pro-environmental behavior. BMC Psychol 12, 261 (2024). https://doi.org/10.1186/s40359-024-01762-1


Received: 01 December 2023

Accepted: 02 May 2024

Published: 10 May 2024

DOI: https://doi.org/10.1186/s40359-024-01762-1

Share this article

Anyone you share the following link with will be able to read this content:

Sorry, a shareable link is not currently available for this article.

Provided by the Springer Nature SharedIt content-sharing initiative

Keywords

  • Ecological resource scarcity information
  • Pro-environmental behaviors
  • Life history theory
  • Washing-hands paradigm

