
Module 2: Research Methods in Developmental Psychology

Module 2 Learning Objectives

Upon completion of this module, the learner will be able to:

  • Understand Ways of Knowing
    Explain different ways of knowing, including intuition, authority, and science, and their role in understanding the world.
  • Describe the Scientific Method
    Outline the steps of the scientific method and explain how it is used to advance knowledge in developmental psychology.
  • Differentiate Research Designs
    Identify and compare descriptive, correlational, and experimental research designs, including their strengths and weaknesses.
  • Understand Developmental Research Designs
    Explain the purposes of cross-sectional, longitudinal, and sequential designs in studying changes over time.
  • Explain Validity in Research
    Discuss the importance of internal and external validity in ensuring trustworthy and applicable research findings.
  • Understand Key Concepts in Research
    Define and explain the roles of hypotheses, variables, and operationalization in psychological research.
  • Recognize Ethical Guidelines
    Describe ethical guidelines for research with human participants, including informed consent, deception, and debriefing.
  • Analyze Historical Ethical Violations
    Reflect on past unethical studies, such as the Tuskegee Syphilis Study, and their impact on modern research ethics.
  • Examine Collaborative Methodologies
    Explain the value of community-based participatory research and the role of cultural sensitivity in research design.
  • Interpret Correlation and Causation
    Understand the difference between correlation and causation, and explain the limitations of correlational research.
  • Apply Ethical Principles
    Identify the role of Institutional Review Boards (IRBs) in protecting human subjects and ensuring research ethics.
  • Understand Research Challenges
    Discuss issues like confirmation bias, sampling challenges, and the Hawthorne effect that can influence research outcomes.

What Is Science, and How Does It Fit with Other Ways of Knowing?

Developmental science, along with other important sources of information, contributes to our understanding of “best developmental practices.” Science is a powerful process of learning, but it also has its limitations. Science uses multiple kinds of methodologies, ways of collecting information, and designs, each with its strengths, limitations, and hidden assumptions. Since research methods are central to producing valid and useful knowledge, we have to be thoughtful and critical about the processes and tools of science. Learning more about research methods in developmental science can also teach you important ways to promote healthy development, both your own and others’.

At its core, science is a way of knowing: a set of practices for learning about the world. There are many other ways of knowing, including our intuition, emotions, and observations; the beliefs and customs of our families and neighbors; the opinions of friends and peers; communications from political and religious authorities; and messages from the media. If we bundle all these other sources of information together, they make up our “personal experience.”

The first method of knowing is intuition. When we use our intuition, we are relying on our guts, our emotions, and/or our instincts to guide us. Rather than examining facts or using rational thought, intuition involves believing what feels true. The problem with relying on intuition is that our intuitions can be wrong because they are driven by cognitive and motivational biases rather than logical reasoning or scientific evidence.

Perhaps one of the most common methods of acquiring knowledge is through authority. This method involves accepting new ideas because some authority figure states that they are true. These authorities include parents, the media, doctors, priests and other religious authorities, the government, and professors. While in an ideal world we should be able to trust authority figures, history has taught us otherwise, and many atrocities against humanity have been a consequence of people unquestioningly following authority (e.g., the Salem Witch Trials, Nazi war crimes). Nevertheless, much of the information we acquire comes through authority, because we don’t have time to question and independently research every piece of knowledge we learn this way. But we can learn to evaluate the credentials of authority figures, to evaluate the methods they used to arrive at their conclusions, and to evaluate whether they have any reason to mislead us.

Confirmation bias is the tendency to look for evidence that only supports what we think or believe, and in doing so, to ignore evidence or information that is inconsistent with our beliefs. For example, imagine that a person believes left-handed people are more creative than right-handed people. Whenever this person encounters someone who is both left-handed and creative, they place greater importance on this “evidence” that supports what they already believe. This individual might even seek proof that further backs up this belief while discounting examples that don’t support the idea. Confirmation biases impact not only how we gather information but also how we interpret and recall it.

WATCH THIS video below or view online for more information on ways of knowing in psychology.

What distinguishes the scientific from the unscientific is that science is falsifiable: scientific inquiry involves attempts to reject or refute a theory or set of assumptions. Much of what we do in personal inquiry, by contrast, involves drawing conclusions based on what we have personally experienced, or validating our own experience by discussing what we think is true with others who share the same views.

Why is science an important way of knowing?

 

Science offers a more systematic way to make comparisons and to guard against bias. Science is not perfect, and it needs to be guided by strong ethical principles, but it is nevertheless a powerful way of knowing and an important source of knowledge. Science assumes that careful and systematic observation and thought are processes we can use to better understand ourselves and our world. The process of science describes a way of learning. Scientific knowledge is built by testing ideas using evidence gathered from the social and natural world. Initially, these ideas are tentative intuitions, but as they cycle through the process of science again and again, they are examined and tested in different ways, so we become increasingly confident about their validity.

WATCH THIS video below or view online for more information on why the field of developmental science is an exciting career field.

The Scientific Method

Scientific knowledge is advanced through a process known as the scientific method. Basically, ideas (in the form of theories and hypotheses) are tested against the real world (in the form of empirical observations), and those empirical observations lead to more ideas that are tested against the real world, and so on.

Figure: Flowchart of the scientific method. It begins with making an observation, then asking a question, forming a hypothesis that answers the question, making a prediction based on the hypothesis, doing an experiment to test the prediction, analyzing the results, determining whether the hypothesis is supported or refuted, and reporting the results.

One method of scientific investigation involves the following steps:

  • Determining a research question
  • Reviewing previous studies addressing the topic in question (known as a literature review)
  • Determining a method of gathering information
  • Conducting the study
  • Interpreting results
  • Drawing conclusions; stating limitations of the study and suggestions for future research
  • Making your findings available to others (both to share information and be scrutinized by others)

Theory and Hypothesis

Two key concepts in the scientific approach are theory and hypothesis. A theory is a well-developed set of ideas that propose an explanation for observed phenomena that can be used to make predictions about future observations. A hypothesis is a testable prediction that is arrived at logically from a theory. It is often worded as an if-then statement (e.g., if I study all night, I will get a passing grade on the test). The hypothesis is extremely important because it bridges the gap between the realm of ideas and the real world. A hypothesis is not just a guess – it should be based on existing theories and knowledge. In some instances, researchers might create their hypothesis based on beliefs that are commonly held in society. “Birds of a feather flock together” is one example of folk wisdom that a psychologist might try to investigate. The researcher might pose a specific hypothesis that “People tend to select romantic partners who are similar to them in interests and educational level.” A hypothesis also has to be testable, which means you can support or refute it through scientific research methods (such as experiments, observations and statistical analysis of data).

WATCH THIS video below or view online for tips on how to write a strong hypothesis.

The Science of Psychology: Research Methods

There are many research methods available to psychologists in their efforts to understand, describe, and explain behavior and the cognitive and biological processes that underlie it. Some methods rely on observational techniques. Others involve interactions between the researcher and the individuals being studied, ranging from a series of simple questions to extensive, in-depth interviews; still others are well-controlled experiments. The three main categories of psychological research are descriptive, correlational, and experimental research. Research studies that do not test specific relationships between variables are called descriptive, or qualitative, studies. These studies are used to describe general or specific behaviors and attributes that are observed and measured.

Each of these research methods has unique strengths and weaknesses, and each method may only be appropriate for certain types of research questions.

Descriptive Research

Not all studies are designed to reach the same goal. Descriptive studies focus on describing an occurrence. Some examples of descriptive questions include:

“How much time do parents spend with children?”

“How many times per week do couples argue at different lengths of marriage?”

“Are marriages happier after couples have children?”

Observations

Observational studies, also called naturalistic observation, involve watching and recording the actions of participants. This may take place in the natural setting, such as observing children at play in a park, or behind a one-way glass while children are at play in a laboratory playroom. The researcher may follow a checklist and record the frequency and duration of events (perhaps how many conflicts occur among 2-year-olds) or may observe and record as much as possible about an event as a participant (such as attending an Alcoholics Anonymous meeting and recording the slogans on the walls, the structure of the meeting, the expressions commonly used, etc.). The researcher may be a participant or a non-participant. What would be the strengths of being a participant? What would be the weaknesses?

In general, observational studies have the strength of allowing the researcher to see how people behave rather than relying on self-report. One weakness of self-report studies is that what people do and what they say they do are often very different. A major weakness of observational studies is that they do not allow the researcher to explain causal relationships. Yet observational studies are useful and widely used when studying children, who may not be able to respond to surveys reliably. It is important to remember that most people tend to change their behavior when they know they are being watched (known as the Hawthorne effect).

Case Studies

Case studies involve exploring a single case or situation in great detail. Information may be gathered with the use of observation, interviews, testing, or other methods to uncover as much as possible about a person or situation. Case studies are helpful when investigating unusual situations such as brain trauma or children reared in isolation. And they are often used by clinicians who conduct case studies as part of their normal practice when gathering information about a client or patient coming in for treatment. Case studies can be used to explore areas about which little is known and can provide rich detail about situations or conditions. However, the findings from case studies cannot be generalized or applied to larger populations; this is because cases are not randomly selected and no control group is used for comparison.

Surveys

Surveys are familiar to most people because they are so widely used. First, the variables of interest are measured using self-reports (using questionnaires or interviews). In essence, survey researchers ask their participants (who are often called respondents in survey research) to report directly on their own thoughts, feelings, and behaviors. Second, considerable attention is paid to the issue of sampling.


In particular, survey researchers have a strong preference for large random samples because they provide the most accurate estimates of what is true in the population. In fact, survey research may be the only approach in psychology in which random sampling is routinely used. Beyond these two characteristics, almost anything goes in survey research. Surveys can be long or short. They can be conducted in person, by telephone, through the mail, or over the Internet. They can be about voting intentions, consumer preferences, social attitudes, health, or anything else that it is possible to ask people about and receive meaningful answers.

Of course, surveys can be designed in a number of ways. They may include forced-choice questions and semi-structured questions in which the researcher allows the respondent to describe or give details about certain events. One of the most difficult aspects of designing a good survey is wording questions in an unbiased way and asking the right questions so that respondents can give a clear response rather than choosing “undecided” each time. Knowing that 30% of respondents are undecided is of little use! So a lot of time and effort should be placed on the construction of survey items.

Surveys are useful in examining stated values, attitudes, opinions, and reported practices. However, they are based on self-report, or what people say they do rather than what they are observed doing, and this can limit accuracy. Validity refers to accuracy, and reliability refers to consistency in responses to tests and other measures; great care is taken to ensure the validity and reliability of surveys. Picture a dartboard at which you are throwing darts, trying to hit the center (the “bull’s eye”). If over a series of many throws your darts land all over the board and never near the center, your throws are neither reliable nor valid. If your darts scatter widely but average out around the center, your throws are unreliable but, on average, valid. If your darts all hit a certain spot away from the center, say the upper right, then your throws are very reliable but invalid. If your throws consistently hit the middle of the dartboard, then they are both reliable and valid.

Demonstration of reliability and validity, visualized.
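The dartboard analogy can also be expressed with numbers. The sketch below uses entirely fabricated measurement scores (the "true" value and all observations are made up for illustration), treating reliability as low spread across repeated measurements and validity as a mean close to the true value.

```python
# Illustrating reliability vs. validity with simulated repeated measurements.
# Hypothetical example: the quantity being measured has a true value of 100.
import statistics

true_value = 100
reliable_not_valid = [112, 113, 111, 112, 114]   # tight cluster, wrong spot
valid_not_reliable = [85, 118, 96, 109, 92]      # scattered, centered near 100
reliable_and_valid = [99, 101, 100, 98, 102]     # tight cluster on target

for name, scores in [("reliable, not valid", reliable_not_valid),
                     ("not reliable, valid on average", valid_not_reliable),
                     ("reliable and valid", reliable_and_valid)]:
    mean = statistics.mean(scores)       # closeness to 100 ~ validity
    spread = statistics.stdev(scores)    # small spread ~ reliability
    print(f"{name}: mean={mean:.1f} (bias={mean - true_value:+.1f}), spread={spread:.1f}")
```

A real measure would be evaluated with many more observations and formal statistics, but the pattern is the same: spread tells you about consistency, bias about accuracy.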

Correlational Research

When scientists passively observe and measure phenomena, it is called correlational research. Here, researchers do not intervene and change behavior, as they do in experiments. In correlational research, the goal is to identify patterns of relationships, but not cause and effect. Importantly, a correlation describes the relationship between exactly two variables at a time, though a single study may examine many such pairings.

So, what if you wanted to test whether spending money on others is related to happiness, but you don’t have $20 to give to each participant in order to have them spend it for your experiment? You could use a correlational design—which is exactly what Professor Elizabeth Dunn at the University of British Columbia did when she conducted research on spending and happiness. She asked people how much of their income they spent on others or donated to charity, and later she asked them how happy they were. Do you think these two variables were related? Yes, they were! The more money people reported spending on others, the happier they were.

Understanding Correlation

To find out how well two variables correlate, you can plot the relationship between the two scores on what is known as a scatterplot. In the scatterplot, each dot represents a data point. (In this case the data points are individuals, but they could be some other unit.) Importantly, each dot provides us with two pieces of information: in this case, how positively the person rated the past month (x-axis) and how happy the person felt in the past month (y-axis). Which variable is plotted on which axis does not matter.

The association between two variables can be summarized statistically using the correlation coefficient (abbreviated as r). A correlation coefficient provides information about the direction and strength of the association between two variables. A positive correlation means, for example, that people who perceived the past month as being good reported feeling happier, whereas people who perceived the month as being bad reported feeling less happy. With a positive correlation, the two variables go up or down together. In a scatterplot, the dots form a pattern that extends from the bottom left to the upper right.

A negative correlation is one in which the two variables move in opposite directions. That is, as one variable goes up, the other goes down. The middle panel of the figure below shows the association between hours of sleep and tiredness. Notice how the dots extend from the top left to the bottom right. What does this mean in real-world terms? It means that the more hours of sleep a person gets, the less tired they report being.

Three scatterplots are shown. Scatterplot (a) is labeled “positive correlation” and shows scattered dots forming a rough line from the bottom left to the top right; the x-axis is labeled “weight” and the y-axis is labeled “height.” Scatterplot (b) is labeled “negative correlation” and shows scattered dots forming a rough line from the top left to the bottom right; the x-axis is labeled “tiredness” and the y-axis is labeled “hours of sleep.” Scatterplot (c) is labeled “no correlation” and shows scattered dots having no pattern; the x-axis is labeled “shoe size” and the y-axis is labeled “hours of sleep.”

WATCH THIS video below or view online with captions for more on finding relationships between variables:

The strength of a correlation has to do with how well the two variables align. For example, in one correlational study, spending on others positively correlated with happiness; the more money people reported spending on others, the happier they reported being. At this point you may be thinking to yourself, I know a very generous person who gave away lots of money to other people but is miserable! Or maybe you know of a very stingy person who is happy as can be. Yes, there might be exceptions. If an association has many exceptions, it is considered a weak correlation. If an association has few or no exceptions, it is considered a strong correlation. A strong correlation is one in which the two variables always, or almost always, go together. In the example of happiness and how good the month has been, the association is strong. The stronger a correlation is, the tighter the dots in the scatterplot will be arranged along a sloped line.
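The correlation coefficient r can be computed directly from paired scores. The Python sketch below uses entirely made-up ratings (the variable names and numbers are hypothetical); r is the covariance of the two variables divided by the product of their standard-deviation terms, and it always falls between -1 and +1.

```python
# A minimal sketch of computing Pearson's correlation coefficient r
# between two variables, using fabricated month-rating and happiness scores.
from math import sqrt

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length lists."""
    n = len(x)
    mean_x = sum(x) / n
    mean_y = sum(y) / n
    # Numerator: covariance term; denominator: spread of each variable
    cov = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y))
    sd_x = sqrt(sum((a - mean_x) ** 2 for a in x))
    sd_y = sqrt(sum((b - mean_y) ** 2 for b in y))
    return cov / (sd_x * sd_y)

month_rating = [2, 4, 5, 7, 8]   # hypothetical "how good was the month" scores
happiness    = [3, 4, 6, 6, 9]   # hypothetical happiness scores
r = pearson_r(month_rating, happiness)
print(round(r, 2))  # → 0.93, a strong positive correlation
```

An r near +1 or -1 means few exceptions (a strong correlation); an r near 0 means the dots show little alignment.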

Correlation Does Not Indicate Causation

Correlational research is useful because it allows us to discover the strength and direction of relationships that exist between two variables. However, correlation is limited because establishing the existence of a relationship tells us little about cause and effect. While variables are sometimes correlated because one does cause the other, it could also be that some other factor, a confounding variable, is actually causing the systematic movement in our variables of interest. For example, we see a correlation between ice cream sales and drowning deaths. Does that mean eating ice cream CAUSED the drownings? In this example, temperature is a confounding variable that could account for the relationship between the two variables. Even when we cannot point to clear confounding variables, we should not assume that a correlation between two variables implies that one variable causes a change in another. This can be frustrating when a cause-and-effect relationship seems clear and intuitive.
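The ice cream and drowning example can be made concrete with a toy simulation. Everything below is fabricated: simulated temperature drives both simulated ice cream sales and simulated drowning counts, so the two correlate strongly even though neither causes the other.

```python
# A toy simulation of a confounding variable (all numbers are made up).
# Temperature is the confounder: it influences both outcome variables,
# which therefore correlate with each other despite no causal link.
import random
from math import sqrt

def pearson_r(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    return cov / sqrt(sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y))

rng = random.Random(0)
temps = [rng.uniform(5, 35) for _ in range(200)]         # confounder: daily temperature
ice_cream = [2.0 * t + rng.gauss(0, 5) for t in temps]   # driven by temperature + noise
drownings = [0.5 * t + rng.gauss(0, 2) for t in temps]   # also driven by temperature + noise

print(round(pearson_r(ice_cream, drownings), 2))  # strongly positive, yet no causal link
```

The simulation never lets ice cream affect drownings, yet the correlation is large: a reminder that a third variable can fully explain an observed association.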

Unfortunately, people mistakenly make claims of causation based on correlations all the time. Such claims are especially common in advertisements and news stories. If a correlation is found between eating cereal and healthy weight, does eating cereal really cause an individual to maintain a healthy weight, or are there other possible explanations? Perhaps someone at a healthy weight is more likely to eat a regular, healthy breakfast than someone who is obese or who skips meals in an attempt to diet. While correlational research is invaluable in identifying relationships among variables, a major limitation is the inability to establish causality. Psychologists want to make statements about cause and effect, and the only way to do that is to conduct an experiment. The next section describes how scientific experiments incorporate methods that eliminate, or control for, alternative explanations, allowing researchers to explore how changes in one variable cause changes in another.

Experimental Research Design

Experiments are designed to test hypotheses (or specific statements about the relationship between variables) in a controlled setting in efforts to explain how certain factors or events produce outcomes. A variable is anything that changes in value. Concepts are operationalized or transformed into variables in research which means that the researcher must specify exactly what is going to be measured in the study. For example, if we are interested in studying marital satisfaction, we have to specify what marital satisfaction really means or what we are going to use as an indicator of marital satisfaction. What is something measurable that would indicate some level of marital satisfaction? Would it be the amount of time couples spend together each day? Or eye contact during a discussion about money? Or maybe a subject’s score on a marital satisfaction scale? Each of these is measurable but these may not be equally valid or accurate indicators of marital satisfaction. What do you think? These are the kinds of considerations researchers must make when working through the design.

The experimental method is the only research method that can measure cause and effect relationships between variables. Three conditions must be met in order to establish cause and effect. Experimental designs are useful in meeting these conditions:

The independent and dependent variables must be related. In other words, when one is altered, the other changes in response. The independent variable is something altered or introduced by the researcher; sometimes thought of as the treatment or intervention. The dependent variable is the outcome or the factor affected by the introduction of the independent variable; the dependent variable depends on the independent variable. For example, if we are looking at the impact of exercise on stress levels, the independent variable would be exercise; the dependent variable would be stress. In addition, a confounding variable is an extraneous variable that differs on average across levels of the independent variable. Because they differ across conditions—just like the independent variable—they provide an alternative explanation for any observed difference in the dependent variable.

The cause must come before the effect. Experiments measure subjects on the dependent variable before exposing them to the independent variable (establishing a baseline). So we would measure the subjects’ level of stress before introducing exercise and then again after the exercise to see if there has been a change in stress levels. (Observational and survey research does not always allow us to look at the timing of these events which makes understanding causality problematic with these methods.)

The cause must be isolated. The researcher must ensure that no outside, perhaps unknown variables, are actually causing the effect we see. The experimental design helps make this possible. In an experiment, we would make sure that our subjects’ diets were held constant throughout the exercise program. Otherwise, the diet might really be creating a change in stress level rather than exercise.

A box labeled “independent variable: type of television programming viewed” contains a photograph of a person shooting an automatic weapon. An arrow labeled “influences change in the…” leads to a second box. The second box is labeled “dependent variable: violent behavior displayed” and has a photograph of a child with an outstretched fist pointed at the camera.

A basic experimental design involves beginning with a sample (or subset of a population) and randomly assigning subjects to one of two groups: the experimental group or the control group. Ideally, to prevent bias, the participants would be blind to their condition (not aware of which group they are in) and the researchers would also be blind to each participant’s condition (referred to as “double-blind”). If only the participants are blind to their condition but the researchers know, the study is “single-blind.” The experimental group is the group that is going to be exposed to the independent variable, the condition the researcher is introducing as a potential cause of an event. The control group is used for comparison and has the same experience as the experimental group but is not exposed to the independent variable. This helps address the placebo effect: a group may expect changes to happen just by participating. After exposing the experimental group to the independent variable, the two groups are measured again to see if a change has occurred. If so, we are in a better position to suggest that the independent variable caused the change in the dependent variable.

The basic experimental model looks like this:

Flow chart: participants are randomly assigned to conditions. One arrow leads to Group A, “violent video game”; a second arrow leads to Group B, “non-violent video game.” The type of video game is the independent variable (the experimental manipulation). Arrows from both groups lead to “white noise administered” and then to measurement of the dependent variable.
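The random-assignment step at the start of this model can be sketched in a few lines of Python. The participant labels, seed, and group sizes below are hypothetical; the key idea is that shuffling before splitting gives every subject an equal chance of landing in either condition.

```python
# A simplified sketch of random assignment in a two-group experiment.
# Participant labels and sample size are hypothetical.
import random

def randomly_assign(participants, seed=None):
    """Shuffle participants and split them evenly into two groups."""
    rng = random.Random(seed)
    pool = list(participants)
    rng.shuffle(pool)                # random assignment guards against selection bias
    half = len(pool) // 2
    return pool[:half], pool[half:]  # (experimental group, control group)

participants = [f"P{i}" for i in range(1, 21)]   # 20 hypothetical subjects
experimental, control = randomly_assign(participants, seed=42)
print(len(experimental), len(control))  # → 10 10
```

In a real study, the groups would then receive their respective conditions and be measured on the dependent variable before and after exposure.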

The major advantage of the experimental design is that of helping to establish cause and effect relationships. A disadvantage of this design is the difficulty of translating much of what concerns us about human behavior into a laboratory setting.

Assessing the Validity of a Study

More than anything else, a scientist wants their research to be “valid.” The conceptual idea behind validity is very simple: can you trust the results of your study? If not, the study is invalid. However, while it’s easy to state, in practice validity is much harder to check than reliability. In all honesty, there is no precise, clearly agreed-upon notion of what validity actually is. In fact, there are many different kinds of validity, each of which raises its own issues, and not all forms of validity are relevant to all studies. Two widely used types are:

Internal validity – Internal validity is the extent to which a research study establishes a trustworthy cause-and-effect relationship. This type of validity depends largely on the study’s procedures and how rigorously it is performed.

External validity – This relates to the generalizability of your findings: that is, to what extent would you expect to see the same pattern of results in “real life” as you saw in your study?

READ THIS article to learn more about internal and external validity

Developmental Research Designs

Sometimes, especially in developmental research, the researcher is interested in examining changes over time and will need to consider a research design that will capture these changes. Research methods are tools that are used to collect information, while research design is the strategy or blueprint for deciding how to collect and analyze information. Research design dictates which methods are used and how. There are three types of developmental research designs: cross-sectional, longitudinal, and sequential.

WATCH THIS video below or view online with captions for an introduction to developmental research designs:

Cross-Sectional Designs

The majority of developmental studies use cross-sectional designs because they are less time-consuming and less expensive than other developmental designs. Cross-sectional research designs are used to examine behavior in participants of different ages who are tested at the same point in time. Let’s suppose that researchers are interested in the relationship between intelligence and aging. They might have a hypothesis that intelligence declines as people get older. The researchers might choose to give a particular intelligence test to individuals who are 20 years old, individuals who are 50 years old, and individuals who are 80 years old at the same time and compare the data from each age group. This research is cross-sectional in design because the researchers plan to examine the intelligence scores of individuals of different ages within the same study at the same time; they are taking a “cross-section” of people at one point in time. Let’s say that the comparisons find that the 80-year-old adults score lower on the intelligence test than the 50-year-old adults, and the 50-year-old adults score lower on the intelligence test than the 20-year-old adults. Based on these data, the researchers might conclude that individuals become less intelligent as they get older. Would that be a valid (accurate) interpretation of the results?

Year of study: 2004. Cohort A: 2-year-olds. Cohort B: 6-year-olds. Cohort C: 8-year-olds.
Example of a cross-sectional research design

No, that would not be a valid conclusion because the researchers did not follow individuals as they aged from 20 to 50 to 80 years old. One of the primary limitations of cross-sectional research is that the results yield information about age differences not necessarily changes over time. That is, although the study described above can show that the 80-year-olds scored lower on the intelligence test than the 50-year-olds, and the 50-year-olds scored lower than the 20-year-olds, the data used for this conclusion were collected from different individuals (or groups). It could be, for instance, that when these 20-year-olds get older, they will still score just as high on the intelligence test as they did at age 20. Similarly, maybe the 80-year-olds would have scored relatively low on the intelligence test when they were young; the researchers don’t know for certain because they did not follow the same individuals as they got older.

With each cohort being members of a different generation, it is also possible that the differences found between the groups are not due to age, per se, but due to cohort effects. Differences between these cohorts’ IQ results could be due to differences in life experiences specific to their generation, such as differences in education, economic conditions, advances in technology, or changes in health and nutrition standards, and not due to age-related changes.

Another disadvantage of cross-sectional research is that it is limited to one time of measurement. Data are collected at one point in time, and it’s possible that something could have happened in that year in history that affected all of the participants, although possibly each cohort may have been affected differently.

Longitudinal Research Designs

Longitudinal research involves beginning with a group of people who may be of the same age and background (a cohort) and measuring them repeatedly over a long period of time. One of the benefits of this type of research is that people can be followed through time and compared with themselves when they were younger; therefore, changes with age over time are measured. What would be the advantages and disadvantages of longitudinal research? Its problems include being expensive, taking a long time, and participants dropping out over time.

Longitudinal research designs are used to examine behavior in the same individuals over time. For instance, with our example of studying intelligence and aging, a researcher might conduct a longitudinal study to examine whether 20-year-olds become less intelligent with age over time. To this end, a researcher might give an intelligence test to individuals when they are 20 years old, again when they are 50 years old, and then again when they are 80 years old. This study is longitudinal in nature because the researcher plans to study the same individuals as they age. Based on these data, the pattern of intelligence and age might look different from the pattern found in the cross-sectional research; it might be found that participants’ intelligence scores are higher at age 50 than at age 20 and then remain stable or decline a little by age 80. How can that be when cross-sectional research revealed declines in intelligence with age?

Figure: Example of a longitudinal research design. The same child (Child A) is tested at age 2 in 2004, age 4 in 2006, age 6 in 2008, and age 8 in 2010.

Since longitudinal research happens over a period of time (which could be short term, as in months, but is often longer, as in years), there is a risk of attrition. Attrition occurs when participants fail to complete all portions of a study. Participants may move, change their phone numbers, die, or simply lose interest in participating over time. Researchers should account for the possibility of attrition by enrolling a larger sample into their study initially, as some participants will likely drop out over time. There is also something known as selective attrition: certain groups of individuals may tend to drop out. It is often the least healthy, least educated, and lowest-socioeconomic-status participants who tend to drop out over time. That means that the remaining participants may no longer be representative of the whole population, as they are, in general, healthier, better educated, and have more money. This could be a factor in why our hypothetical research found a more optimistic picture of intelligence and aging as the years went by. What can researchers do about selective attrition? At each time of testing, they could randomly recruit more participants from the same cohort as the original members to replace those who have dropped out.
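Selective attrition can likewise be sketched in a few lines of code. In this hypothetical simulation (made-up numbers, not real data), the cohort's true scores do not change at all between two waves of testing, but lower-scoring participants are more likely to drop out before wave 2, so the mean among the remaining participants drifts upward.

```python
import random

random.seed(1)

# Wave 1: a cohort of 1,000 simulated participants
wave1 = [random.gauss(100, 15) for _ in range(1000)]

def stays_enrolled(score):
    """Selective attrition: lower scorers are more likely to drop out."""
    retention = 0.9 if score >= 100 else 0.5  # hypothetical retention rates
    return random.random() < retention

# Wave 2: the SAME scores, minus those who dropped out (no true change)
wave2 = [score for score in wave1 if stays_enrolled(score)]

def mean(xs):
    return sum(xs) / len(xs)

print(f"wave 1 mean (full sample):      {mean(wave1):.1f}")
print(f"wave 2 mean (remaining sample): {mean(wave2):.1f}")
```

Because only the remaining participants are observed at wave 2, the sample appears to improve over time even though, by construction, no one's score changed.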

Sequential Research Designs

Sequential research designs include elements of both longitudinal and cross-sectional research designs. Similar to longitudinal designs, sequential research features participants who are followed over time; similar to cross-sectional designs, sequential research includes participants of different ages. This research design is distinct from those discussed previously in that individuals of different ages are enrolled into the study at various points in time, allowing researchers to examine age-related differences, to track development within the same individuals as they age, and to account for the possibility of cohort and/or time-of-measurement effects.

Consider, once again, our example of intelligence and aging. In a study with a sequential design, a researcher might recruit three separate groups of participants (Groups A, B, and C). Group A would be recruited when they are 20 years old in 2010 and would be tested again when they are 50 and 80 years old in 2040 and 2070, respectively (similar in design to the longitudinal study described previously). Group B would be recruited when they are 20 years old in 2040 and would be tested again when they are 50 years old in 2070. Group C would be recruited when they are 20 years old in 2070, and so on.
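Under the assumptions of this hypothetical example (recruitment at age 20 in 2010, 2040, and 2070, with retesting every 30 years), the full testing schedule can be laid out programmatically. The group names and dates below come from the example itself, not from any real study.

```python
# Hypothetical sequential-design schedule from the example above:
# each group is recruited at age 20 in a different year and retested
# at every subsequent testing year through 2070.
start_years = {"Group A": 2010, "Group B": 2040, "Group C": 2070}
test_years = [2010, 2040, 2070]

schedule = {}  # (group, testing year) -> participant age at that testing
for group, start in start_years.items():
    for year in test_years:
        if year >= start:  # a group can only be tested once recruited
            schedule[(group, year)] = 20 + (year - start)

for (group, year), age in sorted(schedule.items()):
    print(f"{group}: tested in {year} at age {age}")
```

Reading across one group's testing years gives the longitudinal comparison, reading down a single testing year gives the cross-sectional comparison, and comparing the groups at the same age (each group at 20, in different years) isolates cohort and time-of-measurement effects.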

Figure: Example of a sequential research design. The study begins in 2002 with only Cohort A, measured at age 2. In 2004, Cohort A is measured again at age 4 and a new Cohort B is measured at age 2. In 2006, Cohort A (now 6 years old), Cohort B (now 4 years old), and a new Cohort C (2 years old) are all measured.

Studies with sequential designs are powerful because they allow for both longitudinal and cross-sectional comparisons—changes and/or stability with age over time can be measured and compared with differences between age and cohort groups. This research design also allows for the examination of cohort and time of measurement effects. For example, the researcher could examine the intelligence scores of 20-year-olds at different times in history and different cohorts (follow the yellow diagonal lines in the figure above). This might be examined by researchers who are interested in sociocultural and historical changes (because we know that lifespan development is multidisciplinary). One way of looking at the usefulness of the various developmental research designs was described by Schaie and Baltes (1975): cross-sectional and longitudinal designs might reveal change patterns while sequential designs might identify developmental origins for the observed change patterns.

Since sequential designs include elements of longitudinal and cross-sectional designs, sequential research has many of the same strengths and limitations as these other approaches. For example, sequential work may require less time and effort than longitudinal research (if data are collected more frequently than over the 30-year spans in our example) but more time and effort than cross-sectional research. Although practice effects may be an issue if participants are asked to complete the same tasks or assessments over time, attrition may be less problematic than in longitudinal research, since participants may not have to remain involved in the study for such a long period of time.

Comparing Developmental Research Designs

When considering the best research design to use in their research, scientists think about their main research question and the best way to come up with an answer. A table of advantages and disadvantages for each of the described research designs is provided here to help you as you consider what sorts of studies would be best conducted using each of these different approaches.

Advantages and disadvantages of different research designs

Cross-Sectional
Advantages: Examines differences between participants of different ages at the same point in time; provides information on age differences.
Disadvantages: Cannot examine change over time; limited to one time in history; cohort differences are confounded with age differences.

Longitudinal
Advantages: Examines changes within individuals over time; provides a developmental analysis.
Disadvantages: Expensive; takes a long time; participant attrition; possibility of practice effects; limited to one cohort; time-in-history effects are confounded with age changes.

Sequential
Advantages: Examines changes within individuals over time; examines differences between participants of different ages at the same point in time; can be used to examine cohort effects and time-in-history effects.
Disadvantages: May be expensive; may take a long time; possibility of practice effects; some participant attrition.

Considerations in Research

Today, scientists agree that good research is ethical in nature and is guided by a basic respect for human dignity and safety. However, as you will read in the feature box, this has not always been the case. Modern researchers must demonstrate that the research they perform is ethically sound. This section presents how ethical considerations affect the design and implementation of research conducted today.

Ethnocentrism

Ethnocentrism is the tendency to look at the world primarily from the perspective of one’s own culture. Part of ethnocentrism is the belief that one’s own racial, ethnic, or cultural group is the most important, or that some or all aspects of its culture are superior to those of other groups. Much of the time, however, ethnocentrism in research is simply seeing the world in terms of what you know: your experiences, thoughts, and beliefs, which tend to be closely related to the culture you are in. Ethnocentrism often leads to incorrect assumptions about others’ behavior based on your own norms, values, and beliefs. In extreme cases, a group of individuals may see another culture as wrong or immoral and, because of this, may try to convert the group, sometimes forcibly, to their own ways of living. Ethnocentrism may not, in some circumstances, be avoidable; we often have involuntary reactions toward another person’s or culture’s practices or beliefs. In research, it is important to acknowledge this so we can build in safeguards against this bias affecting the research at all steps of the methodology.

Collaborative Methodologies

During the development of anthropology in North America (Canada, the United States, and Mexico), a significant contribution made by American anthropologists in the nineteenth and twentieth centuries was the concept of cultural relativism: the idea that cultures cannot be objectively understood, since all humans see the world through the lens of their own culture. This idea is important to research in the field of psychology. Many researchers approach methodologies with the assumption that knowledge, research, and effective social action can best be co-constructed by researchers and community participants, incorporating the strengths and perspectives of all the stakeholders involved in a particular set of issues. This approach, often called community-based participatory action research, holds that complex social issues cannot be well understood or resolved by “expert” research alone, pointing to interventions from outside the community, which often have disappointing results or unintended side effects. In collaborative approaches, researchers and community partners build a genuine, trusting relationship, and this cooperative partnership is the basis on which all decisions about the project are made: from articulating a set of research questions, to identifying data collection strategies, to analyzing and interpreting information, to disseminating and applying the findings.

Institutional Review Board

An institution’s IRB meets regularly to review experimental proposals that involve human participants. (credit: International Hydropower Association/Flickr)
Any experiment involving the participation of human subjects is governed by extensive, strict guidelines designed to ensure that the experiment does not result in harm. Any research institution that receives federal support for research involving human participants must have access to an Institutional Review Board (IRB). The IRB is a committee of individuals often made up of members of the institution’s administration, scientists, and community members. The purpose of the IRB is to review proposals for research that involves human participants. The IRB reviews these proposals with the principles mentioned below in mind, and generally, approval from the IRB is required in order for the experiment to proceed.

An institution’s IRB requires several components in any experiment it approves. For one, each participant must sign an informed consent form before they can participate in the experiment. An informed consent form provides a written description of what participants can expect during the experiment, including potential risks and implications of the research. It also lets participants know that their involvement is completely voluntary and can be discontinued without penalty at any time. Furthermore, the informed consent guarantees that any data collected in the experiment will remain completely confidential. In cases where research participants are under the age of 18, the parents or legal guardians are required to sign the informed consent form.

While the informed consent form should be as honest as possible in describing exactly what participants will be doing, sometimes deception is necessary to prevent participants’ knowledge of the exact research question from affecting the results of the study. Deception involves purposely misleading experiment participants in order to maintain the integrity of the experiment, but not to the point where the deception could be considered harmful. For example, if we are interested in how our opinion of someone is affected by their attire, we might use deception in describing the experiment to prevent that knowledge from affecting participants’ responses. In cases where deception is involved, participants must receive a full debriefing upon conclusion of the study: complete, honest information about the purpose of the experiment, how the data collected will be used, the reasons why deception was necessary, and information about how to obtain additional information about the study.

Ethics and the Tuskegee Syphilis Study

Figure: A white doctor administers an injection to a Black participant in the Tuskegee Syphilis Study while two others look on.

Unfortunately, the ethical guidelines that exist for research today were not always applied in the past. In 1932, rural Black men from Tuskegee, Alabama, were recruited to participate in an experiment conducted by the U.S. Public Health Service, with the aim of studying syphilis in Black men. In exchange for free medical care, meals, and burial insurance, 600 men agreed to participate in the study. A little more than half of the men tested positive for syphilis, and they served as the experimental group (given that the researchers could not randomly assign participants to groups, this represents a quasi-experiment). The remaining syphilis-free individuals served as the control group. However, those individuals who tested positive for syphilis were never informed that they had the disease.

While there was no treatment for syphilis when the study began, by 1947 penicillin was recognized as an effective treatment for the disease. Despite this, no penicillin was administered to the participants in this study, and the participants were not allowed to seek treatment at any other facilities if they continued in the study. Over the course of 40 years, many of the participants unknowingly spread syphilis to their wives (and subsequently to their children) and eventually died because they never received treatment for the disease. The study was discontinued in 1972 when the experiment was discovered by the national press. The resulting outrage over the experiment led directly to the National Research Act of 1974 and the strict ethical guidelines for research on humans described in this chapter. As you can see, conducting ethical research is imperative.

 

References and Resources:

Listed below are the references and resources used to curate this module.
Anonymous (Jul. 2019). Research Methods in Psychology. LibreTexts Social Sciences. socialsci.libretexts.org/Courses/CSU_Fresno/Book%3A_Research_Methods_in_Psychology_(Cuttler_et_al.).
Bachman, Rosalee, et al. (2017). Nurses Report Psychological Testing Helps Them Provide Better Patient Care. Pearson Education. http://images.pearsonclinical.com/images/Assets/p-3/_btg_winter04p3.pdf.
Boundless (Dec. 2020). Sociology. LibreTexts Social Sciences. socialsci.libretexts.org/Bookshelves/Sociology/Introduction_to_Sociology/Book%3A_Sociology_(Boundless).
Cherry, Kendra (Nov. 2022). What Is the Confirmation Bias? VeryWellMind. https://www.verywellmind.com/what-is-a-confirmation-bias-2795024
Cherry, Kendra (Mar. 2023). How to Write a Great Hypothesis. VeryWellMind. verywellmind.com/what-is-a-hypothesis-2795239.
Chippewa Valley Technical College (Feb. 2023). Nursing Fundamentals. LibreTexts Medicine. med.libretexts.org/Bookshelves/Nursing/Nursing_Fundamentals_(OpenRN).
Fowler, Samantha, et al. (Apr. 2013). Concepts of Biology. OpenStax. openstax.org/details/books/concepts-biology.
Guntzviller, L.M. (n.d.). Cultural Sensitivity in Research. Sage Knowledge. sk.sagepub.com/reference/the-sage-encyclopedia-of-communication-research-methods/i3835.xml.
Hall, J.M, and Jill Powell (May 2011). Understanding the Person Through Narrative. National Library of Medicine. ncbi.nlm.nih.gov/pmc/articles/PMC3169914/.
Hudson Valley Community College (n.d.). Adolescent Psychology. Lumen Candela. courses.lumenlearning.com/adolescent/.
Jhangiani, R.S., et al. (Aug. 2019). Research Methods in Psychology. Kwantlen Polytechnic University. kpu.pressbooks.pub/psychmethods4e/.
Liamputtong, Pranee (Jun. 2012). Cultural Sensitivity: A Responsible Researcher. Cambridge Core. cambridge.org/core/books/abs/performing-qualitative-crosscultural-research/cultural-sensitivity-a-responsible-researcher/6F3FE4968A7D662C37CB6A0BC7A39BEB.
LibreTexts Social Sciences (Aug. 2022). Child Family Community: The Socialization of Diverse Children. LibreTexts Social Sciences. socialsci.libretexts.org/Bookshelves/Early_Childhood_Education/Child_Family_Community%3A_The_Socialization_of_Diverse_Children.
LibreTexts Social Sciences (Oct. 2022). Individual and Family Development, Health, and Well-Being. LibreTexts Social Sciences. socialsci.libretexts.org/Sandboxes/admin/Individual_and_Family_Development_Health_and_Well-being_(Lang).
Lumen Learning (Nov. 2020). Cultural Anthropology. LibreTexts Social Sciences. socialsci.libretexts.org/Bookshelves/Anthropology/Cultural_Anthropology/Cultural_Anthropology_(Evans).
Lumen Learning (n.d.). Research Methods in Psychology. Lumen Candela. courses.lumenlearning.com/suny-psychologyresearchmethods/.
Magsamen-Conrad, Kate (n.d.). Introduction to Social Scientific Research Methods in the Field of Communication. University of Iowa. pressbooks.uiowa.edu/ssresearchmethodscommunicationonline/.
McCombes, Shona (May 2022). How to Write a Strong Hypothesis | Steps & Examples. Scribbr. scribbr.com/methodology/hypothesis/.
Miller, S.A., et al. (2022). Abnormal Psychology. Lumen Waymaker. courses.lumenlearning.com/wm-abnormalpsych/.
Navarro, Danielle (Jul. 2022). Learning Statistics with R: A Tutorial for Psychology Students and Other Beginners. LibreTexts Statistics. stats.libretexts.org/Bookshelves/Applied_Statistics/Book%3A_Learning_Statistics_with_R_-_A_tutorial_for_Psychology_Students_and_other_Beginners_(Navarro).
Pelz, Bill, et al. (n.d.). Developmental Psychology. Lumen Learning. courses.lumenlearning.com/suny-hccc-ss-152-1/.
Psychology (n.d.). Developmental Psychology Research Methods. Psychology. http://psychology.iresearchnet.com/developmental-psychology/developmental-psychology-research-methods/.
SLKN (n.d.). The Role of Nurses in Psychological Assessment. NursingNotes. nursingenotes.com/role-of-nurses-in-psychological-assessment/.
Spielman, R.M., et al. (Apr. 2020). Psychology 2e. OpenStax. openstax.org/details/books/psychology-2e.
Wake Forest University School of Medicine (n.d.). Ethics in Clinical Research: Foundations and Current Issues. Wake Forest University School of Medicine. school.wakehealth.edu/education-and-training/graduate-programs/clinical-research-management-ms/features/ethics-in-clinical-research.
Worthy, L.D., Lavigne, T., and Romero, F. (2020). Ethnocentrism and Cultural Relativism. Culture and Psychology. https://open.maricopa.edu/culturepsychology/chapter/ethnocentrism-and-cultural-relativism/. Licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.

License

Icon for the Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License

Introduction to Human Development Copyright © 2024 by Bridget Reigstad and Stacey Peterson is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License, except where otherwise noted.