Research in Public and Nonprofit Programs

The Basics

What do you think of when you hear the word research? Do you imagine a scientist working in a lab, an economist surrounded by computers and graphs, or a scholar poring over a pile of books? You are right; each is conducting research. But research goes well beyond the work of a scientist, an economist, or a scholar. Research may describe what you do when you decide what computer to buy, what jobs to apply for, or what music to download. It also describes what effective managers and influential stakeholders do when they monitor a program’s performance, determine its effectiveness, and assess the experiences and opinions of its clients. With information gathered from research, managers monitor their programs, publicize their successes, and identify opportunities for program improvement. Stakeholders may use the information to decide what programs to support and what policies to advocate for. Just as you do, organizations make some decisions quickly and with little information. Other decisions, however, are made only after gathering and organizing as much information as possible. This latter approach is the subject of this text. We define research as the systematic gathering and analysis of empirical information to answer a question.


Research and effective organizations go hand in hand. They both depend on openness, accountability, and a commitment to learning and change. Openness or transparency is a key organizational and research value, which adds to an organization’s reputation. Stakeholders want to know how an organization uses its resources. They want information on what the organization does and what it has achieved. Information on a program’s performance, its effectiveness, and how it is perceived all contribute to transparency. You, your staff, or a contractor may conduct research to gather information. Whoever conducts the research is expected to disclose what information they gathered, whom they gathered it from, and how they analyzed it. With full disclosure, management teams have the information they need to decide if and how they can act on the research findings.

Accountability may be thought of as an attribute of transparency. An accountable organization is open and produces evidence that it is meeting stakeholder expectations. Donors, funders, and clients may all want confirmation that a program is well managed and achieving results. You may collect data on a program’s clients, activities, services, and results. Information on clients and activities documents that the program reaches the target population and provides appropriate services. You can then link the information on clients and activities to data on results. From the analysis you can learn which clients benefit most from a program and which services yield the greatest benefits.

In our eyes the most exciting use of research is to aid in organizational learning and change. The learning opportunities seem limitless. A management team can examine data, gain insights about what is working and what isn’t, and explore how a program could do better. Team members with a firm grounding in research methods and analysis may avoid overvaluing research findings and learning the wrong things or undervaluing findings and missing an opportunity to learn.

An organization may conduct a study only because someone else wants it. Managers may comply with the requirements and simply supply the requested information. In doing so they may miss an opportunity to design a study that answers valuable questions or to use the findings to improve their organization. One of our objectives is to empower you to design and implement studies and use findings to further an organization’s goals or to improve programs. In this text we consider the words researcher, managers, administrators, and management team interchangeable. We assume that you will be able to competently and correctly apply the skills presented here no matter what roles you play in your career.


Whether you want to learn about programs, organizations, or communities you should start with a research question. The research question is a question that someone wants answered. It has two characteristics. First, it can only be answered with observable or empirical information. By definition, research involves the study of observable information. Without observable information no research takes place. Second, the research question should have more than one possible answer. Otherwise why spend the time and money to answer it? While research questions are easy to generate, asking an appropriate question depends on identifying exactly why the study is being done.

Whoever wants the research must be clear on why they want the information, when they want it, how they will use it, and what resources they have available to conduct the research and implement the findings. A management team may consider if the research question is related to the organization’s mission and what it will do with the findings. Can the answers justify additional resources, help keep existing resources, or improve employee performance? Studies of inconsequential topics or policies resistant to change are likely to be ignored. A record of producing unused studies is unlikely to lead to organizational strength or career success.

In selecting the specific research questions the management team should decide what evidence will provide adequate answers. Management teams want to avoid “shooting ants with an elephant gun.” In other words, they do not want to develop a complicated study to answer a simple question. At the same time, they do not want to design a study that overlooks important information and yields superficial, unusable findings. Nor do they want to plan a study whose requirements exceed available resources or that yields information too late to help make a decision.

Let’s consider a community effort to end homelessness. A task force may want to identify gaps in existing services. To get started it may generate a list of research questions: What programs does the community have for people who are homeless? What population does each program serve? What services does each program offer? What is each program’s success rate? Each question requires empirical information. As task force members discuss the questions, they may develop more specific questions, for example:

■  How many clients do existing programs serve each month? How long does it take a client to find permanent housing? (These questions address performance.)

■  What do clients think about the programs and services? (This question addresses client experiences and opinions.)

■  Which services are most successful? What clients and programs have the greatest success? (These questions address program effectiveness.)

■  Where should new or expanded services be located? What populations are underserved? (These questions address community needs.)


Once the study’s purpose and the research question are defined you can begin your research. In this section we introduce basic research terms that we will use throughout the text.

Variables, Values, and Constants

Variables, values, and constants refer to the basic components needed to answer a research question. Variables are observable characteristics that have more than one value. Values are the different categories or quantities a variable can take. The values may be dichotomous (yes or no), specific categories (Main Street Center, Southside Shelter, Salvation Army), or numbers.

To answer the question “What clients and programs have the greatest success?” you might ask managers of programs that have homeless clients to report the following information on each homeless client:

Variable                            Values
Client age                          Under 18, 18–21, 22–25, etc.
Has chronic mental illness          Yes, no
Received job counseling             Yes, no
Received mental health treatment    Yes, no
Received substance abuse treatment  Yes, no
Total services received             0, 1, 2, 3
Months homeless                     Actual number of months

If a characteristic has only one value, it is a constant. Constants do not vary. For example, if all the programs were in Washington County, Washington County would be a constant.
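To make these terms concrete, here is a minimal Python sketch of a single client record built from the variables listed above. All field names and values are hypothetical illustrations, not drawn from any real data set.

```python
# One hypothetical client record. Each key is a variable; each stored
# entry is one of that variable's possible values.
client = {
    "age_group": "18-21",               # Under 18, 18-21, 22-25, etc.
    "chronic_mental_illness": "yes",    # dichotomous: yes / no
    "job_counseling": "no",             # yes / no
    "mental_health_treatment": "yes",   # yes / no
    "substance_abuse_treatment": "no",  # yes / no
    "months_homeless": 7,               # actual number of months
    "county": "Washington",             # identical for every client, so this
                                        # is a constant, not a variable
}

# "Total services received" can be derived from the three service variables.
services = ("job_counseling", "mental_health_treatment",
            "substance_abuse_treatment")
client["total_services"] = sum(client[s] == "yes" for s in services)  # 0-3
```

Because every record shares the same county, analyzing `county` would tell us nothing; only characteristics that vary can help answer a research question.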


Variables are linked to other variables to create a hypothesis. A hypothesis is a statement that specifies or describes the relationship between two variables in such a way that the relationship can be tested empirically. A clearly written hypothesis helps you decide what data to collect and how to analyze them. We used the above variables to create three hypotheses:

H1 Clients under the age of 18 are homeless an average of 14 days.
H2 Younger clients receive more services than older clients.
H3 The more services clients receive the less time they are homeless.

In H1 “Clients under the age of 18” is a constant, the only one in the three hypotheses. We could argue that H1 is not really a hypothesis since it does not test a relationship between two variables. Still, examining only one variable can provide useful information. Programs for adolescents may require specific information about homeless youth—who they are, what services they use, and how long they have been homeless.

For many research efforts comparisons are valuable. Program managers may want to compare their clients with those of other programs. Staff members may want to learn how one variable is linked to another. H2 compares number of services received by older and younger clients. H3 compares the number of services received with the time a person is homeless.

A hypothesis should be specific; it should not be vague or ambiguous. Imagine a hypothesis “Services are related to program success.” We do not know (1) if the hypothesis is referring to specific services, the number of services, or something else; (2) whether services have a beneficial or detrimental impact on program success; or (3) what is meant by success. H2 could also be more specific. You might want to know what ages are represented by “older” and “younger.” To see if the number of services varies across several age groups, from the youngest to the oldest clients, the hypothesis might be stated as “The older the client, the fewer services he or she receives.”

A hypothesis implies that a change in one variable brings about a change in another variable. The independent variable is thought to explain the variations in the characteristic or event of interest. The dependent variable represents or measures the characteristic or event the investigators want to explain. For example, in H3 the independent variable “number of services received” is thought to explain how long a client is homeless. One may visually link the independent and dependent variables with an arrow from the independent variable pointing to the dependent variable.

A hypothesis implies a cause–effect relationship. You may state a hypothesis as an if–then statement. For example, “If clients are older then they will be homeless longer.” The “if” statement contains the independent variable, that is, client age. The “then” statement contains the dependent variable, that is, the length of time homeless. In our example hypotheses, the independent variables are number of services received (H3) and client age (H2). The dependent variables are number of services received (H2) and length of time a person is homeless (H3).

For a cause–effect relationship to exist three criteria must be satisfied: a statistical relationship must exist between the independent and dependent variables, the independent variable must occur before the dependent variable, and other possible causes must be eliminated. H2 and H3 both imply a statistical relationship between the independent variable and the dependent variable. We may assume that the independent variables in H2 and H3 occurred before the dependent variable. Eliminating other possible causes, however, is difficult. You cannot assume that age explains why some clients received more services than others or that the number of services received explains why some clients found housing more quickly.

Hypotheses do not have to imply causal relationships. Just knowing a relationship exists may be sufficient. How would you interpret the findings if they supported H2? You might wonder if programs discriminate against younger clients, if older clients have different needs, if younger clients are less willing to accept services, or if programs have limited capacity and offer more to those who may benefit the most.

You may wonder why we go to the trouble of stating hypotheses and identifying independent and dependent variables at all. First, this step helps clarify our thinking. As you generate and evaluate hypotheses you can ask yourself, “Why do I think this relationship exists?” “Why do I want to know about this relationship?” “Do I think that the independent variable leads to the dependent variable? Why?” Let’s consider H3. While you might assume that the more services received led to more success in finding housing, it is also possible that the longer a person was homeless the more services he or she was offered. So, in stating a hypothesis and designating an independent variable you have really fleshed out the relationship beyond the bare bones of the hypotheses. Second, identifying the hypotheses and the variables helps in communication. An audience can quickly visualize the question you are asking and how you plan to answer it. Third, specifying the variables and the relationship between them guides your analysis as you decide what to study and how to organize your data.

The Direction of Relationships: An important characteristic of a hypothesis is the pattern of the relationships it postulates. The relationship between two variables may take on one of three forms: direct, inverse, or nonlinear.

Let’s consider the link between age and how long a person is homeless. A direct or positive relationship occurs if the older clients are homeless longer than younger clients. An inverse or negative relationship occurs if older clients are homeless for a shorter time than younger clients. Both direct and inverse relationships may be described as “linear,” that is, as the independent variable increases the dependent variable steadily increases or decreases. Figures 1.1 and 1.2 illustrate a direct and an inverse relationship, respectively.

A relationship may have a distinctive but nonlinear pattern. An independent variable and dependent variable may vary together and then level off or change direction. Data may show that too much of a good thing can be harmful. For example, too much fertilizer may weaken crops. If the data on fertilizer use and crop yields were plotted, the points might form an upside down U. Each addition of fertilizer may yield better crops, but at some point adding fertilizer will steadily reduce plant quality. The same thing happens to humans. For example, the longer a person stays in a job training program the more likely he or she is to find a job, but at some point the benefit may drop off or actually decrease. Clients who have stayed on “too long” may cease to benefit from the advice they are given. After a period of time some clients may continue to receive training but give up looking for a job. Figure 1.3 illustrates a possible nonlinear relationship between age and number of days homeless. We imagined that among younger and middle-aged persons the relationship between age and time homeless may be direct, but as the person becomes elderly the period of homelessness may decrease.

FIGURE 1.1 Illustrating Direct Relationship: Age and Time Homeless
FIGURE 1.2 Illustrating an Inverse Relationship: Age and Time Homeless

If possible, a hypothesis should specify the direction of a relationship. Although postulating the direction will not affect the data you gather or the analysis you do, your findings may be more valuable if you have laid out a case for the hypothesized directions. Findings that run counter to your expectations may generate thoughtful discussions and new insights.

If an independent variable has no discernible effect on the dependent variable, we say that the two variables do not vary together. If two variables do not vary together, their relationship may be described as random or null.

FIGURE 1.3 Illustrating a Nonlinear Relationship

Control Variables: In reality a two-variable hypothesis gives limited information. Additional variables, termed control variables, may better describe the complexity of why something happens. A control variable may be added to see how it alters the relationship between the independent and dependent variables. A control variable may show that (1) the relationship between the independent and dependent variables is stronger for some values of the control variable than for others, or that the hypothesis is supported for some values of the control variable and unsupported for others; (2) the relationship does not have the same direction for each value of the control variable; (3) the control variable has little or no impact on the relationship; or (4) the original relationship is spurious.

We have created hypothetical tables to examine the hypothesis that older clients are homeless longer than younger clients. Assume that your data show that younger clients are homeless no more than 30 days and older clients are homeless for more than 30 days. You wonder if older clients are homeless longer because they are more likely to suffer from chronic mental illness. You designate chronic mental illness as the control variable and divide the data into two groups: clients with chronic mental illness and other clients. Tables 1.1 through 1.3 illustrate different impacts of a control variable.

In Table 1.1, among clients with mental illness older clients are homeless longer. The same relationship is not seen among clients who are not mentally ill: both older and younger clients are homeless for no more than 30 days. In Table 1.2, among clients with mental illness older clients are homeless longer (a direct relationship). Among clients without mental illness younger clients are homeless longer, an inverse relationship. In Table 1.3, the difference between younger and older clients disappears. From Table 1.3 we would conclude that mental illness, not age, is related to how long a client is homeless. If the original relationship stayed the same, that is, younger clients were homeless for no more than 30 days and older clients were homeless for more than 30 days whether or not they had mental illness, we would conclude that the control variable does not affect the relationship.


TABLE 1.1 Median Days Homeless of Younger and Older Clients by Presence of Mental Illness: Impact 1

                  Median Days Homeless
Mental Illness    Younger    Older
Yes               <30        >30
No                <30        <30


TABLE 1.2 Median Days Homeless of Younger and Older Clients by Presence of Mental Illness: Impact 2

                  Median Days Homeless
Mental Illness    Younger    Older
Yes               <30        >30
No                >30        <30


TABLE 1.3 Median Days Homeless of Younger and Older Clients by Presence of Mental Illness: Impact 3

                  Median Days Homeless
Mental Illness    Younger    Older
Yes               >30        >30
No                <30        <30
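The logic behind Tables 1.1 through 1.3 can be sketched in a few lines: split the client records on the control variable and compare medians within each group. The records below are invented to reproduce the pattern of Impact 1 (Table 1.1).

```python
from statistics import median

# Invented records: (age_group, chronic_mental_illness, days_homeless),
# built to mirror Table 1.1's pattern.
clients = [
    ("younger", "yes", 25), ("younger", "yes", 28),
    ("older",   "yes", 45), ("older",   "yes", 60),
    ("younger", "no",  20), ("younger", "no",  22),
    ("older",   "no",  21), ("older",   "no",  24),
]

def median_days(age_group, mental_illness):
    """Median days homeless within one cell of the control-variable table."""
    return median(d for a, m, d in clients
                  if a == age_group and m == mental_illness)

# Among clients with mental illness, older clients are homeless longer...
print(median_days("younger", "yes"), median_days("older", "yes"))  # 26.5 52.5
# ...but among other clients the age difference all but disappears.
print(median_days("younger", "no"), median_days("older", "no"))    # 21.0 22.5
```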

Defining Research Subjects

All research depends on subjects. Possible subjects include geographic units, organizations, services, or individuals. The term unit of analysis describes what constitutes an individual case. If we collected data on each client of a program serving homeless persons, our unit of analysis would be individuals. To create a list of variables and values you may ask organizations to provide data on individual clients. If our research question was “What are the different client populations, services offered, and success rates of local programs?” the unit of analysis might be the program. You might ask each program for information on the following variables.

Upon analysis you may find that programs with older clients report longer periods of homelessness than programs with younger clients. This finding alone does not indicate that older clients are homeless for a longer period of time because the data do not represent individual clients. You may correctly infer that programs that serve older clients may have unique features. They may offer more long-term treatment programs to address problems associated with old age. They may be more willing to accept clients who have a history of drug abuse or mental illness. As you may have observed, we may be able to aggregate individual data to describe a program. But we cannot do the reverse.

Variable                                            Value to Report
Percentage of clients with chronic mental illness   Actual percentage
Average age of clients                              Median age
Median days clients are homeless                    Median number of days
Service provided
  Substance abuse rehab                             Yes, no
  Job counseling                                    Yes, no
  Medical diagnosis and treatment                   Yes, no
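Aggregation runs only one way, which a short sketch makes visible: client-level records can be rolled up into a program-level row like the one above, but a program-level row cannot be split back into clients. The records and field names are hypothetical.

```python
from statistics import median

# Hypothetical client-level records for one program.
program_clients = [
    {"age": 24, "days_homeless": 40, "mental_illness": True},
    {"age": 31, "days_homeless": 25, "mental_illness": False},
    {"age": 58, "days_homeless": 90, "mental_illness": True},
]

# Aggregating up: individual records -> one program-level row.
program_row = {
    "pct_mental_illness": round(
        100 * sum(c["mental_illness"] for c in program_clients)
        / len(program_clients)),
    "median_age": median(c["age"] for c in program_clients),
    "median_days_homeless": median(c["days_homeless"] for c in program_clients),
}
print(program_row)
# The reverse is impossible: from program_row alone we cannot recover which
# client was homeless 90 days or whether older clients were homeless longer.
```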

The population describes the specific set of subjects we are interested in. The population can be counties in a state, programs in a county, or city residents who are homeless. The respective units of analysis would be counties, programs, and individuals. A sample is a subset of the population. We may create a sample to learn about something of interest or we may create a sample to estimate something about the population. For example, to learn about programs serving homeless persons we may decide that we will get the best information by sampling agencies known for excellent services. To learn about what types of programs exist, their clients, and their services we may want to randomly sample from the population of all programs.
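The population/sample distinction can be illustrated with Python’s standard `random` module; the program names below are invented.

```python
import random

# Hypothetical population: all programs in the county (unit of analysis: program).
population = ["Main Street Center", "Southside Shelter", "Salvation Army",
              "Harbor House", "New Start Program", "County Outreach"]

random.seed(1)                           # fixed seed so the draw is repeatable
sample = random.sample(population, k=3)  # a simple random sample of 3 programs

# Every case in the sample is drawn from, and belongs to, the population.
assert set(sample) <= set(population)
print(sample)
```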


Sound research skills are important to public and nonprofit organizations and their managers. Effective managers should know how to design and use research to determine how well their organization is doing; how it is seen by their clients, users, and the community; how effective its programs are; and what its community needs are. The public, donors, and legislators, who expect organizations to be transparent and accountable, require such information. Furthermore, the information is needed to help identify problems, design programs, and improve services.

This text is designed to teach you research skills you will use in your career and to give you opportunities to practice applying them to programs. This section of the text introduces basic research concerns. This chapter covered types of research questions and associated terms. Chapters 2 and 3 cover measurement and ethical treatment of research participants, respectively. Effective measurement is critical to any research study. Considering ethical issues that arise in conducting research before beginning a study is essential.

In the later sections we integrate material on methodology and statistics with the types of studies typically conducted by effective managers and programs. These studies may

1.  track program performance to provide empirical information on its operations and accomplishments;

2.  survey clients, users, and citizens to identify their impressions, experiences, and satisfaction with a program;

3.  evaluate a program to demonstrate its effectiveness;

4.  assess a community’s needs to determine what problems exist and gaps and overlaps in existing services.

Chapters 4 and 5 focus on tracking performance. Chapter 4 presents logic models, which are used to design, implement, and evaluate programs. We introduce them here because they help identify relevant variables. The remainder of the chapter covers how to measure efficiency and track a variable over time. Since performance monitoring typically tracks a single variable, Chapter 5 is the logical place for us to discuss graphs and basic statistics.

Chapters 6 through 10 cover surveys. These methods do not apply only to surveys; they play a role in designing other types of studies. Chapters 6 and 7 describe how to select a sample and ask questions, which are basic methodology skills. Chapters 8 through 10 explain how to analyze the data. Chapter 8 focuses on how to describe the data using statistics and graphs. Chapter 9 addresses the question of generalizing from a sample to the population. Chapter 10 explains how to present and analyze information from interviews and open-ended questions.

The next section includes three diverse chapters. Chapter 11 on program evaluation covers research design, a central topic in any introductory methods course. Research designs are of particular importance in collecting and assessing evidence that demonstrates a program’s effectiveness. Community needs assessments, the topic of Chapter 12, are not typically covered in an introductory text. We included the topic here because persons with research training may be expected to help design and conduct a needs assessment. Learning about community needs assessments is consistent with our effort to link research methods with management applications. Chapter 13, on geographic information systems, may seem an unusual addition to a book on research methods. The use of maps in conducting research is not fully developed, but it is increasing. Maps may play an important role in needs assessments and whenever you think knowing the location of problems, needs, services, or consumers is important. This chapter is intended to help you think about when maps will help you in designing or presenting research.

Chapter 14 covers both oral and written presentations. A study’s impact will be enhanced if the findings are presented clearly and effectively. Studies and their findings compete for attention. To illustrate, think about how many articles you skim or don’t read at all or how many news reports you ignore. Potential users are no different; they seldom spend time deciphering a complicated report or a disorganized presentation.

A key component of this text is the exercises. Hands-on applications give you immediate feedback about what you understand and what confuses you. We have divided the exercises into the sections Getting Started, Small Group Exercises, and On Your Own. Getting Started asks you to apply the information presented in the chapter to management situations or policy problems. The small group and class exercises mirror what happens in most research projects, that is, each individual participant has ideas about what the research question is, why it was asked, and how to answer it. The exercises in this section require you to develop important skills. You will practice explaining research concepts, presenting your ideas, exchanging ideas with others, and resolving differences. You will find that as you try to explain your ideas, listen to the ideas of others, and incorporate different insights you will understand the methodology better. Through the interaction you should propose an answer that is better than any one person’s initial response. The On Your Own exercise is included in most chapters. If you are working or have an internship, you will find this section of the exercise useful as it guides you through the steps required to apply the methodology and to learn more about research practices in the field.



This section has two exercises, which should develop your skills in linking a research question to stakeholders’ needs.

•  Exercise 1.1. Learning from Sarah’s Story presents a scenario and asks a series of questions that require you to apply basic concepts and debate how to design a useful study.

•  Exercise 1.2. On Your Own asks you to identify a relevant research question for your agency (where you work or have an internship) and to consider the constraints and opportunities for conducting a useful study.

Exercise 1.1 Learning from Sarah’s Story


In 1978 The New Yorker published a story about a young woman, Sarah, whose first job after college was to research public assistance programs. We have embellished Sarah’s story to point out the problems that occur when a researcher starts with no clear direction and to give you practice selecting a research question and clarifying the research components.

Sarah’s Story (retold more than 30 years later)

Immediately after graduation Sarah applied for a job with a statewide community action agency. The successful candidate would conduct research on the state’s public assistance programs. The position was funded for 12 months. Sarah was hired.

Sarah’s job interview was short and provided little guidance. The executive director told her not to spend too much time in the library. Rather she should explore and interpret what she saw when she visited the county programs and met county residents who were receiving assistance. She was to submit quarterly reports; her final report would be due at the end of the year.

During the first quarter Sarah drove throughout the state and talked to hundreds of clients in 14 counties. For her first quarterly report she compiled their stories. The executive director called her in and sharply criticized her report. Sarah had totally missed the questions she was expected to answer. The executive director expected a report that the agency could use to advocate with the state legislature.

Section A: Getting Started

1.  During the initial interview Sarah and the executive director could have explored the research questions that Sarah was expected to answer.

a.  Suggest a research question focused on

i. agency performance
ii. client experiences
iii. program evaluation
iv. assessing community needs

b.  Which question do you think would yield the most valuable information? Why?

2.  After Sarah was hired she should have met with agency stakeholders to hear what research questions they would like her to answer.

a.  Write three specific research questions that you can imagine stakeholders would want answered and briefly explain how the information could be used.

b.  What resources will be needed to answer these questions? to implement the findings?

3.  Identify how the following could constrain the proposed study.

a.  Personnel requirements

b.  Costs

c.  Time requirements

4.  Carry out the following assignments to create and describe a hypothesis and variables.

a.  Write a hypothesis that Sarah could test.

b.  Identify each variable in the hypothesis, state at least two values for each variable, and identify the type of each variable (independent, dependent).

c.  State why [name of independent variable] is the independent variable in the hypotheses.

d.  What is the direction of the hypothesis?

e.  Suggest a control variable Sarah might use to further study the hypothesis.

f.  How do you anticipate the control variable affecting the hypothesis?

5.  What unit of analysis should Sarah choose to test her hypothesis?

6.  What population would Sarah want to generalize to?

Section B: Small Group Exercise

1.  Form a group with three to five classmates who suggested a similar research question and (a) consolidate your research questions to state the question that will guide the study, (b) prepare a statement suggesting the value of answering this question, (c) identify types of costs associated with conducting the study and implementing its findings, and (d) identify possible constraints.

2.  A spokesperson for each group will present its proposal. After each presentation the class should discuss and evaluate the value of answering the proposed research question and the identified costs and constraints.

3.  After the presentations the class as a whole should rank the proposals and decide which one Sarah should pursue.

Exercise 1.2 On Your Own

As part of your job or an internship you may be asked to design a research study. Before expending the time, money, and other resources on a research study, you should find the answers to the following questions about it.

1.  Why does the management team want the study done?

2.  By what date must the study be completed?

3.  What will the findings of the study be used for?

4.  What resources have been set aside for this study?

5.  How does the study advance the mission and objectives of the organization?

6.  How can the findings of this study be used to [consider all that apply]

a.  justify additional resources?

b.  maintain existing resources?

c.  monitor a program?

d.  improve employee performance?

e.  improve program implementation?

f.  improve program outcomes?

g.  identify client needs?

7.  Identify how the following may constrain this study.

a.  Personnel

b.  Finances

c.  Time

d.  Public opinion

e.  Political realities

8.  How will program staff be involved in developing the research question?



We live in a world where information is being increasingly quantified. Reviewers rate movies with stars, newspapers report the cost of living in the “world’s most expensive cities,” and international watchdog organizations rank countries from the most corrupt to the least corrupt. We refer to this process of quantification as measurement. Consider the importance of measurement both in everyday life and to organizations. People use reviewer ratings to decide what movie to see. Tourists use cost-of-living reports to decide what cities to visit. Businesses use information on corruption to decide what international businesses to engage as partners. Organizations use data to identify community needs, to track performance, to evaluate programs, or to learn about clients. An organization may report data to influence policy decisions or to demonstrate accountability. In some cases, such as identifying community needs, the organization may use existing data. In other cases it will develop its own measures and collect the data.

In this chapter we take you through the steps required to decide what you want to measure and determine the quality of measures you use. Later chapters build on this knowledge. Chapter 4 introduces performance monitoring; you will learn strategies for identifying measures to describe individual and organizational performance. Chapter 5 has a brief section on levels of measurement, which is linked to question wording and the choice of statistics. In Chapter 7 we turn to writing questions and questionnaires. Whether you conduct a survey or design forms, the specific wording of questions and items directly affects the quality of your measures and the value of your data.


As we begin to develop measures, we refer to the characteristic we want to measure as a concept and to the statement that describes what we mean by the concept as a conceptual definition. Let’s begin with a homey example. What defines a “good restaurant”? Is it one that serves large portions for a relatively low price? Is it one that has interesting, even cutting-edge, food? Or do you look for atmosphere? If you think about your experiences in asking for restaurant recommendations you will recognize an important characteristic of a conceptual definition: there is no one correct definition of a concept. Whether a definition is appropriate largely depends on the users and how they plan to use the information.

The first step is to learn how critical stakeholders define what will be measured and how they plan to use the information. If an adult literacy program manager asked you to help document its effectiveness you might first search for a credible definition of adult literacy. National assessments of literacy use the following conceptual definition: “The ability to use printed and written information to function in society, to achieve one’s goals, and to develop one’s knowledge and potential,” including the ability to read prose, handle forms, and perform arithmetic calculations.1 The measures based on this definition have been used to track adult literacy in the United States and other countries, but this definition may be of limited use to a specific literacy program. Next you might meet with program staff and ask them what they want to achieve. You could ask staff members how they define literacy, how program activities should add to client literacy, and how they know that a program has been successful. You might ask them how their definitions compare with what clients, funders, and donors expect the program to achieve. Not all definitions are relevant to your study. Definitions, such as those used in national assessments, may help the staff organize its thinking, but they may not measure what a specific program is trying to achieve.

Assume that the staff agrees that they define literacy as “the parents’ ability to read to their children and engage them in conversations about the stories.” You would then find a way to measure the parents’ ability to read the stories, understand the stories, and talk about the stories. Let’s just focus on the ability to read stories and understand them. You might present the clients with a few very short stories, ask them to read the stories aloud, and ask a few questions about each story. You would develop a guide to score each client’s ability to read the story and answer the questions. The short stories, the questions asked, and the scoring guide constitute an operational definition of literacy. In other words, the operational definition gives you the complete picture of how clients’ literacy was determined. The stories included in the operational definition may seem appropriate given the criteria of word length and sentence structure. But a problem occurs if the stories are about unfamiliar objects. A story about farm life may confuse readers with no knowledge of farm buildings, crops, and various animals. To avoid such problems you must conduct a pretest. You should ask persons similar to the program’s clients to read the stories and answer the questions. The importance of conducting a pretest is a message that can’t be repeated often enough.

In practice you will seldom encounter the term conceptual definition, but you should not ignore its importance. Stating a conceptual definition is simply a way of asking, “What do we mean by X [the concept or term of interest]?” Too often, once an operational definition is stated, people focus on its technical details. They may labor over the wording of the questions and responses. They may not stop to consider whether they are measuring what they want to measure.

Let’s work through another example focusing on how an organization can evaluate its volunteer orientation program. First, the staff should decide what it wants to accomplish. Should the orientation introduce potential volunteers to the organization, its mission, its history, and its values? Will participants be trained to carry out specific tasks? Are participants expected to sign up as volunteers and recruit other volunteers? Once the goals are decided they are incorporated into a conceptual definition. The conceptual definition serves as a blueprint for deciding what to ask participants at the end of an orientation. The shaded box below uses orientation quality to illustrate a conceptual definition, an operational definition, and the relationship between them.

Measuring the Quality of Volunteer Orientation

Variable: Perceived Quality of Volunteer Orientation

Conceptual Definition: Orientation attendees understand the organization, value its services, are motivated to volunteer, and feel prepared to work with clients.

Operational Definition, Part I

Ask each participant to fill out a form that includes the following items:

For each statement indicate the response that describes your opinion:

1.  I understand the mission of [name of organization].

Responses: Strongly agree, agree, neither agree nor disagree, disagree, strongly disagree

2.  I can describe the services [name of organization] offers.

Responses: Strongly agree, agree, neither agree nor disagree, disagree, strongly disagree

3.  I feel qualified to refer clients to [name of organization].

Responses: Strongly agree, agree, neither agree nor disagree, disagree, strongly disagree

4.  I am comfortable with recruiting others to volunteer with [name of organization].

Responses: Strongly agree, agree, neither agree nor disagree, disagree, strongly disagree

5.  I have signed up to be a volunteer with [name of organization].

Responses: Strongly agree, agree, neither agree nor disagree, disagree, strongly disagree

6.  I feel uncomfortable about working with clients.

Responses: Strongly agree, agree, neither agree nor disagree, disagree, strongly disagree

Operational Definition, Part II

Responses to Questions 1 through 5 are scored 5 = strongly agree, 4 = agree, 3 = neither agree nor disagree, 2 = disagree, 1 = strongly disagree. Question 6, the negatively worded item, uses the opposite scoring: 1 = strongly agree through 5 = strongly disagree. The sum of the six questions represents the perceived quality of orientation. The scores can range from 6 (the lowest quality) to 30 (the highest).
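For readers who tally such scores in a spreadsheet or script, the scoring rule can be sketched in a few lines of Python. This is a hypothetical illustration (the function name and response coding are ours, not part of the instrument); the negatively worded item is reverse-scored so that higher totals always indicate higher perceived quality.

```python
# Hypothetical scoring sketch for the six orientation items (not part of
# the original instrument). Responses are coded 5 = strongly agree through
# 1 = strongly disagree; the negatively worded item is reverse-scored.

LIKERT = {"strongly agree": 5, "agree": 4, "neither agree nor disagree": 3,
          "disagree": 2, "strongly disagree": 1}

def orientation_score(responses, reverse_items=(6,)):
    """Sum the six items, reverse-scoring negatively worded ones.

    responses: dict mapping item number (1-6) to a response string.
    """
    total = 0
    for item, answer in responses.items():
        value = LIKERT[answer.lower()]
        if item in reverse_items:
            value = 6 - value  # 5 -> 1, 4 -> 2, ..., 1 -> 5
        total += value
    return total

# A respondent who strongly agrees with items 1-5 and strongly disagrees
# with the negatively worded item earns the maximum score of 30.
answers = {i: "strongly agree" for i in range(1, 6)}
answers[6] = "strongly disagree"
print(orientation_score(answers))  # -> 30
```

Reverse-scoring with 6 minus the value maps 5 to 1 and 1 to 5, so a respondent who strongly disagrees with the negatively worded item still contributes the maximum 5 points.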

An operational definition determines what we actually measure. In the previous example we learned the perceptions of people who attended orientation. We do not learn if their understanding of the organization and its programs is accurate. We do not know if they are really prepared (or unprepared) to work with clients. We may not know if they actually signed up or actually volunteered. To get that information we would use a different operational definition. For some items we might ask participants specific questions about the organization, its services, and volunteer tasks.

Let’s stop here for a word of caution about the pragmatism of operational definitions. You may be tempted to use questions to evaluate volunteer orientation programs that you find on the Internet or obtain from a friend. But a measure that has been designed to assess a general orientation for a large nonprofit with diverse programs, such as the Red Cross or YMCA, might not be appropriate to assess an orientation for volunteers who will work with survivors of domestic violence. Even if a measure has documentation establishing its quality, this does not mean it is the right measure for your study. You need to consider your purpose and the characteristics of the individuals or organizations who will supply the requested information.


Before you put a question on a survey or interview guide you should establish its quality. First, you may ask, “Is the measured difference between subjects a real difference?” or “Did measured changes over time really occur?” These questions address the reliability of a measure. Second, you may ask, “Does this measure actually produce information on the concept or variable of interest?” This question addresses the operational validity of a measure. Third, you may ask, “Is this measure sufficiently precise?” This question addresses its sensitivity.

Reliability

Reliability evaluates the consistency of a measure. Differences over time or between subjects may be due to random error. Random errors are just that, random events or features that affect your findings. Random errors occur because of respondent characteristics, the measure itself, or the process of assigning values. Uninterested or distracted respondents introduce errors when they answer questions rapidly without listening or thinking. Respondents may be inconsistent as they answer items with ambiguous terms or inadequate or unsuitable response choices. Raters may be inconsistent in how they assign values or record answers. Random errors cannot be entirely eliminated, but you want to make sure that they are not undermining the value of your data. A measure that yields a large amount of random error should be discarded.

Dimensions of Reliability: Reliability has three dimensions. A reliable measure should have stability, equivalence, and internal consistency. A stable measure yields different results over time only if the phenomenon being measured has changed. Consider choosing scholarship recipients. To ease its work and assure a fair selection process, the selection committee may create a measure to rate the applications. For each application a rater adds up the values of separate items that measure experience, leadership, achievements, and potential future contributions. The assigned values or ratings should not change with the rater’s mood, fatigue, or degree of attention. To make sure the ratings are consistent the raters can re-rate a random sample of applications. If their original ratings and second ratings are the same or nearly the same, the measure may be deemed stable.

An equivalent measure yields different results only if the differences between subjects are real differences. For example, if you and some colleagues use the same measure all of you should assign the same value to the same person or case. If several people are to rate the scholarship applicants they should first rate a sample of applications. The sample should represent a variety of applications, including applicants with diverse backgrounds or applications with wordy answers. If the raters give the same or nearly the same ratings to each application we can assume that the differences between subjects are real differences and not due to differences in the raters. If the raters give different scores the measure has to be examined to identify and resolve problems.

Equivalence also applies if two or more versions of a measure are used. Consider written driving tests. If each applicant takes the same test, cheating becomes a problem, so numerous versions are created. An equivalent measure would ensure that comparable test takers receive the same score no matter which test version they take. Developing alternate versions, however, is time consuming and rarely done except when designing a test for large numbers of people.

Internal consistency applies to measures with multiple items. It establishes if the items are empirically related to each other. A simple example may help explain internal consistency. Imagine a measure of arithmetic skills with five multiple-choice questions. The questions are

1.  35 + 83 =?

2.  83 – 35 =?

3.  83/23 =?

4.  1.83 × 23 =?

5.  5e3 =?

Let us assume that items 1 to 4 relate to arithmetical ability and that item 5 does not. Respondents who have good computational skills may easily answer the first four items but guess the answer to item 5. Their guesses represent random error. If item 5 were dropped, the measure would be more internally consistent.

Qualitative Evidence of Reliability: The next question to ask yourself is, “How much random error should I tolerate?” Think about the examples we have used in this chapter—adult literacy, quality of volunteer orientation, and scholarship ratings. Which measure should have the least random error? Why did you choose this measure? To decide how much random error to tolerate, you should consider its consequences. Among our three examples we would want to have the least error in choosing scholarship recipients, since the reliability of the measure will affect who receives the scholarship and who doesn’t. The decisions based on the other two measures will neither benefit nor harm an individual. Now think about driving tests. Given the large number of drivers and the potential of an unsafe driver to cause great harm to others the need for minimum random error is even greater.

To estimate reliability start with a qualitative approach. Qualitative methods do not precisely estimate the amount of random error. Hence, you may underestimate their value. However, at a relatively low cost, they can identify problems that will thoroughly discredit a poor measure. You should review a measure’s operational definition to decide whether

■  terms have been defined precisely;

■  ambiguous items or terms have been eliminated;

■  information is accessible to respondents;

■  multiple-choice responses cover all probable responses;

■  directions are clear and easy to follow.

A few examples will demonstrate the problems introduced if a measure lacks one or more of these characteristics. Recall that an unreliable measure does not detect actual differences between subjects or in the same subject over time. Before you ask individuals if they are homeless or ask an agency how many of its clients are homeless you need to define homeless. Homeless persons may include persons living on the streets, in a shelter, doubled up with relatives or friends, or couch surfing. Unless you have defined the term some people who are living with relatives “until something comes along” will say they are homeless and others won’t. The same may be true of young people who are sleeping on friends’ sofas. If you do not define what you mean by homeless, differences between people who say they are homeless and those who don’t may not be actual differences but differences in how they interpreted the question.

Gathering data about family size or earnings may seem straightforward, but both are fraught with ambiguities. If you ask “What is the size of your family?” you may be asked if half-brothers and sisters should be counted. What about divorced parents or siblings who live in different households? What about relatives or their partners who are part of the household? Similarly, asking members of a diverse population how much they earn is tricky. If a time period is not indicated respondents may report hourly, weekly, monthly, or yearly earnings. A person who reports “$12,000” may be reporting her monthly or her annual earnings. On the other hand, seasonal workers and self-employed persons may not know how much they earn in a year. Finally, estimates of family income may be wildly inaccurate depending on which family member is asked.

To reduce the random errors and increase reliability you should review the definition of terms, the clarity of items, the accessibility of information, and the appropriateness of response choices. Your review of the operational definition should involve comments from potential respondents. Otherwise you risk falsely assuming that actual respondents will interpret the items the same way you do. Remember, don’t be presumptuous. Spell out what you want to know.

The reliability of measures also depends on interviewers and data entry staff. They must be trained and supervised to minimize inconsistent decisions. For example, all interviewers should define family the same way and use the same procedures for resolving problems of inaccessible information or ambiguous questions. Similarly, staff entering data on a spreadsheet may be uncertain how to handle forms on which respondents checked more than one response, embellished a response category with their own comments, or added an alternative response. If individuals decide on a case-by-case basis how to handle ambiguous responses, the decisions may be inconsistent and the data less reliable.

Quantitative Evidence of Reliability: All measures should receive a qualitative review before they are used to collect data. In addition mathematical procedures may be used to estimate reliability. Specific tests estimate the measurement error associated with stability, equivalence, and internal consistency. We will limit our discussion to tests that you may find useful and easy to implement and ones that are commonly reported in research articles. If you plan to construct or work extensively with job tests, achievement tests, or personality tests you need to become familiar with other methods for mathematically estimating reliability.

Test–retest establishes a measure’s stability. You should conduct a test–retest before collecting data. You need to collect the data at two points in time, and you should expect that what is being measured will not have changed. Let’s consider the scholarship selection committee. To establish the measure’s stability you may ask the committee to rerate some of the applications. If the raters give the same scores, you cannot automatically assume that the rating measure is reliable; you have to verify that raters did not simply remember and reproduce their original ratings. If the scores are different, the instrument may be unreliable, but there is another possibility: after the first round of ratings the raters may have discussed the measure and changed how they valued the various items. The information on the applications will not have changed; the perceptions and ratings of the raters did. Using test–retest to establish stability is difficult insofar as the instrument itself may be the source of a change in values.
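When the two rounds of ratings are recorded as numbers, their consistency can be summarized with a correlation coefficient. The sketch below is our illustration, with made-up ratings; it computes a Pearson correlation between first and second ratings, where values near 1.0 suggest stability.

```python
import statistics

def pearson_r(x, y):
    """Pearson correlation between paired first and second ratings."""
    mx, my = statistics.fmean(x), statistics.fmean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# Illustrative ratings of the same five applications at two points in time.
first = [18, 22, 25, 30, 35]
second = [19, 21, 26, 29, 36]
r = pearson_r(first, second)  # close to 1.0: the ratings are stable
```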

Inter-rater reliability establishes the equivalence of a measure. Inter-rater reliability should be established if two or more people are collecting data. Here, observers apply the measure to the same phenomena and independently record the scores; next, their scores are compared. Our example of several people rating the scholarship applicants and comparing their ratings illustrates inter-rater reliability. You may apply inter-rater reliability to staff training. Imagine training housing inspectors. Trainees may inspect and rate a sample of dwellings. The trainee may be approved to work once her ratings agree with the ratings of experienced inspectors.
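A simple way to quantify inter-rater reliability is the percentage of cases on which two raters agree. This is our sketch with made-up inspection ratings; researchers who also want to correct for chance agreement often report Cohen's kappa instead.

```python
def percent_agreement(rater_a, rater_b):
    """Share of cases on which two raters assigned the same value."""
    matches = sum(a == b for a, b in zip(rater_a, rater_b))
    return matches / len(rater_a)

# A trainee and an experienced inspector rating the same ten dwellings
# (hypothetical pass/fail ratings).
trainee = ["pass", "pass", "fail", "pass", "fail",
           "pass", "pass", "fail", "pass", "pass"]
veteran = ["pass", "pass", "fail", "fail", "fail",
           "pass", "pass", "fail", "pass", "pass"]
print(percent_agreement(trainee, veteran))  # -> 0.9
```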

Tests of internal consistency establish the homogeneity of a measure. Internal consistency is used only if a measure consists of several items. The key statistic to assess internal consistency is Cronbach’s alpha. The closer alpha is to 1.0 the more internally consistent the measure. If alpha is close to 0.0, the items are unrelated to one another. As is true with many statistics, there are no set criteria for an acceptable alpha. When a research report indicates a measure’s reliability it normally refers to a test of internal consistency.
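Cronbach's alpha compares the variance of respondents' total scores with the sum of the individual item variances: alpha = k/(k − 1) × (1 − Σ item variances / variance of totals), where k is the number of items. The sketch below, with invented responses, implements that standard formula; when the items move together, totals vary much more than the items do individually and alpha approaches 1.0.

```python
import statistics

def cronbach_alpha(items):
    """Cronbach's alpha; items is a list of per-item response lists."""
    k = len(items)
    totals = [sum(vals) for vals in zip(*items)]  # each respondent's total score
    item_variance_sum = sum(statistics.variance(vals) for vals in items)
    return k / (k - 1) * (1 - item_variance_sum / statistics.variance(totals))

# Invented responses: four related items answered by five respondents
# (one inner list per item).
items = [[4, 5, 3, 4, 2],
         [4, 4, 3, 5, 2],
         [5, 5, 2, 4, 1],
         [4, 5, 3, 4, 2]]
alpha = cronbach_alpha(items)  # high alpha: the items are internally consistent
```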

At a minimum you want to verify that the data are free of gross errors. To do so, start with a qualitative review of a measure’s operational definition. The review will identify serious threats to reliability. Depending on how a measure will be used you may consider a quantitative test as well. On reading research reports you are most likely to see references to inter-rater reliability or internal consistency.

Operational Validity

Operational validity concerns whether a measure is correctly named and accurately describes what it is measuring. Let’s consider poverty. In the United States a household is under the poverty threshold if its income is less than three times the cost of a minimally adequate diet. The multiplier of three comes from a 1955 finding that a typical family spent one-third of its income on food. The challenge of measuring poverty illustrates several important points. First, operational validity depends on planned use. A measure of poverty may be used to track changes in poverty over time, determine who should receive financial assistance, or describe the effect of poverty on individuals and their community. The poverty threshold has the advantage of being reliable, and changes from year to year may be assumed to be real changes. It may not, however, be a fair criterion to decide if a household needs financial assistance.

Second, operational validity depends on the content of the measure and relevance. A measure of poverty may take into account the cost of food, housing, and medical care. Other possible content includes the cost of child care and transportation. The relevance depends on how the data will be used and interpreted. Over time the poverty threshold has become less relevant, because food consumes far less than one-third of most family budgets.

Third, operational validity always involves judgment. Some people may argue that a poverty measure should consider only if households can meet the most basic needs. Others may argue that education, child care, or transportation should be included, because without them, families may stay impoverished. It is not possible to prove which is the more valid. Judgment also comes into play when measures are modified. Users may have to decide whether to sacrifice comparability or reliability in order to have a more relevant measure.

An important measure, one that will impact decisions, inevitably spawns disagreement about its validity.2 Should a measure of poverty account for entitlements, such as food stamps? Is a measure of good government biased toward the values of businesses? Does gross domestic product (GDP) measure progress if it ignores pollution and social inequities?
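The historical poverty threshold described above reduces to simple arithmetic, which makes its limitations easy to see. Here is a minimal sketch using illustrative figures only; actual thresholds are published annually and vary by household size and composition.

```python
FOOD_MULTIPLIER = 3  # from the 1955 finding that food took one-third of income

def below_poverty_threshold(household_income, minimal_diet_cost):
    """True if income falls below three times the cost of a minimal diet."""
    return household_income < FOOD_MULTIPLIER * minimal_diet_cost

# Illustrative figures only, not actual published thresholds.
print(below_poverty_threshold(20_000, 7_500))  # 20,000 < 22,500 -> True
print(below_poverty_threshold(30_000, 7_500))  # 30,000 < 22,500 -> False
```

If food now consumes far less than one-third of a typical budget, the fixed multiplier of three understates need, which is precisely the relevance problem raised above.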

Designing a measure takes considerable work, skill, and thought. While you cannot prove that a measure is operationally valid, you can produce evidence that its content is representative and relevant. In the following sections, we discuss the evidence based on a measure’s content, its relationship to other measures, and its consequences.

Evidence Based on Content: Establishing evidence that a measure is valid begins with the conceptual definition. The appropriate content may emerge later in discussions with stakeholders about what the concept means to them and how they plan to use the data. The next step is to design an operational definition that represents the conceptual definition. Let’s go back to our example of the quality of volunteer orientation. The conceptual definition was that attendees would understand the organization, value its services, be motivated to volunteer, and feel prepared to work with clients. We chose six items to measure the quality of orientation. For each item the respondent used a scale ranging from “strongly agree” to “strongly disagree.”

Understand the organization:

I understand the mission of [name of organization].

I can describe the services [name of organization] offers.

Value its services:

I feel qualified to refer clients to [name of organization].

I am comfortable with recruiting others to volunteer with [name of organization].

Motivated to volunteer:

I have signed up to be a volunteer with [name of organization].

Prepared to work with clients:

I feel uncomfortable about working with clients.

You have to compile a set of items that you believe adequately captures the conceptual definition. We will leave it to you to decide if the six items listed here are representative. It is a matter of judgment. To ask about every aspect of orientation may require many more questions—far more questions than respondents would want to answer or staff would want to analyze.

As we noted earlier, the operational definition measures attendees’ perception and does not confirm that they understand the organization’s mission and its services, are capable of working with clients, or are willing to actually volunteer. Thus, you cannot assume that attendees’ perceptions are a valid measure of the actual quality of orientation.

A measure’s relevance is directly linked to its intended use. The six items provide a snapshot of attendees’ opinions about orientation. They may identify weak spots in orientation. However, the items may be inadequate in a different context, for example, if the information is meant to guide revisions in the orientation program—keeping what works and dropping what is ineffective. Also, the decision to measure perceptions rather than objective indicators of quality may depend on how the data will be used and interpreted. Creating and administering an objective measure may seem infeasible or unnecessary for the intended use.

Evidence Based on Relationship to Other Variables: Reviewing a measure’s content does not tell the whole story. You will learn a lot if you compare the data from one measure with data from another measure. We will refer to other measures as criteria. You might select a similar validated measure as a criterion. For example, an adult literacy program manager may compare instructors’ assessments of the reading ability of individual students with their performance on standardized reading tests. If the instructors’ assessment agrees with the standardized test scores the program can forego the cost of obtaining and administering standardized tests. The program will have evidence that instructor assessments are a valid measure of client reading ability.

You may select independent assessments as criteria against which to check self-assessments and perceptions. Self-reports of health may be compared with a physical exam or a person’s medical records. Employees’ assessments of their job performance may be compared with agency records. Clients’ assessments of their reading ability may be compared with their instructors’ assessments.

You may select criteria that are logically linked to the measure. You would expect people who gave orientation a high rating to volunteer more than those who didn’t. Likewise, you would expect people who are categorized as poor to have a less nutritious diet than other people. You would expect agencies with high client satisfaction to perform better than ones with low client satisfaction.

Finding a weak relationship between a measure and a criterion may be disconcerting, but it may start a valuable exploration of what information a measure is and is not providing. We may find that people say they are healthier than their medical records indicate. Although we cannot argue that perceived health validly measures a person’s health, the information is still valuable. A measure of perceived health can help explain what motivates people to seek medical care or why they ignore serious symptoms.

Another type of criterion is a future outcome. Consider selecting scholarship applicants. Assume that a scholarship’s purpose was to train future leaders. The committee chose and measured items that it believed would identify applicants who would use the training and become leaders. To estimate whether its selection criteria were appropriate the committee might track the career outcomes of applicants. Similarly, organizations want to choose employees who will perform well in their jobs. From time to time they may review their selection measures to see if they still capture the quality of employee performance. From examining the outcomes of scholarship selection and employee performance we can learn if people who were expected to be successful were successful. If the criterion is applied only to people who were selected, however, we will not know whether applicants who were rejected would have been equally good or better.

The Consequences of Applying a Measure: Traditionally, operational validity asks if you are measuring what you intend to measure. A closely related question is what are the consequences of applying a measure. The more importance stakeholders place on the data a measure produces the more they should consider its consequences. Administrators and policy makers assume positive benefits when they first adopt a measure. They should confirm that the positive benefits were realized and identify any unintended consequences. The No Child Left Behind Act is a vivid example of the consequence of measures. The act was intended to ensure that all American school children are proficient in mathematics and reading. It called for annual testing of children in third through eighth grades. If a school did not meet statewide performance standards, it was expected to take remedial action. The policy makers assumed that the tests would motivate schools to hire qualified teachers, adopt effective teaching practices, and assist children in danger of failing. Critics have argued that the act’s emphasis on test scores has led states to create easier tests and increase the pass rate, shifted the curriculum away from developing critical thinking, discouraged teachers from teaching in low-achieving schools, and encouraged low-achieving students to drop out.3 This example reminds us that what is being measured can affect behavior. Sometimes the impact is not what we expect or want.


Sensitivity

The sensitivity of a measure refers to its precision or calibration. A sensitive measure has sufficient values to detect relevant variations among respondents; the degree of variation captured by a measure should be appropriate to the purpose of the study. Measures that are reliable may not necessarily detect important differences. Consider a salary survey. Suppose employees were asked the following question:

What is your annual salary for this year? (check appropriate category)

_____ Less than $25,000

_____ $25,000 to $39,999

_____ $40,000 to $54,999

_____ $55,000 to $69,999

_____ $70,000 to $84,999

_____ $85,000 or more

These categories may be adequate to summarize the earnings of all employees, but not the salaries of senior managers. If most top managers earn at least $85,000, all their responses will fall into one category, and you may be unable to learn whether salary is related to performance.

The sensitivity of a measure may depend on the homogeneity of respondents. For example, a job-satisfaction measure developed for organizations employing unskilled and skilled laborers; clerical workers; and technical, administrative, and professional staff may be a poor choice for an organization largely made up of professionals. If individual differences are of interest, the measure would not be sufficiently sensitive to compare differences among employees in the more homogeneous group.
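The salary example can be made concrete. In this hedged sketch, the category cutoffs mirror the survey question above, while the senior-manager salaries are invented; every manager lands in the top category, so the measure cannot distinguish among them.

```python
# Hedged sketch of the sensitivity problem: the survey's salary
# categories (taken from the question above) cannot distinguish
# among senior managers. The manager salaries are invented.
import bisect

cutoffs = [25_000, 40_000, 55_000, 70_000, 85_000]
labels = ["<25k", "25-39.9k", "40-54.9k", "55-69.9k", "70-84.9k", "85k+"]

def categorize(salary):
    """Return the survey category a salary falls into."""
    return labels[bisect.bisect_right(cutoffs, salary)]

senior_managers = [92_000, 110_000, 88_000, 130_000, 101_000]  # hypothetical
categories = {categorize(s) for s in senior_managers}
print(categories)  # all five managers collapse into the "85k+" category
```

A more sensitive measure for this group would either ask for an exact salary or add finer categories above $85,000.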


No measure can fully describe a concept of interest. At best a measure allows us to estimate what program clients achieve, volunteer and employee productivity, or the number of people living in poverty. The information a measure provides has great value, but you should also recognize each measure's limitations.

Measurement begins with a conceptual definition, which indicates what stakeholders mean by a concept and how they plan to use the data. The conceptual definition serves as a blueprint for the operational definition, which details exactly how a concept or variable is measured and how its values are determined. The conceptual definition is particularly important for deciding whether the measure's content is representative and valid.

You should establish the reliability, operational validity, and sensitivity of all measures before data collection begins. Reliable measures allow you to conclude that differences between subjects or over time are real differences, not artifacts of the measure or the measuring process. A qualitative review of a proposed operational definition will markedly improve reliability and may be adequate. You should verify that directions are clear and easy to follow, that items are clearly defined, that the response options cover all likely answers, and that respondents can provide the requested information. People responsible for collecting or processing data should be trained to avoid inconsistencies. If knowing and limiting the amount of random error is important, you should use mathematical procedures to estimate the amount of error. Unreliable data should be discarded.
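The "mathematical procedures" mentioned above include internal-consistency statistics such as Cronbach's alpha. The sketch below is a minimal illustration, not a substitute for the psychometric texts cited later in the chapter; the five-item survey responses are invented.

```python
# Hedged sketch: Cronbach's alpha as one quantitative estimate of
# reliability (internal consistency). The response data are invented:
# six respondents, each answering five items on a 1-5 scale.

def variance(values):
    """Sample variance of a sequence of numbers."""
    m = sum(values) / len(values)
    return sum((v - m) ** 2 for v in values) / (len(values) - 1)

def cronbach_alpha(rows):
    """rows: one list of item scores per respondent."""
    k = len(rows[0])                     # number of items
    items = list(zip(*rows))             # transpose to per-item columns
    totals = [sum(row) for row in rows]  # each respondent's total score
    item_var = sum(variance(col) for col in items)
    return (k / (k - 1)) * (1 - item_var / variance(totals))

responses = [
    [4, 5, 4, 4, 5],
    [2, 2, 3, 2, 2],
    [5, 4, 5, 5, 4],
    [3, 3, 3, 4, 3],
    [1, 2, 1, 2, 1],
    [4, 4, 5, 4, 4],
]
print(f"alpha = {cronbach_alpha(responses):.2f}")
```

Higher values indicate that the items vary together, which suggests (but does not prove) that they measure one underlying concept.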

A reliable measure is not necessarily operationally valid. A valid measure measures the concept of interest. Evidence based on a measure’s content documents that the operational definition is relevant and representative. Criterion-based evidence empirically establishes the relationship between a measure and a criterion. The criterion may be a similar measure, an alternate measure, or a future outcome. Evidence may also be gathered to document the consequences of implementing a measure. The evidence validating a measure informs and supports the judgment of data users; it does not replace their judgment. A sensitive measure sufficiently distinguishes cases from each other so that they can be compared.


More detailed discussions of measurement, especially quantitative evidence of reliability, can be found in texts on educational measurement or psychometrics. A recommended text is Susana Urbina, Essentials of Psychological Testing (New York: John Wiley & Sons, 2004), a basic, accessible resource on measuring knowledge, abilities, attitudes, and opinions.

A standard reference on measurement is Standards for Educational and Psychological Testing (Washington, D.C.: American Educational Research Association, 1999). The standards were created by a joint committee of the American Educational Research Association (AERA), American Psychological Association, and National Council of Measurement in Education and approved by their respective governing bodies. The AERA endorsement stated, “We believe … the Standards to represent the current consensus among recognized professional[s] regarding expected measurement practice. Developers, sponsors, publishers, and users of tests should observe these standards” (Standards, viii).


There are four separate exercises for this chapter. Each exercise develops your competence in interpreting and applying measurement concepts.

•  Exercise 2.1 Good Nutrition Survey focuses on the strategies for designing reliable and operationally valid measures.

•  Exercise 2.2 Living Wage: From Idea to Measure focuses on the concept of a living wage. The exercise suggests the relationship between measurement and politics, and between how a concept is measured and how the data will be used. It also illustrates the complexity of developing a sound operational definition.

•  Exercise 2.3 Selecting Job Applicants focuses on employee hiring and implementing strategies to develop and assess the reliability and operational validity of the selection process.

•  Exercise 2.4 On Your Own asks you to design and assess a relevant measure for your agency (where you work or have an internship).

EXERCISE 2.1 Good Nutrition Survey


A nutritional council has as its mission to improve public knowledge of nutrition and advocate for healthier diets. You have been asked to help the council design a self-administered survey to measure an individual’s knowledge of sound nutrition and eating habits.

Section A: Getting Started

1.  Why should the council develop a conceptual definition?

2.  Find three conceptual definitions of good nutrition.

3.  Which of the three definitions seems most appropriate for the nutritional council to use? Justify your choice.

4.  Based on the conceptual definition write an operational definition.

5.  What steps would you take to develop evidence that your operational definition is reliable?

6.  Should you focus on the qualitative evidence of reliability prior to implementing any procedure to establish quantitative evidence? Justify your answer.

7.  What evidence would you cite to argue that your operational definition is operationally valid?

8.  Consider the appropriateness of using your operational definition to measure the nutrition levels of

a.  a predominantly Hispanic community.

b.  pregnant women.

c.  low-income children.

9.  In the course of writing surveys, you may become concerned with the reliability or operational validity of specific questions. Comment on each of these proposed questions.

a.  Does your family eat a healthy diet?

b.  How many servings of protein do you eat daily?

c.  How many times did you eat breakfast in the past 7 days?

d.  How many calories do you consume in a typical day?

10.  Long surveys often have a low response rate. Would you argue that asking only one or two questions is sufficient to measure an individual’s knowledge of good nutrition? Justify your answer.

Section B: Class Discussion

1.  Based on the conceptual definitions that class members found, consider which definition seems most appropriate to

a.  design an educational campaign, Eat Smart.

b.  design sample menus and recipes for council publications and fliers.

c.  create a dietary checklist for medical personnel to include in a physical exam.

2.  Choose one of the operational definitions identified and use the qualitative indicators of reliability to assess how well it would work for a

a.  telephone survey of the general population.

b.  self-administered survey of parents of children who receive subsidized meals.

c.  self-administered survey of elderly persons who eat one meal a day in senior centers.

3.  What evidence suggests that the operational definition is operationally valid?

4.  Questions may focus on respondents' perception of the quality of their diet, or they may ask for more specific information, such as listing foods actually eaten in the past 48 hours. Assess the benefits and drawbacks of asking

a.  questions about what people believe regarding different types of food.

b.  questions about what foods people eat.

EXERCISE 2.2 Living Wage: From Idea to Measure

You have been asked to serve on a community task force to consider whether the community should adopt a livable wage ordinance.

Section A: Getting Started

1.  What does the term livable wage mean to you?

2.  Search the Web to identify a discussion of the livable wage: what it means, its desirability, the difficulty in determining what constitutes a livable wage. What are the major measurement issues that the discussion focused on?

3.  Do a Web search and find two more conceptual definitions of livable wage.

4.  Deciding on an appropriate conceptual definition is a matter of judgment colored by how the data will be used. For each potential user on the following list, identify which definition you think would be preferred and why.

a.  By employers deciding on an acceptable salary for their lowest-paid employees

b.  By an advocacy group lobbying for an increase in the minimum wage

c.  By a public agency determining who is eligible for services, such as financial assistance, food subsidies, and medical services.

5.  Respond to the following items to develop an operational definition for “livable wage.”

a.  Identify the components you would include in an operational definition of a livable wage.

b.  In your operational definition, will some items receive more weight than others or will all items be weighted equally? Explain.

c.  Do you anticipate that the operational definition will work equally well in urban and rural areas? Explain.

6.  Consider measuring the cost of rental housing.

a.  Write an operational definition for cost of rental housing.

b.  Would you expect to get more reliable data from landlords or renters? Explain.

Section B: Class Exercise

In assigned groups hold a task force meeting. Discuss the following:

1.  Share the conceptual definitions and identify which definition(s) the task force should adopt or present for public debate.

a.  What components of the conceptual definition do group members agree on and disagree on?

b.  Are the reasons group members disagree based on ideology, purpose of the measure, impact of the data, feasibility of developing or implementing an operational definition, or something else?

2.  Identify how you would decide what wage would constitute a livable wage in your community.

3.  Identify what operational definition you would recommend to identify the number of wage earners in your community who do not earn a livable wage.

4.  Based on what you have learned and observed in answering Sections A and B, create a list of "Lessons Learned about Measuring Concepts."

EXERCISE 2.3 Selecting Job Applicants

Reliability and operational validity apply to many tasks, especially when we want to rank or categorize something. In this exercise we ask you to use your knowledge of measurement to develop a procedure for reviewing job applications for the position of Director, Whitney M. Young Center. After reading the job announcement, respond to the items that follow it.

Job Announcement: Director

The Whitney M. Young Center for Urban Leadership (WMYCUL), an affiliate of the National Urban League, is a nonprofit educational institute that exists to foster positive social and economic change through effective leadership development. Our mission is to cultivate and enhance the leadership capabilities of individuals and organizations that serve urban communities. Our role is to convene and link those entities to practical leadership development tools and resources to help them address capacity issues and various leadership challenges. Our philosophy is that through the enrichment and/or development of urban leaders, urban communities can be improved and empowered, thereby helping to create a society in which all people have equal opportunity to be positive contributors.

Position Description

The director coordinates and manages the professional development/leadership training needs of 100-plus affiliates located throughout the United States and the National Office.

Essential Functions

■  Oversees professional development/leadership training for the affiliates and National Office. Develops videos, resource guides, handbooks, manuals, and guidelines for the affiliates. Identifies 21st century resources to enhance system-wide capacity.

■  Coordinates and monitors training activities, including annual leadership conferences.

■  Works with staff to develop strategies to support the strategic direction of the center

■  Performs other duties as assigned


Qualifications

■  Master’s degree in social work, public administration, or related area

■  Certified Professional in Learning & Performance (CPLP) a plus

■  Seven to ten years' experience designing training curriculum, program design and execution, or leadership development; experience with a national professional training network a plus

■  Able to plan, organize, control, delegate, and manage multiple projects simultaneously

■  Familiar with nonprofit governance and operations

■  Strong analytical, presentation, and facilitation skills. Must have meeting-planning experience. Excellent interpersonal and communication skills.

■  Proficiency in Microsoft Office.

To apply, submit a resume, cover letter, and writing sample to the Human Resources Department by mail or e-mail. Deadline: September 7.

The National Urban League is the nation’s oldest and largest community-based movement empowering African Americans and other people of color to enter the economic and social mainstream. The National Urban League is an Equal Opportunity Employer.

Section A: Getting Started

1.  Why is it important to have a reliable and valid process for selecting which applicants to interview?

2.  You plan to create a rating scale, that is, you identify and assign a numeric value to each key requirement. How would you use qualitative methods to make sure that the scale is reliable?

3.  How would you use test–retest to establish the scale’s reliability?

4.  Assess the value of using test–retest to establish the scale’s reliability.

5.  How would you use inter-rater reliability to establish the scale’s reliability?

6.  Assess the value of using inter-rater reliability to establish the scale’s reliability.

7.  Create a plan to establish the scale’s reliability, that is, consider qualitative methods, test–retest, and inter-rater reliability. Which strategy(ies) would you use, why, and in what order would you use them?

8.  What would you look for to establish the scale’s operational validity?

9.  If after you complete the first round of interviews you find that none of the interviewed applicants were suitable, would you conclude that the problem was with the reliability or the operational validity of the scale? Justify your answer.

10.  Review the job description and identify the requirements that should be included in the scale. Also note if any of the requirements should be given more weight.

11.  Examine the following two resumes and create a scale that could be used to rate each resume.

J. Q. Public

123 Main Street

Anytown, Yourstate, USA

(111) 555-1234


To apply my education and strong work ethic toward a career in the public relations, specifically event planning and program coordination.


Event planner, event coordinator, meeting planner, wedding planner, project manager, communications, advertising, public relations, entertainment.


Promotions Coordinator, Local Radio Station (2007–present) Planned, coordinated and executed over 200 on-site promotions and remotes. Assisted the Marketing Department in developing and executing station contests, promotions and events. Worked with programming department to carry out station programming agenda. Directed promotional street teams to implement on-site promotional campaigns. Created new contests and promotions that drive station-marketing objectives. Engineered remote broadcasts. Assisted in building and maintaining client relationships. Helped to execute many large-scale events including the Annual Ball, Community Jam, and Cultural Fest.

Executive Director, Any Magnet school, Inc. (2004–2007). Planned major special events and lead extracurricular activities. Worked with administration, faculty and student teams to plan homecoming events and fundraising dinners; tracked all milestones and ensure on-time, successful completion. Planned fundraising events such as International Day, including advertising/promotion, decorations, costumes, student presentations and sponsorship. Directed annual African American Heritage Festival, student competition with judges and awards ceremony attended by more than 500 people; program included dinner buffet and presentation of group projects on educational topics.

Project Manager. Human Services Agency, Inc. (1999–2004) Responsible for implementing the Independence Program, a welfare to work program that assisted heads of households transition from welfare to self-sufficiency through employment. Developed relationships with area employers and promote program services. Serve as liaison to collaborating agencies including State Department of Human Resources, City Department of Social Services, and other community based organizations. Hired and supervised a professional and paraprofessional staff of 15. Developed and revised forms, manuals, brochures, and MIS systems. Compiled and provided monthly and quarterly reports to funders and stakeholders.

Program Director. Children’s Human Services, Inc. (1994–1999) Responsible for the successful implementation of a childcare center for homeless children with special needs. Managed program budget of $275,000. Supervised a staff of professionals and paraprofessionals and college-level human service program interns. Coordinated program activities with collaborating agencies for service provision, grant writing, planning, resource sharing, and development. Compiled and provided reports for funders and stakeholders.


MSW and MA in Organizational Leadership, Midstate University (1995). BA, Communications, Atlantic State University (1990).

Additional Information

Computer skills include Windows, Word, Excel, PowerPoint and Internet research.

Jane Doe

9876 Elm Street. Springfield, (111) 555-1234

I am interested in a Event Planning position for a corporation or nonprofit organization. In addition to the Bachelors of Science Degree in Hospitality Tourism Management/Business Administration, with an emphasis in Events, Attractions and Conventions Management, I also possess the following qualifications and areas of expertise:


Columbia Southern University:  MBA with Concentration in Hospitality and Tourism

University of Delaware:  Bachelor of Science Hotel, Restaurant, and Institutional Management

Work Experience

Independent Contractor – Open Events, LLC. 2004–present

■  Assist with the planning and management of annual sales meetings for international high-tech companies,

■  Work with and coordinate food and beverage staff, audio-visual personnel, travel department, ground transportation, security and labor.

■  Oversee the planning and execution of seminars in the US, Caribbean, and Europe

■  Perform site research and negotiation, menu selection, registration and travel coordination, shipping and receiving materials between destinations and the on-site management of the entire event program.

Corporate Affairs Director – Telephone company Public Relations 2000–2004

■  Established and leveraged strategic relationships with key stakeholders in the Hispanic, Asian, and businesswomen’s markets. Successfully positioned these associations to win support for Telephone Co. sales initiatives and public policy.

■  Partnered with Supplier Diversity division to ensure engagement with minority suppliers of all groups, i.e., African American, Asian Pacific American, Hispanic, and Women.

■  Directed Telephone co. multimillion dollar philanthropic investments with these and other national organizations via the Telephone Co. Foundation and business unit funding. Garnered significant coverage in mainstream as well as Spanish-language and Asian-language media for Telephone Co. grants and sponsorships.

■  Secured more than 3,100 constituent letters sent to Congress and several op-eds in support of Telephone Co. public policy positions.

Promotions Coordinator-Local Radio Station 1994-2000

■  Planned, coordinated and executed over 200 on-site promotions and remotes.

■  Assisted the Marketing Department in developing and executing station contests, promotions and events.

■  Worked alongside programming department to carry out station programming agenda.

■  Worked in conjunction with promotional street teams to implement on-site promotional campaigns.

■  Helped to create new contests and promotions that drive station-marketing objectives.

■  Assisted traffic department with entering sales orders, filing, and writing contracts.

■  Engineer remote broadcasts.

■  Assisted in building and maintaining client relationships.

Other Skills & Experience

Self-starter, goal orientated, assertive, and possess a warm outgoing personality. • Excellent time management skills, detail orientated, strategic thinker and a track record of making sound decisions. • Style which exhibits maturity, high energy, sensitivity, teamwork, and the ability to relate to a wide variety of professionals. • Strong interpersonal communication skills with ability to effectively identify and fulfill customer needs • Fluent in Portuguese and Spanish • Familiar with Mac, PC and Internet platforms, Microsoft Word, Excel and Quicken

12.  Now rate both applicants using your scale.

Section B: Class Discussion

1.  In small groups review each member’s rating scale.

a.  Use the qualitative method to evaluate the reliability of each person’s scale. Indicate what changes need to be made to improve any problematic item’s reliability.

b.  Assess each scale’s operational validity. Taken as a whole does each scale’s choice of items seem relevant? Do the items seem representative? What evidence supports your conclusion?

c.  What empirical evidence would you seek to demonstrate that the scale was operationally valid?

2.  Choose one of the scales and have each member of the group apply it to the two resumes. To assess the inter-rater reliability of the selected scale, compare your ratings.

a.  How similar are they?

b.  On which items are your ratings the most different? How can you improve these items?

3.  Based on this exercise draft a guide “Suggestions for developing and assessing the reliability and operational validity of measures to select applicants for jobs and promotions.” Compare your guidelines with those developed by your classmates.
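The inter-rater comparison in Section B, question 2, can also be quantified. One common statistic is Cohen's kappa, which adjusts percent agreement for the agreement expected by chance. The sketch below is a hedged illustration: both raters' scores are invented, and it assumes each rater marks the same ten items as meets (1) or does not meet (0) the requirement.

```python
# Hedged sketch: Cohen's kappa for two raters applying the same
# rating scale to the same resume items. All ratings are invented.
from collections import Counter

def cohen_kappa(rater_a, rater_b):
    """Agreement between two raters, corrected for chance agreement."""
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    counts_a = Counter(rater_a)
    counts_b = Counter(rater_b)
    # Chance agreement: probability both raters pick the same category.
    expected = sum(
        (counts_a[c] / n) * (counts_b[c] / n)
        for c in counts_a.keys() | counts_b.keys()
    )
    return (observed - expected) / (1 - expected)

# Two raters scoring ten requirements as meets (1) / does not meet (0).
rater_1 = [1, 1, 0, 1, 0, 1, 1, 0, 1, 1]
rater_2 = [1, 1, 0, 1, 1, 1, 0, 0, 1, 1]
print(f"kappa = {cohen_kappa(rater_1, rater_2):.2f}")
```

A kappa near 1.0 indicates strong agreement beyond chance; low values suggest that some items in the scale need clearer definitions.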

EXERCISE 2.4: On Your Own

This exercise applies if you are working or have an internship. It guides you through the steps required to design a reliable and operationally valid measure and to observe the value of stakeholder feedback.

Consider the concepts you are trying to measure. List and prioritize them.

1.  From among these concepts select a key concept. Find out how similar organizations define it. Be sure to look at the relevant literature for definitions.

2.  Consider how these definitions are similar or different. Is the concept or definition contested? Has it evolved over time? If so, in what ways and what influenced the change?

3.  Adapt an existing operational definition for the concept or develop your own.

4.  Assess its reliability and operational validity.

5.  Review your conceptual and operational definition with relevant colleagues or other relevant stakeholders.

a.  Do they agree that you are measuring what you intend to measure?

b.  Do they agree that your operational definition is both relevant and representative?

c.  Do they raise concerns about the wording of items and responses, or about the willingness and ability of respondents to provide the information?


1This definition is used by the National Assessment of Adult Literacy, an assessment conducted by the National Center for Educational Statistics. For more information go to

2You can find information on the Internet about many types of measures and indicators. For example, if you want to get an evaluation of indicators of volunteerism, you could enter “volunteerism indicator critique” into a search engine. You will find sources that present a critique that will add to your understanding of measurement and the construction of measures. For this paragraph, we relied on the following articles and Web sites (all accessed on June 7, 2010): Tina Soreide, “Is it right to rank? Limitations, implications and potential improvements of corruption indices,” Chr. Michelsen Institute, Norway,; Daniel Kaufmann, Aart Kraay, and Massimo Masruzzi, “The Worldwide Governance Indicators Project: Answering the Critics,” The World Bank, ContentServer/IW3P/IB/2007/02/23/000016406_20070223093027/Rendered/INDEX/wps4149.txt; “Measuring Progress,” Friends of the Earth,

3L. Darling-Hammond, “Evaluating ‘No Child Left Behind,’” The Nation, May 2, 2007. Posted at (accessed December 7, 2009); J. E. Ryan, “The Perverse Incentives of the No Child Left Behind Act,” New York University Law Review, June, 2004.

Ethical Treatment of Research Subjects

At some point during the research process investigators may require research subjects, that is, people who will answer surveys, agree to interviews, participate in focus groups, or enroll in a demonstration project. People who agree to answer questions or participate in a study expect to be treated respectfully and ethically; they do not expect to be harmed by merely participating in a study.

Subjects who provide information for community assessments or answer stakeholder surveys are unlikely to experience physical harm or acute psychological distress. Rather, the “harm” they experience may be subtle, such as losing some privacy, wasting their time, or experiencing unpleasant emotions such as anger, defensiveness, or distrust. Staff who design, implement, and evaluate a program must be attuned to a study’s potential to cause significant harm to participants.

This chapter covers U.S. regulations and professional standards that apply to community and organizational research involving human subjects. To put current standards in context we will summarize the Tuskegee Syphilis Study, a landmark case in unethical research. The Tuskegee study is important for two reasons. First, its research procedures were identified as unethical practices that regulations needed to cover. Second, it demonstrated the serious social consequences of failing to protect research subjects.


The Tuskegee Syphilis Study is the best known U.S. example of an egregious abuse of human subjects. The study’s subjects, all of whom were poor African American men, were explicitly denied effective treatment for syphilis, that is, they were harmed simply by participating in the research.

In 1932, U.S. Public Health Service researchers started monitoring the health of two groups of African American males. One group had untreated syphilis; the other group was free of syphilis symptoms. The objective of the research was to document the course of untreated syphilis. At the time the study began, treatments for syphilis were potentially dangerous, so the researchers may not have questioned the ethics of not treating subjects. By the mid-1950s penicillin was widely available and known to be an effective treatment for syphilis. Yet the study continued until 1973; the untreated subjects still had not received penicillin and had been actively discouraged from seeking treatment elsewhere. The failure to treat the subjects was particularly disturbing because the study continued unchallenged despite the Nuremberg Code.1

At the end of World War II, disclosure of Nazi atrocities included reports of doctors and scientists who performed human experiments on Jewish prisoners. Military tribunals were held to try the Nazi leadership. One of the Nuremberg Military Tribunal verdicts listed 10 principles of moral, ethical, and legal medical experimentation on humans. The principles, referred to as the Nuremberg Code, have formed the basis for regulations protecting human research subjects. The Tuskegee study violated the code’s principles, including not asking the subjects to give their free and informed consent to participate, ignoring the researchers’ obligation to avoid causing unnecessary physical suffering, not allowing the subjects to terminate their participation at any time, and disregarding the researchers’ obligation to discontinue an experiment when its continuation could result in death.

Traditionally experiments include a control group, that is, a group that does not receive the experimental treatment. Ethical research practice prohibits withholding beneficial treatment from subjects. If the most beneficial treatment remains unknown, the control-group subjects must be assigned to an alternate beneficial form of treatment; subjects cannot simply be denied treatment. For example, in a study of depression, all subjects would need to be assigned to some form of treatment (e.g., psychotherapy, narrative therapy, medication). Because so much is known about effective depression treatment, each form would be better than no treatment at all. If the subjects receiving an experimental treatment show marked improvement, the study must be discontinued and the treatment made available to the control group. The Tuskegee study had a lasting effect in ensuring that no research subject is denied beneficial treatment.

The shaded area below summarizes four other well-known studies that raised questions about whether the subjects had been treated ethically. Although no documentation exists that indicates these cases caused harm to individuals, they raised issues that have informed ethical practice.

Four Classic Ethics Cases Involving Human Subjects

Jewish Chronic Disease Hospital (1963)2

What Happened: Elderly patients were asked to give consent to be injected as part of research on immune system responses. They were not told that the injections were unrelated to their disease or its treatment or that the injections contained live cancer cells.

Ethical Concern: Investigators found that a vague request for consent did not constitute informed consent.

University of Chicago Law School Taping of Jury Deliberation (1954)3

What Happened: Discussions among juries hearing civil cases were recorded without the jurors’ knowledge, but with the permission of the judge and the litigants’ attorneys. Recordings were to be kept with the judge until the case was closed.

Ethical Concern: Potential loss of confidence in public institutions by compromising the secrecy of jury deliberations.

The Tearoom Trade (1970)4

What Happened: A doctoral student served as a lookout and observer for men having sexual encounters in public restrooms. Later he used license plate numbers to track down the men and interview them under the pretense of conducting a public health survey.

Ethical Concern: An example of deceptive research, in which subjects are not told the purpose of the research and the researcher may disclose something the subject considers private.

The Milgram Experiments (1961)5

What Happened: Subjects were recruited to give other subjects (actually actors) what they thought were electric shocks.

Ethical Concern: Another example of deceptive research. In this case subjects might gain unwanted self-knowledge. ■


In response to the Tuskegee study and other reported abuses, the U.S. government formed the National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research and charged it with identifying the basic ethical principles for research involving human subjects. The commission summarized its findings in the Belmont Report.6 The report identified three basic ethical principles: respect for persons, beneficence, and justice. Respect for persons requires that subjects participate voluntarily and receive adequate information about the proposed study to make an informed decision regarding participation. Beneficence requires doing no harm, maximizing possible benefits, and minimizing possible harms. Justice requires that research subjects not be selected “simply because of their easy availability, their compromised position, or their manipulability, rather than for reasons directly related to the problem at hand.”

These principles underlie U.S. regulations, which apply to investigators conducting research for a federal agency or on a federally funded project. The principles should guide the behavior of all members of a study team whether or not the study is covered by federal regulations. Implementation of these principles requires that subjects give informed consent, that benefits and risks be identified and weighed, and that selection of subjects be fair. Informed consent demonstrates respect for persons; assessing risks and benefits demonstrates beneficence; and fairly selecting subjects demonstrates justice.

Informed Consent

A potential research subject must have adequate information to make an informed, voluntary decision to participate. You must tell potential subjects in words that they can clearly understand about the study’s purpose, its potential risks, and possible benefits. They need to know what they will be expected to do, what will be done to them, and what will be done with the collected information. You should provide them with information that answers the following questions. Who will receive the information? What type of information will be disseminated? What steps will be taken to protect the identity of the subject? What will happen if you learn of an illegal act or identify a health risk? If photos, movies, recordings, or similar research records are being produced, what will be done with them? We know of one student who was shocked to learn that an interview tape, produced as part of an experiment, was being shown to classes at her college.

Specific circumstances may impede a subject’s ability to make a voluntary decision. Prisoners, members of the military, and schoolchildren, all of whom are in controlled settings, may interpret requests for information or participation as commands. Research involving prisoners is particularly difficult. Even modest incentives can compromise a prisoner’s ability to assess the risks of participation and may preclude a voluntary decision to participate. Similarly, patients may mistakenly believe that their research participation will have a therapeutic benefit. They should receive clear, realistic information on the benefits and risks of participation. Still, the ability of seriously ill persons to give informed consent, no matter what they are told, is questionable. A New York Times article summed up the problem as follows: “Potential participants are often desperately ill and may grasp at any straw—even signing a document without reading it. For this reason, many say there is no such thing as informed consent—only consent.”7 Vulnerable populations, such as children, aged people, and mentally disabled people, may not be fully capable of making an informed decision or protecting their own interests. In general, researchers try to get informed consent from both the potential subjects and from their legal guardians.

An authority relationship between the investigator and potential subjects may raise questions about voluntary consent. Physicians, social workers, or professors may ask their patients, clients, or students to participate in a research study. A patient may agree because she doesn’t want to sour the relationship with her physician. A client may agree because he may suspect that he will be overlooked for other opportunities to alleviate his problems. A student may agree because she suspects that it won’t hurt and may help her course grade.

Finally, voluntary participation requires the ability to withdraw from a study at any time. Potential subjects must be told this as part of the process of obtaining informed consent. Furthermore, they must also be told that other benefits they are entitled to will not be affected by their decision to participate or to discontinue their participation. For example, a client receiving social services must be told that his continued eligibility for these services does not depend on participating in a research study.

Informed consent and a signed informed consent form are not one and the same. A subject must be free to make her own decision about whether she wishes to participate. To make this decision she must be informed about the study and give her consent to participate. A signed consent form merely documents that she received the information and consented to participate. For projects where a subject experiences no risks beyond those of everyday life or ordinary professional responsibilities, signed statements may reasonably be viewed as unnecessary. Online surveys typically open with a statement describing the research; subjects indicate their willingness to participate by clicking on an “accept” button. Below we give an example of obtaining consent for an opinion survey. The absence of a signed form, however, does not relieve you of the responsibility of giving the subject enough information that her consent to participate is truly informed.

Identifying and Weighing Costs, Risks, and Benefits

People may not agree on the risks and benefits of research participation. Individuals’ educational, social, and professional backgrounds contribute to how they define and rank risks and benefits. You may incorrectly assume that a proposed study presents minimal or no risk. You should be especially vigilant if potential subjects differ from you. For example, you may underestimate the potential harm in studying recent immigrants. You may misjudge what constitutes a risk or a benefit for an immigrant and erroneously assume that your requests for consent are unbiased and informative.

Request for Consent to Participate in an Online Survey

This is a study about opinions regarding various issues in the news. This is an anonymous survey and your identity is not connected to your responses in any way. Clicking “Yes” below indicates that you agree to participate in this study.

□  Yes, I agree to participate.

□  No, I do not wish to participate. ■

What are the benefits of participating in research? The research may involve a treatment or program expected to relieve a psychological problem or increase economic opportunities. The research may seem to benefit a group that a potential subject values. Survivors of floods and wildfires may agree to participate in studies because they believe that the findings will contribute to better future response and recovery efforts. For some studies—perhaps most—the subject may participate because the research question seems somewhat interesting and the inconvenience is minimal.

Remuneration may be a valuable benefit. We know of a few graduate students who subsidized their incomes by participating as subjects for biomedical research studies. Paying subjects for the inconvenience of participating is not unethical, unless the amount is so large that it may be viewed as a bribe or a questionable inducement to participate. What distinguishes reasonable reimbursement from “questionable inducements”? Federal regulations offer no guidance, and opinions vary as to whether participants should receive more than compensation for their direct costs.

What are the risks of participating in research? The most commonly cited risks are physical harm, pain or discomfort, embarrassment, loss of privacy, loss of time, and inconvenience. A study could potentially uncover illegal behavior. As part of informed consent potential subjects must be told what risks they may experience during the study or as a result of the study. They must be told if the risks are unknown, or if researchers disagree on the risks.

Subjects can only be told about benefits that can be reasonably expected. Theoretically, a study may be groundbreaking; however, most studies are not. Consequently, a subject should not be told that a study has a probability of generating significant knowledge. Nor should potential subjects be led to believe that they will gain benefits that are possible but unlikely.

Although not usually covered as part of informed consent, you may wish to consider that risks and benefits may extend beyond the individual subject. For example, families may be affected by the time a family member spends participating in a study, by his reactions after sharing personal information, or by the outcome of participating in a program to encourage lifestyle changes.

Selection of Subjects

Recruiting subjects is the first step in assuring informed consent. Fliers, Internet sites, or other recruiting materials and media should state that participants are sought for a research project. The words used to solicit subjects may act as a questionable inducement. Consider the attractiveness of being asked to test out a “new” or “exciting” treatment, to participate in a “free” program, or to receive “$1000 for a weekend stay in our research facility.” If you recruit participants through personal contact, you must be particularly sensitive to not pressuring them to participate.

Simply contacting potential subjects can raise privacy concerns. Privacy refers to the ability to control disclosure and dissemination of information about oneself. One dimension of privacy is physical privacy, that is, not having to endure unwanted intrusions. If a person is likely to wonder “how did they get my name?” you may have violated his privacy. For example, if you plan to collect information from food pantry users, a staff member who knows the users should ask their permission to be contacted. They should be assured that whether or not they choose to participate will not affect services they normally receive.

Sample size is not included as a component of informed consent, nor is it discussed in behavioral and social science texts on research ethics. Still, the desired number of subjects may affect how vigorously subjects are recruited and open the door for subtle coercion. The American Statistical Association includes sample size in its ethical guidelines. Its guidelines state that statisticians making informed recommendations should avoid both an excessive number of subjects and an inadequately small sample.8 An inadequate sample can affect the quality of the statistical analysis, for example, decisions about whether findings occurred by chance. Conversely, an overly large sample may squander resources, including participants’ time. An appropriate sample size partially depends on how the data will be analyzed. For example, a sample of 400 might be adequate to describe volunteer activities of state residents, but it may be too small if you want to compare volunteers in the state’s counties.
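The trade-off in the statewide-versus-county example can be illustrated with a rough margin-of-error calculation. This is a simplified sketch: it assumes a simple random sample, a proportion near 50 percent, and the conventional 95 percent confidence level.

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Approximate 95% margin of error for a proportion estimated
    from a simple random sample of size n."""
    return z * math.sqrt(p * (1 - p) / n)

# A statewide sample of 400 yields roughly a +/-5-point margin:
print(round(margin_of_error(400) * 100, 1))   # 4.9

# Spread the same 400 respondents across 100 counties (4 each), and
# each county estimate carries roughly a +/-49-point margin:
print(round(margin_of_error(4) * 100, 1))     # 49.0
```

A sample adequate for one analysis can thus be far too small once it is subdivided, which is why the planned analysis should inform the sample size before recruitment begins.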


You may encounter the terms privacy, confidentiality, anonymity, and research records in discussions on collecting individual data. As we noted, privacy refers to an individual’s ability to control the access of other people to information about himself. Confidentiality refers to protection of information, so that you cannot or will not disclose records with individual identifiers. Anonymity refers to collecting information so that you cannot link any piece of data to a specific, named individual. Research records refer to records gathered and maintained for the purpose of describing or making generalizations about groups of persons. Research records are different from administrative or clinical records, which are meant to make judgments about an individual or to support decisions that directly and personally affect an individual.

The requirements of voluntary participation and informed consent uphold the individual’s right to have control over personal information. While you may promise anonymity or confidentiality, a potential subject may not necessarily trust you to follow through. Guarantees of confidentiality neither ensure candor nor increase propensity to participate in research; rather, limited research has found that respondents tend to view promises of confidentiality skeptically.9 Nevertheless, you must respect participants’ privacy and maintain confidences as part of your professional responsibilities to subjects.

Some questions may seem unduly intrusive. They may stir up unpleasant recollections or painful feelings for subjects. Such topics include research on sexual behaviors, victimization, or discrimination. People who read pornography, have poor reasoning skills, or harbor controversial opinions may prefer to keep this information to themselves. Disclosure of behaviors such as drug use, child abuse, or criminal activity may cause a respondent to fear that she will be “found out.” For a study of a sensitive topic to be ethical (1) the psychological and social risks must have been identified, (2) the benefits of answering the research question must offset the potential risks, (3) the prospective subjects must be informed of the risks, and (4) promises of confidentiality must be maintained.10

In deciding whether to keep responses confidential or to collect anonymous data you may first consider subject anonymity, that is, ensuring that no records are kept on the identity of subjects, and data cannot be traced back to a specific individual. If you are conducting a study for an agency, an approach that approximates subject anonymity is for agency staff to select the sample and distribute questionnaires or collect information from agency files. The staff should delete any identifying information from the records before allowing you to examine the records.

Often, however, anonymity is impossible. You must know subjects’ names to follow up on nonrespondents or to compare respondents and nonrespondents; to combine information provided by a subject with information from agency records; or to collect information from an individual at different points in time. Subject names must be available for research supervisors or auditors to verify that the research was done and that accurate information was collected and reported. Although research auditing can raise concerns about confidentiality, without audits, incompetence or malfeasance may go undetected.

Confidentiality may also be breached by carelessness, legal demands, or statistical disclosure. To avoid accidental disclosure of personal information, identifying information should be separated from an individual’s data. A code or alias can be assigned to each record and the list of names and codes stored separately in a secure place. For longitudinal studies and other studies that collect information from more than one source, aliases can be used to identify individuals and combine the individual information. A respondent may choose her own alias, that is, information that others cannot easily obtain, such as her mother’s birth date. The success of having respondents choose their alias depends on their ability to remember it each time they are asked.
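The separation of identifiers from data can be sketched as follows. This is a hypothetical illustration, not a prescribed procedure; the field names (`name`, `response`) are invented, and in practice the key would be written to encrypted storage kept apart from the data file.

```python
import secrets

def split_identifiers(records):
    """Separate identifying information from study data.
    Returns (key, data): `key` maps each subject's name to a random
    code and should be stored securely, apart from `data`, which
    carries only the code and the responses."""
    key, data = {}, []
    for rec in records:
        code = secrets.token_hex(4)   # random code, not derived from the name
        key[rec["name"]] = code
        data.append({"code": code, "response": rec["response"]})
    return key, data

subjects = [{"name": "A. Doe", "response": "agree"},
            {"name": "B. Roe", "response": "disagree"}]
key, data = split_identifiers(subjects)
# The data file alone reveals no names; re-linking a response to a
# person requires access to the separately stored key.
```

Because the codes are random rather than derived from the names, possession of the de-identified file alone does not allow identities to be reconstructed.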

Theoretically, researchers can have their records subpoenaed. Federal policies offer some protections to participants and researchers. The Confidential Information Protection and Statistical Efficiency Act (CIPSEA) requires that federal statistical agencies inform respondents and get their informed consent if the information is to be used for nonstatistical purposes.11 Certificates of confidentiality by and large protect researchers investigating sensitive subjects, such as mental illness or drug abuse, from having to provide identifying information. The U.S. Department of Health and Human Services offers certificates to researchers whose subjects might be harmed if they were identified. The Department of Justice offers certificates to researchers conducting criminal justice research. However, a certificate cannot be obtained after data collection is completed.12

When reporting data or sharing research records, you should be sensitive to unintended disclosures. For example, when reporting on a case study you should consider using pseudonyms and altering some personal information, such as occupation. Before any research records leave your control, you should identify potential breaches of informed consent and confidentiality. A subject may have agreed to participate for a specific purpose, without giving blanket authorization for other uses. As part of obtaining informed consent you should identify anticipated future uses of the data, including their availability for independent verification of the study’s implementation and replication of its analysis.13 Before releasing data, make sure that identifiers such as names, addresses, and telephone numbers have been removed.

A more formidable problem is that of deductive disclosure. If the names of participants in a study are known, someone could sort through the data to identify a specific person. With a list of respondents, someone may sort the data by age, race, sex, and position and deduce a respondent’s identity. One way to protect against such abuses is to not disclose the list of respondents. Deductive disclosure is a concern with publicly available electronic databases. For example, if census data were released as reported, one could learn detailed information about the only three Hispanic families in a county or a state’s six female-owned utilities companies. To prevent such abuses the Census Bureau has developed procedures to release as much information as possible without violating respondents’ privacy. As is true with many of the Census Bureau’s statistical practices these procedures can serve as a model.
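One common safeguard against deductive disclosure is to withhold any tabulated cell containing fewer than some minimum number of records. The sketch below shows a crude threshold rule of this kind; the attribute names are hypothetical, and the Census Bureau’s actual disclosure-avoidance procedures are considerably more elaborate.

```python
from collections import Counter

def suppress_small_cells(records, keys, k=5):
    """Release counts for a combination of attributes only when at
    least k records share it; smaller cells are withheld so that
    rare combinations cannot identify individuals."""
    counts = Counter(tuple(rec[key] for key in keys) for rec in records)
    return {cell: n for cell, n in counts.items() if n >= k}

# Six records share one cell, two share another; with k=5 only the
# first cell is released, protecting the two-record group.
records = ([{"county": "A", "ethnicity": "Hispanic"}] * 6
           + [{"county": "B", "ethnicity": "Hispanic"}] * 2)
print(suppress_small_cells(records, ["county", "ethnicity"]))
# {('A', 'Hispanic'): 6}
```

The choice of threshold k trades off disclosure risk against the amount of information released, mirroring the Census Bureau’s goal of releasing as much data as possible without violating respondents’ privacy.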

An emerging concern is the ethical implications of Internet research. Research might be done on Web sites, electronic bulletin boards, Listservs, or social networking sites. It may be naïve to assume that any communication sent into cyberspace is private, and there is no consensus about what online communications should be considered public as opposed to private. Information found on a Web site may be treated the same as other textual material, that is, you would not normally seek informed consent or be concerned about privacy.


In 1991 a uniform federal policy for the protection of human subjects, the Common Rule, was published. Its hallmark was a requirement that every institution receiving federal money for research involving human subjects create an Institutional Review Board (IRB) and appoint its members. The IRB determines if a proposed project meets the following requirements: (1) risks to subjects are minimized, (2) risks are reasonable in relation to anticipated benefits, (3) selection of subjects is equitable, (4) informed consent will be sought and appropriately documented, (5) appropriate data will be monitored to ensure the safety of subjects, and (6) adequate provisions exist for ensuring privacy of subjects and confidentiality of data.14 Two of these criteria merit further mention. First, the long-range effects of the knowledge gained from the research are explicitly excluded in determining the risks and benefits of participation. Second, the need to address possible abuses of vulnerable populations is stressed. IRBs are encouraged to consider whether research on a specific population is consistent with an equitable selection of subjects and to make sure that these populations’ vulnerability “to coercion or undue influence” has not been exploited.

An IRB reviews all research under the purview of the institution that involves human subjects. To review only publicly or privately supported research would imply that only funded projects have to conform to ethical practices. An institution that fails to comply with the Common Rule can have its federal funding terminated or suspended. Some university IRBs have been accused of being overly cautious and throwing unnecessary roadblocks in the way of research that includes surveys or field studies, to avoid potentially harmful consequences.15

To what extent do you have to concern yourself with IRB review? Federal policies protecting human subjects require compliance by federal agencies, institutions, and individual researchers. If you are a student or an employee of a university, a medical facility, or other institution that receives federal research funds and you will be conducting a study that involves interacting with people, asking them to do something, or using identifiable private information, consult with your IRB before you start your research. Putting off learning about your institution’s IRB procedures can lead to long delays and frustration while conducting your study.

The IRB chair or the chair’s designee will determine if your project is exempt from further review, appropriate for an expedited review, or must be subject to full review. Projects exempt from review or eligible for an expedited review receive less close scrutiny. The categories of exempt projects or expedited review allow research involving minimal risk to proceed and avoid the long delays associated with a full IRB review. Minimal risk applies to those projects where the risks of participating in the research are similar to the risks encountered in daily life. For example, an IRB may waive written documentation of informed consent for surveys and observation of public behaviors. Waivers, however, are not permitted if responses could be traced to a specific individual and if disclosure could result in civil or criminal liability or damage subjects’ financial standing, employability, or reputation. The federal regulations are more detailed than what has been presented here. Furthermore, the regulations receive frequent scrutiny, and the practices are still evolving. Members of an IRB or the National Institutes of Health Office of Human Subjects Research Web site are good sources for detailed or up-to-date information.


Public agencies and nonprofits that do not conduct federally funded research may not have an IRB. Still, the agencies should adhere to the values and practices identified by the Belmont Report. Recall that these values and practices were

Respect for persons requires informed consent;

Beneficence requires that benefits outweigh risks;

Justice requires a fair selection of subjects.

These values cut across professions, disciplines, and organizations. Whether you are an investigator or part of management you should ascertain that studies intended to describe groups of persons adhere to these principles and practices. The practices specifically apply to research efforts; they do not necessarily apply if the data are collected for administrative or clinical records. For example, ethical research practices do not require that a supervisor get informed consent from an employee before evaluating her, or demonstrate that the benefit of preparing and conducting a performance evaluation justifies its cost.

You and others involved in deciding whether to implement a study may ask the same questions that are asked to obtain informed consent. There are no right or wrong answers, but the answers will help you decide if the informed consent content and procedures are adequate, if the risk of participation is outweighed by the benefits, and if selection of participants is fair and unbiased. Key questions include the following: Will subjects be anonymous or confidential? What steps will be taken to protect their identities? What will they be asked to do? How much time will it take? Are there other potential risks? How are subjects going to be recruited and selected? What information will potential subjects be given to obtain informed consent? How will informed consent be obtained?

You should consider how the proposed research may affect the agency’s reputation. No matter what the agency’s role, its reputation may be enhanced by research that others consider valuable and harmed by research that others consider worthless, intrusive, or harmful. You may question whether a planned study will unduly infringe on respondents’ privacy or abuse their time. You should decide if a study requiring agency resources, including time, represents a good use of money or donor contributions. You should be convinced that a study will likely yield valued information. You should decide if assumptions that others will act on the findings are realistic.

Unreliable items, unwanted items, or unused studies waste respondents’ time. Items not operationally valid may abuse respondents’ goodwill, insofar as their responses contribute to incorrect or misleading conclusions. The detrimental effects of an unwanted or poorly designed study go beyond the respondents. Future studies also are affected if potential subjects become cynical about research and the value of their participation.

Seeking too much information or seeking it too often can build resistance to future requests for information. Consider the complaints of businesses and state and local governments that churn out data only to meet federal information requirements or the frustration of nonprofits that have to produce frequent reports to assure funders that their money is being well spent. If individual respondents perceive that the data are merely collected but not used, they may not only complain but also refuse requests for information.

The agency is responsible for protecting the privacy of its employees or clients. Potential subjects should be contacted by an agency representative, possibly by someone from the unit sponsoring the study. The agency representative should explain the nature of the study, get permission to give the potential subjects’ contact information to investigators, and in noncoercive language indicate the value of the subjects’ responses. Clients should be clearly told that if they decline to participate they will not lose eligibility for agency services. Public announcements, such as posters, may be used to recruit employees or clients. Voluntary participation is less likely to be compromised with posted announcements than by personal solicitation.16

For employee participation to be voluntary, whether or not employees decide to participate should not affect performance ratings, pay, or similar decisions. This should be clearly communicated to the employee.17 You should not assume that assurances will overcome employee beliefs that refusal to participate will have consequences or that promises of confidentiality will not be kept. If an employee plans to conduct research as part of a graduate program, the research must be vetted by the university’s IRB. The agency administrators should also consider how an employee’s research could impact the organization and participating employees. Is the research taking time away from other tasks? How will participants interpret the topic? Do they assume that the organization is going to address a long-standing problem? Or are they anxious that a major change is in the works?

Demonstration projects bring up concerns about what will happen once the project ends. Projects that are part of a research grant may be funded for a specific period of time, but the participants’ need for services may continue. In prisons or psychiatric hospitals, participants in an experimental program may feel abandoned if the program ends once the data are collected or the funding stops. A related issue is how participation in a demonstration project will affect the clients. For example, clients enrolled in a demonstration project that provides job training may find that no employer can use their new skills or that the training has not qualified them for advanced courses.

Ideally, relevant stakeholders should review a proposal. Input from representatives of the participant community is especially valuable in identifying potential risks. For example, parolees, ex-convicts, and prison guards might review proposals for studies involving prisoners. Even apparently innocuous groups such as employees may identify unexpected concerns. Stakeholders may be helpful in deciding if potential subjects will understand the purpose of the study, what is being asked of them, and the risks involved.

You should also check that subjects will be debriefed, if appropriate, at the end of their participation. For example, subjects who are asked to perform job-related tasks may be disappointed or frustrated about their performance. A debriefing offers the researcher an opportunity to observe any negative effects of the research and to answer questions or concerns that a subject may have.

Another issue is what will become of the research documents. What will be done with the completed questionnaires, recordings, or other research materials?18 Will they be put into an archive? If so, the subjects should be told where the data will be stored and how individual identities will be protected. If the data are being collected by a consultant, will the agency receive completed questionnaires, spreadsheets, or electronic databases? Will the consultant remove individual identifiers? Potential subjects need to know who will have access to their information before they can give informed consent.


Ethical values affect us when we gather information and when we report our findings. The ethical issues associated with reporting are covered in Chapter 14. The following are the major values that should guide our behavior as researchers as we prepare to conduct research.

Honesty: Strive for truthfulness in all scientific communications. Honestly report data, results, methods and procedures, and publication status.

Integrity: Fulfill promises and agreements. Act in good faith.

Confidentiality: Protect the communications of participants. Do not disclose personnel records, trade or military secrets, and patient records.

Nondiscrimination: Do not provide preferential treatment to participants on the basis of sex, race, ethnicity, or other factors that are not related to their scientific competence and integrity.

Competence: Understand and do not exceed your own professional capacity and limits.

Legality: Know and obey relevant laws and institutional and governmental policies.

Human subjects protection: When conducting research on human subjects, minimize harms and risks and maximize benefits; respect human dignity, privacy, and autonomy; take special precautions with vulnerable populations; and strive to distribute the benefits and burdens of research fairly.


Recognizing and protecting research subjects is imperative whether you are an investigator or a manager. In the event that federally funded research is being conducted, an IRB review is required. This process helps to make sure that you have appropriately identified risks and benefits, communicated them to potential subjects, and have adequate procedures to obtain informed consent. Although you may not have an institutional review board to guide your research, you must follow ethical practices in administering surveys, conducting interviews, or collecting data from records.

We may erroneously assume that research that does not cause physical harm has no negative effects. Wasting subjects’ time is harmful; so is invading their privacy. Respect for persons, beneficence, and justice may seem like lofty ideals, but in practice as you apply these concepts you may identify and address potential problems. You will also save your agency from possibly wasting resources, diminishing its reputation, and facing litigation. The bottom line is that you should protect research subjects because it is the right thing to do.


Paul Oliver, The Student’s Guide to Research Ethics (Philadelphia: McGraw Hill, 2003).

Jay Katz, Experimentation with Human Beings (New York: Russell Sage Foundation, 1972). Although 40 years old, this is an excellent resource on cases that informed discussions about protecting human subjects.

James F. Childress, Eric M. Meslin, and Harold T. Shapiro, eds. Belmont Revisited: Ethical Principles for Research with Human Subjects (Washington, D.C.: Georgetown University Press, 2008). This resource carries a series of essays that cover the development of the nearly 40-year-old Belmont Report and its impact on current discussions on protecting human subjects.

To keep up to date with federal regulations, visit the Web site of the Office of Human Research Protections within the U.S. Department of Health and Human Services:


This chapter has three exercises that ask you to consider ethical research practices in different contexts.

•  Exercise 3.1 Learning about Teenaged Mothers asks you to identify and address ethical concerns in conducting a study with adolescent participants. You are also asked to review and comment on how the researchers might solicit participants’ informed consent.

•  Exercise 3.2 Evaluating a Debt Counseling Program asks you to consider ethical concerns in a study that involves both staff and client participants.

•  Exercise 3.3 On Your Own asks you to identify your agency’s protocols or practices for conducting research. This exercise is applicable if you are planning to do a study as part of your job or internship; its questions identify factors involved in assuring that human subjects are protected.

EXERCISE 3.1 Learning about Teenaged Mothers


Elaine Bell Kaplan, the author of Not Our Kind of Girl (Berkeley, CA: University of California Press, 1997), conducted a study of African American teenage mothers. Her purpose was to examine the causes and consequences of Black teenage parenthood. She gave 32 teenaged mothers living in two California communities a 126-item questionnaire that asked about their experiences before, during, and after the birth of their children. Twenty-five participants were interviewed. Interviews included questions about their family, sexual behavior, the child’s father, their experiences with school and welfare agencies, and quality of life. The interviews were audio-taped and transcribed by the researcher.

Section A: Getting Started

1.  Based on this information, what ethical considerations should be foremost for the researcher?

2.  The following questions explore how you would ensure that you have treated your subjects ethically.

a.  What kind of procedures would be necessary to make sure that the participants in the study are not harmed?

b.  What types of harm might you anticipate? What protections would you build into the research to address such possible ill-effects?

3.  How would you protect access to the participants’ audio-taped interviews?

4.  If you were doing a similar study of adolescent mothers, what ethical considerations would affect how you go about recruiting subjects?

5.  What ethical considerations would guide your decisions about offering the subjects incentives? What kind of incentives would be reasonable to offer to this population?

6.  What follows is a hypothetical informed consent form for a similar study being conducted by a county agency. Assume that you are Alpha Greene’s supervisor.

a.  Review each section of the form and assess its strengths and weaknesses.

b.  Identify and discuss the changes you would recommend.

Informed Consent Form for Research


Experiences of Teenage Mothers in Aries County, California

Principal Investigator

Alpha Greene, Research Analyst, Department of Human Services, Aries County, CA

We are asking you to participate in a research study. The purpose of this study is to learn from teen mothers in Aries City about their pregnancy and the challenges of raising a baby. We would like to explore with you (1) what your life was like before you became pregnant, (2) the challenges you faced during pregnancy and how you handled them, (3) how life has been going for you and your child since you became a mother, (4) your relationship with the baby’s father, and (5) your experiences with your school and community human service agencies. The study will compare the answers of teenage mothers from different population groups, for example, African American mothers and Hispanic mothers.


If you agree to participate in this study, you will be asked to answer a 126-item questionnaire and participate in an in-person interview that will be tape recorded to assure the accuracy of the information. The questionnaire should take 30 minutes to finish; the interview should last no longer than 90 minutes.


There are no foreseeable risks to participating in this research.


There is no direct benefit to you for participating. Your responses will help the county plan and implement effective programs in schools and human service agencies.


The information in the study records will be kept strictly confidential. Data will be stored securely in a locked file cabinet in the principal investigator’s office. No reference will be made in oral or written reports that could link you to the study.


There is no compensation for participating in this study.


If you have questions at any time about the study or the procedures, you may contact the researcher [contact information given]. If you feel you have not been treated according to the descriptions in this form, or your rights as a participant in research have been violated during the course of this project, you may contact the Department of Human Services Associate Director [contact information given].


Your participation in this study is voluntary; you may decline to participate without penalty. If you decide to participate, you may withdraw from the study at any time without penalty and without loss of benefits to which you are otherwise entitled. If you withdraw from the study before data collection is completed your data will be returned to you or destroyed at your request.


“I have read and understand the above information. I have received a copy of this form. I agree to participate in this study with the understanding that I may withdraw at any time.”

Participant’s signature______________  Investigator’s signature ______________

Date______________         Date _______________

Section B: Class Discussion

As part of preparing for the class discussion, find out whether your university has an IRB and, if so, what its requirements are. For Exercises 3.1 and 3.2, you may want to consider in your discussion whether you would handle informed consent differently to meet university requirements rather than conducting the study for an agency.

1.  In small groups, review the informed consent form. Each group should present a strategy to the class for obtaining informed consent from the study participants. The strategies should include (a) the wording of the form and (b) how the form should be presented to potential subjects.

EXERCISE 3.2 Evaluating a Debt Counseling Program


You have been asked to evaluate an organization’s debt counseling program. Several different studies may be undertaken, including an assessment of staff competence (for example, the quality of advice offered, appropriate follow-up, and the ability to work with diverse clients), whether clients benefit from the program, and what separates clients who benefit from those who do not.

Section A: Getting Started

1.  Consider how the ethical principles discussed in this chapter apply to staff and client participants.

2.  What type of harm might staff participants experience because of your study?

3.  What types of harm might client participants experience because of your study?

4.  What protections would you build into the research to address possible ill-effects?

Section B: Small Group Exercises

1.  As a class or in small groups, develop a research plan and evaluate how well it protects staff and client participants. The plan should include recruitment of subjects, obtaining informed consent, maintaining assurances of confidentiality or anonymity of information, analysis of information, and storage of data.

2.  Based on your research plan and method of obtaining informed consent, how would you handle the following situations?

a.  As part of collecting the information you learn that a specific staff member is consistently giving erroneous advice. Does it matter if the staff member was a research subject or not? Justify your opinion.

b.  As part of collecting the information you learn that a client who is a research subject has lied about his financial status.

c.  After your report is given to the organization, evidence surfaces that it has not taken action on the report’s findings about serious problems that affect program quality.

EXERCISE 3.3 On Your Own

If you are planning to conduct a study for your agency, you should first find out what ethical protocols it has. If your agency does not have an IRB, the study will more than likely have to be reviewed by administrators, who will certainly have questions about your research. The answers to these questions will help you consider how to proceed in an ethical manner and protect your human subjects.

1.  What methods will you use to collect data (e.g., what will participants be asked to do)?

2.  How much time will it take?

3.  Will this study harm the organization’s reputation if the responses are not as favorable as hoped?

4.  Will the methods you use allow for anonymity or confidentiality?

5.  How will you explain the caveats to confidentiality for your participants (e.g., duty to warn)?

6.  How will participants’ identities be protected?

7.  If this is a demonstration project, how will you make sure that the selection process is fair and equitable?

8.  What other questions do you anticipate from your administrator?


1The Nuremberg Code is posted on several Internet sites. The original is from Trials of War Criminals before the Nuremberg Military Tribunals under Control Council Law No. 10, Vol. 2 (Washington, D.C.: U.S. Government Printing Office, 1949), pp. 181–182.

2J. Katz, Experimentation with Human Beings (New York: Russell Sage Foundation, 1972), pp. 10–65.

3Ibid., pp. 68–103.

4Ibid., pp. 325–329.

5Ibid., pp. 358–365.

6“The Belmont Report: Ethical Principles and Guidelines for the Protection of Human Subjects of Research,” The National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research, U.S. Department of Health, Education, and Welfare, April 18, 1979 (accessed June 12, 2008).

7L. K. Altman, “Fatal Drug Trial Raises Questions about ‘Informed Consent,’” New York Times, October 4, 1993, B7.

8American Statistical Association, Ethical Guidelines for Statistical Practice (1999). The guidelines are available on the association’s Web site.

9A. G. Turner, “What Subjects of Survey Research Believe about Confidentiality,” in The Ethics of Social Research: Surveys and Experiments, J. E. Sieber, ed. (New York: Springer-Verlag, 1982), pp. 151–165. For a summary of more recent research on the public’s attitudes about the confidentiality of data, see Expanding Access to Research Data: Reconciling Risks and Opportunities (Washington, D.C.: National Academies Press, 2005), pp. 52–54.

10For an extensive discussion of procedures to protect privacy, applicable federal laws, and an extensive bibliography, see Protecting Human Research Subjects, pp. 3-27–3-37, 3-56. Readers interested in strategies for identifying and questioning subjects about sensitive topics may wish to read the cases in C. M. Renzetti and R. M. Lee, eds., Researching Sensitive Topics (Newbury Park: Sage Publications, 1992).

11For a brief but fuller discussion of federal legal protections, see Expanding Access to Research Data, pp. 56–59. The report also considers threats to confidentiality associated with national security concerns.

12Ibid., p. 56.

13T. E. Hedrick, “Justifications and Obstacles to Data Sharing,” in S. E. Fienberg, M. E. Martin, and M. L. Straf, eds., Sharing Research Data (Washington, D.C.: National Academy Press, 1985), p. 136. Hedrick cites sources that discuss this issue in more depth. See also the Ethical Guidelines for Statistical Practice, D4.

14 45 CFR 46, Section 46.111.

15For a discussion of the effect of IRBs on social science research see R. J-P. Hauck, ed., “Symposium: Protecting Human Research Participants, IRBs, and Political Science Redux,” Political Science and Politics, 41 (July 2008): 475–511.

16 Protecting Human Research Subjects, p. 6–53.

17 Ibid., p. 6–55.

18Paul Oliver, The Student’s Guide to Research Ethics (Philadelphia: Open University Press, 2003). Chapter 4 has a useful discussion related to record storage concerns.

19Adapted from A. Shamoo and D. Resnik, Responsible Conduct of Research (New York: Oxford University Press, 2003).