
=Chapter Three: Data Collection =

The purpose of this chapter is to focus on the various facets of collecting data. Tests and measures, questionnaires and interview protocols, obtrusive and unobtrusive observational methods, and the validity and reliability of these approaches are addressed. In addition, the relationship between the data collection method and the research method is noted, since not all data collection methods are appropriate for all research designs. Finally, issues related to data collectors, types of collection methods, and the recording of data are discussed.

I. Quantitative Data
//Quantitative data// refers to information that is in the form of numbers. It is the kind of data typically found in research studies, often presented in graphs, tables, or charts, and it comes from empirical research. In journals and articles, these studies use numbers to support their findings. This data comes straight from the schools and the students and lets the researcher know what is really happening.

IA. Existing Instruments
An existing instrument is one that has already been developed and validated in prior research. A researcher may select an existing instrument instead of constructing a new one. When constructing a new instrument, there is little prior research to draw on, so the researcher would have to spend a significant amount of time developing it. Some researchers decide to use an existing instrument to decrease the amount of work it would take to construct their own; the effort and time needed to construct a new instrument increase when the variable is more challenging to measure. With existing instruments, extensive research has already been conducted, and the researcher is able to learn more about the instrument. Findings can also be compared, integrated, and synthesized when studies use the same measuring instrument for a central variable (Punch, 2009). However, when using an existing instrument, one has to consider its operational definition, which can vary between studies. If the operational definition differs from the one needed, a new measure can be constructed. If the existing instrument is reasonable, the data it yields can be an effective addition to the research.

Ia. Reliability
The measuring instruments used in research must be reliable and valid. Reliability is assessed through an instrument's consistency over time and its internal consistency. Consistency over time (test-retest reliability) refers to the extent to which the instrument would yield the same score when given to the same group of people under the same conditions at a different time; the consistency of the scores determines the instrument's reliability. Internal consistency is often reported as a coefficient alpha such as Cronbach's α.

When selecting an instrument, its reliability should be considered. It is important to know how consistent the instrument has been over time: if the same instrument were administered again to the same people under the same conditions, would it yield the same results? Stability and consistency are two components of establishing reliability. Stability is the instrument's ability to sustain its reliability over time; consistency is the extent to which the items work together. Test-retest reliability can be used to measure stability at two different points in time. The greater the reliability, the smaller the margin of error; the lower the reliability, the more frequent the errors, and the more substantial the errors, the more unreliable the instrument becomes.
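As a concrete illustration of internal consistency, Cronbach's α can be computed from item-level scores. The sketch below uses only the Python standard library; the respondent scores are made up purely for illustration:

```python
from statistics import variance

# Hypothetical data: 4 respondents x 3 questionnaire items (illustrative only)
scores = [
    [4, 5, 4],
    [3, 4, 3],
    [5, 5, 4],
    [2, 3, 2],
]

def cronbach_alpha(rows):
    """Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance of total scores)."""
    k = len(rows[0])                         # number of items
    items = list(zip(*rows))                 # column-wise item scores
    sum_item_vars = sum(variance(col) for col in items)
    total_var = variance([sum(row) for row in rows])
    return (k / (k - 1)) * (1 - sum_item_vars / total_var)

print(round(cronbach_alpha(scores), 3))
```

α ranges up to 1, and values around 0.7 or higher are conventionally taken to indicate acceptable internal consistency.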

Ib. Validity
Validity is the extent to which an instrument accurately measures what it is supposed to measure. An instrument is considered valid when it captures, as closely as possible, the reality of the phenomenon it explains or describes. In other words, validity answers the questions: is this data factually accurate and as close to reality as possible? And are we measuring what we think we should be measuring? Error and purpose of measurement are key factors when discussing the validity of an instrument, i.e. an instrument is valid for whom and for what? Consequently, validity is categorized into the following subcategories:

1) Criterion validity: criterion validity is further divided into predictive validity, concurrent validity, and discriminant validity.

· Predictive validity: predictions are made, tests are administered, and then a criterion is determined from the measures obtained by the researcher on the subject.

· Concurrent validity: validity is said to be concurrent when the test score and the criterion score are determined simultaneously (e.g., when a multiple-choice form of a spelling test is substituted for taking dictation), or when a test is shown to correlate with some contemporary criterion.

· Discriminant validity: a measure of how two phenomena or groups differ from each other.
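Predictive validity is commonly indexed by correlating test scores with later criterion scores. The sketch below computes a Pearson correlation in plain Python; the entrance-test scores and GPA figures are hypothetical, chosen only to illustrate the calculation:

```python
from math import sqrt

# Hypothetical data (illustrative only): an entrance-test score for each
# student, and a later criterion measure (first-year GPA).
test = [52, 61, 70, 48, 65, 58]
gpa = [2.4, 2.9, 3.5, 2.2, 3.1, 2.8]

def pearson_r(x, y):
    """Pearson correlation coefficient, the usual index of predictive validity."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

print(round(pearson_r(test, gpa), 3))
```

A correlation near 1 would suggest the test predicts the criterion well; a value near 0 would suggest little predictive validity.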

IB. Tests and Measures
Tests and measures are methods used for data collection. Tests can be formative or summative assessments. Measures can be cognitive, affective, or projective instruments. They can also take survey form.

IC. Other Methods of Quantitative Data Collection
There are other methods of data collection that reflect advances in computer-assisted methodology and comparisons among various modes (e.g., telephone versus face-to-face, paper versus computer-assisted, interviewer-administered versus self-administered). These methods involve administering surveys and interviews with closed-ended questions. Surveys and interviews in quantitative research are more structured than in qualitative research, i.e., the interviewer asks a set of standard questions and expects the interviewee to choose from a set of predetermined responses: strongly agree, agree, somewhat agree, neutral, or disagree. Computer-assisted surveys/interviews are a form of personal interviewing in which, instead of completing a paper questionnaire, the researcher brings along a computer and enters the data obtained directly into it. Face-to-face interviews are another way of collecting quantitative data and are often preferred because they allow the researcher the opportunity to probe for clarification on responses. Another method is the Internet-based questionnaire, in which links are sent to people's email and, once they click on the links, they are taken to the appropriate websites to complete the surveys or interviews.

II. Qualitative Data
Qualitative data refers to data that falls into specific categories and describes something in words rather than numbers. Whereas quantitative data relies on numbers, this type of data does not. Qualitative research learns about individuals, assesses processes over time through description and note keeping, generates theories from the perspective of the participant, and focuses closely on a few people and places instead of the whole.

IIA. Interviews
The most common interview types are key informant, survey, and focus group. In key informant interviews, interviews are conducted individually: key persons with expertise are identified for the interview process, and key persons who may have a perspective different from the targeted group's are sought out. Surveys are generally used in conjunction with other data collection methods; they can be used to sum up the data, to understand the participants' thoughts, or to uncover subconscious perceptions of the participants. Focus group interviews are conducted among a specialized group, most often of about seven to ten participants, with the purpose of evaluating the group members' responses to one another. The discussion is relaxed, comfortable, and often enjoyable for participants as they share their ideas and perceptions. Group members influence each other by responding to ideas and comments in the discussion. The researcher creates a permissive environment in the focus group that nurtures different perceptions and points of view, without pressuring participants to vote, plan, or reach consensus. The group discussion is conducted several times with similar types of participants to identify trends and patterns in perceptions.

Interviews can be structured, semi-structured, or unstructured. The most difficult of these to conduct are unstructured interviews, because the interviewer has to be skilled enough to guide the interview back on course when it strays. Qualitative interviews can be open-ended, guided by a general interview guide, or informal. Each research method has its strengths and weaknesses; when designing a research study, it is important to decide what data the study should produce and then select the best methodology to produce that desired information.

IIB. Observations
Observation is a way for a researcher to gather a sampling of information and data to construct a picture of a particular behavior. Observation can be an **important** part of the collection process in research; its role can be to either //add validity to or dispute// a particular situation that is being tested. When an observation is being made, many factors can influence its outcome. In educational research particularly, the observer has to consider all the factors that may or may not influence children's behavior and skew the results. If an observer wants truly valid results, the observer has to think about and consider how to eliminate distractors. Observation can be divided into two main categories, structured and unstructured, depending on the need. A structured observation relies heavily on a schedule, numbers, and a plan in place when the observation is being conducted, while an unstructured observation is more informal and casual, looking at and listening to the environment.

III. Data Collection Validity Threats
There are two primary criteria for evaluating the validity of an experimental design. Internal validity asks whether the independent variable made a difference in the study: can a cause-and-effect relationship be observed? To achieve internal validity, the researcher must design and conduct the study so that only the independent variable can be the cause of the results (Cozby, 1993). External validity refers to the extent to which findings can be generalized, or be considered representative of the population. Errors can also arise; these are conditions that may confound the effect of the independent variable with that of some other variable(s). There are various types of errors; premeasurement, maturation, history, selection bias, and instrumentation errors are a few examples.

IIIA. Researcher Biases
When a researcher is conducting an interview or collecting data for a research project, it is important for that researcher to stay __completely objective__, so that personal influences do not affect the findings of the research. A researcher has to bring complete objectivity and the ability to separate out and remove any bias and preconceived notions about the outcome. Without this ability, researchers may never know whether their results are valid or reliable. Researchers must also be sure that their data collection itself is not biased; for example, if they are conducting interviews, all of their questions need to be completely objective, with no hint of their own personal views and opinions on the subject.

IIIB. Sample Characteristics
A group is sampled in terms of purpose, size, composition, and procedures. Identify the target population (people who possess certain characteristics), provide a short introduction and background on the issue to be discussed, and then have the focus group members write their responses to the issue(s).
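One way to select focus group members from the target population is a simple random draw. The sketch below uses Python's standard library; the roster names and group size are hypothetical, chosen only for illustration:

```python
import random

# Hypothetical roster of people who fit the target characteristics
# (names are made up for illustration).
target_population = [f"participant_{i}" for i in range(1, 101)]

random.seed(42)  # fixed seed so the draw can be reproduced and documented
focus_group = random.sample(target_population, k=8)  # a typical focus-group size

print(len(focus_group))
```

Sampling without replacement, as `random.sample` does, guarantees that no person is selected twice for the same group.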

IV. Summary
In summary, data collection is a crucial part of the research process; it is the meat of the research. Without numbers, interviews, observations, and other forms of data, research cannot be substantiated: data validates theories. While data is important, it is equally important that the data be reliable and valid. There can be no bias in the research, or it loses its credibility. Researchers have to come from a completely objective standpoint, and they must also ensure that the participants are objective as well. To ensure no bias is present, they must be careful to frame their questions and research in a way that gives no clue as to their personal views on the topic.