Coding:
The process of turning information on a respondent or a response into a coded, usually numerical, value. For example, the gender of a respondent may be coded 1 for female and 2 for male. This code is then entered into a spreadsheet or other software for easier analysis.
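For example, a minimal Python sketch of coding responses (the response values and codes are illustrative, not from any particular instrument):

    # Map categorical responses to numeric codes before entering them for analysis
    gender_codes = {"female": 1, "male": 2}
    responses = ["female", "male", "female"]
    coded = [gender_codes[r] for r in responses]
    print(coded)  # [1, 2, 1]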
Control:
In an experimental design or during research, the group that does not receive the program or experience (treatment). This group is used for comparison to the treatment group to determine the impact of the program or experience.
Data:
The plural of datum; a set of facts or numbers from which conclusions can be drawn.
Demographics:
Basic statistical information about a population, such as age, income, gender, etc.
Descriptive Statistics:
The branch of statistics that involves summarizing, tabulating, organizing, and graphing data for the purpose of describing a sample of individuals that have been measured or observed. No attempt is made to infer characteristics beyond the individuals measured or to make inferences about relationships.
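For example, a brief Python sketch summarizing a hypothetical set of scores with the standard library:

    import statistics
    scores = [72, 85, 90, 68, 77]  # hypothetical sample scores
    print("mean:", statistics.mean(scores))
    print("median:", statistics.median(scores))
    print("standard deviation:", statistics.stdev(scores))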
Ethics:
The rules or standards governing the conduct of the members of a profession; a set of moral principles or values.
Experimental Design:
A type of research design in which the conditions of a program or experience (treatment) are controlled by the researcher and in which experimental subjects are randomly assigned to treatment conditions.
Evaluation:
The systematic collecting, analyzing and reporting of information relative to an audience’s knowledge, skills or attitudes regarding specific content for the purpose of making informed decisions about programming.
Evaluation Plan:
A document describing how a program evaluation is to be conducted, including who the audience is, how the sample will be selected, what instruments or other data-collection methods will be used, how the data will be analyzed, and how the results will be reported.
Formative Evaluation:
The gathering of information/data about an audience’s reactions to and learning from a pilot program. Changes are made as a result of formative evaluation.
Front-end Evaluation:
The gathering of information/data about an audience’s knowledge, skills and attitudes. Front-end evaluation is conducted during the early stages of program development and the information is used to develop goals and objectives.
Goal:
A broad statement of what a program is supposed to accomplish.
Indicator:
A numerical measure of a quality or characteristic of some aspect of a program; evidence that something is occurring, that progress is being made.
Inferential Statistics:
The branch of statistics that involves making inferences about one or more populations on the basis of information about a sample. An example of inferential statistics is using the data gathered through a sample survey (such as the Gallup Poll) to estimate the proportion of voters (the population) who favor a political candidate.
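As a rough sketch of that example (the sample size and counts are hypothetical), estimating a population proportion and its margin of error from a sample in Python:

    import math
    n, favor = 1000, 540        # hypothetical sample size and respondents favoring the candidate
    p = favor / n               # sample proportion used to estimate the population proportion
    margin = 1.96 * math.sqrt(p * (1 - p) / n)  # approximate 95% margin of error
    print(f"estimated proportion: {p:.2f} +/- {margin:.2f}")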
Instrument:
A method for gathering data from a sample. Survey forms, questionnaires, and observation forms are all instruments.
Item:
An individual question on an instrument.
Literature Review:
The process of reviewing, and the resulting documentation of, the current relevant research literature regarding a particular topic or subject of interest.
Logic Model:
A visual representation, or road map, showing the sequence of related events connecting the need for a planned program with the program's desired results/outcomes.
Median:
The middle value in a set of scores arranged from lowest to highest; half of the scores fall below the median and half fall above it.
Mode:
The most frequently occurring score or value in a set of data.
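For example, using Python's standard library on a hypothetical set of scores:

    import statistics
    scores = [3, 7, 7, 9, 12]          # hypothetical scores
    print(statistics.median(scores))   # 7: the middle value when scores are ordered
    print(statistics.mode(scores))     # 7: the most frequently occurring score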
Objective(s):
A clearly stated, measurable outcome or change in a program participant as a result of the program.
Outcome:
The measurable result or achievement of a program; how the audience is impacted, that is, how participants are different after the program.
Output:
What gets generated or developed as a result of the program: the activities, materials, products, presentations, etc.
Population:
Any collection of individuals that have at least one characteristic in common.
Psychographics:
Information about a population’s values, that is, what its members value, care about, and are concerned with.
Random Sampling:
A sampling procedure in which every member of the population has the same chance of being sampled and each person is sampled independently of others.
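A minimal Python sketch (the population list is hypothetical), in which each member has the same chance of being selected:

    import random
    population = list(range(1, 501))          # hypothetical list of 500 member IDs
    sample = random.sample(population, k=50)  # draw 50 members, each equally likely
    print(sample[:5])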
Reliability:
The degree to which an instrument (or observers) is consistent and dependable.
Research:
The application of the scientific approach (observation, hypothesis, experimentation, communication) to the study of a problem or question.
Results:
The outcomes indicated by the processed data; different from conclusions, which are interpretations of the results.
Sample:
A subset of a population.
Sampling:
The process of selecting a sample.
Standard Deviation:
The number that indicates whether most of the scores cluster closely around their mean or are spread out; a deviation is the distance of a score from the mean for its group.
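For example, a small Python sketch computing deviations and the standard deviation for hypothetical scores:

    import statistics
    scores = [4, 8, 6, 5, 7]                 # hypothetical group of scores
    mean = statistics.mean(scores)
    deviations = [s - mean for s in scores]  # distance of each score from the group mean
    print(deviations)
    print(statistics.stdev(scores))          # small value: scores cluster near the mean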
Statistic:
A number that provides a summary of some characteristic of a set of data.
Summative Evaluation:
The gathering of information/data about an audience’s knowledge, skills and attitudes after the delivery of the program/end of the project. Also, the gathering of information about the process of program development. Summative evaluation information informs the development of the next project or informs funders about the success of the program. Changes are not made to programs as a result of summative evaluation.
Treatment:
In an experimental design or during research, the program or experience that the experimental group receives.
Validity:
The degree to which an instrument actually measures what it is supposed to measure.