Research design consists of the options and choices made when conducting empirical research. A study comprises a sequence of choices, each of which influences the validity and plausibility of the research. Things to consider include research questions and objectives, operationalization of variables, reliability and validity, and data-gathering methods. Some of these areas are covered in later sections. Here, we briefly draw a broad distinction between two common research designs in social and communication research, experimental and non-experimental designs, and then discuss validity and reliability.
Experimental design is based on a carefully prepared and framed experiment in which a particular causal relationship is tested under controlled conditions. In experiments, people are usually randomly assigned to separate groups, which controls for possible bias from variables that are not under study. The variable of interest is then manipulated and any effect is observed and measured. Well-designed experiments can therefore test genuine causal mechanisms.
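The random assignment step described above can be sketched in a few lines of Python. The participant IDs and the even two-group split are assumptions for illustration; real experiments may use blocking or stratified assignment.

```python
# Random assignment into treatment and control groups: a minimal sketch
# with hypothetical participant IDs.
import random

def assign_groups(participants, seed=42):
    """Randomly split participants into treatment and control halves."""
    rng = random.Random(seed)   # fixed seed so the assignment is reproducible
    shuffled = participants[:]  # copy, so the original list is untouched
    rng.shuffle(shuffled)
    half = len(shuffled) // 2
    return shuffled[:half], shuffled[half:]

participants = [f"P{i:02d}" for i in range(1, 21)]  # 20 hypothetical people
treatment, control = assign_groups(participants)
```

Because each person's group is decided by chance alone, variables that are not studied should be distributed roughly equally across the two groups.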
Non-experimental research design refers to observational studies, such as surveys and content analyses. In an observational study, a researcher collects observations with a research instrument, such as a questionnaire in a survey, and then performs statistical tests on the data. Good sampling techniques can make observational studies reliable and, within certain limits, generalizable to the population. Causal mechanisms, however, cannot be tested as reliably as in controlled trials.
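The sampling techniques mentioned above can be illustrated with simple random sampling, the most basic case. The population frame and sample size here are invented for the sketch; real surveys would draw from an actual register or frame.

```python
# Simple random sampling from a population frame: a minimal sketch with
# hypothetical respondent IDs.
import random

population = [f"person-{i}" for i in range(10_000)]  # hypothetical frame
rng = random.Random(7)                  # seeded for a reproducible draw
sample = rng.sample(population, k=500)  # draw 500 units without replacement
```

With a probability sample like this, standard statistical theory can be used to state how far sample estimates are likely to be from the population values.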
There are two important criteria that should always be considered with respect to a given research design. The first is validity: is the study really measuring what it claims to measure? For example, if the design is a survey intended to measure people’s attitudes towards social media, we can evaluate how well the theoretical constructs are operationalized into survey questions and how well the items describe attitudes towards social media. Validity ultimately determines how well the study can predict the behaviors or attitudes it measures.
Reliability is the other criterion and could be called “repeatability”. It describes how accurate the measurement is: if the same study were conducted again, would similar results be obtained? A useful starting point is to ask how consistent the measured constructs would be from one sample to another. There are many ways to assess reliability, such as test-retest correlation (how strongly scores from two administrations of the same test correlate) or internal-consistency estimates such as Cronbach’s alpha.
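Cronbach’s alpha can be computed directly from its standard formula, alpha = (k/(k-1)) * (1 - sum of item variances / variance of the sum score), where k is the number of items. The sketch below uses a small, invented matrix of 5-point Likert responses; in practice alpha would be computed on real survey data, and values closer to 1 indicate more internally consistent items.

```python
# Cronbach's alpha as an internal-consistency reliability estimate.
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: 2-D array, one row per respondent, one column per item."""
    k = items.shape[1]                         # number of items
    item_vars = items.var(axis=0, ddof=1)      # sample variance of each item
    total_var = items.sum(axis=1).var(ddof=1)  # variance of the sum score
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical 5-point Likert responses to four attitude items.
responses = np.array([
    [4, 5, 4, 4],
    [3, 3, 2, 3],
    [5, 5, 5, 4],
    [2, 2, 3, 2],
    [4, 4, 4, 5],
])
alpha = cronbach_alpha(responses)  # roughly 0.94 for this toy data
```

The sample variances use ddof=1 (the n-1 denominator), which is the usual convention when estimating alpha from a sample rather than a full population.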