The quantitative research carried out as part of this project included five surveys run online with a UK sample of adults aged 18 and over. Public First is a member of the British Polling Council (BPC) and a company partner of the Market Research Society (MRS) and abides by their rules.
Survey design
Survey questionnaires went through several rounds of iteration. A key consideration for the survey design was the level of information which the survey respondent had at any specific point in the survey – for example, whether they had been given an official definition of terms such as “R&D”. Because we intended the results of this work to be easy for other researchers to use, we decided to keep the order of key sections of the survey identical between respondents, so that it would be easy to interpret the level of information and “priming” which participants had before answering each question.
Quantitative opinion data is always subject to a number of possible biases, most importantly a tendency for participants to agree with statements or “satisfice”. To counteract this, we designed the survey to vary the types of questions shown, incorporating binary-choice scales and multi-select questions as well as agree-disagree Likert-style questions.
Our Phase 3 survey also had the objective of producing an attitudinal segmentation of the public, based on their views about R&D investment. We aimed to keep the questions for this segmentation focussed on fundamental drivers and motivations, such as people’s propensity to worry about the future or their desire to take risks, rather than on specific political issues in which R&D could play a role. While we knew the latter would likely offer interesting splits, and our analysis showed that political issues were important to support for R&D, we felt there were existing political segmentations which would prove more direct in this regard (such as the More in Common political segmentation, which our Phase 3 sample included).
In Phase 3, the broad order of the survey was as follows:
Reaching the sample
Participants were recruited through online proprietary panel providers, where individuals register their interest in taking part in online research in return for incentives. By their nature, online samples exclude those with low levels of digital access; however, they enable strong control over the demographic make-up of the final sample. The sample is targeted to be representative of the UK public along interlocked age and gender, regional and socio-economic grade lines. All surveys are subject to some level of recruitment bias, where those who are interested in taking part may represent a more “engaged” group of people than if the selection was truly random. By reaching the sample through online panels, incentives can be tailored to specific groups to encourage the participation of those who are less likely to volunteer.
The survey also features attention checks for inattentive responding, both in the form of trap questions, where participants are instructed to select a specific answer option in order to proceed, and open-response questions, which are retrospectively assessed for “nonsense” responding. All surveys also feature “reCAPTCHA” checks for bots, to prevent surveys being flooded by scripted responses, with a second check for such problems during analysis of any open responses.
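To illustrate how checks like these might be applied during data cleaning, the sketch below flags respondents who fail a trap question or give a “nonsense” open response. The field names, answer options, and nonsense heuristic are all hypothetical, not the actual pipeline used in this work.

```python
# Hypothetical column names and expected trap answer -- illustrative only.
TRAP_QUESTION = "q_trap"
TRAP_EXPECTED = "Somewhat agree"  # the option respondents were told to pick
OPEN_QUESTION = "q_open"

def looks_like_nonsense(text: str) -> bool:
    """Crude heuristic: very short answers, or no vowels (keyboard mashing)."""
    cleaned = text.strip().lower()
    return len(cleaned) < 3 or not any(ch in "aeiou" for ch in cleaned)

def keep_respondent(row: dict) -> bool:
    """Keep a respondent only if they pass both attention checks."""
    passed_trap = row.get(TRAP_QUESTION) == TRAP_EXPECTED
    sensible_open = not looks_like_nonsense(row.get(OPEN_QUESTION, ""))
    return passed_trap and sensible_open

responses = [
    {"q_trap": "Somewhat agree", "q_open": "More funding for medical research"},
    {"q_trap": "Strongly agree", "q_open": "More funding for medical research"},
    {"q_trap": "Somewhat agree", "q_open": "sdfgh"},
]
clean = [r for r in responses if keep_respondent(r)]
```

In this toy example only the first respondent survives: the second failed the trap question and the third gave a nonsense open response.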
Analysis
Our section – Download the Data – provides access to the full data tables for Phases 2 and 3, broken down by demographic groups. In general, sample sizes of less than 100 are considered too small for statistical analysis, and are avoided throughout our analysis. For a sample of over 8,000 the margin of error is +/- 1%, meaning we can be 95% confident that the true proportion lies within 1% of the reported statistic at the top level.
When samples are split, either through different message testing or through crossbreaks, this margin of error will increase. As a rough guide, the margins of error for different sample sizes are:
- 4,000 sample size: 2%
- 2,000 sample size: 2%
- 1,000 sample size: 3%
- 500 sample size: 4%
- 100 sample size: 10%
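The figures above are consistent with the standard worst-case formula for the margin of error of a proportion at 95% confidence, which assumes a true proportion of 50% (the point of maximum uncertainty). A quick sketch:

```python
import math

def margin_of_error(n: int, z: float = 1.96) -> float:
    """Worst-case 95% margin of error for a proportion, assuming p = 0.5."""
    return z * math.sqrt(0.25 / n)

# Rounded to the nearest whole percent, these match the rough guide above.
for n in (8000, 4000, 2000, 1000, 500, 100):
    print(f"n = {n:>5}: +/- {margin_of_error(n) * 100:.1f}%")
```

For example, a sample of 1,000 gives 1.96 × √(0.25 / 1000) ≈ 3.1%, which rounds to the 3% quoted above.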
Segmentation Analysis
See our Segmentation section for the results of our segmentation analysis.
The attitudinal segmentation carried out on the larger sample as part of this work was produced through a combination of factor analysis and k-prototypes clustering, which allows continuous and categorical data to be combined. A number of segmentations were produced throughout the analysis, varying in: the number of “clusters” or “segments”; the technique used (which included K-means with and without prior factor analysis, and hierarchical clustering); the questions involved in the segmentation; and how the answer options were grouped together (e.g. whether “Strongly Agree” and “Agree” responses were considered identical).
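What makes k-prototypes suitable for mixed survey data is its dissimilarity measure (Huang, 1998): squared Euclidean distance on numeric attributes plus a weighted simple-matching count on categorical ones. The sketch below shows only that measure, not the full clustering; the survey fields and the gamma weight are illustrative assumptions, not those used in this analysis.

```python
def kproto_distance(a, b, numeric_idx, categorical_idx, gamma=1.0):
    """Mixed-type dissimilarity: squared Euclidean distance on numeric
    attributes, plus gamma times the count of mismatched categorical ones."""
    numeric_part = sum((a[i] - b[i]) ** 2 for i in numeric_idx)
    categorical_part = sum(a[i] != b[i] for i in categorical_idx)
    return numeric_part + gamma * categorical_part

# Hypothetical records: (risk-appetite score, worry-about-future score, answer)
r1 = (0.8, 0.2, "Agree")
r2 = (0.6, 0.4, "Disagree")
d = kproto_distance(r1, r2, numeric_idx=[0, 1], categorical_idx=[2], gamma=0.5)
```

The gamma parameter controls how heavily categorical mismatches count relative to numeric distance; choosing it (or letting a library estimate it from the data) is part of the tuning that produces different candidate segmentations.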
The final segmentation framework was decided through qualitative analysis of the different segment outputs. Some segments (namely the Ideologically Aligned, Ideologically Conflicted and Issue Driven) were found relatively consistently across methodological approaches. The segmentation which split the remaining cluster of people into Future Focussed and Present Focussed was chosen because we felt it to be useful for understanding how different issues and focuses could appeal to this group, who were typically inclined to be supportive of R&D but spanned political and ideological outlooks. An alternative approach could have incorporated participation in different R&D-related activities into the segmentation, although this was ultimately rejected as it mainly split out those who gave to charity, which we felt was not a granular enough distinction.
Attitudinal segmentations like this are by their nature noisy, with segments often including participants who do not fit the qualitative interpretation applied to each segment. Further, given that segmentations tend to be a way of identifying latent information in the data, it is always possible that future research exploring the same segments in more detail will uncover a novel way of understanding a group that better fits the data. The outputs of this segmentation are intended to demonstrate one way in which the data could be interpreted and used, alongside more traditional demographic splits.