Part 1: The Mission
- To describe the importance of having process and outcome goals and the differences between them.
- To develop process and outcome goals in conjunction with evaluative criteria for those goals.
- To provide examples of ways in which process and outcome goals could be evaluated.
Now that you have identified the general purpose for your communication campaign, it is time to set specific goals. A tried and true tool for goal setting is the SMART acronym. Specifically, good goals are:
- Specific (What will you do? What will you accomplish?)
- Measurable (How will you know you have achieved your goals?)
- Attainable (Are you reasonably certain of success?)
- Relevant (Why is the goal even important?)
- Time bound (What is the time frame for completion?)
But how can you know for sure your goals are SMART? The best way to go about goal setting is actually to start with the end — the evaluation. What?!? That seems a little backward. Are we really asking you to think about how you will evaluate your communication program before you have even designed it? The answer is…yes! This is because goals and evaluation criteria are best developed in conjunction with one another. Having a good idea in mind about how your program could and should be evaluated actually helps you to design a better program — just like a grading rubric helps to focus you and keep you on track for a class assignment!
Developing goals and evaluative criteria
You should have goals and evaluative criteria related to both the process and the outcome of your communication campaign.
Ultimately, the success of your communication plan can be measured by answering one simple question: Did you achieve your goals? This illustrates why it is important to spend time thinking about and verbalizing your goals and evaluation from the outset.

Process goals and evaluations are focused on the mechanics of the plan and its implementation. For example, you might want to set goals and evaluative criteria around whether your message reached the target audience, the consistency and timeliness of messaging across different platforms, how information flowed within the communications team, and so on. These areas of focus will be highly dependent on the scenario but could include components that you view as particularly critical to the success of the program and/or components where you anticipate having problems. As much as possible, your process goals and evaluation criteria should be mapped out in advance, but you should also revisit them throughout the communication process, because chances are that you will encounter barriers (or opportunities!) that you did not anticipate, and you may even have to adjust or change your goals accordingly. One component of process evaluation that is often ignored is efficiency: Did you manage to implement your program and achieve your goal with the least amount of wasted time, energy, money, etc.?
Outcome goals and evaluations are focused on the impact of your plan on its target audience. If your goal was to inform, did people understand more about the issue in question after versus before your communication effort? If your goal was to change behavior, did the behavior of your target audience actually change?

‘Unmonitored for outcome, risk communication consumes and wastes valuable resources, are ineffective, and create a false sense of achievement on those who are responsible.’ – Gaya Gamhewage, World Health Organization

That being said, these are lofty goals and can be very difficult to measure. That does not mean that you should not strive for them, but it might be wise to come up with some creative but still impactful outcome aims. For example, did your communication effort increase your audience’s access to information about the issue in question? Did people feel empowered as a result of your communication? If you are involving stakeholders in the communication process, an aim could be whether those stakeholders felt that they were engaged in a positive and meaningful way.
Once you have decided on your goals and evaluative criteria, you will need to outline how you will conduct the evaluation. Again, this is important to do at the beginning because it will help you to assess the feasibility of your goals and evaluative criteria.
Communication program evaluation
There are three key times when an evaluation can take place:
- Before implementation. This is called pre-testing and involves trying out all or part of your communication plan, ideally on representatives of your intended audience. For example, you might hold a focus group with stakeholders to review your risk communication materials and make sure that everyone understands the message and finds it compelling.
- During implementation. It can be helpful to have a mid-point review. You often can’t know what works and what does not until you have started to implement, and there may be unforeseen barriers and opportunities. This is usually more of a process-focused check-up than a full evaluation; the purpose is to make sure that everything is on track. For programs that take a long time to bear fruit, you may need several mid-point check-ins.
- After implementation. Once there has been time for your communication effort to bear fruit, it is time for a more formal and in-depth evaluation. This is often focused on outcomes, but don’t forget the process component, because the two go hand in hand. Without a process review, it will be hard to identify the root causes of your success or failure.
The importance of audience and stakeholder pre-testing
by Taneille Johnson (SPPH 552 2020W1)
For today’s blog post, I’m looking at a campaign run by the BC Government on overdose prevention and de-stigmatization. The reception of this campaign by an important advocacy group is an example of the importance of identifying stakeholders and pre-testing communication messages.
This campaign originally ran in 2018. To me, the target audience was likely upper-middle-class individuals, and the aim was to raise awareness that anyone could be a drug user (i.e., drug users are NOT only lower-income individuals, and those who use drugs are “real people too”).
Shortly after this campaign aired, there was outcry from the Canadian Association of People who Use Drugs. This group argued that the government’s campaign shifted the focus onto individual drug users and left out many of the complex systems issues that arguably play a larger role in the opioid epidemic.
This advocacy group quickly released remixed posters onto social media (see below, taken from Facebook: https://www.facebook.com/notes/canadian-association-of-people-who-use-drugs/capud-launches-new-anti-stigma-campaign-aimed-at-bc-provincial-government-/1063640897122610/)
Could this reaction and outcry have been prevented by consulting this advocacy group ahead of time? Was audience and potential audience reaction considered? Did the government consider partnering with advocacy groups already working on de-stigmatization? The government responded by saying that they stood by their original ad campaign and that the goal was to “bring humanism” into the opioid crisis (see https://www.cbc.ca/news/canada/british-columbia/advocacy-group-takes-issue-with-b-c-government-s-ad-campaign-to-fight-opioid-crisis-1.4773781).
For me, this leads to a larger question of the duty of government to engage with stakeholders prior to communication campaigns. The sticky point is that it is impossible to expect the government (or any communicator) to anticipate every hurdle and consult with every actor.
While the advocacy group may have disagreed with the overall message, perhaps there could have been a middle ground. Perhaps the government could have run the original ad as the first in a series and then followed up with a second ad drawing attention to the systems issues. Regardless, I think this ad (and the ensuing outcry) serves as a reminder to identify your stakeholders or key advocacy groups and pre-test the message!
A process evaluation relies largely on internal data sources, for example risk communication plans, messages, and communication products (e.g., media interviews, written and visual materials, internet posts, etc.), and interviews with those involved in the communication process.
An outcome evaluation, on the other hand, relies largely on external data sources such as surveys, interviews, and focus groups with segments of your target audience, usage tracking for websites and social media, and data to identify a change in behavior.
This is not rocket science! Program evaluations are often omitted from communication plans, and we think it is because people make them more complicated than they need to be. For some reason, the word ‘evaluation’ triggers the assumption that it requires some complicated, energy-draining, and time-consuming process. In reality, if you need this sort of process, your goals and evaluative criteria are probably far too complex. The goal of your evaluation is not to completely, utterly, and undeniably prove that you have achieved your goal, but rather to provide reasonable evidence that you have made steps in the right direction.
Step 1: Identify the goals for the evaluation. These will be derived from your process/outcome goals and evaluative criteria.
Step 2: Determine what data you will need to perform the evaluation.
Step 3: Collect those data.
Step 4: Analyze those data.
Step 5: Draw conclusions and act on those conclusions by modifying your plan.
We developed a communication campaign for rural Sri Lankans in communities with a high incidence of dog and human rabies.
The purpose of this campaign was to educate adult community members about rabies. A process goal was that 75% of adults in the target communities would have seen and read the poster. An outcome goal was that people who had read the poster would demonstrate an increased understanding of the risks associated with rabies and how to mitigate those risks. The process evaluative criterion was the self-reported rate of seeing and reading the poster, measured using one-on-one interviews conducted after the communication campaign. The outcome evaluative criterion was a demonstrated increase in understanding regarding rabies risk and risk mitigation between one-on-one interviews conducted before and after the communication campaign.
The evaluation process was as follows:
Step 1. The goal of the evaluation was to estimate the proportion of the target audience who had seen and read the posters and to assess whether or not those who had read the posters demonstrated an increased understanding of rabies.
Step 2. It was determined that data would be collected through one-on-one interviews with a subset of adults identified by a community leader.
Step 3. Participants were interviewed before and after the educational campaign.
Step 4. Data were analyzed quantitatively and qualitatively.
Step 5. Data were used to determine whether goals had been met and to identify whether important gaps remained.
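The arithmetic behind an evaluation like this is deliberately simple: the process goal is a proportion, and the outcome goal is a pre/post comparison. As a minimal sketch, using invented interview responses rather than the actual Sri Lanka data, it might look like this:

```python
# Sketch of the evaluation arithmetic, using invented interview data
# (these numbers are illustrative only, not the actual campaign results).

# Steps 2-3: data from one-on-one interviews with a subset of adults.
saw_poster = [True, True, False, True, True, True, False, True, True, True]

# Pre/post knowledge scores (e.g., correct answers out of 10) for the
# participants who reported reading the poster.
pre_scores = [3, 4, 2, 5, 3, 4, 3, 5]
post_scores = [6, 7, 5, 8, 6, 7, 5, 9]

# Step 4: analyze.
proportion_seen = sum(saw_poster) / len(saw_poster)
mean_change = sum(post - pre for pre, post in zip(pre_scores, post_scores)) / len(pre_scores)

# Step 5: compare the results against the goals set at the outset.
process_goal_met = proportion_seen >= 0.75   # goal: 75% saw and read the poster
outcome_goal_met = mean_change > 0           # goal: increased understanding

print(f"Saw/read the poster: {proportion_seen:.0%} (goal met: {process_goal_met})")
print(f"Mean knowledge change: {mean_change:+.1f} (goal met: {outcome_goal_met})")
```

In practice the interview data would also be reviewed qualitatively, and a real analysis would account for sampling (the subset was identified by a community leader, not drawn at random), but the core comparison of results against pre-set goals is no more complicated than this.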
- Your goal is to clearly define what success will ‘look like’ in the context of your communication campaign. Goals should be specific, measurable, attainable, relevant, and time-bound (SMART).
- Goals should be developed in conjunction with evaluative criteria, which are objective measures that will help you determine if (and prove that) you have achieved your goals.
- You should have goals and evaluative criteria related to both the process and outcome of your communication plan. Process goals and evaluations are focused on the mechanics of the plan and its implementation. Outcome goals and evaluations are focused on the impact of your plan on its target audience.
- Once you decide on your goals and evaluative criteria you should outline how you will conduct the evaluation, including the timing, data sources, and process.
- Fig 1.2.1 Sri Lanka July 2010 © john.nousis is licensed under a CC BY-NC-SA (Attribution NonCommercial ShareAlike) license