“Not everything that counts can be counted, nor everything that can be counted counts” is one of my favourite quotes. I use it as inspiration every time I approach a new adoption initiative. I think of it when trying to understand and define an adoption opportunity (see Awareness of Practice/Problem Definition), and when understanding and analysing the enablers and barriers to adoption. It is just as relevant when trying to measure adoption and the success of an initiative. The thought of counting/measuring something that doesn’t provide any utility, insight or benefit genuinely hurts. To avoid this “pain” you need to spend a lot of time thinking critically about the outcome you’re trying to measure: why it is important, and what the metrics will tell you. My personal approach to measuring success is to think of the data as a source of light; the degree of illumination varies, from very poor to very clear. I’m always looking to shine a spotlight rather than hold up a candle.

One metric to rule them all
There is only one way to determine whether adoption has occurred, and that is to quantify the number of producers who are currently using, have never tried, or are no longer using the practice you are focussing on. This gives you a baseline dataset. This data is critical: it may indicate that the practice you are interested in is more widely adopted than you thought, in which case you may rethink the need to influence change in this area. Or it may confirm that the level of adoption is low, or lower than desired, and that you therefore want to develop an adoption plan to see if there is an opportunity for greater uptake. Either way, be aware that seeing changes in practice takes time.
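As a minimal sketch, the baseline described above can be tallied straight from survey responses. The three category labels below mirror the ones in the text, but the data and labels are illustrative, not a standard survey instrument:

```python
from collections import Counter

# Illustrative survey responses: each producer reports their current
# status with the target practice (invented data for the example).
responses = [
    "currently_using", "never_tried", "no_longer_using",
    "currently_using", "never_tried", "never_tried",
    "currently_using", "no_longer_using", "never_tried",
    "currently_using",
]

counts = Counter(responses)
total = len(responses)

# Baseline adoption rate: share of producers currently using the practice.
adoption_rate = counts["currently_using"] / total
print(f"Currently using:  {counts['currently_using']} ({adoption_rate:.0%})")
print(f"Never tried:      {counts['never_tried']}")
print(f"No longer using:  {counts['no_longer_using']}")
```

Repeating the same tally at a later date against this baseline is what actually tells you whether practice change has occurred.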
Take, for example, the adoption of no-till, an agricultural technique for growing crops or pasture without disturbing the soil through tillage, thereby decreasing erosion. It is considered an adoption “success” story given that most (>95%) grain producers switched to this practice. But as the graph below shows, the rate of adoption was not fast: it took over 30 years to become a well-established practice. So when it comes to measuring adoption, be prepared to be patient.

It is important to note, though, that not every practice will be as widely adopted as no-till, and the uptake could occur more rapidly (but we’re still talking years), or more slowly. The peak level and rate of adoption will be influenced by the size of the target population and how well you’ve understood and conveyed the relative advantage and learnability of the practice to that population. If you are aiming for 100% uptake across an entire industry, be prepared for a lifetime (or three!) of work. While most people will rightly agree that this is an unrealistic expectation, it begs the question: what is realistic? Without an adoption plan you’ll never know… what was the expression? “If you don’t know where you’re going, any road will take you there.”
Unfortunately, within the RD&E system there is a tendency to measure success only in terms of outputs from research and extension projects, and to badge those as a measure of adoption. These types of metrics hold some value if used in the right context, but claiming adoption success because 85% of participants said they intend to adopt, or 95% increased their knowledge, doesn’t cut it (more on that below). Simply put, if you haven’t established what current practice is, you don’t know what level of adoption has occurred; and if you are not prepared to ask producers at a later point in time whether they have changed to your target practice, you won’t know if your research and extension efforts have been adopted.

Other measures of success
While measuring practice change sits alone at the top of the steps to adoption, there are other metrics that can be collected to indicate whether the activities at each step are successful. For example, the “Awareness” step focuses on communication, so success should be measured in that context, e.g. how many people did we want to make aware of a new practice, and how many are now aware due to our activities? Another way to measure success at this step might be, if you provide links to further information (e.g. a website or web article), was there an increase in “hits” following the communication campaign? Taking these kinds of measurements, with some target in mind, means you can choose whether to move to the next step of adoption (Interest) or reinvest in further Awareness activities to reach the desired target.
Having created curiosity through awareness, it is important to retain potential adopters’ interest in a practice. Ensuring that knowledge around the innovation is easily accessible and digestible by the target population is critical to achieving this. Measuring the success of knowledge transfer activities can be as straightforward as measuring participants’ understanding of the topic before and after an extension event, or checking to see if they learnt something new from the experience. These types of metrics allow you to see if you’ve filled a gap in producers’ knowledge and understanding, e.g. the National Renewables in Agriculture Conference and Expo in 2019 saw a 600% increase in participants’ knowledge of on-farm opportunities for renewables, i.e. they went from an average of 0.8 out of 5 to 5 out of 5 when rating their knowledge (I think you could call that a success). Similarly, if knowledge is low both pre and post, you know you haven’t met the mark; and if knowledge/understanding is high to begin with, then maybe lack of knowledge is not the issue (so don’t keep pushing out information!). This metric doesn’t necessarily have to be associated with an extension activity either. It may be something you measure prior to developing an extension plan, to determine whether lack of knowledge around a target practice is a problem worth solving.
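One way to express a pre/post knowledge shift like the one above is as a relative change in the mean self-rated score. This is only a sketch of the arithmetic; the ratings below are invented, not the conference’s actual data:

```python
def knowledge_gain(pre_scores, post_scores):
    """Relative change in mean self-rated knowledge (e.g. on a 0-5 scale)."""
    pre_mean = sum(pre_scores) / len(pre_scores)
    post_mean = sum(post_scores) / len(post_scores)
    return (post_mean - pre_mean) / pre_mean

# Invented ratings out of 5, before and after an extension event.
pre = [1, 0.5, 1, 0.5, 1]      # mean 0.8
post = [5, 4.5, 5, 5, 4.5]     # mean 4.8
print(f"{knowledge_gain(pre, post):.0%} increase in mean self-rated knowledge")
```

The same comparison also flags the two failure cases in the text: low-pre/low-post means the event missed the mark, while a high pre-event mean suggests knowledge was never the barrier.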
The “Intent to adopt” step, to me, is a fascinating one to measure. The reason is that I think it is flawed to ask producers directly, “do you intend to adopt this practice?” Often this question is asked on the back of a workshop/seminar/R&D update, and it is as far as we go in supporting producers through the steps to adoption. The metric is flawed because, while producers may indicate they intend to adopt, we have no way of knowing if they will follow through. They may have answered in the affirmative because it was the polite thing to do; they may genuinely be inspired to adopt in the moments following a workshop or seminar but, once they get back on farm, other priorities take precedence; or they may have said yes because it is easier than saying no, i.e. I just want to get home and don’t want to have to explain why I’m not going to adopt. Intending to do something and actually doing it are two very different things. We all have intentions to do things and then fail to follow through… for instance, I intend to become healthier, but right now I’m reaching for a chocolate bar to get me through to the end of the day… 😔
The intent-to-adopt challenge leads me to another point: how often do we follow up with people who indicated they do not intend to adopt? It would be far more insightful, from an adoption perspective, to understand these respondents’ reasoning, rather than gushing over the numbers/proportions of people who indicated they intend to adopt. Remember, not everything that can be counted, counts!
When measuring intent to adopt, we really should be seeking to understand what factors prevent or challenge producers from adopting a practice. Another metric/question to consider is whether producers would be interested in trialling the focus practice (i.e. the next step in the adoption pathway). A “no” to trialling could be read as a failure to change their perception of the innovation, i.e. they don’t intend to adopt. A “yes” would be a positive response, but even that doesn’t mean it will happen. An additional question may be “do you need support in trialling?” in order to understand whether the intent will translate to action. Either way, when it comes to the intent-to-adopt step (and associated activities), there is considerable scope to develop a better understanding of producers’ perceptions of practices. We just have to ask the right questions. This means leaning into, not away from, the discomfort of seeking and receiving feedback that isn’t always positive.
“The trouble with most of us is that we would rather be ruined by praise than saved by criticism” – Norman Vincent Peale
But that’s just my view on measuring success. I look forward to hearing people’s perspectives when it comes to adoption metrics: what they think counts, and what doesn’t!
