How Do We Know When Change Has Happened?

GS INSIGHTS

Riddle me this: What do parenthood, youth work, training, and systems change have in common?

The answer is simple: positive change is not easy to measure.

I am a parent, I began my career in youth work, and I established myself as a trainer.

In the last half of my career, I have sought to understand and facilitate change for the greater good in organizations, communities, and systems.

In each of these four areas, we usually cannot know we have been successful except in retrospect, when we look back and see that:


  • our children have grown into the people we hoped they would become;
  • the youth we tried to guide have found their path;
  • the people we trained have learned, used, and enhanced the skills we taught; and,
  • the systems we have attempted to change have so completely adopted a new normal that everyone thinks “this is the way it has always been.”

True confession: I do evaluation research, but I do not love it as some of my colleagues do. It has purpose, it has value, and I learn from it. However, the pracademic in me says it has been misused in the field of system change. We need to know we are making progress amid system change, and we need evaluation research to inform practical work in real time, not just to establish “evidence-based” practices.

This brings me to the subject of this article: pragmatic, real-time measurement of change in systems, whether that system is a community, an organization, or a complex adaptive system made up of many of these.

What I am referring to here often goes by the name of developmental evaluation. I am quite fond of this definition from Cameron Norman’s Censemaking.com: “An approach to understanding the activities of a program operating in dynamic, novel environments with complex interaction…it focuses on innovation and strategic learning rather than standard outcomes and is as much a way of thinking about programs-in-context and the feedback they produce.”

In adopting and utilizing a developmental evaluation approach, I have learned several lessons. Whether you are a funder, a grantee, or a practitioner, you may find them useful. I offer them here for those times when you are trying to monitor and measure change in real time.

First, think ahead to what you hope will be different because of the change effort. I heard that collective “Well, duh!” out there. Of course, you already do this! Let me add a little twist: do not just think ahead, but engage in wild informed speculation about what could be different. System change is made unpredictable by nonlinear interactions that create many unintended consequences, some good, some bad, and some incredibly innovative. Because of the dynamic nature of complex systems, we can do little more than speculate. However, wild informed speculation is not pure guesswork; it is guesswork supported by your homework.

Years ago, I worked on behalf of a federal agency to find a way to measure “community mobilization” (CM) as part of a national project. The hope was to come up with a theoretical framework that could be widely applied to initiatives which utilized CM and inform “best practices.”

Put briefly, it mostly happened. However, the rest of the story is that the effort did not quite yield what the agency wanted, because I was propelled into the field of systems theory and change before that really became a “thing.” It would be a few years before that agency, or many other federal agencies, embraced systems theory, either as systems thinking or systems change. In that moment they would have preferred a recipe, a formula, or at least a checklist that grantees could use for assessing change.

Job one was to clarify the meaning of CM and then to clarify what could be different as a result of it. What followed was a deep dive into the existing literature and many days of reflecting and writing on lessons learned over the years from various community change projects. It became more useful to differentiate between “community engagement” (relationship building) and “community mobilization” (issuing calls to action) and to consider them as complementary processes. In the end, this led back to the wild informed speculation that what would be different because of community engagement and mobilization would be sustained community change.

Second, dare to ask “Okay, but what does that mean?” Once the wild informed speculation has been made, it is necessary to query it with questions such as: What does that look like? What are the factors within the community that need to change?

The focus shifted to unpacking the concept of “sustained community change.” This unpacking process, by the way, still involves a fair amount of wild informed speculation. Speculation led to identifying the key factors of attitudes, policies, practices, and systems within the community. Further, speculation led to the belief that sustained community change would be characterized by an evolving community culture that embraced and institutionalized supportive attitudes, policies, practices, and systems in relation to the community change. Because the original work was done in and with communities which were trying to embrace new norms regarding adolescent sexual health education, a historically controversial topic in many communities in the United States, the context of social and cultural controversy has always informed this work.

Third, take a flying leap into a process you can monitor. Once the question “What does that mean?” is answered, it is time to get practical. The next question is: What will tell us when these factors are moving in the direction of being supportive of the change? When we focus on this question, what emerges are elements of change that we can monitor and that inform us in real time whether the change effort is moving forward.

Again, it was time to return to wild informed speculation. It involved long discussions with colleagues about the elements of each factor and how they worked together to bring positive change. In the end, the hypothesis emerged that “community engagement guided by a high-performing infrastructure leads to sustainable community change progressing through a measurable sequence” (Klaus & Saunders, 2016[1]).

Finally, accept that your wild informed speculation will miss, and your flying leap will belly flop. Ah, this is the beauty and fun of working with communities and other complex adaptive systems! No two communities are alike, the factors and elements informing community change are different in each, and the people and players are never the same. As a result, we cannot be wedded to our wild informed speculation and our hypothesis. We must be open to adapting them to the unique context and nature of each system and community.

The original work I described briefly here has evolved into a framework for “doing” community and system change based on operating principles that are animated by core tasks. The core tasks, in turn, need to be tailored to each community or system. The developmental evaluation strategy remains true to monitoring movement on the core tasks and operating principles while being adapted for each community in which we work. You can learn more about the framework and developmental evaluation by downloading Tenacious Change: Unlocking the Power of Collective Change Leadership.


[1] Thomas W. Klaus & Edward Saunders (2016): Using collective impact in support of communitywide teen pregnancy prevention initiatives, Community Development, DOI: 10.1080/15575330.2015.1131172