Saturday, October 18, 2008

Impact assessment of knowledge management interventions

Let me use (part of) a rainy Saturday afternoon to try and organize some of my new thoughts about impact assessment. We (a team of three) are doing a study on Monitoring and Evaluation of Knowledge Management Interventions. I have taken part in 8 interviews and read a pile of papers around this topic. We are blogging some of it on the Giraffe team blog, and we are currently working on a draft of the publication, hoping to do justice to all the interviews and literature.

Here are my personal insights, or rather developments in my own thinking. When I worked on development projects I wasn't always enthused about monitoring and evaluation, apart from some participatory impact monitoring (PIM) efforts. However, working with various communities of practice, I keep coming across the need to monitor and evaluate their impact. After all, the aim is not simply to have a community of practice, but to have a functional community of practice, and by functional I mean stewarding knowledge and innovating practices. What has struck me in the interviews and literature is the following:

1. One remarkable thing is the concept of Knowledge Management itself. It is such an accepted, ordinary term for me, but not for everyone. It is a discipline I like because of its integrated nature, fed by information technology, psychology, the social sciences, etc. The people who have negative associations with the term seem to find it difficult to see how you can manage 'knowledge'. Personally, I don't think knowledge management is about managing knowledge, but about managing knowledge workers, organisational structures, processes and systems. In our definition it is very close to terms such as Organizational Learning. Chris Mowles felt that knowledge management has been co-opted too much by management science, but I don't think that is the case in the development sector.

2. I see that I need to be more careful in distinguishing between measuring (using metrics like number of kilos etc.), assessing (determining the value), and reading (picking up signals, evolving understanding). I often used these terms interchangeably, but they are different. Reading organizations is a powerful term that we used in Ghana when I worked with SNV (thanks Laurent!), but it took some time to see the link with this subject. I think reading is underestimated: formal evaluation exercises take quite an investment, but informal readings by actors and sponsors take place all the time. Maybe we can more often trust the informal readings by actors instead of waiting for the report of an 'objective' evaluator.

3. The need for a distinction (and often separation) between a developmental assessment and an extractive assessment. The first type of assessment aims to improve the situation and make sure the actors gain insights themselves, upon which they can act. An extractive assessment (sometimes called assessment for accountability) is aimed at proving value to sponsors or donors who are outsiders, in order to secure continued support. I think it's useful to keep the two aims separate more often. Lumping them together in one process can be very dangerous, as you will never be completely open when you are keen to prove something. A developmental assessment, or 'monitoring in the service of learning' as CDRA has neatly coined it, has its limits when there is an extractive purpose too. I'm not saying it is impossible to combine the two purposes in one process; I think it is very well possible where there are fairly equal, mature relationships between actors and sponsors. However, in the development sector this is always a tricky thing. Who judges? What kind of decisions will be made on the basis of the results of the assessment? I've done it too: evaluated a network with the purpose to 'learn' while at the same time everybody was aware that the donor had serious doubts.

4. The language of learning is another powerful insight: the way you talk about learning and the way you experience learning determine your choice of knowledge management interventions and the way you measure. Do you believe in informal learning, in training, or in both? When the purpose of an assessment is extractive and the language of the sponsors is not aligned with the language of the assessment, it may not have the intended effect. Continuous conversations between the knowledge actors and the sponsors may be more useful in that case. But if a person doesn't believe in an intervention, will he or she be convinced by an assessment? Probably only when accompanied by the right conversations, not by a report. Discuss what change process the knowledge management intervention should bring about and choose an appropriate way of monitoring the changes.

5. Balance the cost of an assessment with the expected outcomes. It was very remarkable in the interviews with people working in the profit sector that the whole topic of monitoring and evaluating the impact of knowledge management interventions seems less relevant to them. Why invest in an assessment when, as a manager, you can see that something is changing in the right direction? Or when the cost of the knowledge management intervention is relatively small? Unless the outcomes will really be the basis for a change in strategy or a major decision, why do you want to assess the impact in the first place? The fact that impact is not formally assessed does not mean there is no impact, or that people don't have a sense of the impact. Managers have to stay in touch with reality and trust their professional observations. Investments in a formal assessment may only be justified when it's really important and strategic to know; otherwise, a very light mechanism may do the work.

6. The need to discuss and decide where you stop assessing impact. I am a big fan of the INTRAC ripple model, which makes the distinction between output, outcome and impact very clear. Etienne Wenger shared a similar model, but focused on communities of practice. Using such a model, you can decide where you stop your assessment. Unless data is readily available, going to further levels may mean investing a lot of energy in collecting information that may be useless. The further the ripples are from the centre, the more uncertainty there is in attribution. I enjoyed Mattieu Weggeman's example of a management team that had become more open and collaborative, only for it to turn out that the director had fallen in love with a legal person, and that this relationship rather than the intervention had changed his attitude. Still, stories may go a long way in explaining changes in all their complexity.

7. Figures, stories and conversations
Etienne Wenger stressed that figures don't mean a thing without the stories explaining them, and that without conversation about both, change is unlikely to follow. I find it interesting that a person from outside the sector mentioned this, since in the development sector this 'rule' applies even more strongly: in the South, the preference for oral over written communication is even stronger. Chris Mowles added his observation that there are far too many monitoring and evaluation figures and reports, and far too little sense-making.

One thing I'm still struggling with is how to prevent stories from becoming mere success stories. Maybe by focusing on the right questions?

2 comments:

Unknown said...

Any idea where I can get the full article?
Jack

Joitske said...

Hi Jack, the article is not finished; it has been submitted to the km4dev journal. If you contact me, I can ask whether you can have the draft version.
