DFATD’s Evaluation of Canadian Aid to Afghanistan: A Missed Opportunity

In March 2015, the Department of Foreign Affairs, Trade and Development released the Synthesis Report: Summative Evaluation of Canada’s Afghanistan Development Program. On April 14, CIPS and its Fragile States Research Network (FSRN) held a panel to discuss the evaluation’s methods, findings and recommendations. This series of blog posts by Nipa Banerjee, Stephen Baranyi, Sarah Tuckey and Christoph Zuercher summarizes key issues discussed at the event, with the aim of fostering informed debate and learning about Canada’s involvement in fragile and conflict-affected states.

Between 2002 and 2012, the international aid community spent around $50 billion in Afghanistan, of which $1.5 billion came from Canada. Every evaluation is an opportunity to learn what went well and what should be improved. But we can only learn from good evaluations. Weak evaluations are, basically, opinions, quite often massaged by the politics of the day. Strong evaluations provide robust and unbiased evidence of what has worked and what has not. DFATD’s summative evaluation falls somewhere in the middle: it is methodologically somewhat underwhelming, its findings are quite predictable, and its recommendations are overly general and not grounded in the evidence the report provides.

We should not be too quick to blame the evaluators. Assessing a decade of development aid is a very difficult task, and evaluations can only be as good as the data available. The nuts and bolts of any evaluation are baseline data and follow-up data; the difference between the two, controlling for other factors, gives an indication of a program’s effects. Apparently, no baseline data were collected. Aid workers were on the ground for more than a decade and could, at some point, have made an effort to collect such data (preferably sector-wide). No such effort was made, and it is unlikely that field personnel ever will until headquarters and the political leadership demand better data. As it is, the evaluators were left with 220 interviews and whatever data the projects’ monitoring and evaluation systems provided; that is not a good basis for a robust evaluation.
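
To make that logic concrete, here is a minimal sketch of how baseline and follow-up data support an effect estimate, using a simple difference-in-differences comparison. The indicator, years and numbers below are hypothetical and are not drawn from the evaluation.

```python
# Illustrative difference-in-differences sketch. All names and numbers
# are hypothetical, not taken from the DFATD evaluation.

baseline = {"program": 0.42, "comparison": 0.40}    # e.g. share of children in school, 2006
follow_up = {"program": 0.63, "comparison": 0.51}   # same indicator, 2011

change_program = follow_up["program"] - baseline["program"]           # 0.21
change_comparison = follow_up["comparison"] - baseline["comparison"]  # 0.11

# The change in comparison districts stands in for what would have
# happened anyway; the remainder is attributed to the program.
estimated_effect = change_program - change_comparison
print(f"Estimated program effect: {estimated_effect:.2f}")  # ~0.10
```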

Another difficulty for the evaluators was that the objectives of Canada’s aid changed quite frequently. The most dramatic shift came in 2008–11, when the focus moved to Kandahar and plans were made to spend half of all aid there; lack of absorption capacity, however, meant that only 29% was actually disbursed there over that period. Education became more important in 2006, human rights and women’s rights in 2007, health in 2008, and humanitarian assistance was prominent in 2008–11. The overall impression is that these shifts were often ad hoc and rarely documented in an overall strategic framework, so the evaluation team had to reconstruct ex post log-frames and theories of change for the different phases. There was no country strategy in place with clearly defined objectives.

So what did the evaluation find? Aid contributed to progress in sectors such as health, education and community infrastructure, but had little or no impact on economic growth, on the capacity of the provincial administration in Kandahar, or on human rights and governance in general. The report notes that more should have been done to build capacity at the district and provincial levels of administration. It also says that progress was made on women’s rights, referring mainly to equity of access to services rather than to change in gender relations. These findings make sense: in a poor and conflict-affected country, it is much easier to provide small infrastructure than to spur social and cultural change in areas such as governance, human rights or gender relations.

One of the most intriguing aspects of the report is the issue of development in hot conflict zones such as Kandahar. The report notes that “understanding the political economy and main drivers of conflict and fragility received relatively little attention in Canada’s Development Program, but Canada is not exceptional in this regard”. Understanding of the local political economy seems to have improved somewhat during the Whole of Government phase in Kandahar, when military and civilian personnel made efforts to better understand the environment and shared information. The report also notes that no clear risk assessments or risk mitigation strategies were in place for major aid programs. Perhaps most importantly, it suggests that the assumption that increased service delivery would make the population less inclined to support the insurgency may simply be wrong. Buying ‘hearts and minds’, it seems, requires more than better services.

While these are important insights, the evaluation report does not provide clear lessons as to what can or cannot be done in conflict zones. It is clear that much of what aid achieved in Kandahar (especially the large infrastructure projects requiring follow-up and maintenance) has been wiped out by deteriorating security and by the hasty closing down of aid programs. But it is not clear whether aid could have done better, given the circumstances.

To get closer to answers to such important questions, evaluators need to embrace counterfactual thinking. Granted, estimating a robust counterfactual requires a great deal of high-quality data, which are often not available. But ‘counterfactual’ is not just a fancy term for statisticians with access to great data; it can and should be a mindset, and those who commission, conduct and consume evaluations should apply it much more often. They should ask what would have happened without Canadian aid to Kandahar: did aid make a difference at all? What results could have been achieved with a different allocation (for example, less investment in large signature projects and more in community-driven development)? Would it have made a difference if more aid had been disbursed via the Afghan government (between 19% and 34% was on-budget) and less through NGOs? Was it a good decision to allocate 12% of overall Canadian aid to Kandahar? Would the same aid dollars have had more impact in a less violent region? Or, on the contrary, would more aid to Kandahar have made a difference?
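
As a purely hypothetical illustration of that mindset (not an analysis taken from the report), one could compare an observed trajectory in Kandahar with a counterfactual built from comparable districts that received little Canadian aid:

```python
# Hypothetical counterfactual sketch; none of these numbers come from
# the DFATD report. The trend in comparison districts stands in for
# "Kandahar without Canadian aid".

observed_kandahar = [0.30, 0.34, 0.41]  # e.g. access to basic services, three survey rounds
comparison_trend = [0.29, 0.31, 0.33]   # same indicator in comparable, less-assisted districts

# Counterfactual: start Kandahar at its own baseline, but let it move
# along the comparison districts' trend.
counterfactual = [observed_kandahar[0] + (c - comparison_trend[0]) for c in comparison_trend]

attributable = [round(obs - cf, 2) for obs, cf in zip(observed_kandahar, counterfactual)]
print("Estimated aid contribution per round:", attributable)  # [0.0, 0.02, 0.07]
```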

Because the report does not make a serious effort to engage with such questions, it provides little help for the next time policy makers decide how to allocate aid in conflict zones.
