What can municipal sustainability reports achieve?

Due to popular demand: reposted from VNG International’s blog (28 May 2015)


“Our struggle for global sustainability will be won or lost in cities” is a quote attributed to UN Secretary-General Ban Ki-moon and a sentiment shared by many people. Cities across the world take their role and responsibility very seriously and are keen on applying effective management and communication tools to promote sustainable development.

From a previous assignment (employed by GIZ to support local monitoring in Ecuador) I know that many Latin American municipalities have developed interesting practices themselves and search for good examples elsewhere. In this context many local decision makers in various countries wonder about municipal sustainability reports – What are their benefits, are they worth the cost and should we start publishing them?
Surprisingly, there are hardly any publications dedicated to these questions. I thus started a search for practical answers and am glad to have teamed up with VNG International. The purpose is to

  • study current practice of municipal sustainability reports (in cooperation with the University of Twente and Erasmus University Rotterdam)
  • make practical use of research results and inform the capacity building work of VNG International in various countries.

This blog post is to outline the project and share emergent findings. A first question is of course: What is a municipal sustainability report?

There is broad consensus about the meaning of sustainability (long-term viability) and of reports being a review of the current state (plus ideally trends and forecasts) and the municipality’s relevant actions and effects.

However, there is no universal definition, and even within countries there is a surprising variety of sustainability reporting forms and formats, making the research question trickier than expected. One key difference is frequency: Some cities write sustainability reports annually (e.g. Dublin) while others have issued reports in four-yearly intervals (e.g. Zurich, Nuremberg).

Further, some documents are called “sustainability reports” while others have similar content under a different title. Yet other cities have combined reports: In Utrecht, for example, the city issues a yearly statistics report (“Utrecht monitor”) with many sustainability-relevant indicators and a discussion of trends which is more elaborate than what some small municipalities may write in stand-alone sustainability reports. Moreover, some claim (e.g. the Integrated Reporting Council) that “integrated reporting” is superior to separate financial and sustainability reports because of better coupling of reporting with decision-making. This argument is especially made for commercial companies but also extended to the public sector.

The city of Melbourne has actually integrated some sustainability reporting guidelines (GRI 4) into its Annual Report and Basel is exploring something similar. However, Amsterdam recently opted for the opposite: it found that integrating broad sustainability indicators into the Annual Report was not effective in all regards and will restart the publication of separate sustainability reports to reach wide audiences.

What to conclude from this quick review? It appears that municipal sustainability reports continue to be a rarity. On the other hand, several cities that are known as sustainability “frontrunners” engage in sustainability reporting and do so in varying ways and for various purposes. In this situation, a case study analysing a set of different cities with different approaches seems most revealing. Six European cities (Amsterdam, Dublin, Nuremberg, Freiburg, Zurich and Basel) were thus chosen for a more detailed analysis of effectiveness. This study is underway with several interviews yet to be held and detailed results to be published by September.

Guy Morin, President of the Government Council of Basel-Stadt
Simone Pflaum, Head of Sustainability Management, City of Freiburg
Marijn Bosman, Municipal councillor, Amsterdam

Some interim conclusions are worth sharing already: Municipal sustainability reports are no “Swiss army knife” that can simultaneously improve internal management and external communication. If they are started with such expectations, they risk becoming “jack of all trades and master of none”. However, if purpose, design, and integration into institutional processes match well, they can be effective tools. I look forward to continuing to report on the relevance of such reporting.

Sustainability by data revolution? Let’s not drown but drive


Suppose there’s a data revolution..

In case you haven’t heard: The UN is calling for a revolution! To be precise, a data revolution. Such officially prescribed upheaval always reminds me of the old peace movement slogan to “imagine there’s a war / revolution … and nobody shows up”. It turns out that this particular phrase is an adaptation of “Sometimes they’ll give a war and nobody shows up”, which in turn is at times wrongly attributed to poet Bertolt Brecht but actually traceable to Carl Sandburg. So much for wrong sources, an issue that occasionally afflicts sustainability data too. But what about the question of who will show up? Will data revolutionise sustainable development as its proponents hope?

Having moved back to the Netherlands last year I currently enjoy the privilege to do research in cooperation with VNG International, the international cooperation agency of the Association of Dutch Municipalities. This is the perfect opportunity to restart blogging at regular intervals (inviting you to comment and to share feedback) and to dedicate this post to some thoughts about the global data revolution.

Let’s start with the positive side. The argument goes that one of the shortcomings of the Millennium Development Goals (MDGs) was untimely monitoring information – too few and too unreliable statistics that came too late (global compilation often lagging several years) to be useful for effective management. Therefore, the UN wants to complement the forthcoming Sustainable Development Goals (SDGs) with a beefed-up international monitoring system. The aim is annual updates of all major stats. Thank goodness the SDGs will also improve on what gets measured, reducing the use of dangerous proxy indicators (such as the MDGs’ lazy equation of drinking water with any kind of tap water, contaminated or not – cf. my previous post).

Map of PM2.5 according to NASA

Further, there is evidence that novel data on sustainability relevant issues can be tremendously powerful – consider the case of air quality in Beijing where the US embassy pioneered the publication of data which seems to have impacted the political and public agenda in China. I also have no doubt that new data developments (big data, open data, surveillance known and unknown to us) will revolutionise our lives in ways that we can’t yet imagine.

However, will data by itself really lead to more sustainable development? Some people seem to be incredibly optimistic and tout the very existence of (standardised, city-based) indicator data as “game changers“. I have to admit that I used to be equally enthusiastic about data production but now consider this naïve and doomed to lead to disappointment. Consider the case of air quality in Europe. Plenty of cities struggle with unhealthy pollution levels, but it appears that it’s neither the publication of data (e.g. on Amsterdam) nor of city rankings that’s causing immediate change – what’s causing action is the threat of legal action and especially of financial fines from the EU. Further, it seems that various sustainability monitoring projects once started enthusiastically in European cities have ebbed away – presumably because they failed to influence public policy and agendas in the way their initiators had hoped.

So why does air quality data seem to be directly impactful in one case but not another? A quick consideration of the context suggests that there are differences in the absolute levels of pollution encountered in Chinese and European cities. Another evident difference is the level of available data – limited in one case, abundant in the other. In some places there is little competition for sustainability-relevant data, in some there is a lot, and many experience (just as with food) the co-existence of hunger and overflow. To just advocate for ever more data is arguably a waste of resources and even counterproductive. Especially in already data-rich settings there’s a real risk of data overload!

Put differently: What type of new data (and, to be fair to fans of benchmarks, of comparisons) would be needed to make a difference to your municipality? What type of data are we currently lacking to revolutionise or “sustainablise”..? The measurement of subjective wellbeing is one potential answer in terms of an alternative and additional metric that is much needed. A few years ago this idea was popularised by various reports (e.g. NEF, Stiglitz-Sen-Fitoussi, OECD, UN General Assembly), yet it is unfortunately not prominent in the (very abundant) set of SDGs that now dominate the UN’s agenda. In most other regards and in most rich countries, however, there is no scarcity of sustainability-relevant data. There, the challenge is much more about the use of data by various stakeholders, including decision-makers and the public. In this context, less can be more, as evidenced by the search for “headline” or “key” indicators designed to funnel attention.

To sum up, the UN’s effort to foster a ‘data revolution’ is generally laudable. So is the use of ‘revolutionary rhetoric’ as a sales tactic to muster political and financial support for something as “boring” as statistical systems. However, it would be foolish to just rely on the old saw that “what gets measured gets managed“. Instead, we need to complement the drive for data with a drive for knowledge. How can one drive the actual uptake and use of sustainability-relevant data?

Regarding sustainability information, a group of Finnish researchers (in this article by Lyytimäki et al) have suggested distinguishing the use, non-use and misuse of indicators. In the table below they present a couple of interesting examples.

[Table: examples of the use, non-use and misuse of indicators, after Lyytimäki et al.]
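As a minimal sketch of how this threefold distinction could be operationalised when coding case-study material, one might tag each observed indicator with its “fate”. All example records below are hypothetical illustrations of mine, not taken from Lyytimäki et al.:

```python
# Hypothetical sketch: encoding the distinction between use, non-use and
# misuse of indicators. The example records are illustrative placeholders.
from dataclasses import dataclass
from enum import Enum


class IndicatorFate(Enum):
    USE = "use"          # indicator informs decisions as intended
    NON_USE = "non-use"  # indicator is published but ignored
    MISUSE = "misuse"    # indicator is cited selectively or out of context


@dataclass
class IndicatorRecord:
    name: str
    fate: IndicatorFate
    note: str


records = [
    IndicatorRecord("air quality (PM2.5)", IndicatorFate.USE,
                    "triggered policy response after publication"),
    IndicatorRecord("composite sustainability index", IndicatorFate.NON_USE,
                    "published yearly, never cited in council debates"),
    IndicatorRecord("GDP growth", IndicatorFate.MISUSE,
                    "quoted as an overall measure of progress"),
]

# Tally how indicators fared across a (hypothetical) set of case studies
tally = {fate: sum(r.fate is fate for r in records) for fate in IndicatorFate}
```

Such a tally would make it easy to compare, across cities or reports, how often indicators are actually used rather than ignored or abused.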

I’m planning to employ this in my current research with VNG International about municipal sustainability reports – stay tuned for more posts on this matter. This way we can hopefully make a little practical contribution to a positive data revolution.

How to find indicators? The case for an online library

As a project manager, have you ever wondered how similar projects chose their indicators and data collection methods? Working for an urban planning department or citizen observatory, are you keen to know how other cities measure quality of life or sustainability? As an evaluator, have you seen planning tools that are utterly incoherent, suffering from indicators that aren’t appropriate or just not measurable..?

Ever since I was first confronted with the need to plug indicators into logframes (still the standard planning tool for international development projects at my previous employers Caritas, CARE and Oxfam) I’ve longed for a library. To browse, to learn, to consult, to gain inspiration. It’s just not convincing that I’d have to keep on inventing wheels (that later turn out to be square instead of round..) without drawing on existing knowledge. How often have we seen indicators made up the night before a project funding deadline that are later ignored throughout a project’s lifetime?

Many people working in the field of monitoring and evaluation share the impression that in results-based frameworks, the selection of appropriate indicators tends to be the trickiest and most poorly done part. On reviewing any given logframe one can generally spot indicators that are incoherent with objectives. In the case of community monitoring systems, the selection of suitable indicators is of course key to the whole endeavor. For both applications, there currently is a marked lack of information resources.


To improve indicator use, a plausible first step is to increase the publicly available knowledge base. Several years ago some aid agencies started compiling “indicator menus”, e.g. CARE in its “proposed new menu of impact indicators” from 2003 and Save the Children in its 2008 document. Some NGOs had gone as far as mandating the use of certain indicators, which later turned out to be costly and unworkable. Based on World Vision’s experience, Alex Jacob nicely concluded that mandatory indicators don’t work.

These are useful first resources, but their scope is limited and their presentation as documents is never as effective (user-friendly) as a web-based library.

In order to obtain feedback on my library idea I put it up for discussion twice (in 2010 and 2013) on the mailing list MandEnews (with more than 3,500 subscribers all over the world). On both occasions there was a lively debate about the risks and benefits of standardisation, yet no objection to the idea of better information resources. To the contrary, many people were keen on this, writing e.g. “Please keep us posted on how this goes as I imagine this is of great interest to a lot of people”.


Expected benefits

The idea is that this indicator library will provide

  • a resource for those searching ideas for an indicator for a particular objective or monitoring theme
  • a means through which non-specialist audiences can start to understand indicators more, and fear them less
  • a greater consensus as to what makes a ‘good’ indicator ‘good’
  • convergence towards more uniform use of certain indicators, thus facilitating comparisons and cross-learning
  • a means through which we promote higher standards and accuracy
  • a contribution to a better design process, making monitoring and evaluation more efficient and ultimately increasing the quality of projects
  • increased interest in and ultimately growth in local sustainability monitoring / community indicator systems


A search for related initiatives reveals a fair number of existing indicator compilations. Most are sector-specific (e.g. on humanitarian aid, health, AIDS, environment, peacebuilding, municipal management). A few are web-based; many others are presented as simple lists or Excel sheets. The existence of this prior work presents an opportunity, as authors or publishers are unlikely to claim intellectual property rights but will presumably be interested in making a free contribution. Setting up a website used to be costly, but the relative costs for both creation and maintenance have decreased, as have the costs of translating material into several languages.

What to include? Defining scope and characteristics

The challenge is making this manageable, which requires defining its scope carefully. What to include? If one considers all topics (from archaeology to zoology) and types (outcome, output, process, etc.) the number of possible indicators is infinite. There are commercial initiatives such as www.kpilibrary.com, which claims on its website that “Over 445,000 members find & discuss over 6,500 key performance indicators”. I’d thus suggest targeting

  • the field of (community) development, sustainability, and quality of life
  • the level of outcomes

Something similar had been tried in the past by NEF but is neither extensive nor up to date; the other most closely related initiative seems to be the “impact builder” developed by BOND for aid and development projects. This database, however, is unfortunately not open to the public.

Considering all that’s around and all that’s missing, this is what I believe would be ambitious but would really add value: an indicator library that is

  • open access (possibly with wiki-style open contributions, though that may require extra maintenance and editing)
  • multi-lingual (say English, French, Spanish to begin with)
  • making indicators searchable per topic (e.g. environmental monitoring), with observations and a discussion of pros and cons, as for any topic one generally has a choice between more sophisticated but expensive methods and more approximate but cheaper ones
  • per indicator, specific advice on appropriate data collection methods
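To make the scope sketched above concrete, a single library entry could be modelled roughly as follows. The field names and the `search` helper are my own assumptions for illustration, not part of any agreed design:

```python
# A minimal, hypothetical sketch of one entry in the proposed indicator
# library. Field names are assumptions, not an agreed schema.
from dataclasses import dataclass, field


@dataclass
class IndicatorEntry:
    name: str
    topic: str                       # e.g. "environmental monitoring"
    level: str                       # the library targets outcome-level indicators
    definition: str
    translations: dict = field(default_factory=dict)  # e.g. {"fr": ..., "es": ...}
    pros: list = field(default_factory=list)
    cons: list = field(default_factory=list)
    collection_methods: list = field(default_factory=list)  # advice per indicator


entry = IndicatorEntry(
    name="Share of households with safe drinking water",
    topic="water and sanitation",
    level="outcome",
    definition="Percentage of households whose tap water meets quality guidelines",
    pros=["directly measures a wellbeing outcome"],
    cons=["requires water quality testing capacity"],
    collection_methods=["household survey combined with water quality testing"],
)


def search(entries, topic):
    """Return all entries filed under a given topic."""
    return [e for e in entries if e.topic == topic]
```

A wiki-style, multi-lingual site would essentially be a friendly front end over a collection of such records, searchable per topic and annotated with pros, cons and data collection advice.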

Ideally, to make this a really useful one-stop shop the emerging website could also include

  • general support for relative newcomers on the whole area of logframes, indicators, data collection methods, etc.
  • a conceptual framework and practical advice on potential data sources that make it easy for local sustainability initiatives to select indicators (taking into account the very different data availability contexts in industrial countries and poorer places)

So how to go forward..?

Who wants to take part? Based on the above “dream” I wrote a concept paper and have started discussing it with a few potential partners. IISD in Canada are quite interested (as this would be an excellent supplement to their compendium of indicator initiatives) and have relevant expertise; they just need a bit of funding, which has to be found elsewhere. Any ideas?

Is tap water always drinkable? Beware of international proxies

Can I use tap water here? In many cities across the world visitors even fear brushing their teeth with what comes out of the pipe. People spend huge amounts of money on bottled water, paying in some countries a 2,000-fold premium, even though bottled water may come with safety problems too. On the other hand, according to many official statistics, piped water equals drinking water. Consider the Millennium Development Goals: their Target 7c is to “Halve, by 2015, the proportion of the population without sustainable access to safe drinking water” and the UN proudly announce that this goal has already been achieved!

MDG target on drinking water and sanitation

Only in the small print do we learn that the UN do not actually have information about water quality. They resort to a so-called proxy indicator instead, equating drinking water with (estimations of the percentage of households with) access to “improved sources” such as pipe systems. The problem, of course, is that piped water, especially in developing countries, is often contaminated. Taking just bacterial contamination (one of many possibilities) into account, technical studies suggest that the current UN calculation “underestimated the progress required to meet the drinking-water component of MDG Target 7c by 10% of the global population” – this represents hundreds of millions of people!

One ought to be aware of the risks of “too approximate” proxy indicators. Summary statements based on poor equations may mislead decision makers (who will see no need to invest in water quality testing if the UN tells them that improved sources are fine) and the public alike.
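A back-of-the-envelope sketch shows how much such a proxy can distort a headline number. All coverage and safety shares below are made-up placeholder values for illustration; only the 10%-of-global-population underestimate quoted above comes from the technical studies:

```python
# Illustrative arithmetic for the proxy gap. The coverage and safety shares
# are hypothetical placeholders, NOT actual MDG figures.
world_population = 7_000_000_000  # rough order of magnitude

improved_source_coverage = 0.89   # hypothetical share with "improved sources"
safe_if_improved = 0.80           # hypothetical share of improved sources that are safe

# The proxy counts everyone with an improved source as having drinking water
proxy_estimate = improved_source_coverage * world_population

# A quality-adjusted figure only counts those whose water is actually safe
quality_adjusted = improved_source_coverage * safe_if_improved * world_population

overcounted = proxy_estimate - quality_adjusted
print(f"People counted as served by the proxy but possibly drinking "
      f"unsafe water: {overcounted:,.0f}")
```

Even a modest contamination rate among “improved sources” thus shifts the headline statistic by a population share in the hundreds of millions or more.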

In a way, defining the level of approximation is a challenge for virtually all indicators that are prevalent in public policy discussions and the media. This applies to politically more contested topics (is GDP an appropriate measure of economic progress? what do PISA test scores say about the performance of education systems?) but also to quite technical ones. For example, in the case of drinking water, testing for E.coli bacteria is interestingly the application of another proxy, since E.coli itself is generally harmless but a good predictor of the presence of other pathogens. The underlying issue for all of this is “quality criteria for indicators”, which I feel like writing about in another post!

What I’d like to put up for discussion now are some general thoughts derived from the frankly “scandalous” MDG drinking water proxy indicator:

1) The more global comparisons one wants to make, the more one needs to deal with issues of poor data quality and limited data management capacity. In the case of the MDGs, universal scope implies using an indicator that is manageable by the weakest country. More local indicators can be fine-tuned to local capacities.

2) The more global comparisons one wants to make, the greater the diversity in settings that indicators need to capture and standardise. For example, MDG indicator definitions need to consider any type of conceivable water supply, from desalination in the desert to snowmelt in polar regions. This is also worth considering for any type of indicator that is strongly influenced by culture, e.g. subjective wellbeing, which is currently much promoted as a global alternative to GDP but difficult to standardise cross-culturally. More local indicators (incl. definitions and methodologies) can be fine-tuned to local circumstances.

What to take away? Given the evident benefits of international comparisons and frameworks such as the MDGs, I wouldn’t suggest stopping work on them. To the contrary, there’s so much to learn from international and cross-cultural definitions, “indicator banks” and associated data collection methodologies. On the other hand, it would be foolish to rely only on poor international proxies and neglect the potential of better, locally more appropriate indicators and monitoring systems, wouldn’t it? In the case of drinking water, plenty of countries and cities can surely do better than merely classifying types of sources: they can publish actual quality data so inhabitants and visitors know whether tap water is safe. How to get this going?