
Lessons learned from monitoring and evaluation experiences in Zambia

12 September 2017

For the partnership between the Association of Anaesthetists of Great Britain & Ireland and University Teaching Hospital, Zambia, the unanticipated challenges of data collection, monitoring and evaluation made for a steep learning curve.

We hear how the partnership resolved these problems, and the lessons they learned for the future.

Partnership formation

The partnership between the Association of Anaesthetists of Great Britain & Ireland and the Department of Anaesthesia at University Teaching Hospital (UTH) Lusaka, Zambia, was originally established to develop a postgraduate anaesthesia training programme at the University of Zambia. The need for a THET-funded project arose when it became clear that serious deficiencies in patient care would continue unless the systems used by clinical staff improved. This project aimed to improve perioperative systems, focusing in particular on creating sustainable and increased capacity for safe anaesthesia care. This would be done by developing the skills of local anaesthesia staff in patient safety, education and leadership.

Background

In April 2016, as part of THET’s HPS Webinar Series, three partnerships presented their approach to monitoring and evaluation. Speaking candidly about the shortfalls of the project he co-directed, Dr Dylan Bould highlights the problems the partnership encountered in the monitoring and evaluation of their project, explains the lessons learned, and gives useful advice for other partnerships.

Problem 1: Data collected for goal-level indicators told a different story to the one the partnership had initially anticipated: things were getting worse, not better. Why was this the case, and what could the partnership learn?

The partnership had problems demonstrating positive change from the data that they collected. The table below details the original project plan with goal, indicators and sources from which the data for the indicators would be collected.

See Figure 1

Prior to the project there was no mechanism for reporting or recording Serious Untoward Incidents (SUIs) at UTH. As well as implementing a system for recording these reports, the project would also focus on a culture change within the hospital to encourage staff to report incidents of this nature.

Staff embraced the new reporting system and the partnership successfully improved the reporting culture. However, this meant that ‘over time we had an increase in the number of SUIs, because one of the things we were simultaneously trying to do was to improve reporting culture’ (Dylan Bould).

Consequently, collecting data on the number of SUIs revealed not only the scale of the reporting problem among anaesthetists at UTH, but also that the measures of progress they had chosen at the start of the project would not demonstrate the actual changes occurring. To capture this change, they added a new outcome, staff demonstrating good practice in patient safety, with the indicator being an increase in the number of SUIs reported.

See Figure 2
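The arithmetic behind this is worth spelling out. The short Python sketch below, with purely illustrative numbers rather than project data, shows how the count of recorded SUIs is the product of the underlying number of incidents and the reporting rate, so recorded SUIs can rise even while actual incidents fall.

# Illustrative sketch, not project data: recorded SUIs depend on both the
# underlying number of incidents and the share of incidents staff report.
true_incidents = {"year_1": 40, "year_2": 36}      # hypothetical underlying incidents
reporting_rate = {"year_1": 0.10, "year_2": 0.60}  # hypothetical share reported

for year in ("year_1", "year_2"):
    recorded = round(true_incidents[year] * reporting_rate[year])
    print(f"{year}: {recorded} SUIs recorded (underlying incidents: {true_incidents[year]})")

# Output: year_1 records 4 SUIs, year_2 records 22, even though underlying
# incidents fell from 40 to 36. The raw count therefore tracks reporting
# culture as much as it tracks safety.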

Problem 2: The UK partner had assumed that the data they needed to collect was available and had been accurately recorded at UTH.

The partners had not given enough consideration to the practicalities of collecting data to demonstrate a decrease in the perioperative mortality rate. They had planned to measure this by comparing the mortality rate at the start and end of the project. To do this, the perioperative mortality rate had to be established at baseline, i.e. before the partnership’s intervention. Dylan explains that collecting this data was much more difficult than anticipated.

‘We hugely underestimated how hard it was going to be… We realised there was a huge amount of data missing; many things were recorded inaccurately’. It was almost impossible to establish the baseline mortality rate because the theatre registers and mortuary records were inaccurate, which meant that demonstrating a decrease in the perioperative mortality rate would not be feasible.
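To make concrete why incomplete registers block a baseline, consider the minimal sketch below (hypothetical record structure and figures, not UTH’s actual theatre registers): a perioperative mortality rate is simply deaths divided by operations, but any record with a missing outcome can be counted neither as a death nor as a survival, leaving the computable rate highly uncertain.

# Hypothetical sketch: records with no recorded outcome cannot be counted
# either way when estimating a baseline perioperative mortality rate.
records = [
    {"patient_id": 1, "outcome": "survived"},
    {"patient_id": 2, "outcome": "died"},
    {"patient_id": 3, "outcome": None},   # outcome never recorded
    {"patient_id": 4, "outcome": "survived"},
    {"patient_id": 5, "outcome": None},   # outcome never recorded
]

complete = [r for r in records if r["outcome"] is not None]
deaths = sum(1 for r in complete if r["outcome"] == "died")

print(f"Operations with a recorded outcome: {len(complete)} of {len(records)}")
print(f"Mortality among complete records: {deaths / len(complete):.1%}")

# With 40% of outcomes missing, the true rate could lie anywhere between
# deaths/total (if every unrecorded patient survived) and
# (deaths + missing)/total (if every unrecorded patient died):
# here, anywhere from 20% to 60%.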

Unable to evidence their project goal of ‘improved patient safety’, the partnership changed it to ‘improved service’, which they thought could be more easily measured. They still deemed it important to establish the baseline perioperative mortality rate as a starting point for recording improvements in the future. Moreover, the partnership regarded the process of establishing the baseline as a mark of improved anaesthesia service in itself.

See Figure 3

The change of focus proved to be beneficial to the partnership. Dylan explains that collecting this data enabled them ‘to go through patient case notes in quite some detail…we discussed what had happened before mortality with the surgical teams. We were able to get independent surgeons, obstetricians and anaesthesiologists to review the cases and got some detailed and interesting data about why patients had died’. Involving staff from other departments at UTH enabled the partnership to understand common avoidable causes of mortality and gaps in record-keeping, as well as to identify other departments within the hospital in need of support.

However, in practice, collecting the data needed to establish the perioperative mortality rate posed a number of problems. Not only had it been assumed that the necessary data would be available to collect; Dylan explains that the partnership also ‘hadn’t budgeted for data collection or analysis in our original grant and a lot of it was done in an ad hoc and rushed way’. It became difficult to meet the demands of data collection: it was time-consuming and required input from many different people who often had conflicting priorities. Data analysis also took longer than expected, and it was difficult to find the budgetary and human resources mid-way through the project.

Advice: Gather and analyse data early on in the project to review the relevance of your indicators; this will help you assess the feasibility of collecting data to evidence your objectives with the resources available (time, finances, skills etc.).

Dylan admits, ‘I wish we had done that a little bit earlier on in the project.’ The problems encountered during this project highlight the importance of:

– Allocating adequate time, finances and staff for data collection during the planning phase

– Starting the data collection and analysis process early

– Assessing the quality of the data and the data collection methods

– Understanding whether your data is useful for evidencing your objectives

Advice: Explore a variety of data sources to strengthen your monitoring and evaluation approach.

To improve the patient service at UTH, staff would be encouraged to use the World Health Organization (WHO) Surgical Safety Checklist.

See Figure 4

Initially, they audited use of the checklist by collecting self-reporting forms from anaesthesia staff. According to this initial data there was widespread use of the checklist; however, during spontaneous observations it became clear that this was not always the case.

The team decided to follow up this method of data collection with systematic unplanned observations, and Dylan explains: ‘we realised that compliance was a lot lower than we expected’. The team verified the data from one highly subjective source by using a more objective method of data collection, leading to better quality data. Subsequently they were able to put measures in place to further embed the use of the checklist within anaesthesia care practice, which they would not have done had they relied on data from self-reporting alone. Gathering data from more than one source (triangulation) led the partnership to verify and question their results, giving them a better understanding of what had been achieved, as well as where additional capacity building was needed.
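As a rough illustration of this triangulation step (hypothetical audit figures, not the project’s actual data), the sketch below compares the compliance rate implied by self-reporting forms with the rate seen in unannounced observations; a large gap flags that the subjective source overstates compliance.

# Hypothetical triangulation sketch: compare checklist compliance as measured
# by self-reporting forms with compliance seen in unannounced observations.
self_reports = {"checklist_used": 46, "total_forms": 50}      # hypothetical
observations = {"checklist_used": 22, "total_observed": 40}   # hypothetical

self_reported_rate = self_reports["checklist_used"] / self_reports["total_forms"]
observed_rate = observations["checklist_used"] / observations["total_observed"]

print(f"Self-reported compliance: {self_reported_rate:.0%}")  # 92%
print(f"Observed compliance:      {observed_rate:.0%}")       # 55%

# The 37-point gap between the two sources signals that self-reporting
# overstates compliance, prompting further capacity building rather than
# a premature claim of success.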

Problem 3: Assumptions made at the design phase resulted in unrealistic targets that were unattainable within the project time frame.

See Figure 5

Another obstacle to achieving the partnership’s original goal was that the partnership had not considered how other aspects of patient care, such as infection prevention and control systems, limited medical supplies or overstretched staff, could undermine their efforts to improve patient safety in perioperative care: ‘We were very ambitious and enthusiastic but realistically trying to show a patient level outcome in mortality over just 2 years, in a very complex institution with many complex problems, was unrealistic’. The partnership should have taken more time during the design phase to interrogate the assumptions they were making about the environment for change at UTH: what factors in a ‘complex institution with many complex problems’ might hinder the partnership’s progress? Without this interrogation, unrealistic targets for change were set.

The team had also made assumptions about the jump from output to outcome. Although they were able to evidence the increased use of the WHO Surgical Safety Checklist, ‘often there simply wasn’t blood or antibiotics available even though it was identified as a need. The problem for us was that even though staff were using the checklist and this was intended to achieve our goal of improved patient safety, it still wasn’t happening because the resources weren’t available in the UTH’.

Advice: Assumptions are the conditions and resources required for the success of your project, but which are beyond your control. Carefully thinking about your assumptions at the start of the project will allow you to consider how your project will be impacted if they are not met, and will encourage you to monitor whether your objectives are being impeded throughout the project.

Problem 4: There was little collaboration between the UK and overseas partner in the design of the monitoring and evaluation plan, resulting in a lack of local input and stakeholder consultation.

‘In retrospect we had nowhere near enough local input in designing the monitoring and evaluation before the project started. I think this was partly why some of the objectives were not as realistic as we wanted them to be.’

The UK team realised this too far into the project, by which time a large amount of time and resources had already been spent collecting data that was not necessarily valuable. Had there been more local input into the monitoring and evaluation design, these issues would perhaps have been identified earlier and a clearer focus established.

United Nations Women (2012) outlines three main purposes of monitoring and evaluation: to provide an overview of the progress of planned activities, to identify problems in the implementation of planned activities, and to inform a change in activities if they are not conducive to reaching objectives. In each instance, local input is essential; local partners are key to implementing the monitoring and evaluation activities and will be able to advise whether they are feasible within the local context.

All of this will save time and resources, allow your project to have a clear focus from the start, and ensure local ownership of monitoring and evaluation from the outset. ‘We should have narrowed the scope of our project much earlier on to some things that we could have measured more effectively’. In the case of Dylan’s project, had the local partner been more actively involved in monitoring and evaluation planning, it would have become clear much earlier that perioperative mortality rate data was not readily available.

Conclusions

Getting the monitoring and evaluation right from the very start of a project may not always be easy, and you may need to change your approach depending on the issues you come up against. The important thing is to be prepared for these changes and, more importantly, to mitigate them by planning for monitoring and evaluation, which includes allocating enough budget, skills and time for data collection. Ensuring local input at the design stage, along with early data collection and analysis, will allow more time to evidence the capacity building that your project is working towards.

References

United Nations Women (2012). Why is monitoring and evaluation important? Available at: http://www.endvawnow.org/en/articles/331-why-is-monitoring-and-evaluation-important.html (last accessed 3 October 2016).

Useful resources

Presentation Series: Monitoring, Evaluation and Learning – full audio recording

Project Planning: Theory of Change

Monitoring and Evaluation: guidance and lessons learned from the IHLFS

Monitoring and Evaluation plan