
The Bane of Monitoring and Evaluation in Public Sector Management

Governments are under increasing pressure from development partners, stakeholders and civil society groups to deliver policy, programme and project outcomes and impacts instead of mere inputs, activities and outputs. In fact, the three-legged functional Human Resource, Financial, and Accountability systems are inadequate without a Monitoring and Evaluation (M&E) framework, because the feedback information generated by the M&E process is key for policy decisions. The older Implementation M&E, involving inputs, activities and outputs, can be employed together with the newer Results-Based M&E, encompassing outcomes and impacts, for public policy success.

While monitoring is the continuous and systematic collection of data on specified indicators of an on-going development intervention, signifying progress towards objectives as well as progress in the utilisation of allocated funds, evaluation is an orderly appraisal of on-going and completed policy, programme and project interventions, including their designs, implementation and results. It determines the relevance of goals, efficiency and effectiveness, impact, and sustainability. Evaluation provides credible and useful information, which is fed into the process of decision-making.

Monitoring (M) is not the same as Evaluation (E), yet the two are complementary. It takes monitoring to observe and gather data, through implementation and results monitoring, for evaluation and analysis to produce evaluation findings and the subsequent reporting. It must, however, be emphasised that M&E requires its own data, generated through implementation monitoring and results monitoring. The only situation in which proxy data (data generated outside the M&E process) is used is when data gathering is difficult, as in conflict zones or epidemic outbreaks, to mention a few. In short, it is not ideal to rely on proxy data. By way of illustration, administrative data cannot be used as M&E data because it is limited in scope, its quality is in doubt owing to decentralised data collection, and it is often difficult to obtain, existing only on paper or haphazardly stored.

The above notwithstanding, Monitoring and Evaluation is fraught with many challenges, especially in Third World countries. The reasons are not far-fetched. Lack of interest in obtaining policy, programme and project outcomes and impacts is the first and foremost debilitating factor working against a fully functioning M&E system. This lack of interest has also fed into insufficient capacity development of staff to handle the rigorous tasks of the M&E process, that is, the design of the framework or matrix, involving outcomes, outcome performance indicators, baselines and performance indicator targets. Though a readiness assessment might not be part of the overall steps in building a performance framework, it provides an analytical contour that assesses organisational ability and governmental preparedness for the M&E system. Readiness assessment, in part, involves critical elements, including institutional roles, responsibilities, capacities, incentives, understanding, and sustainability.

Another pointer to the ominous challenge of effective M&E operationalisation is the reluctance to devote funding to monitoring and evaluation so as to generate relevant data, which can then be analysed for performance findings. M&E is important because evaluation information is fed into policy decisions. Therefore, no M&E means no progress, akin to doing business in the dark. The issues of value for money and accountability also rear their ugly heads. Meanwhile, M&E provides information on progress toward achieving stated targets and goals, and substantial evidence for any necessary mid-course corrections in the policies, programmes and projects being implemented.

Furthermore, the fusing of Policy Planning (PP) with Monitoring and Evaluation (M&E), as in the traditional PPME arrangement, is very problematic. It is not only problematic but also strange, because Policy Planning is totally different from Monitoring and Evaluation, and for this reason, in dispensing the needs and tasks of PPME, M&E is always the casualty. For instance, in the hands of a manager or director with a policy bias, the survival of M&E is in doubt.

Maintenance of monitoring systems is critical to forestall their decay and collapse. Monitoring systems, including auditing and budgeting systems, must continually be managed. Managing and maintaining M&E systems with the right incentives and sufficient financial, human and technical resources ensures the execution of monitoring tasks. Individual and organisational responsibilities should be delineated, and clear relationships need to be established between actions and results. Monitoring systems also need periodic upgrades, involving technology and modernisation as well as training of managers and staff to keep their skills current. This goes a long way to ensure the credibility and ownership of an institution's Results-Based M&E system.

As a monitoring system does not address the strengths and weaknesses of policy, programme and project implementation, evaluation information is necessary to generate appropriate results and to provide lessons learnt, which then inform and influence the decision-making process.

Performance findings must be used to improve projects, programmes and policies because they yield important and continuous information on the status of activities being undertaken. They provide solutions to problems and create opportunities for improvements in implementation strategies.

DR ALPHONSE KUMAZA

Director, Monitoring and Evaluation

Ministry of Tourism, Arts and Culture

Email: getdrkumaza@gmail.com
