Training in Service Delivery

 

Welcome to the programmatic area on training in service delivery within MEASURE Evaluation's Family Planning and Reproductive Health Indicators Database. Training is one of the subareas found in the service delivery section of the database. All indicators for this area include a definition, data requirements, data source(s), purpose, issues, and, if relevant, gender implications.

In the past, training was viewed as an isolated set of activities to address a range of challenges in a service delivery system. Today, training programs are expected to address a broader range of issues, including contextual factors that affect a person's ability to perform satisfactorily and that go far beyond the traditional limits of training. Consequently, competency-based training has become the standard in organizations worldwide.

A large proportion of the personnel to be trained in the context of reproductive health programs will work in a clinical setting. However, a growing proportion of persons to be trained will work in a non-clinical setting; such groups include community health workers, teachers, peer educators, journalists, women's groups, and others. Although this section focuses on training for service delivery, many of the concepts can be adapted to other types of program implementation. The training indicators presented here distinguish two levels of effects: individual and organizational.

Full Text

On the surface, one might consider training an "easy" area to evaluate, thanks to the pre- and post-tests often used in connection with training activities. Although such instruments continue to serve a useful function, they by no means capture the full range of training effects, and the evaluation of training has changed substantially.

First, organizations are no longer content to evaluate based on the number of training events, number of participants, improved scores on post-test instruments, or other process indicators. Instead, competency-based training has become the standard in organizations worldwide.

Second, whereas in the past training was viewed as an isolated set of activities, often the panacea for whatever was ailing a service delivery system, today training programs are expected to address a broader range of issues, including contextual factors that affect a person's ability to perform satisfactorily, that go far beyond the traditional limits of training. Programs have moved beyond conventional training to a process known as "Performance Improvement" (PI). The rationale for PI and the role that indicators play in this process are summarized in the "About" section of this database.

Third, where possible, evaluators attempt to measure the effect of the quantity and quality of training on the service delivery environment itself (i.e., improved access, enhanced quality). However, this type of "linkage" cannot be established without a special study, based on an experimental or quasi-experimental design or multivariate longitudinal analysis, to demonstrate that the facilities receiving training are superior on one or more specific measures to those that did not receive the training. Unless program managers and donors are willing to commit funds to such special studies, they essentially operate on the assumption that good training results in improved performance and enhanced quality of care in the service delivery environment.
No universally accepted word exists in English to describe the person who attends a training event. We have used "trainee" in this section, but recognize the existence of other terms, such as "participant" (which implies more active involvement), "learner" (which reflects the absorption of new knowledge and skills), or "student" (especially in a pre-service education institution). Readers are encouraged to use the term most widely accepted in their local work environment or most appropriate for the activity in question.

A large portion of the personnel to be trained in the context of reproductive health programs will work in a clinical setting, such as a family planning clinic, STI treatment center, or obstetrical care ward. However, a growing proportion of persons to be trained will work in a non-clinical setting; such groups include community health workers, teachers, peer educators, journalists, women's groups, and others. Whereas this section focuses on training for service delivery, many of the concepts can be adapted to other types of program implementation.

Methodological Challenges of Evaluating Training Programs

Specific methodological challenges of evaluating training programs include the following:

"Training" takes many different forms and levels of intensity. A given training program may address learning objectives that require as little as a couple of hours to achieve, or it may last a month or more. Moreover, "training" may constitute an isolated activity (which has generally been the case in the past), or it may be one part of an ongoing and integrated program to deal with multiple problems in the service delivery environment. As such, the evaluator must clarify the type of training event being evaluated and its intended objectives.

Training is designed to have multiplier effects, but the evaluation of training rarely captures such effects. Some training programs are set up explicitly to have multiplier effects, such as "cascade training," in which one level of program personnel is trained at a central location and these trainers then begin training groups of providers, all based on specific training standards and materials. Other training programs may in fact produce a spin-off effect when the trained person returns to the service delivery setting and shares content and skills with co-workers, either formally or informally. Because a trained provider may be immediately promoted to another level of care or to an administrative position, the evaluator may have trouble ascertaining the added or amplified effects at that level. In theory, one could conduct a special study to capture the effects of the training at different levels of the system, but such a study would be complex and expensive. In practice, the multiplier effects of training tend to get overlooked in the evaluation process. However, if such effects were overt objectives (and adequate human and financial resources were available), evaluators could measure them.

The training, however well executed, may be of little value to the program if organizations select inappropriate participants. Traditional group-based training is often considered a "perk." It allows an individual to obtain new (and generally marketable) skills, often in an enjoyable environment away from the pressures or routine of the workplace, with the added benefit of cash payment to cover living expenses (in the case of traditional, off-site training).
As a result, the demand to attend a given training course may outstrip the number of slots available. Moreover, officials in high positions may use training opportunities as a means of repaying favors, whether or not the person selected is the most appropriate for the task. Seniority as well as politics also plays a role in selecting participants for training. Although one hopes that this practice is on the decline, it represents a problem in evaluating the effects of the training on the service delivery environment. Training organizations have identified several means of addressing the problem. Some have developed ways to encourage the selection of appropriate attendees for training while ensuring that administrator-level staff members (who are sometimes sent to a training course to enlist their support) are actually involved in the training process in a different way. Alternatively, many organizations are developing other approaches to training, such as distance learning, self-directed learning, peer learning, and on-the-job training.

The guidelines and standards against which to evaluate performance may differ by country. A number of the indicators refer to guidelines or standards against which service provider practices are to be evaluated. Some international standards do exist, such as WHO's Medical Eligibility Criteria for Contraceptive Use (2010). However, most governments prefer to establish their own standards and guidelines (or to adapt international ones to their own situation). The benefit of country-specific standards is their relevance to the local context; it is unrealistic to expect a very poor developing country to provide the same quality of care as a country that has "graduated" from donor funding in a given area. Commitment from key constituencies tends to be greater if the standards are developed with local input. However, this results in non-comparable results across countries. Because the major purpose of program evaluation is the improvement of service delivery program implementation in a given country setting, the difference in standards across countries should not be considered a major limitation.

Ideally, evaluators should assess training in terms of changes in the service delivery or program environment, but doing so requires technical and financial resources. Training programs are generally designed to improve performance in a service delivery or program setting. However, evaluating the extent to which a training intervention achieves positive change requires an experimental or quasi-experimental design or multivariate longitudinal analysis. Many training organizations recognize the effectiveness of their training but lack the financial or technical resources to conduct such evaluations. Although training programs are often asked to "justify" their work through concrete examples of their effectiveness, few program administrators or donor agency representatives are willing to fund evaluations of training effectiveness. This problem is by no means unique to training, but it has hindered the advancement of evaluation in this area.

Those who attempt experimental or quasi-experimental designs run into problems of "clustering" and intra-class correlation in evaluating training.
Evaluators often use the individual as the unit of analysis, but individuals from the same service delivery point, or those taught by the same trainers using a classroom or group approach, are more likely to perform in a similar manner (have less variance) than are those from different locations or those taught by other trainers. This clustering has important ramifications not only for the analysis of the data, but also for evaluators' sample size calculations. Evaluators should consult a statistician or expert in sampling to discuss the best strategy for addressing this problem in the design of their evaluation.

The training indicators presented here distinguish two levels of effects: individual and organizational. Whereas the evaluation of training has tended to focus on the individual service provider in the past, there is increased emphasis on evaluating training programs in terms of their effects on the service delivery system (e.g., of the Ministry of Health in a given country).
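
As a rough, purely illustrative sketch of the clustering issue described above (the intra-class correlation and the average cluster size are assumptions the evaluator would have to supply), the standard design-effect formula shows how clustering shrinks the effective sample size:

    # Illustrative sketch only. The design effect DEFF = 1 + (m - 1) * ICC,
    # where m is the average number of trainees per cluster (e.g., per facility
    # or per training group) and ICC is the intra-class correlation.
    def effective_sample_size(n_trainees, avg_cluster_size, icc):
        deff = 1 + (avg_cluster_size - 1) * icc
        return n_trainees / deff

    # Hypothetical values: 200 trainees drawn from facilities of about 10
    # trainees each, with an ICC of 0.15, behave like roughly 85 independent
    # observations for sample size purposes.
    print(round(effective_sample_size(200, 10, 0.15), 1))  # 85.1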

Number of trainees by type of personnel and topic of training

Definition:

"Trainee" refers to any type of participant, student, or learner in a training event, regardless of its duration. "Type" refers to the different categories of participants (e.g., physicians, nurses, social workers).  "Topic" is the subject matter covered (e.g., IUD insertion, universal precautions for HIV/ AIDS prevention, use of a partograph during delivery, etc.).

Data Requirements:

Number of persons (based on an actual list of names for potential verification purposes), their professional positions, and topic of training

If targeting and/or linking to inequity, classify trainees by areas served (poor/not poor) and disaggregate by area served.

Data Sources:

Records, usually kept by the training division, which are used both for administrative purposes during the training (e.g., distributing per diem) and for monitoring trainees at a later date

Purpose:

This indicator serves as a crude measure of activity.  Evaluators can use it for determining whether a program/project meets its target and/or for tracking progress from one year to the next.

Issue(s):

The "unit of measurement" is not strictly speaking uniform, in that one trainee may have attended a course for one day, whereas another may have participated in a course for three months.

Evaluators can improve the measure in several ways. For example, because this indicator does not assess improved knowledge and/or skills, it should be used in conjunction with the indicators Number/percent of trainees who have mastered relevant knowledge and Number/percent of trainees competent to provide specific services upon completion of training, as appropriate.

Gender Implications:

A gender perspective on training assesses the following questions:

  1. How are the curricula developed?
  2. What is the content of the curricula?
  3. Who carries out the training?
  4. What training methodologies are used?
  5. Who receives the training?

Number/percent of trainees who have mastered relevant knowledge

Definition:

Evaluators must define "mastery" in terms specific to a given context. "Mastery" conventionally relates to acquisition of knowledge. ("Competency" involves both knowledge and skills; see next indicator, Number/ percent of trainees competent to provide specific services upon completion of training.)

This indicator is calculated as:

(# of trainees that have mastered knowledge / total # of trainees tested) x 100
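
As a minimal, purely illustrative sketch (the cutoff that defines "mastery" is a local assumption, not a prescribed value), the calculation might look like this:

    # Illustrative sketch only: percent of trainees tested who reach a locally
    # defined mastery cutoff on the post-test.
    def percent_mastered(post_test_scores, cutoff=80):
        tested = len(post_test_scores)
        if tested == 0:
            return None  # no trainees tested
        mastered = sum(1 for score in post_test_scores if score >= cutoff)
        return 100.0 * mastered / tested

    scores = [95, 88, 72, 60, 81]      # hypothetical post-test scores
    print(percent_mastered(scores))    # 3 of 5 at or above 80 -> 60.0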

Data Requirements:

Listing of individuals; scoring criteria to define "mastery;" evidence of mastery of knowledge (e.g., scores on tests)

Data Sources:

Administrative records (training files); written tests (e.g., pre- and post-tests of accurate, up-to-date knowledge)

Purpose:

This indicator, commonly used to evaluate training, measures the trainees' ability to retain key information in the short term (during and at the end of training). Low post-test scores reflect inadequacies in the course and/or the inability of trainees to absorb the information. Every training organization that has developed or uses training manuals has identified the knowledge that a category of trainees should acquire on a specific subject. Pre- and post-tests measure this knowledge.

The test results indicate whether the trainee understands certain key points, even though the number and definition of key points will differ by context. The items included in the test should be those most relevant to a particular training exercise, which relate to program performance. If the same questions appear on subsequent tests, this indicator can monitor trends over time within a program and can determine knowledge retention as part of formal training evaluations.

Issue(s):

This indicator has two limitations. First, tests lack standardized items. Some training organizations have a list of questions they encourage host country organizations to adopt for testing purposes on a given topic, but some countries opt to design their own questions. This lack of standardization makes it difficult to compare the results from this indicator across countries and even across programs within a given country. Second, the concept of "mastery" is not consistent across settings. For example, in some countries, a passing grade may be 60 percent, whereas in others the required score for passing may be 100 percent. Improved knowledge is only one indication of training effectiveness; by itself, it does not necessarily ensure improved performance.

Despite these limitations, training organizations routinely use this indicator to control the quality of training conducted in connection with their activities.

Number/percent of trainees competent to provide specific services upon completion of training

Definition:

“Competence" refers to the trainee's ability to deliver a service according to a set standard, which may differ according to the training context. Thus, the program (or evaluator) must determine the standard that is appropriate for the context. Examples may include clinical guidelines or programmatic guidelines set at the national or international level. Training organizations use "competence" to refer to the acquisition of skills (although performing a skill often requires knowledge). "Upon completion of training" refers to the final assessment given as part of the training event.

This indicator is calculated as:

(# of trainees delivering services according to set standards / total # of trainees tested) x 100

Data Requirements:

Listing of trainees; pre-established operational definitions of criteria determining competency; assessment of each trainee against established standards for a number of service delivery or programmatic tasks, conducted by an expert observer

Data Sources:

Competency tests as determined by the program (often in the form of a checklist administered by the trainers and/or external expert observer)

Purpose:

This indicator measures the technical competence of participants who have completed training in a specific skill set. The indicator reflects both the adequacy of the training and the ability of trainees to absorb the information.

Issue(s):

Several training organizations working in reproductive health have made considerable efforts to standardize the items on the checklist for given program areas (e.g., family planning) as well as the interpretation of each item on the list (e.g., what constitutes satisfactory performance on that item).

However, at the field level, programs and evaluators may use inconsistent criteria to define competency. Some programs may expect a 100 percent grade before they judge the trainee competent in a battery of skills, whereas another organization may judge competency at the 50 percent grade level. In some cases, local standards for the delivery of family planning services may not exist, in which case evaluators can use international standards.

Assessing competency is generally more complex than the simple testing of knowledge. Although measuring knowledge is easier than measuring competency (i.e., the correct performance of skills), the latter is more likely to define the quality of care that providers give. Some potential measures of competency are client exit interviews, observation, self-reporting (acknowledging the inherent bias), or provider interviews using vignettes.

Number/percent of trainees assigned to an appropriate service delivery point and/or job responsibilities

Definition:

"Trainees" refer to individuals who participated in a specific training course or event. "Assigned to an appropriate service delivery point" refers to a facility that routinely provides the type of service for which they are trained (e.g., counseling and testing for HIV). "Job responsibility" refers to the fact that they are assigned a task at that facility that allows them to perform the skills they obtained during training.

This indicator is calculated as:

(# of trainees in positions where their training is applied in service delivery / total # of trainees) x 100

Data Requirements:

Listing of trainees at the course or event; place of work and job description of each trainee "X months" (e.g., six months) post-training

Data Sources:

Program records of trainees; listing of job postings and job titles for employees within a given organization (e.g., Ministry of Health, NGO network of clinics)

Alternatively, a follow-up survey of trainees who had participated in a particular course or event

Purpose:

This indicator measures the extent to which the organization is taking full advantage of the training it provides to its personnel. Ideally, 100 percent of trained personnel will be applying their skills to service delivery at a selected interval post-training (e.g., six months). This indicator provides a quantitative measure of the efficiency of training because it monitors the extent to which organizations assign trained employees to appropriate positions in the appropriate facilities that tap the service delivery skills learned in training.

Ideally, this indicator will accompany the next one, Number/percent of trained providers who perform to established guidelines/standards. Trained providers must not only work in appropriate facilities but also perform the appropriate tasks; one wants them in the right places, doing the right things.

Issue(s):

The limitation of this indicator is its failure to shed light on the reasons for "departures" from service if a far lower percentage than expected is deployed to appropriate positions. In such a case, the organization in question should separate "place assigned" from "job responsibilities" to further understand the dynamics at hand.

Number/percent of trained providers who perform to established guidelines/standards

Definition:

Number or percent of program-supported pre-service education or in-service training participants, students, or learners who perform to established guidelines/standards adopted by the organization for which they work. The trainees should be assessed after a specific period following the training (e.g., three or six months). "Trainees" refers to individuals who have participated in one or more training events. "Guidelines/standards" refer to the written criteria adopted by the organization to outline the processes for implementing specific procedures.

This indicator is calculated as:

(# of trained providers carrying out specific procedures according to established guidelines or standards / Total # of trained providers evaluated) x 100

Data Requirements:

Listing of trainees; specification of the skill and established standards for the skill; assessment of skills level of trained providers conducted by an expert observer

This indicator can be disaggregated by age, sex, urban/rural status, cadre, sector, and type of trainee.
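
As a purely illustrative sketch of such disaggregation (the dataset, column names, and values are hypothetical and not part of the indicator definition):

    # Illustrative sketch only: one row per trained provider assessed against
    # the established guidelines/standards.
    import pandas as pd

    records = pd.DataFrame({
        "cadre": ["nurse", "nurse", "midwife", "midwife", "physician"],
        "sex": ["F", "F", "F", "M", "M"],
        "meets_standard": [True, False, True, True, False],
    })

    # Percent of trained providers performing to standard, disaggregated by cadre.
    print((records.groupby("cadre")["meets_standard"].mean() * 100).round(1))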

Data Sources:

National guidelines/standards for service delivery; and checklists and notes of an expert observer

Written tests can determine knowledge/stated practice of performance to standard.

Purpose:

This indicator measures the retention of skills acquired during training and the application of such skills to the job at hand; it also identifies possible candidates for retraining, or alternatively, for promotion. It measures both the adequacy of the training to impart these skills and the ability of the trainees to assimilate and to retain the information and skills over time.

This indicator goes beyond the indicator, Number/percent of trainees assigned to an appropriate service delivery point and/or job responsibilities, to ensure that providers can do their work (a variety of skills/services) according to the standard of the workplace. It measures performance in a work routine or a work day rather than just the skill learned in training.

Evaluators can apply this indicator at a specific interval post-training (e.g., 6 months, 12 months) among those who attended the training course or event. Alternatively, evaluators may apply it to all service providers in the system to capture both the coverage of training and the quality of the instruction (i.e., number/percent of providers who perform to established guidelines/standards).

Issue(s):

If a trained provider fails to retain the skills acquired, it is important to explore the reasons. Possible explanations may include a lack of continued practice due to low client load, too much time lapsed since the training, or lack of reinforcement on the job. Conversely, a provider may improve his/her competency by continuously performing the task during the months following the training. In fact, this indicator reflects less the quality of the training than the subsequent work environment of the trainee (e.g., type and frequency of supervision, demand for the skills).

Number/percent of training events that achieve learning objectives

Definition:

"Objectives" are outlined in the training curriculum or syllabus.

This indicator is calculated as:

(# of courses that achieve outlined objectives / Total # of courses evaluated) x 100

Data Requirements:

[If assessed by participants] Response to the question, "In your opinion, did the course meet the objectives outlined in the first session?"

[If assessed by an independent observer with expertise in the content area] Review of the course content and observation of trainees' acquisition of knowledge and skills

Data Sources:

Evaluation of the training event by trainees upon its completion; or notes of independent course observer

Purpose:

The purpose of this indicator is to determine whether the content of the training provides trainees with the knowledge and skills outlined in the course objectives. Evaluations by trainers/participants are widely used in training sessions for service personnel. Observation by an independent observer with expertise on the topic is more common in training-of-trainers courses.

Issue(s):

Evaluations are subject to a courtesy bias, especially if participants doubt the confidentiality of the exercise or if they have developed a positive interpersonal relationship with the trainers over the course of the event. Those administering the evaluation can best reduce this bias if they stress that the answers will remain confidential and that the trainees should not put their names on the evaluation forms.

Organization has the capacity to maintain a functional information system on its training program

Definition:

An organization's ability to use its information system to track its training activity

"Organization" refers to a ministry of health, nongovernmental organization, or other institutions responsible for training at the national/regional/institutional level. "Capacity" refers to the personnel, software, and other mechanisms required for an information system. "Information system" refers to a database with information (preferably computerized) that allows easy retrieval of key information.

Data Requirements:

Evidence of the existence of a functioning system and its use for training-related decision-making

Data Sources:

Assessment by an external expert

Purpose:

One measure of institutionalization of training capacity is the ability within the local system to document the numerous national/regional/institutional level indicators of the training activity. These include number of trainees, characteristics of the trainers and of the trainees, content of the courses, number of events/methods used, number of contact hours, standards of competence used for different categories, percent achieving those standards, and cost of the training.

In the past, training programs tended to track their "performance" by reporting the volume of activity performed: number and type of people trained, number of courses conducted, number of contact hours achieved, and so forth. This type of "bean counting" may serve certain purposes for local institutions, but the more sophisticated training environment places less emphasis on these measures of activity and greater emphasis on results achieved.

A training information system (TIS) is designed to capture this type of information and make it easy to retrieve. One training organization has defined a set of criteria that serve as benchmarks of progress in establishing a TIS.

A good TIS allows an institution to avoid redundancy; match training plans with needs; replace capacity lost to high turnover with new personnel; and improve training inputs (e.g., better trainers, improved curricula, best training practices applied).
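
As a purely illustrative sketch (the field names are assumptions for illustration, not drawn from any actual TIS), the data elements listed above might be organized into records along these lines:

    # Illustrative sketch only: a simple record layout for a training
    # information system (TIS).
    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class TrainingEvent:
        course_title: str            # content of the course
        methods: List[str]           # e.g., classroom, distance learning, on-the-job
        contact_hours: int
        competency_standard: str     # standard applied to this category of trainee
        cost: float

    @dataclass
    class TraineeRecord:
        name: str
        cadre: str                   # e.g., nurse, community health worker
        events: List[TrainingEvent] = field(default_factory=list)
        achieved_standard: bool = False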

Number of faculty and trainers who demonstrate the use of professional core training competencies on the job

Definition:

"Faculty and trainers" are those persons knowledgeable in the subject area, designated to improve knowledge and skills through the training activities of a given organization. "Use of professional core training competencies" is context-specific. "On the job" indicates that this assessment takes place in an actual work context (when conducting training or providing services).

Data Requirements:

A checklist of competencies that the faculty or trainer should demonstrate

Data Sources:

Observation by an external expert of faculty and trainers performing actual training activities

Purpose:

One important measure of institutional capacity for training is the ability of staff to conduct training activities using state-of-the-art techniques. These include using participatory learning activities, demonstrating and having trainees practice using relevant job aids, summarizing key messages, and using encouragement rather than negative criticism. These contemporary adult learning techniques contrast sharply with the "classroom lecture format" that has characterized training in the past and is far less effective in achieving training objectives among adult learners, especially those with lower educational levels.

Evaluators should share the scores from these assessments and discuss them with the persons evaluated, so that the faculty and staff can use this feedback to improve their training techniques and thus the quality of training. The organization achieves little if it documents the quality of training without providing feedback to those involved.

Issue(s):

If the evaluation takes place in a simulated environment rather than in an actual training setting, it will not accurately assess the individual's performance in front of a group of trainees, nor will it provide the most useful feedback for improving future performance.

Organization has a systematic process for follow-up and support of trainees after the training event

Definition:

The systematic process for "follow-up" refers to the established mechanism that allows the training organization to locate and to communicate with the trainee at specified periods post-training (e.g., six months, one year). "Support of trainees after training" refers to mechanisms that allow the training organization to respond to questions, doubts, or problems that the trained providers experience in the service delivery environment. (Note: this process is part of the continuum of a transfer-of-training process that provides support before, during, and after training.) Refresher training is one mechanism for supporting trainees long after the training event.

Data Requirements:

Lists of persons trained; evidence of attempts to contact each individual post training, including the percentage actually reached, and the result of the contact

Data Sources:

Program records provided by the staff in charge of this activity, to be reviewed by an external evaluator

Purpose:

The new norms for quality training require that organizations follow up with the persons trained in their system, in contrast to the "train and release" strategy used in the past. For example, USAID-funded training programs stress "Performance Improvement." This emphasis requires the training organization to assess gaps in the service delivery environment that hinder or prevent trained service providers from effectively performing their duties. In this spirit, the current indicator reflects the extent to which a training organization remains in contact with its trainees and attempts to identify and address the problems these employees face in the post-training period when they return to the service delivery environment.

Issue(s):

Some organizations may prefer to develop a parallel or similar indicator, number of training programs linked to other performance support systems. A performance support system not only ensures the transfer of skills to the job but also increases the potential for enhanced performance because it enables the provider's work environment to support this transfer of skills. In the context of performance improvement, this link between training and subsequent performance support is essential to ensuring a positive experience for the clients in the system. However, relatively little work has been conducted to date on measuring and evaluating this type of linkage. Thus, the indicator number of training programs linked to other performance support systems is presented as an indicator under development and in need of further testing.

Existence of training strategy based on needs assessment to improve quality of service delivery

Definition:

"Based on needs assessment" refers to use of a systematic collection of information from multiple relevant sources that indicates the areas in which more service providers require training and the type of service providers who should receive training.

The "needs assessment" describes the existing service delivery system and identifies the gaps between desired and actual performance of providers. It examines the components described below under the training strategy and may specifically focus on one or a limited set of services. Alternatively, it can be (though rarely is) an overarching assessment of the health services system.

This indicator does not specifically measure the effectiveness of the strategy at improving quality, but it relates to the objectives of training programs, which are performance improvement and enhanced quality of care.

Data Requirements:

Evidence of a needs assessment conducted and used in developing the strategy; information from those involved in developing the strategy

Data Sources:

Program records; interviews with persons responsible for the strategy

Purpose:

A detailed training strategy is essential for effective training. Although a training strategy does not guarantee an effective result, the lack of a training strategy suggests ad hoc efforts with little attention to priorities or the felt needs within the system.

The training strategy shows an integrated approach to improving reproductive health (RH) service delivery by standardizing and implementing both pre-service education and in-service training, supported by national guidelines/standards. It builds on national RH service delivery needs identified (from government documents and plans) and describes the role of the comprehensive RH training and education system in the context of the sector. In addition to describing the various institutions, organizations, and personnel, it covers the relevant components of the sector.

The training strategy may also include the components of a pre-service/in-service reproductive health training program.

Issue(s):

For a training strategy to be effective, it must have local commitment. Ideally, the leading staff from the training organization will play a key role in developing the training strategy, either alone or in collaboration with external consultants. Without this local input, the training strategy will garner little support from the upper levels at the local organization in question. Rather, they will likely dismiss the strategy as irrelevant, erroneous or externally imposed.

The organization systematically evaluates its training program to improve effectiveness

Definition:

To systematically evaluate its training program, an organization routinely applies indicators such as the first five in this section to its training activities. This evaluation requires systematic data collection, analysis, and reporting of the results to those involved in the training.

Data Requirements:

A list of all training events; a list of the indicators and instruments used to evaluate them; and a copy of the results

Data Sources:

Program records; occasional special studies

Purpose:

As training organizations attempt to develop a "culture of evaluation" to improve their programs, this indicator documents the evolution of the trend. It provides concrete evidence that training organizations (or units) are attempting to obtain systematic feedback and to discuss it with those involved in training efforts.

Training evaluation should form part of the training strategy; the institution should have an evaluator on staff or a regular consultant. Training evaluations should systematically examine the capacity of the trainers, their training materials, tools and methods, and the actual evaluation methodology (e.g., whether checklists measure intended areas, whether they need updating, how to adapt tests to different audiences/learners).

Examining job performance after training, the third level of Kirkpatrick's training evaluation framework (1998), should take place every two to three years in a regular training program, if possible. In the interim, training trainers to function as evaluators (working with line supervisors) and adapting the training tools (knowledge tests, skills checklists) used for monitoring and observation can document trends in performance.

The evaluation of training can take various forms, ranging from the simplest to the most sophisticated. At the very least, training programs will monitor increased learning using pre- and post-knowledge tests. However, few training organizations consider tests an adequate evaluation of the course, and most prefer (where funds permit) to track the skill level of trained providers, both upon completion of the course and at a period X months later (e.g., 6 months, 12 months).

Results from a study in Indonesia (Kim et al., 2000) on reinforcement via self-assessments and support groups of providers indicate that providers lost skills and knowledge acquired through training within six months, except those who performed self-assessment exercises, who actually improved.

Issue(s):

These evaluation methods refer to the individual trainee. In contrast, many of the indicators in this section refer to the organizational capacity of the system to design and implement effective training. Yet to truly evaluate the effectiveness of training, one must link the training activity to improvements in the service delivery environment. The linkage requires a special study using a quasi-experimental design, in which one contrasts a group of clinics whose providers are trained with a group of clinics whose providers have yet to be trained. This type of operations research study is relatively rare because of the resources required and the burden placed on service delivery to maintain "everything else constant." However, those wishing to definitively demonstrate the link between training and improvement in the service delivery environment will need to undertake such studies. Other techniques involve using multivariate analysis, combining data from facility-based and household surveys (e.g., Dietrich, Guilkey, and Mancini, 1998). Short of that, one simply works on the assumption that improving the competency of individual trainers and increasing the number of locations in which they operate will improve quality and access to service delivery.

Demonstrated organizational capacity to carry out training on a sustained basis

Definition:

The nature of the "training" depends on the service delivery areas of interest to the organization, but in this case will relate to the different aspects of reproductive health. "On a sustained basis" refers to the demonstrated ability to maintain this activity over a period of time (e.g., 3-5 years) with decreasing external support.

Data Requirements:

Evidence of the implementation and monitoring of a long-term strategy; annual training work plans developed in the country/organizational context to meet identified needs; evidence of review, evaluations, and updating

Budget review with percent of funds for training from internal revenues; alternatively, the institution demonstrates capacity to design and obtain funding for training projects, including evaluation. Review of human resources and equipment; list of training activities completed/replicated in the last three years and projected (long-range, strategic) plans

Data Sources:

Assessment by an external evaluator with training expertise

Purpose:

This indicator measures the ability to continuously provide quality training on a sustained basis with minimal external input -- the ideal of most training organizations.

Issue(s):

This indicator is more difficult to evaluate than others in the section because of the subjective nature of "capacity" and the lack of standard operational definitions for "sustained basis." An alternative indicator that is more concrete and possibly more practical is the number of training sites and centers performing to quality standards on a regular basis with adequate resources, where resources again refer to funding, sufficient staff and trainers, and internal organizational systems. The limitation of this alternative indicator is that a single training site may satisfactorily fulfill the needs of a country in the area of training, whereas multiple organizations (in a large country) may still have many shortcomings in terms of training. In this case, the number does not equate with "adequacy."

Adaptability of the organization/system to changing needs in a training environment

Definition:

"Changing needs in the training environment" are identifiable changes that require the organization to adjust its training procedures. Examples include introduction of a new contraceptive method, the growing demand for counseling and testing services in HIV programs, new techniques for cervical cancer prevention, screening for violence against women.

Data Requirements:

A list of changes in the service delivery environment requiring adaptations in training over a certain period; evidence of the organization's willingness and ability to respond to those needs

Data Sources:

Evidence from program records or other sources of regular, periodic meetings to assess needed changes (e.g., at least once every six months); and/or data collected through a special study

Purpose:

This indicator is particularly appropriate in the context of an overall assessment of an organization in terms of its training capacity, conducted by an external evaluator with expertise in the training area. The assessment requires an understanding of the local delivery context and cannot take the form of a simple "checklist" or summation of points.

An organization's adaptability to changing needs in the training environment can be demonstrated in a variety of ways.

Issue(s):

For training organizations to be effective, they must be able to respond to changes in the service delivery environment and in their operations. Evaluators may have difficulty charting an organization's progress in this area, precisely because no objective list of changing conditions in the service delivery environment exists. Moreover, in any given list of changes, some items may be relatively trivial compared to others that have wide-ranging public health implications; thus, the two types of changes should not receive equal weight.