Global Mental Health

The DIME Program Research Model: Design, Implementation, Monitoring and Evaluation

The DIME model is a process of program Design, Implementation, Monitoring, and Evaluation developed and used by the Applied Mental Health Research (AMHR) Group since 2000. The model is a series of activities that combines evidence-based programming with rigorous monitoring and impact evaluation. The purpose is to provide a rational basis and approach for local programming while also generating information and lessons learned that can inform future services.

AMHR has developed a DIME manual consisting of six modules, each covering a separate activity.

Below is a detailed description of the DIME model, adapted from the manual's introduction. Links to each module appear below Figure 1.

The DIME Model

The diagram below outlines the steps of the design, implementation, monitoring, and evaluation (DIME) process. Qualitative data collection (Module 1) is the first step in the process and the diagram indicates which of the subsequent steps (2-8) are informed by qualitative data. A brief description of each step follows. 

Figure 1: Steps of the DIME Process

Qualitative Assessment to identify and describe priority mental health and psychosocial problems: (Module 1)

Variations in culture and environment affect how people understand mental health and psychosocial problems: how the problems are described, how they are prioritized, their perceived causes, and how people currently deal with them. This information is vital for selecting problems that matter to local people, for communicating accurately with them about these problems, and for identifying interventions that are likely to be acceptable and feasible for local people, and therefore effective and sustainable.

Develop draft instruments to assess priority mental health and psychosocial problems: (Module 2)

Having decided which problems the program will address, we then draft quantitative instruments to assess them. These instruments have various uses, depending on the program: community- or clinic-based surveys; screening persons for inclusion in a specific intervention program (where not all people will be served); identifying those with severe problems who may need specialized services, including referral; and monitoring and evaluating the effectiveness of services by tracking changes in severity and/or prevalence.

The process of drafting appropriate instruments includes reviewing the published literature for measures that have already been developed for the selected problems and comparing available measures with the qualitative data to select the measure(s) that best match(es) how local people describe the problem. These measures are then adapted to better fit local concepts.

Drafting includes translation. Terminology suggested by translators often differs from that used by local populations, particularly poor and less educated people. The qualitative data are therefore the preferred source for translating key concepts: using the words and phrases that local people actually use (as identified in the qualitative data) improves the understandability of the instruments, and therefore their acceptability and accuracy. Translators are instructed to render every sign, symptom, problem, and topic in the instruments that interviewees mentioned in the qualitative study using the same words found in the qualitative data; only where a concept does not appear in the qualitative data do the translators choose the terms themselves.

Study baseline and/or prevalence surveys: (Module 3)

Both baseline assessments and prevalence surveys use the instruments developed in Module 2. Baseline assessments are interviews conducted with the instrument to establish individuals' eligibility for participation in an intervention program. Prevalence surveys perform the same function at the population level, measuring the percentage and number of eligible (i.e., affected) persons in the population and giving some indication of the variation in problem severity at that level.
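As a rough illustration of the arithmetic behind a prevalence survey, the sketch below computes the number and proportion of eligible respondents from total instrument scores. The scores and the cutoff value are invented for the example; in practice, a cutoff would be set through local validation of the adapted instrument, not taken from this sketch.

```python
# Hypothetical illustration: estimating prevalence of a priority problem
# from total symptom scores on a locally adapted instrument.
# The cutoff here is invented; real cutoffs come from local validation.

def prevalence(scores, cutoff):
    """Return (number eligible, proportion eligible) for a given cutoff."""
    eligible = [s for s in scores if s >= cutoff]
    return len(eligible), len(eligible) / len(scores)

# Invented total scores from a small community survey
scores = [12, 30, 25, 8, 41, 33, 19, 27, 36, 10]
n_eligible, prev = prevalence(scores, cutoff=25)
print(n_eligible, prev)  # 6 respondents at or above the cutoff: 6 0.6
```

The same cutoff logic serves both uses described above: applied to a community sample it yields a prevalence estimate; applied to individual interviews it determines eligibility for the intervention program.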

Overall Program planning: (Module 4)

This includes planning the program's goals and objectives, the strategy for achieving them, and the type of intervention(s) to be used. It also includes developing process and impact indicators and the overall program work plan.

Develop interventions to address the identified mental health and psychosocial problems: (Module 5)

The qualitative data on the perceived causes of problems and what people do about them are critical to intervention design. Interventions need to address the perceived causes of priority problems (or explain to participants why they do not) in order to make sense and therefore inspire both confidence and cooperation. The more closely interventions match the ways in which people currently think about and address the selected problems, the more likely the interventions are to be acceptable to them. Where there are differences, they need to be explained and agreed on by the local population. For example, using counseling to address a problem thought to be caused by poverty will take some explaining. 

Implementation and Monitoring: (Materials included in Modules 4 and 5)

This refers to implementing and monitoring the intervention and the overall program. It includes procedures for making iterative changes to the planned activities as needed, based on the monitoring data.

Post-intervention assessment: (Module 6)

Upon completion of the intervention, participants are interviewed using qualitative methods to identify potentially important unexpected impacts of the program. They are also re-interviewed using the baseline quantitative instrument, to measure changes in the outcome indicators such as problem severity and function. Where possible, the amount of change is compared with the amount of change experienced by a control group, to determine the true program impact.
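Comparing the change in the intervention group with the change in a control group amounts to a difference-in-differences calculation on the outcome indicator. The sketch below shows this for mean severity scores; all numbers are invented for illustration, and a real evaluation would also assess statistical significance.

```python
# Hypothetical illustration: program impact as the change in mean severity
# in the intervention group minus the change in a control group
# (a difference-in-differences estimate). All scores are invented.

def mean(xs):
    return sum(xs) / len(xs)

def impact(treat_pre, treat_post, control_pre, control_post):
    """Change in the intervention group minus change in the control group."""
    treat_change = mean(treat_post) - mean(treat_pre)
    control_change = mean(control_post) - mean(control_pre)
    return treat_change - control_change

treat_pre = [30, 28, 35, 32]      # baseline severity, intervention group
treat_post = [18, 20, 25, 21]     # severity after the intervention
control_pre = [31, 29, 33, 30]    # baseline severity, control group
control_post = [28, 27, 30, 29]   # severity at follow-up, control group

print(impact(treat_pre, treat_post, control_pre, control_post))  # -8.0
```

Here severity fell in both groups, but fell 8 points more in the intervention group; subtracting the control group's change separates the program's effect from change that would have occurred anyway.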