We used data generated with the exact-match approach as the measure of true quality-adjusted effective coverage of management of child illness in the study population. In the exact-match linking approach, we assigned each child the structural quality score of the specific provider(s) from which care was reportedly sought, which we considered the true source(s) of care. Children could not be linked with the exact-match method if their caregiver could not recall the name of the provider or facility from which care was sought, or if the provider could not be located for inclusion in the study; this mostly affected individuals who utilized informal shops.
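For clarity, a minimal sketch of how this exact-match assignment could be expressed as a table join is shown below. The data frames, column names (`provider_id`, `reported_provider_id`, `quality_score`), and use of pandas are illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch only: exact-match linking as a left join on the reported
# provider identifier (column names are hypothetical).
import pandas as pd

def exact_match_link(children: pd.DataFrame, providers: pd.DataFrame) -> pd.DataFrame:
    """Attach each reported provider's structural quality score to the child record.

    Children whose provider could not be named or located simply keep a missing
    score after the join, mirroring the unlinked cases described above.
    """
    return children.merge(
        providers[["provider_id", "quality_score"]],
        left_on="reported_provider_id",
        right_on="provider_id",
        how="left",
    )
```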

To simulate ecological linking in the absence of data on specific source of care, each sick child was linked to the closest health provider(s) within the reported category of source of care (Box 1) using three measures of geographic proximity: (1) Euclidean distance, (2) travel time, and (3) 5 km radius. Each measure of geographic proximity was applied using (1) known household location, (2) undisplaced cluster location, and (3) five sets of displaced cluster locations, each reported separately. Quality-adjusted coverage of management of child illness was calculated for each combination of ecological linking method and measure of sick child location by assigning each child the quality score of the proximal provider(s) to which they were linked.
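This paragraph defines a grid of linking method by location combinations; the snippet below simply enumerates that grid (labels are placeholders) to make the bookkeeping explicit.

```python
# Enumerate the linking-method x location grid described above (placeholder labels).
from itertools import product

proximity_measures = ["euclidean_distance", "travel_time", "radius_5km"]
locations = (["household", "undisplaced_cluster"]
             + [f"displaced_cluster_{i}" for i in range(1, 6)])   # five displacement sets

analysis_grid = list(product(proximity_measures, locations))      # 3 x 7 = 21 combinations
```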

For both the exact-match and each ecological linking approach, we calculated the quality-adjusted coverage of management of child illness as the average quality score across all sick children in the study. If no care was sought for a sick child, they were assigned a quality score of zero. If care was sought from multiple sources, we averaged the scores of those sources. If a child could not be linked to a provider using exact source or geographic proximity, they were assigned the average score for the provider category.
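A minimal sketch of this aggregation rule follows; the data structure (a list of per-child records) and key names are assumptions for illustration, not the authors' code.

```python
# Minimal sketch of the coverage aggregation rule (data structures are hypothetical).
def quality_adjusted_coverage(children: list, category_means: dict) -> float:
    """`children` is a list of dicts with keys:
       'care_sought' (bool), 'linked_scores' (list of linked provider scores,
       possibly empty), and 'category' (reported provider category)."""
    scores = []
    for child in children:
        if not child["care_sought"]:
            scores.append(0.0)                                    # no care sought -> zero
        elif child["linked_scores"]:
            scores.append(sum(child["linked_scores"]) /
                          len(child["linked_scores"]))            # average across multiple sources
        else:
            scores.append(category_means[child["category"]])      # unlinked -> category mean
    return sum(scores) / len(scores)                              # average over all sick children
```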

To quantify the bias introduced into each method by imprecise household location, we compared the estimates of quality-adjusted coverage from each combination of ecological linking approach and cluster location against the (1) exact-match quality-adjusted coverage estimates and (2) estimates generated using each ecological approach with the true household location. We also assessed how accurately each approach identified the actual provider(s) utilized by each sick child by comparing the provider(s) linked to each sick child using the ecological approaches with the specific source(s) of care reported by each child’s mother.
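These two comparisons can be reduced to a bias term and a provider match rate, roughly as sketched below (function and variable names are hypothetical).

```python
# Illustrative sketch of the two comparison metrics.
def coverage_bias(ecological_estimate: float, reference_estimate: float) -> float:
    """Difference (in percentage points) between an ecological estimate and the
    reference estimate (exact match, or the same ecological method applied to
    the true household location)."""
    return ecological_estimate - reference_estimate

def provider_match_rate(linked_ids: list, reported_ids: list) -> float:
    """Share of sick children for whom the ecologically linked provider(s)
    include at least one of the reported true source(s) of care."""
    hits = sum(1 for linked, reported in zip(linked_ids, reported_ids)
               if set(linked) & set(reported))
    return hits / len(linked_ids)
```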

A full description of the construction of provider structural quality scores and the methods for defining geographic proximity are presented in a previous publication [6]. Briefly, we defined each provider's structural quality score as the availability of services, commodities, and human resources needed to appropriately manage common child illnesses (Box 2). These indicators were considered the minimum inputs for appropriate care: the basic commodities required to diagnose and treat common child illness, along with the human resources and clinical knowledge to apply them correctly. As such, the score reflects a ceiling on the potential quality of care offered by a provider. A provider received one point for each indicator if requirements were met and zero if not; each domain (bold italicized in Box 2) received equal weight. We calculated scores as a continuous variable ranging from zero (no capacity to provide care) to 100% (full capacity to provide care).
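A sketch of this scoring rule (equal scoring of indicators within domains, equal domain weights, 0-100% scale) is given below; the domain and indicator names in the example are made up, since Box 2 is not reproduced here.

```python
# Sketch of the structural quality score: indicators score 1 if met, 0 if not;
# domains are weighted equally; the result is scaled to 0-100%.
def structural_quality_score(domains: dict) -> float:
    """`domains` maps a domain name to a list of 0/1 indicator results."""
    domain_scores = [sum(indicators) / len(indicators) for indicators in domains.values()]
    return 100 * sum(domain_scores) / len(domain_scores)

# Made-up example (not the Box 2 domains):
# structural_quality_score({"commodities": [1, 1, 0], "human_resources": [1, 0]})
# -> 100 * ((2/3) + (1/2)) / 2 ≈ 58.3
```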

We employed three measures of geographic proximity in this analysis. For each measure, we developed an automated script in QGIS comparable to the ArcGIS process outlined in a previous paper [6]. We conducted all geographic analyses in QGIS 2.18.24 (Open Source Geospatial Foundation Project, Beaverton, OR, USA). Ecological linking was restricted so that children could only be assigned to the types of providers (managing authority and level of care) from which care was reportedly sought, based on responses to the household survey. For example, if a mother reported seeking care for her sick child from a government health center, then the child could only be linked to another government health center, not to a private facility or a government hospital. The three measures are described below, followed by an illustrative sketch.

Euclidean distance: each sick child was linked to the single closest provider by Euclidean distance from the child’s location, within the reported source of care provider category. This method is the simplest approach for assigning a child to a specific provider.

Travel time: each sick child was linked to the single closest provider by travel time from the child’s location, within the reported source of care provider category. Travel time was approximated by grading the relative speed of travel on different types of roads (e.g., paved roads, graded roads, footpaths). Data on road networks were derived from OpenStreetMap (OSM), supplemented by local expertise where OSM data were absent. This method is designed to model the effect of road access and quality on care-seeking.

5 km radius: each sick child was linked to all providers within the source of care provider category within a 5 km radius of the child’s location. This method is designed to approximate a 1-hour walking distance from a home to a provider in any direction.
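The sketch below is a rough Python illustration of the three linking rules, not the authors' QGIS scripts. It assumes a geopandas GeoDataFrame of providers with `category` and geometry columns in a projected CRS (metres), a shapely point for the child's location, a networkx road graph with hypothetical `road_class` and `length_km` edge attributes, and a hypothetical `road_node` column giving each provider's nearest road-network node; the road speeds are arbitrary examples.

```python
# Illustrative re-implementation of the three linking rules (not the authors' scripts).
import geopandas as gpd
import networkx as nx

ASSUMED_SPEED_KMH = {"paved": 60, "graded": 30, "footpath": 5}    # arbitrary example speeds

def link_euclidean(child_point, providers: gpd.GeoDataFrame, category: str):
    """Single closest provider of the reported category by straight-line distance."""
    candidates = providers[providers["category"] == category]
    return candidates.loc[candidates.geometry.distance(child_point).idxmin()]

def link_travel_time(child_node, roads: nx.Graph, providers: gpd.GeoDataFrame, category: str):
    """Single closest provider of the reported category by travel time on the road graph."""
    for _, _, data in roads.edges(data=True):                     # grade edges by road class
        data["minutes"] = data["length_km"] / ASSUMED_SPEED_KMH.get(data["road_class"], 5) * 60
    candidates = providers[providers["category"] == category]
    times = {idx: nx.shortest_path_length(roads, child_node, row["road_node"], weight="minutes")
             for idx, row in candidates.iterrows()}
    return candidates.loc[min(times, key=times.get)]

def link_within_radius(child_point, providers: gpd.GeoDataFrame, category: str,
                       radius_m: float = 5_000):
    """All providers of the reported category within 5 km (about a 1-hour walk)."""
    candidates = providers[providers["category"] == category]
    return candidates[candidates.geometry.distance(child_point) <= radius_m]
```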

The central point location for each cluster was generated to capture an area of high population density within each cluster, in line with the DHS central point measurement procedures. A census of all households in each study catchment area was conducted before the study and included the location of each household. In QGIS, we grouped the households into clusters of 150 based on measured latitude and longitude and calculated the mean point of each cluster as its central point.
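The text does not name the grouping algorithm, so the sketch below approximates the step with k-means (which will not give exactly 150 households per cluster) purely to illustrate computing cluster central points as mean coordinates.

```python
# Approximate sketch of forming ~150-household clusters and their central points.
import numpy as np
from sklearn.cluster import KMeans

def cluster_central_points(household_coords: np.ndarray,
                           households_per_cluster: int = 150) -> np.ndarray:
    """`household_coords` is an (n, 2) array of longitude/latitude.
    Returns one mean point per cluster."""
    n_clusters = max(1, round(len(household_coords) / households_per_cluster))
    labels = KMeans(n_clusters=n_clusters, n_init=10,
                    random_state=0).fit_predict(household_coords)
    return np.array([household_coords[labels == k].mean(axis=0) for k in range(n_clusters)])
```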

Each central point was displaced five times using an R script developed by MEASURE Evaluation for DHS cluster displacement [5]. In brief, the code offsets each point by a random angle and a random distance, capped at 5 km for rural clusters (with 1% capped at 10 km) and 2 km for urban clusters. The code further restricts the displacement so that points are not displaced outside their true administrative unit (e.g., district); however, this feature was redundant in our analysis due to the small size of the study area. We ran the displacement code in R 3.4.3 (R Foundation for Statistical Computing, Vienna, Austria) and imported each set of displaced coordinates into QGIS for the linking analyses.
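A simplified sketch of the displacement rule is shown below; it reproduces the random angle/distance offsets and the urban/rural caps but omits the administrative-boundary restriction, and the degree-conversion factors are rough approximations, so it is not a substitute for the MEASURE Evaluation script.

```python
# Simplified DHS-style displacement (omits the administrative-boundary check).
import math
import random

def displace_point(lon: float, lat: float, urban: bool,
                   rng: random.Random) -> tuple:
    """Offset a point by a random angle and distance: up to 2 km for urban clusters,
    up to 5 km for rural clusters, with 1% of rural points allowed up to 10 km."""
    if urban:
        max_km = 2.0
    else:
        max_km = 10.0 if rng.random() < 0.01 else 5.0
    angle = rng.uniform(0.0, 2.0 * math.pi)
    dist_km = rng.uniform(0.0, max_km)
    dlat = (dist_km / 111.32) * math.cos(angle)                    # ~111.32 km per degree latitude
    dlon = (dist_km / (111.32 * math.cos(math.radians(lat)))) * math.sin(angle)
    return lon + dlon, lat + dlat

# Example: five displaced copies of one rural central point
# rng = random.Random(0)
# displaced = [displace_point(36.82, -1.29, urban=False, rng=rng) for _ in range(5)]
```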

We then substituted each central point and displaced central point for the household location in our measures of geographic proximity. Instead of calculating the geographic proximity of providers from the home of each sick child, we measured proximity from the relevant central point or displaced central point location as depicted in Fig. 1.

Fig. 1 Map of linkage to the closest CHW based on the true household location, cluster central point, and displaced cluster point

In our dataset, there was little variability in quality within provider categories, which limited the potential generalizability of our simulation results. We therefore ran two additional sensitivity analyses using simulated quality scores to assess the effect of sampling in settings with greater diversity in quality scores.

In the first simulation, we maintained the data on household care-seeking behavior as well as the facility, household, and cluster locations (displaced and undisplaced) from the primary analysis. However, each facility was assigned a structural quality score designed to simulate preferential care-seeking in favor of higher-quality facilities within a provider category. The rest of the effective coverage estimation methods were implemented in the same manner as in the above section. Facilities that were utilized more frequently based on the household survey data were given higher quality scores than those that were utilized less frequently or not at all. We calculated how many times a respondent reported utilizing a specific source of care and then ranked each provider within a provider category based on utilization. If a provider saw more than the median number of reported visits within the category, we increased the provider’s quality score by two standard deviations of the overall provider category quality score. If a provider saw none of the sampled children or fewer than the median number of reported visits within the category, we decreased the provider’s quality score by two standard deviations of the provider category score. This resulted in a dataset in which caregivers more frequently accessed care from higher-quality health providers, simulating a setting of selective bypassing of lower-quality providers within a given level of care.
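A sketch of this score adjustment is given below, assuming a pandas DataFrame of providers with hypothetical `category` and `quality_score` columns and a Series of reported visit counts indexed by provider. How ties at the median and scores pushed past the 0-100 bounds were handled is not stated in the text, so those choices here are assumptions.

```python
# Sketch of the utilization-based quality simulation (hypothetical column names).
import pandas as pd

def simulate_preferential_scores(providers: pd.DataFrame, visits: pd.Series) -> pd.Series:
    """Raise a provider's score by 2 SD of its category's scores if its reported
    visit count exceeds the category median, otherwise lower it by 2 SD."""
    adjusted = providers["quality_score"].astype(float).copy()
    for _, group in providers.groupby("category"):
        sd = group["quality_score"].std()
        counts = visits.reindex(group.index).fillna(0)
        median_visits = counts.median()
        for idx in group.index:
            if counts.loc[idx] > median_visits:
                adjusted.loc[idx] += 2 * sd
            else:                  # includes zero visits; ties treated as "below" (assumption)
                adjusted.loc[idx] -= 2 * sd
    return adjusted.clip(lower=0, upper=100)   # bounding to 0-100 is an assumption
```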

In our second simulation, as in the previous simulation, we maintained the data on location and household care-seeking behavior. However, each facility was assigned a structural quality score completely at random. The rest of the effective coverage estimation methods were implemented in the same manner as in the above section.
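The corresponding sketch for the second simulation is a simple random draw; the uniform 0-100 distribution is an assumption, since the text states only that scores were assigned completely at random.

```python
# Sketch of the random-score sensitivity analysis (uniform 0-100 is an assumption).
import numpy as np
import pandas as pd

def simulate_random_scores(providers: pd.DataFrame, seed: int = 0) -> pd.Series:
    """Assign every provider an independent random structural quality score."""
    rng = np.random.default_rng(seed)
    return pd.Series(rng.uniform(0, 100, size=len(providers)), index=providers.index)
```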
