MEPS Household Component
Annual Contractor Methodology Report 2017
June 15, 2018
Prepared for:
Agency for Healthcare Research and Quality
Center for Financing, Access, and Cost Trends
5600 Fishers Lane
Rockville, Maryland 20857
Prepared by:
Westat
1600 Research Boulevard
Rockville, Maryland 20850-3129
301-251-1500
Introduction
1 Sample
1.1 Sample Composition
1.2 Sample Delivery and Processing
2 Instrument and Materials Design
2.1 Introduction
2.2 Changes to CAPI Instrument for 2017
2.3 Testing of the Questionnaire and Interviewer Management System
2.4 Changes to Materials and Procedures for 2017
3 Recruiting and Training
3.1 Field Interviewer Recruiting for 2017
3.2 2017 Interviewer Training
3.2.1 New Interviewer Training
3.2.2 Experienced Interviewer Training – Distance Learning Program
4 Data Collection
4.1 Data Collection Procedures
4.2 Data Collection Results: Interviewing
4.3 Data Collection Results: Authorization Form Signing Rates
4.4 Data Collection Results: Self-Administered Questionnaire (SAQ), Diabetes Care Supplement (DCS), and Cancer
Self-Administered Questionnaire (CSAQ) Collection Rates
4.5 Data Collection Results: Patient Profile Collection
4.6 Quality Control
4.7 Security Incidents
5 Home Office Support of Field Activities
5.1 Preparation for Field Activities
5.2 Support During Data Collection
6 Data Processing and Data Delivery
6.1 Data Delivery
6.2 Processing to Support Data Delivery
Appendix A Comprehensive Tables – Household Survey
Table 1-1 Initial MEPS sample size and number of NHIS PSUs,
all panels
Table 1-2 Data collection periods and starting RU-level
sample sizes, spring 2010 through fall 2017
Table 1-3 Percentage of NHIS households with partially
completed interviews in panels 3 to 22
Table 1-4 Distribution of panel 22 sample by sample domain
Table 2-1 Supplements to the CAPI core questionnaire
(including hard-copy materials) for 2017
Table 3-1 Spring attrition rate among new and experienced
interviewers, 2013-2017
Table 3-2 Fall attrition rate among new and experienced
interviewers, 2013-2017
Table 3-3 Components of the 2017 training program
Table 4-1 Data collection schedule and number of weeks per round of data collection, 2017
Table 4-2 Average Weekly Completion and Success Rates for
High Potential and Remainder (Non-High Potential) Cases during High Potential
Phase, Spring 2017
Table 4-3 MEPS HC data collection results, panels 14 through 22
Table 4-4 Response rates by data collection year, 2010-2017
Table 4-5 Summary of MEPS round 1 response and nonresponse,
2012-2017 panels
Table 4-6 Summary of MEPS round 1 response, 2012-2017
panels, by NHIS completion status
Table 4-7 Summary of MEPS panel 22 round 1 response rates,
by sample domain by NHIS completion status
Table 4-8 Summary of MEPS round 1 results for RUs who ever
refused, panels 15-22
Table 4-9 Summary of MEPS round 1 results for RUs who were
ever traced, panels 15-22
Table 4-10 Interview timing comparison, panels 15 through 22
(mean minutes per interview, single-session interviews)
Table 4-11 Mean contact attempts by NHIS completion status,
round 1 of panels 20-22
Table 4-12 Signing rates for medical provider authorization
forms for panels 15 through 22
Table 4-13 Signing rates for pharmacy authorization forms
for panels 15 through 22
Table 4-14 Results of Self-Administered Questionnaire (SAQ)
collection for panels 15 through 22
Table 4-15 Results of Diabetes Care Supplement (DCS)
collection for panels 14 through 21
Table 4-16 Cancer Self-Administered Questionnaire
(CSAQ) collection rates
Table 4-17 Results of patient profile collection in 2012
through 2017
Table 5-1 Number and percent of respondents who called the
respondent information line, 2014-2017
Table 5-2 Calls to the respondent information line, 2016 and
2017
Table 6-1 Delivery schedule for major MEPS files, 2015-2017
Table A-1 Data collection periods and starting RU-level
sample sizes, all panels
Table A-2 MEPS household survey data collection results, all panels
Table A-3 Signing rates for medical provider authorization forms
Table A-4 Signing rates for pharmacy authorization forms
Table A-5 Results of Self-Administered Questionnaire (SAQ) collection
Table A-6 Results of Diabetes Care Supplement (DCS) collection*
Table A-7 Calls to respondent information line
Table A-8 Files delivered during 2017
The Household Component of the Medical Expenditure
Panel Survey (MEPS-HC; Contract 290-2012-00005C, awarded September 8, 2012, and
Contract 290-2016-00004I, awarded July 1, 2016) is the central component of the
long-term research effort sponsored by the Agency for Healthcare Research and
Quality (AHRQ) to provide timely and accurate data on access to, use of, and
payments for health care services by the U.S. civilian non-institutionalized
population. The project has been in operation since 1996, each year producing a
series of annual estimates of health insurance coverage, health care
utilization, and health care expenditures. This report documents the principal
design, training, data collection, and data processing activities of the MEPS-HC
for survey year 2017.
Data are collected for the MEPS-HC through a series of
overlapping household panels. Each year a new panel is enrolled for a series of
five in-person interviews conducted over a two-and-a-half-year period, and each
year the panel completing its fifth interview ends its participation. This report
describes work performed for all of the panels active during calendar year 2017.
Design work conducted during the year consisted of updates and testing for the
instruments fielded during the Fall of 2016 and Spring of 2017. Data collection
operations in 2017 covered Panel 20 Round 5, Panel 21 Rounds 3 and 4, and
Panel 22 Rounds 1 and 2. Data processing activity focused on delivery of
full-year utilization and expenditure files for calendar year 2015.
The report touches lightly on procedures and
operations that remained unchanged from prior years, focusing primarily on
results of the 2017 operations and features of the project that were new,
changed, or enhanced for 2017. Tables in the body of the text highlight 2017
results, with limited comparison to prior years. A set of tables showing data
collection results over the history of the project is in Appendix A.
Chapter 1 of the report describes the 2017 sample and
activities associated with preparing the sample for fielding. Chapters 2 through
5 discuss activities associated with the data collection for 2017: updates to
the survey questionnaire and field procedures; field staff recruiting and
training; data collection operations and results; and home office support of
field activities. Chapter 6 describes data processing and data delivery
activities.
Each year a new, nationally representative sample for
the Medical Expenditure Panel Survey Household Component (MEPS-HC) is drawn from
among households responding to the previous year’s National Health Interview
Survey (NHIS). Households in a new panel participate in a series of 5 interviews
that collect data covering 2 full calendar years. Data from 2 panels—one
completing its first year in the study (Round 3) and one completing its second
year (Round 5)—are combined each year to produce a series of annual estimation
files.
The sample for the new MEPS panel, Panel 22, fielded
in 2017, was the first selected under the new sample design implemented for the
NHIS in 2016; it consisted of households that participated in the NHIS during
the first three quarters of 2016. They were drawn from NHIS Panels 1 and 4, the
NHIS panels designated for MEPS. This chapter describes the sample drawn for
2017 and steps taken to prepare the new sample for fielding.
Table 1-1 shows the starting sample sizes for all MEPS
panels through Panel 22 and the number of MEPS PSUs from which each panel was
drawn. The change in the number of PSUs for Panel 12 reflects the redesign of the
NHIS sample following the decennial census, while the number of PSUs for Panel 22
reflects the NHIS redesign for 2016. The MEPS sample units are ‘reporting units’
(RUs)—groups of related persons living at the address selected for the NHIS at
the time of the NHIS interview. Members of the NHIS households who move over the
course of the MEPS interviews are split into separate RUs, followed, and
interviewed at their new address.
MEPS data collection is conducted in two main fielding
periods each year. During the January-June period, Round 1 of the new panel and
Rounds 3 and 5 of the two continuing panels are fielded, with the panel in Round
5 retiring at mid-year. During the July-December period, Round 2 of the new
panel and Round 4 of the remaining continuing panel are fielded. Table 1-2
summarizes the combined workload for the January-June and July-December periods
from Spring 2010 through Fall 2017.
Table 1-1. Initial MEPS sample size and number of NHIS PSUs, all panels

Panel | Initial sample size (RUs)* | MEPS PSUs
1 | 10,799 | 195
2 | 6,461 | 195
3 | 5,410 | 195
4 | 7,103 | 100
5 | 5,533 | 100
6 | 11,026 | 195
7 | 8,339 | 195
8 | 8,706 | 195
9 | 8,939 | 195
10 | 8,748 | 195
11 | 9,654 | 195
12 | 7,467 | 183
13 | 9,939 | 183
14 | 9,899 | 183
15 | 8,968 | 183
16 | 10,417 | 183
17 | 9,931 | 183
18 | 9,950 | 183
19 | 9,970 | 183
20 | 10,854 | 183
21 | 9,851 | 183
22 | 9,835 | 168

* RU: Reporting unit.
Over the years shown in Table 1-2, the combined Spring
and Fall workload has ranged from a low of 37,555 in 2010 to a high of 41,135 in
2013; the combined workload for 2017 was 39,169 RUs. The interviewing workload
during the Spring field period, when three panels are active, is substantially
larger than during the Fall. In 2017, the Spring workload of 24,773 RUs was the
fourth highest of the 8 years shown.
Each new MEPS panel includes some oversampling of
population groups of particular analytic interest. Since 2010 (Panel 15), the
set of sample domains has included oversamples of Asians, Blacks, and Hispanics.
All households set aside in the NHIS for MEPS that have at least one household
member in any of these three categories (Asian, Black, or Hispanic) are included
in the MEPS sample with certainty. White and other race households have been
subsampled at varying rates across the years.
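A minimal sketch of this domain-based selection logic appears below. The household record layout is hypothetical, and the subsampling rates used are those for the Panel 22 White/Other domain shown later in Table 1-3 (77 percent for NHIS completes, 49 percent for partial completes); the actual selection procedure is more involved.

```python
# Sketch of MEPS domain-based sample selection. Households with any
# Asian, Black, or Hispanic member are retained with certainty;
# White/Other households are subsampled, with NHIS partial completes
# retained at a lower rate. Field names and rates are illustrative.
import random

rng = random.Random(2017)  # seeded for reproducibility

def select_for_meps(household):
    """Return True if an NHIS household is retained for the MEPS sample."""
    if household["any_minority_member"]:
        return True  # included with certainty
    # Panel 22 White/Other rates from Table 1-3 (assumed application)
    rate = 0.49 if household["nhis_partial_complete"] else 0.77
    return rng.random() < rate

hh = {"any_minority_member": False, "nhis_partial_complete": True}
print(select_for_meps(hh))
```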
Table 1-2. Data collection periods and starting RU-level sample sizes, spring 2010 through fall 2017

Spring/Fall Data Collection | Sample Size
January-June 2010 | 23,770
  Panel 13 Round 5 | 7,576
  Panel 14 Round 3 | 7,226
  Panel 15 Round 1 | 8,968
July-December 2010 | 13,785
  Panel 14 Round 4 | 6,974
  Panel 15 Round 2 | 6,811
January-June 2011 | 23,693
  Panel 14 Round 5 | 6,845
  Panel 15 Round 3 | 6,431
  Panel 16 Round 1 | 10,417
July-December 2011 | 14,802
  Panel 15 Round 4 | 6,254
  Panel 16 Round 2 | 8,548
January-June 2012 | 24,247
  Panel 15 Round 5 | 6,156
  Panel 16 Round 3 | 8,160
  Panel 17 Round 1 | 9,931
July-December 2012 | 16,161
  Panel 16 Round 4 | 8,048
  Panel 17 Round 2 | 8,113
January-June 2013 | 25,788
  Panel 16 Round 5 | 7,969
  Panel 17 Round 3 | 7,869
  Panel 18 Round 1 | 9,950
July-December 2013 | 15,347
  Panel 17 Round 4 | 7,656
  Panel 18 Round 2 | 7,691
January-June 2014 | 24,857
  Panel 17 Round 5 | 7,485
  Panel 18 Round 3 | 7,402
  Panel 19 Round 1 | 9,970
July-December 2014 | 14,665
  Panel 18 Round 4 | 7,203
  Panel 19 Round 2 | 7,462
January-June 2015 | 25,185
  Panel 18 Round 5 | 7,163
  Panel 19 Round 3 | 7,168
  Panel 20 Round 1 | 10,854
July-December 2015 | 15,247
  Panel 19 Round 4 | 6,946
  Panel 20 Round 2 | 8,301
January-June 2016 | 24,694
  Panel 19 Round 5 | 6,856
  Panel 20 Round 3 | 7,987
  Panel 21 Round 1 | 9,851
July-December 2016 | 15,390
  Panel 20 Round 4 | 7,729
  Panel 21 Round 2 | 7,661
January-June 2017 | 24,773
  Panel 20 Round 5 | 7,611
  Panel 21 Round 3 | 7,327
  Panel 22 Round 1 | 9,835
July-December 2017 | 14,396
  Panel 21 Round 4 | 7,025
  Panel 22 Round 2 | 7,371
The annual samples also include a percentage of
households classified as ‘partial completes’ in the NHIS, reflecting the fact
that less than a full NHIS interview was obtained. The partial completes are, as
a group, more difficult to complete in MEPS than the NHIS ‘full completes’ and
therefore receive special attention during data collection. Table 1-3 (column 2)
shows the percentage of NHIS interviews classified as “partially complete” in
Panels 3 through 22.
Beginning in 2011 (Panel 16), a new sample domain was
created by dividing what in prior years had been a single domain, the
‘White/Other’ domain, into two domains, one consisting of NHIS partial
completes, the other of NHIS completes. The partial completes were sampled at a
lower rate than the full completes in order to lessen the impact on the field
effort resulting from the difficulty of gaining the cooperation of these
households. The last two columns in Table 1-3 show the subsampling rates for the
two groups. The partial completes in the White/Other domain were subsampled
at rates ranging from 40 percent in Panel 17 to 53 percent in Panel 20.
Table 1-3. Percentage of NHIS households with partially completed interviews in panels 3 to 22

Panel | Percentage with partially completed interviews | Subsampling rate for NHIS completes in “White, other” domain | Subsampling rate for partial completes in “White, other” domain
3 | 10 | |
4 | 21 | |
5 | 24 | |
6 | 22 | |
7 | 17 | |
8 | 20 | |
9 | 19 | |
10 | 16 | |
11 | 23 | |
12 | 19 | |
13 | 25 | |
14 | 26 | |
15 | 21 | |
16 | 25 | 79 | 46
17 | 19 | 51 | 40
18 | 22 | 63 | 43
19 | 18 | 66 | 42
20 | 19 | 84 | 53
21 | 22 | 81 | 49
22 | 15 | 77 | 49
* The figures in the second column of the table are
the proportion of partial completes in the total delivered sample, after
subsampling. The figures in the third and fourth columns are subsampling rates
applied to the two White/Other subdomains in Panels 16 through 22.
Table 1-4 shows the distribution of the Panel 22
sample by sample domain.
Table 1-4. Distribution of panel 22 sample by sample domain

Sample domain | Number | Percent
Asian | 687 | 7.0
Black | 1,438 | 14.6
Hispanic | 1,877 | 19.1
White, other | 5,833 | 59.3
  NHIS complete | 4,959 | 50.4
  NHIS partial complete | 874 | 8.9
Total | 9,835 |
The 2017 MEPS sample was received from AHRQ and NCHS
in two deliveries. The first delivery, containing households sampled from the
first two quarters of the 2016 NHIS, was received on September 19, 2016 with a
re-delivery on September 23 to correct a weighting issue. Households selected
from the third quarter of the NHIS were delivered on December 2, 2016.
The September delivery of the first two-thirds of the
new sample is instrumental to the project’s schedule for launching interviewing
each year in early January. The partial file gives insight into the demographic
and geographic distribution of the households in the new panel. This
information, combined with information on the older panels continuing in the
new year, guides project decisions on the number and location of new
interviewers to recruit. With the change in NHIS sample design, the September
receipt of the first two-thirds of the new sample was particularly important. In
addition to the standard review of the sample files, Westat also began
organizing the new sample, integrating it with the structure of the existing
sample from the old design. As a result of this initial work, PSUs were mapped
and MEPS supervisory regions were reconfigured to accommodate the differences in
sample sizes between the old and new NHIS designs. Of the 168 PSUs in the
combined 2017 sample (Panel 20 Round 5, Panel 21 Round 3, and Panel 22 Round 1),
120 are continuing as part of the NHIS sample redesign, 29 will leave with the
end of Panel 21, and 19 new PSUs came into MEPS with the 2017 panel.
Upon receipt of the first portion of the 2017 sample,
project staff also reviewed the NHIS sample file formats to identify any new
variables or values and to make any necessary changes to the project programs
that use the sample file information. Following this initial review, staff
proceeded with the standard processing through which the NHIS households are
reconfigured to conform to MEPS reporting unit definitions and prepared the
files needed for advance mailouts and interviewer assignments. The early sample
delivery also allows time for checking and updating NHIS addresses to improve
the quality of the initial mailouts and to identify households that have moved
since the NHIS interview.
Each year, the project makes a number of changes to
the computer-assisted (CAPI) instrument used to collect MEPS data and to the
field procedures followed by the interviewers who collect the data.
With the concurrent work on the CAPI modernization
task as part of the technology upgrade, and its planned implementation in 2018,
changes to the 2017 instrument were kept to a minimum. Any change
made to the current instrument must also be folded into the specification,
programming, and testing routine for the modernization instrument. Changes to
sections that had already been programmed were risky and required an additional
testing iteration on an already challenging schedule.
The only revision to the 2017 CAPI instrument was a
modification so that only new RU members or persons reporting cancer for the
first time were prompted to complete a Cancer Self-Administered Questionnaire in
Round 3. All adult RU members who reported having cancer in Round 1 were
asked to complete the CSAQ, following the pattern established in the prior year.
Supplements to the CAPI Instrument
Table 2-1 shows the supplements in the CAPI instrument
for the rounds administered in calendar year 2017. The pattern for 2017 remained
unchanged from the prior year.
The SAQ, Your Health and Health Opinions, was
redesigned for administration in the fall rounds of 2017. The Medicare Health
Outcomes Survey (HOS) replaced the SF-12v2 Health Survey as the source of
the 2017 SAQ questions. In addition, several questions had wording changes or
response category changes, such as reversed scales or an additional value in a
scale. Replacing the open text field for respondent/proxy completion with coded
relationship categories eliminated the need for manual review and upcoding of
text fields.
Table 2-1. Supplements to the CAPI core questionnaire (including hard-copy materials) for 2017

Supplement | Round 1 | Round 2 | Round 3 | Round 4 | Round 5
Child Health | | X | | X |
Quality Supplement | | | X | | X
Preventive Care | | | X | | X
Access to Care | | X | | X |
Income | | | X | | X
Assets | | | | | X
Medical Provider Authorization Forms | X | X | X | X | X
Pharmacy Authorization Forms | | X | X | X | X
Your Health and Health Opinions (SAQ) | | X | Round 2 follow up | X | Round 4 follow up
Cancer SAQ | X | | X | |
Diabetes Care Supplement (SAQ) | | | X | |
Institutional History Form | | X | X | X | X
Priority Condition Enumeration | X | New RU members only | X | New RU members only | X
The testing for the Spring 2017 (Rounds 1/3/5)
application was completed in November 2016. Testing of the Fall 2017 instrument
was completed in May 2017.
All instrument testing followed a prescribed,
multi-stage path, with specific testing tasks coordinated across design and
systems groups. Testing began early in the development cycle as programmers
tested their work on a flow basis to ensure that the modifications to the
instruments were in accord with the specifications developed by design staff.
Once programming was completed, design and systems staff at Westat and project
staff from AHRQ tested the full CAPI instrument during alpha and beta test
periods. Required changes identified during testing were implemented before the
CAPI instruments were ‘frozen’, approximately 6 to 8 weeks before the Spring and
Fall data collection cycles began.
Each cycle of testing included components focusing on
specific aspects of the instrument and supporting field management system:
verification of the application against the instrument specifications; testing a
variety of training scenarios to simulate data collection situations; overall
usability of the instrument and supporting systems; focused testing on specific
features such as help screen functionality and medical provider directory
searches; historical testing, in which data entered into the revised application
are compared to previously completed cases to ensure the data are captured and
stored as intended; and integration testing of the CAPI application in the
context of the full set of management and support systems needed during active
data collection.
Additional testing components, including enhanced
integration testing and ‘live’ testing, were conducted. The enhanced integration
testing allows project staff to check electronic Face Sheet information, test
the Interviewer Assignment Sheet, and make entries into the electronic record of
calls and refusal evaluation form. The live testing component uses information
derived from actual cases to verify that all management information on the
laptop is being brought forward correctly from previous rounds. Using actual
case data also allows staff to check uncommon paths through the MEPS instrument
so that specific changes to the questionnaire can be thoroughly tested.
The manuals and the materials for the 2017 field
effort were updated as needed to reflect changes to the questionnaire and
management systems. Below is a description of the key changes to the materials
and procedures.
Instructional Manuals
The field interviewer manual was updated to address
changes in field procedures and updates to the Interviewer Management System
(IMS).
Electronic Materials
The electronic Face Sheet provides interviewers with
information needed to contact their assigned households and familiarize
themselves with the composition of the household and relevant details about
their prior history with the survey in preparation for coming interviews. The
Interviewer Assignment Sheet supports follow-up for Authorization Forms and SAQs
not completed at the time of the interview.
Advance Contact and Other Case Materials
All respondent letters, monthly planners, and
self-administered questionnaires were updated with the appropriate year
references, and the Income Job Aid and MEPS statistical charts were updated with
2014 data. Respondent letters, the community authorization letter, authorization
form booklet, and the certificate of appreciation were updated with the
signature of the new director for AHRQ.
Recruiting of new field interviewers for 2017 began in
November 2016, a month later than usual. This delay in the recruiting start-up
was a result of changes in the MEPS sample frame for Panel 22. The 2017 MEPS
sample was drawn from the NHIS sample, which was redesigned in 2016, resulting in
a significant change in the allocation of MEPS PSUs: 19 new PSUs came into the
MEPS sample and 29 PSUs were identified as leaving the MEPS sample following the
end of Panel 21. Recruiting needs were established by estimating the full
workload for the new panel and adding it to the existing workloads in Panels 20
and 21. The projected total caseload in each PSU was used to estimate the number
of interviewers needed. This number was compared to the number of active
interviewers on staff in each PSU to determine the PSU-level staffing
recommendations. Based on this assessment, Westat planned to recruit 104 new
interviewers, with a goal of having about 450 interviewers actively working
during the Spring 2017 data collection period.
For the 2017 recruiting, MEPS used the Westat
web-based recruitment management system through which applicants apply online. A
total of 93 interviewers attended training and 87 completed the program. With
the addition of these new trainees, the project began 2017 data collection with
a total of 446 interviewers. Of this total, 18 new interviewers and 24
experienced interviewers were lost to attrition during the Spring interviewing
rounds. An additional 10 new interviewers and 44 experienced interviewers were
lost during the Fall rounds. Total attrition for the year was 21.5 percent,
higher than the attrition rates in 2014-2016. This increase in the total
attrition for 2017 was attributable, in large part, to the number of existing
PSUs leaving the sample after Panel 21 Round 5.
The breakdown of interviewer attrition is shown in
Tables 3-1 and 3-2. Table 3-1 shows the overall attrition rate at the end of the
Spring 2017 data collection period, from 2014 through 2017. Note that the total
Spring 2017 attrition rate was 9.4 percent, the lowest in many years. This
reflects a very successful recruiting effort with more reliable hires and close
management of data collection by the field supervisors that aided in retaining
staff.
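As a check on how the figures in Tables 3-1 and 3-2 fit together, the sketch below reproduces the Spring 2017 row from the counts given in the text (18 new and 24 experienced interviewers lost). The denominator of 87 new interviewers comes from the training completion count above; the experienced-staff denominator is inferred from the 446-interviewer total and is not a published figure.

```python
# Sketch reproducing the Spring 2017 attrition rates in Table 3-1.
# 87 new interviewers completed training; the project began 2017 with
# 446 interviewers total, implying 359 experienced staff (an inference).
new_staff, experienced_staff = 87, 446 - 87
new_lost, experienced_lost = 18, 24

def attrition_pct(lost, staff):
    """Attrition rate as a percentage, rounded as in the table."""
    return round(100 * lost / staff, 1)

print(attrition_pct(new_lost, new_staff))                  # 20.7
print(attrition_pct(experienced_lost, experienced_staff))  # 6.7
print(attrition_pct(new_lost + experienced_lost,
                    new_staff + experienced_staff))        # 9.4
```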
Table 3-1. Spring attrition rate among new and experienced interviewers, 2013-2017

Data collection period | New interviewers lost (#) | New interviewers lost (%) | Experienced interviewers lost (#) | Experienced interviewers lost (%) | Total interviewers lost (#) | Total interviewers lost (%)
Spring 2013 | 21 | 21.9% | 46 | 12.0% | 67 | 14.8%
Spring 2014 | 26 | 17.8% | 30 | 9.0% | 56 | 11.6%
Spring 2015 | 28 | 34.1% | 35 | 9.0% | 63 | 13.5%
Spring 2016 | 20 | 26.7% | 28 | 7.7% | 48 | 10.9%
Spring 2017 | 18 | 20.7% | 24 | 6.7% | 42 | 9.4%
Table 3-2 shows the overall attrition rate during the
Fall data collection period. Note that the total Fall 2017 attrition rate was
13.4 percent, a rate that is comparable to that experienced in 2013 when the
MEPS sample changed in response to the 2010 Census. The annual attrition rate
for 2017 increased to 21.5 percent and can be attributed to the loss of
experienced interviewers in PSUs that are leaving the MEPS sample.
Table 3-2. Fall attrition rate among new and experienced interviewers, 2013-2017

Data collection period | New interviewers lost (#) | New interviewers lost (%) | Experienced interviewers lost (#) | Experienced interviewers lost (%) | Total interviewers lost (#) | Total interviewers lost (%)
Fall 2013 | 9 | 15.8% | 46 | 13.6% | 55 | 13.9%
Fall 2014 | 16 | 13.3% | 22 | 7.2% | 38 | 8.9%
Fall 2015 | 6 | 11.1% | 28 | 7.9% | 34 | 8.3%
Fall 2016 | 6 | 11.1% | 24 | 7.1% | 30 | 7.7%
Fall 2017 | 10 | 14.5% | 44 | 13.1% | 54 | 13.4%
The key interviewer training efforts in 2017 included
(1) the annual in-person training for newly hired interviewers, and (2) distance
learning trainings for both new and experienced interviewers. One goal in late
2016 and early 2017 was to continue to provide greater continuity between the
training for new hires and that for experienced staff. The data quality
refresher training, developed in 2015, re-tooled content from the 2014 in-person
refresher training on data quality and packaged it to be administered remotely
to interviewers completing their first year working on MEPS. It employs a hybrid
approach, combining self-paced e-Learning modules accessed through Westat’s
Learning Management System (LMS) with web conferences moderated by field
management staff.
Table 3-3. Components of the 2017 training program

Training effort | In person: 2017 new | In person: experienced | Home study/remote: 2017 new | Home study/remote: experienced
December 2016
  Home study for 2017 Spring data collection for experienced interviewers | | | | X
  Data Quality refresher training for MEPS Field Interviewers Class of 2016 | | | | X
January-February 2017
  Data Quality web conferences for MEPS Field Interviewers Class of 2016 (part of the Data Quality home study beginning in December 2016) | | | | X
  Home study for new interviewers as introduction to in-person training | | | X |
  New interviewer training | X | | |
  Post-classroom home study for new interviewers (part 1) | | | X |
March 2017
  Post-classroom home study for new interviewers (part 2) | | | X |
April 2017
  Newsletter | | | X |
June 2017
  Round 2/4 home study | | | X | X
September 2017
  Newsletter | | | X | X
October-December 2017
  Additional home studies for experienced interviewers | | | X | X
December 2017
  Train-the-Trainer Workshop for spring 2018 training team members | | | |
Ongoing each month
  Data Quality Coaching, FS led | | | X | X
The overall structure for training new interviewers in
2017 followed the pattern established in prior years. It began with home study,
followed by an in-person training conducted in Los Angeles, California between
January 26 and February 2, and completion of a two-part post-classroom home
study component.
The pre-classroom home study package included a
project laptop and an interactive, self-paced workbook with exercises and
online modules administered through Westat’s Learning Management System (LMS).
Originally designed in 2009 to incorporate materials previously covered in the
longer, 11-day in-person training, the home study had a completion time of about
20 hours. The package was restructured for 2016 to reduce administration time to
an estimated 16 hours and eliminate the need for early distribution of the
Interviewer Field Procedures Manual. The content was consolidated and contained
a series of 17 self-contained job aids and ‘quick start’ guides that could be
easily carried into the field for reference. Newly hired interviewers watched
several online videos and completed quizzes through Westat’s LMS, which
generated regular reports allowing home office and field management staff to
monitor the completion of each trainee’s home study. Only minor revisions were
made to this home study curriculum for 2017. These changes included a new
Checklist for Home Study Completion to help trainees track their completion of
each task, a new Tying It All Together module to assess learning transfer,
revised content about the sample design, and updates to several job aids to
reflect changes in project software and login procedures. New hires received
their home study package between January 12 and January 23, 2017. The shipment
schedule was timed to allow trainees adequate time to complete the package
before the in-person training, but not so early that their introduction to
important study concepts and project terminology would degrade before the
in-person training.
The in-person training component maintained the
emphasis on interviewer behaviors and interviewing techniques that facilitate
complete and accurate reporting. In all mocks and mini-mocks, trainers were
instructed to reinforce good interviewing behaviors (e.g., reading questions
verbatim, training respondents to use records to aid recall, actively engaging
respondents in the use of show cards, and using active listening and probing
skills) by calling attention to instances in which interviewers demonstrated
such behaviors. To enhance trainee awareness of behaviors that affect data
quality, dyad scripts included instructions to take a “time- out” at certain
items in the interview to highlight relevant data quality issues.
The in-person training closely followed the agenda
implemented in Spring 2016 with a few modifications. Several modules were
revised to reflect changes in the Interview Management System (IMS) associated
with the implementation of BFOS6. The administration of some exercises was
enhanced by completing the first question or two as a group to clarify
instructions prior to independent practice by each trainee or small groups of
trainees. A few modules were revised to reduce redundancy, creating time for the
addition of two new modules. On day 1, a new module reinforced important MEPS
concepts introduced in the pre-classroom home study, such as the types of
Reporting Units (RUs), definitions for the Reference Person and Respondent, the
MEPS study design, Key versus Non-Key RU members, and the MEPS event types. On
day 7, a new Tying It All Together module featured an interactive review focused
on the materials presented on the first six days of the training.
The last half day of training used an approach first
introduced in 2016: a breakout session in which new hires met with their
regional field supervisor and/or field manager to review expectations and plan
how to organize their materials once home, with the goal of being able to
quickly begin contacting their cases. These modules focused on the
logistics of managing case assignments, efficiently working cases, and reviewing
case materials in preparation for each interview.
With the continued use of the Cancer Self-Administered
Questionnaire (CSAQ) in Panel 21 Round 3 and Panel 22 Round 1, dyads were
adjusted to ensure that trainees got exposure to family members who were
prompted to complete the questionnaire.
For the 7 1/2 days of project-specific training, each
trainee was assigned to one of six training communities staffed by a primary and
support trainer, as well as two or more classroom runners. In addition to
lectures on study procedures and questionnaire content, trainees completed mock
interviews and dyad role plays using the Round 1, Round 3, and Round 5
questionnaires. The mocks and dyads included training on the use of electronic
case materials and completion of the electronic Interviewer Assignment Sheet
(IAS). Multiple ‘mini-mock’ interviews—interviews with data pre-entered to allow
trainees to directly access the specific section to be addressed in a given
session—allowed for in-depth sessions on the more complex sections of the CAPI
questionnaire such as household reenumeration and utilization and charge payment
without necessitating the completion of a full mock interview or dyad practice.
Trainees received instruction and practice in use of the Interviewer Management
System (IMS) and ways of introducing the survey and answering respondent
questions. To ensure training participants had access to additional coaching and
practice, four one-hour structured evening practice labs were scheduled from
6:30-7:30 PM on days 2, 3, 5, and 6 of training. The last of these four labs
focused on refusal conversion and aversion techniques. Two additional evening
help labs were held from 5:45 PM to 6:30 PM on days 1 and 4 of training to
assist trainees with accessing their electronic timesheet to allow for the
real-time reporting of time and expenses that is now a Westat corporate
requirement. Eighty-seven of the 93 new hires successfully completed training.
In 2017, bilingual trainees who had been certified by
Westat for their proficiency in Spanish were trained alongside other new
interviewers from their home region as done in prior years. However, for several
of the dyads administered during main training, bilingual new hires were paired
with other bilingual trainees so that they could conduct these practice
interviews in Spanish. An additional half day of bilingual training was held
following the conclusion of regular project training. This session focused on
procedures and techniques that are of particular importance to interviewing
Spanish-speaking households including practicing refusal aversion/conversion
techniques in Spanish. A total of 15 interviewers successfully completed 2017
bilingual training.
The post-classroom home study was administered in two
parts. New interviewers left in-person training with the first component of the
home study. It contained practice on searching the provider directory, an
exercise on secure messaging (BSM), and an LMS video on ethics. Additional
content added for 2017 included tips for improving data quality from experienced
interviewers, as well as instruction and exercises on locating techniques and
working with proxy respondents. The locating content was incorporated into the
in-person training prior to 2016 and was included in the second parts of the
post-classroom home study for 2016 and 2017. New interviewers were required to
complete this training before beginning their field work. The second component
of the post-classroom home study was sent to new interviewers in mid-March. It
focused on less common interviewing situations including case management of
related RU members who are identified as being institutionalized and handling
NHIS students. Several interactive modules on repeat co-pays and tools and
techniques applied to the data quality continuum were administered through
Westat’s Learning Management System (LMS). A quiz with immediate feedback
functionality was also administered through the LMS. Interviewers were
instructed to complete this second home study component by the end of March.
Daily reports generated by the LMS allowed home office training staff, field
managers, and field supervisors to monitor interviewer progress.
The 2017 experienced interviewer home study for the
Spring panels and rounds followed the format of prior years. The self-paced home
study addressed procedural and questionnaire updates discussed in Chapter 2. The
home study also included information about the new sample and associated changes
to the field management structure as well as information about administering the
Cancer SAQ. Other topics included enhancements to the IMS for sorting cases and
tracking completed cases shipped to Westat’s home office. Finally, the home
study addressed interviewer production expectations and reviewed key aspects of
the data quality initiative launched in 2013. The home study package was sent to
all experienced field staff in December 2016, with completion required by
January 10, 2017, the start date for Panel 20 Round 5.
The home study for the Fall rounds of data collection
in 2017 also followed established formats. The two-hour self-paced program
contained an instructional memo, example materials, and quiz. Data quality
topics included the importance of collection and follow up procedures related to
all self-administered questionnaires and strategies for efficiently working
cases. Interviewers attending the 2017 in-person training were also required to
complete a mock interview with their supervisor, field manager, or designated
senior interviewer before beginning the fall rounds of data collection.
Another aspect of the continuing education program for
late 2016 and early 2017 built on the education model developed in 2015 to
freshen the data quality skills of experienced interviewers. This model
re-tooled several of the videos and other content administered during the
January 2014 in-person training for all experienced interviewers. The 2016-2017
program targeted the group of new interviewers who attended the in-person
training for new hires in January 2016. The 48 hires still active at the end of
December 2016 completed on-line data quality training modules administered
through Westat’s LMS and participated in one of several follow up discussions
over WebEx during January 2017. During the sessions, participants reviewed key
concepts related to data quality, watched video clips highlighting examples of
issues affecting data quality, and took part in group discussions moderated by
field management staff.
Additional Distance Learning Opportunities for
Experienced Field Staff. In preparation for the rollout of the modernized
CAPI instrument for spring 2018 data collection, a number of ad hoc trainings
were provided to field staff. During the fall of 2017, experienced field
interviewers, as well as their supervisors and field managers, viewed a brief
promotional video to build excitement about upcoming changes to the CAPI
instrument, and then completed two home study programs in November and December
2017. The home study modules introduced major enhancements to the CAPI
instrument, described the benefits of these changes to MEPS staff and study
participants, demonstrated new administration techniques, and provided hands-on
practice administering select portions of the MEPS instrument. Field
interviewers were also required to complete two entire practice interviews
shortly before attending the in-person training in early January 2018.
Home Office/Field Manager/Supervisor Train-the-Trainer
Workshop. Home office field operations and design staff conducted a
four-day, in-person workshop at the Westat home office from December 5-8, 2017.
This workshop was attended by a subset of MEPS home office staff, field
directors, field managers, field supervisors, and field interviewers who would
serve as trainers or in training support roles at the spring 2018 in-person
trainings for new and experienced interviewers. The purpose of the workshop was
to prepare staff for their training roles by increasing their familiarity with
the upgraded CAPI instrument and new training materials. Home office staff
presented a selection of the training modules developed for the spring 2018
interviewer trainings and obtained feedback on the draft materials. These
modules focused on administration of portions of the CAPI instrument with the
most significant changes from the historical CAPI WVS instrument. In addition,
the attending staff engaged in individual, hands-on practice using the updated
CAPI instrument while home office staff were available to answer their
questions.
This chapter describes the MEPS-HC data collection
operations and provides selected results for the five rounds of MEPS-HC
interviewing conducted in 2017. Selected comparisons to results of prior years
are also presented. Tables showing results for all years of the study are
provided in Appendix A.
MEPS data collection management relies on a set of
interrelated systems and procedures designed to accomplish three goals:
efficiency, data quality, and cost containment. The systems include the
Basic Field Operating System (BFOS), which facilitates case management
through case assignment, case status and hours reporting, data quality
reporting, and tracking of interviewer efficiency. Related systems include the
Computer Assisted Recorded Interview (CARI) system and the MEPS supervisor
dashboard, which was in development in 2017. The CARI system allows for review
of recordings of selected interview items to assist in the assessment of
interviewer performance and question administration. The MEPS supervisor
dashboard provides views into daily and weekly management tasks related to the
tracking of hours per complete, key alerts from casework in the field, the
management of weekly production goals, and a number of metrics designed to
facilitate weekly field calls with interviewers regarding hours worked,
production, and interview quality. These tools, along with the implementation
of models designed to identify cases with a higher propensity for completion,
and on-hold procedures designed to prevent the overwork of cases in the field,
form a comprehensive framework for the management of MEPS data collection.
In most respects, the procedures followed in the 2017
data collection mirrored those of prior years, including the administration of a
Cancer SAQ to Round 1 and Round 3 households. As with other hard-copy
instruments fielded in previous years, the Cancer SAQ required adjustments
to trigger the interviewer’s request for the SAQ in appropriate households and
to record the results of the request and, where needed, of follow-up efforts. A
new version of the BFOS system was developed and implemented in 2017 to
modernize the software and allow for future updates and expansion. The new BFOS
system maintained the core functionality of the previous management system but
also included a number of improvements to report content, formatting, and
displays for supervisors and interviewers. These updates allowed for better
integration of the Computer Assisted Recorded Interview (CARI) system and the
supervisor dashboard into the management toolkit.
Other aspects of the data collection protocols
remained largely unchanged in 2017. As in prior years, respondent contact
materials provided respondents with the link to the MEPS website (www.meps.ahrq.gov),
a toll-free number to Alex Scott at Westat, and the link to the Westat website (www.westat.com).
Calls received on the Alex Scott line were logged into the call tracking
system and the appropriate supervisor notified so that he/she could take the
proper course of action.
The advance contact calls to Panel 22 Round 1
households were made by a subset of the experienced MEPS interviewers.
For Round 1 households, interviewers were instructed,
with few exceptions, to make initial contact with the household in person. For
later rounds, interviewers were allowed to make initial contacts to set
appointments by telephone, so long as the household had been cooperative in
prior rounds.
Procedures for collecting the medical and pharmacy
authorization forms for the Medical Provider Component and self-administered
questionnaires remained as in recent prior panels.
MEPS field managers, field directors, and the task
leader for field operations continued to manage the field data collection in
collaboration with the field supervisors, reinforcing the importance of
balancing data quality with production and cost goals across regions. Field
staff refer to this collaborative effort as the “No Region Left Behind”
approach.
Throughout the year Westat continued to review data
for all respondents reported to have been institutionalized in order to identify
any individuals who might have been inappropriately classified and, as a result,
treated as out of scope for MEPS data collection.
Data Collection Schedule. The sequence for beginning the Spring rounds of data collection, most recently adjusted in 2014,
was maintained for 2017. Data collection began with Round 5, followed by Round
3, and then by Round 1. Because the reference period for Round 5 ends December
31—before the Round 5 field period actually begins—the earlier start for Round 5
was intended to ‘front-load’ as much of the work as possible and to reduce the
recall period for health care events to be reported in the final Round 5
interview. For the Round 1 respondents, the later starting date allowed several
additional weeks of elapsed time in which respondents could experience health
care events to report in their Round 1 interview, with these additional events
giving them a more realistic understanding of what to expect in the subsequent
rounds of the study.
The field period dates for the five rounds of data
collection conducted in 2017 are shown in Table 4-1.
Table 4-1. Data collection schedule and number of weeks per round of data collection, 2017

Round | Dates | No. of weeks in round
1 | January 24 – July 14 | 24
2 | July 28 – December 7 | 19
3 | January 17 – June 15 | 21
4 | July 5 – November 30 | 21
5 | January 10 – May 15 | 18
Data Quality Initiative. The ongoing MEPS data
quality initiative, begun during 2013, continued through 2017 with ongoing
revisions to the field monitoring procedures to enhance the feedback provided to
the interviewers. The initiative included periodic reinforcement of key training
points relating to data quality and supervisor training and reinforcement of the
data quality reports to better enable supervisors to monitor interviewer
performance on key quality indicators.
Data Quality (DQ) Monitoring. The MEPS data
quality field monitoring system and procedures allowed supervisors and field
managers to identify interviewers whose work deviated from quality standards and
who might need additional coaching on methods for getting respondents to more
completely report their health care events. While focusing on the occurrence of
multiple indicators, the reports allowed the supervisor to identify patterns of
interviewer behavior such as shorter than expected interviews, interviews in
which records were consistently not used, and low CARI consent rates. CARI
review was further integrated into weekly monitoring activities with supervisors
listening to portions of over 1,000 interviews per field period. These reviews
were used to reinforce positive interviewing behaviors and techniques, and
listening to CARI has given field supervisors direct exposure to interviewing
behaviors that need to be addressed. In some cases, CARI recording results were
such that interviewers were instructed to stop working until they could receive
some re-training, including administering a practice interview to their field
supervisor. Many of the current monitoring reports were put in place following a
refresher training designed to improve data quality. The reports allowed
targeted remote retraining to maintain the data quality gains achieved. The
tools have proven effective in helping the training staff slow the erosion
of these skills, although additional remote and in-person training
will be needed to maintain high-quality data in the future.
The reports were produced weekly during the Spring and
Fall data collection periods and used the productivity thresholds established in
2014. Interviewers who had completed 5 or more Round 1 cases since their last
report, and interviewers who had completed 10 or more Round 2-5 cases since
their last report, were listed on the weekly summary. Patterns in behavior over
time were documented and systematically discussed with interviewers as part of a
retraining effort.
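A minimal sketch of the weekly listing rule just described, assuming a hypothetical record layout for interviewer production counts:

```python
# Sketch of the weekly data quality summary listing rule: interviewers
# with 5+ Round 1 completes, or 10+ Round 2-5 completes, since their
# last report are included. Field names are illustrative assumptions.
def on_weekly_summary(interviewer):
    return (interviewer["round1_completes_since_last_report"] >= 5
            or interviewer["round2_5_completes_since_last_report"] >= 10)

staff = [
    {"id": "FI-01", "round1_completes_since_last_report": 6,
     "round2_5_completes_since_last_report": 2},
    {"id": "FI-02", "round1_completes_since_last_report": 1,
     "round2_5_completes_since_last_report": 4},
]
print([fi["id"] for fi in staff if on_weekly_summary(fi)])  # ['FI-01']
```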
Case On-hold for Work Plan Procedures. The project
implemented a model designed to detect cases at risk for overwork or in need of
review to determine the viability of a case compared to other pending cases.
At-risk cases are automatically placed in an on-hold status for supervisor and
field manager review. Only cases with a supervisor drafted and field manager
approved work plan tailored to achieve a successful interview are removed from
the on-hold status and assigned back to an interviewer for additional targeted
completion attempts. At various points in the round, cases with an on-hold
status are reassessed in the context of remaining pending cases to determine if
any should be released to the field for further work. This practice is designed
to produce completes with fewer attempts and more efficient use of resources for
refusal conversion and locating activities. Poor quality attempts are avoided
and field effort is reduced. The reintroduction of cases with a proper work plan
is designed to allow for a high rate of response by tailoring work for cases
before they are overworked or removed from the field as non-viable.
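One way the on-hold flow described above could be represented is as a small state machine, sketched below. The states, the at-risk detection rule, and the threshold are illustrative assumptions, not the project's actual implementation.

```python
# Sketch of the case on-hold flow: at-risk cases go on hold, and only
# cases with a supervisor-drafted, field-manager-approved work plan are
# released back to an interviewer. The at-risk rule here (too many
# contact attempts) is a stand-in for the project's actual model.
from enum import Enum

class CaseStatus(Enum):
    ACTIVE = "active"
    ON_HOLD = "on_hold"
    RELEASED = "released"

def update_status(case, attempt_threshold=10):
    if case["status"] is CaseStatus.ACTIVE and case["attempts"] >= attempt_threshold:
        case["status"] = CaseStatus.ON_HOLD       # flagged as at risk of overwork
    elif (case["status"] is CaseStatus.ON_HOLD
          and case["work_plan_drafted"] and case["work_plan_approved"]):
        case["status"] = CaseStatus.RELEASED      # back to the field with a plan
    return case["status"]

case = {"status": CaseStatus.ACTIVE, "attempts": 11,
        "work_plan_drafted": False, "work_plan_approved": False}
print(update_status(case))  # CaseStatus.ON_HOLD
```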
Case Potential Listing. The project refined a
model estimating the likelihood of obtaining a completed interview from a given
case (“propensity to complete”) relative to other pending cases in a region. Timing for
implementation of the model during the field period was adjusted to better fit
operational schedules by panel and round. The model continued to prove useful
for identifying cases with higher potential and enabling supervisors to direct
interviewer effort during a field period toward these cases and away from cases
not likely to result in completion. The model is designed to identify cases with
a high likelihood of completion at that point in the field period relative to
other pending cases. The model is dynamic and is updated weekly based on the
specific conditions for pending cases at that time.
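The report does not specify the model's form; a common approach for this kind of weekly "propensity to complete" scoring is a logistic regression refit on recent pending-case outcomes, sketched below with hypothetical predictors and data.

```python
# Sketch of a weekly propensity-to-complete scoring step using logistic
# regression (scikit-learn). The predictors, training data, and 0.5
# threshold are assumptions; the actual MEPS model is not described.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical rows from earlier weeks: [contact attempts so far,
# prior-round complete (0/1), weeks remaining in field period];
# y = case completed during the following week (0/1).
X = np.array([[2, 1, 10], [8, 0, 4], [1, 1, 12], [6, 0, 6],
              [3, 1, 9], [9, 0, 3], [2, 0, 11], [7, 1, 5]])
y = np.array([1, 0, 1, 0, 1, 0, 1, 0])

model = LogisticRegression().fit(X, y)

# Score this week's pending cases; flag high-potential ones so
# supervisors can direct interviewer effort toward them.
pending = np.array([[1, 1, 8], [9, 0, 2]])
scores = model.predict_proba(pending)[:, 1]
high_potential = scores >= 0.5  # illustrative threshold
print(scores.round(2), high_potential)
```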
Information from this model is integrated into BFOS
(the system used for case management), providing propensity to complete as part
of a comprehensive view of a case for a given week. Supervisors were to instruct
interviewers—in the absence of other field information that would dictate
otherwise—to attempt these cases during the next production week. Results in
Table 4-2 illustrate the relative success of the model for identifying cases
likely to result in completion or an appointment during the next week of the
field period. The results indicate a higher average rate of completion and
appointments for cases designated as high potential as compared to other pending
cases being worked by the field.
Table 4-2. Average weekly completion and success rates for high potential and remainder (non-high potential) cases during high potential phase, spring 2017

* Note: Success = completion or appointment during the following week.
Institutional Population Case Review. Home office
staff continued to review cases in which one or more RU members were reported as
entering an institution. Weekly reports were generated to identify the specific
RU members in multi-person households who had been coded as institutionalized.
The home office team reviews the living arrangements reported for all these
persons and researches each reported institution to assess its status as a
provider of long-term care, and thus the eligibility of the persons in the
institution for MEPS data collection. Based on the review, feedback is provided
to the field management staff and cases incorrectly coded as institutionalized
are re-fielded for an interview. During 2017, 101 cases were reviewed; of these,
8 (8.0 percent) were determined to have been coded incorrectly and required
refielding.
Table 4-3 provides an overview of the data collection
results for Panels 14 through 22, showing sample sizes, average interviewer
hours per completed interview, and response rates. Table 4-4 shows the final
response rates a second time, reformatted to facilitate by-round comparisons
across panels and years.
Of the data collection rounds conducted in 2017, the
response rate for Round 2 showed a slight increase over the rate for the
corresponding round of 2016, and Round 4 was on par with 2016. However, response
rates for Rounds 1 and 3 showed decreases from 2016, with Round 1 exhibiting a
1.8 percentage point decrease from the high-water mark of 2016 and falling 0.9
percentage points below the more typical Round 1 rate of 2015. An estimated 30
percent of the 1.8 percentage point decrease in Round 1 is attributable to a
shift in the distribution of cases across the sample domains due to the changes
in NHIS sample composition in Panel 22. The lack of a minority oversample
reduced the proportion of households with a higher probability of responding as
compared to the previous long-standing sample design.
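The columns of Table 4-3 relate arithmetically as sketched below, using the Panel 14 Round 1 row as a worked example: the net sample adds split (mover) and student cases to the original sample and removes out-of-scope cases, and the response rate is completes divided by net sample. (Panel 16 rows are weighted for a nonresponse subsampling procedure and will not reproduce exactly.)

```python
# Worked example of the Table 4-3 accounting, Panel 14 Round 1 row.
original, splits, students, out_of_scope = 9_899, 394, 74, 140
completes = 7_650

net_sample = original + splits + students - out_of_scope
response_rate = 100 * completes / net_sample

print(net_sample)               # 10227
print(round(response_rate, 1))  # 74.8
```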
Table 4-3. MEPS HC data collection results, panels 14 through 22

Panel/round | Original sample | Split cases (movers) | Student cases | Out-of-scope cases | Net sample | Completes | Average interviewer hours/complete | Response rate (%) | Response rate goal
Panel 14
  Round 1 | 9,899 | 394 | 74 | 140 | 10,227 | 7,650 | 12.3 | 74.8 | 80.0
  Round 2 | 7,669 | 212 | 29 | 27 | 7,883 | 7,239 | 8.3 | 91.8 | 95.0
  Round 3 | 7,226 | 144 | 23 | 34 | 7,359 | 6,980 | 7.3 | 94.9 | 96.0
  Round 4 | 6,974 | 112 | 23 | 30 | 7,079 | 6,853 | 7.7 | 96.8 | 97.0
  Round 5 | 6,845 | 55 | 9 | 30 | 6,879 | 6,761 | 6.2 | 98.3 | 98.0
Panel 15
  Round 1 | 8,968 | 374 | 73 | 157 | 9,258 | 6,802 | 13.2 | 73.5 | 80.0
  Round 2 | 6,811 | 171 | 19 | 21 | 6,980 | 6,435 | 8.9 | 92.2 | 95.0
  Round 3 | 6,431 | 134 | 23 | 22 | 6,566 | 6,261 | 7.2 | 95.4 | 96.0
  Round 4 | 6,254 | 116 | 15 | 26 | 6,359 | 6,165 | 7.8 | 97.0 | 97.0
  Round 5 | 6,156 | 50 | 5 | 19 | 6,192 | 6,078 | 6.0 | 98.2 | 98.0
Panel 16*
  Round 1 | 10,417 | 504 | 98 | 555 | 10,940 | 8,553 | 11.4 | 78.2 | 80.0
  Round 2 | 8,561 | 252 | 42 | 34 | 8,821 | 8,351 | 7.6 | 94.7 | 95.0
  Round 3 | 8,351 | 232 | 19 | 28 | 8,574 | 8,256 | 6.4 | 96.1 | 96.0
  Round 4 | 8,232 | 155 | 16 | 13 | 8,390 | 8,162 | 6.6 | 97.3 | 97.0
  Round 5 | 8,143 | 67 | 13 | 25 | 8,198 | 7,998 | 5.5 | 97.6 | 98.0
Panel 17
  Round 1 | 9,931 | 490 | 92 | 127 | 10,386 | 8,121 | 11.7 | 78.2 | 80.0
  Round 2 | 8,113 | 230 | 35 | 19 | 8,359 | 7,874 | 7.9 | 94.2 | 95.0
  Round 3 | 7,869 | 180 | 15 | 15 | 8,049 | 7,663 | 6.3 | 95.2 | 96.0
  Round 4 | 7,656 | 199 | 19 | 30 | 7,844 | 7,494 | 7.4 | 95.5 | 97.0
  Round 5 | 7,485 | 87 | 10 | 23 | 7,559 | 7,445 | 6.1 | 98.5 | 98.0
Panel 18
  Round 1 | 9,950 | 435 | 83 | 111 | 10,357 | 7,683 | 12.3 | 74.2 | 80.0
  Round 2 | 7,691 | 264 | 32 | 16 | 7,971 | 7,402 | 9.2 | 92.9 | 95.0
  Round 3 | 7,402 | 235 | 21 | 22 | 7,635 | 7,213 | 7.6 | 94.5 | 96.0
  Round 4 | 7,203 | 189 | 14 | 22 | 7,384 | 7,172 | 7.5 | 97.1 | 97.0
  Round 5 | 7,163 | 94 | 12 | 15 | 7,254 | 7,138 | 6.2 | 98.4 | 98.0
Panel 19
  Round 1 | 9,970 | 492 | 70 | 115 | 10,417 | 7,475 | 13.5 | 71.8 | 80.0
  Round 2 | 7,460 | 222 | 23 | 24 | 7,681 | 7,188 | 8.4 | 93.6 | 95.0
  Round 3 | 7,168 | 187 | 12 | 17 | 7,350 | 6,962 | 7.0 | 94.7 | 96.0
  Round 4 | 6,946 | 146 | 20 | 23 | 7,089 | 6,858 | 7.4 | 96.7 | 97.0
  Round 5 | 6,856 | 75 | 7 | 24 | 6,914 | 6,794 | 5.9 | 98.3 | 98.0
Panel 20
  Round 1 | 10,854 | 496 | 85 | 117 | 11,318 | 8,318 | 12.5 | 73.5 | 80.0
  Round 2 | 8,301 | 243 | 39 | 22 | 8,561 | 7,998 | 8.3 | 93.4 | 95.0
  Round 3 | 7,987 | 173 | 17 | 26 | 8,151 | 7,753 | 6.8 | 95.1 | 96.0
  Round 4 | 7,729 | 161 | 19 | 31 | 7,878 | 7,622 | 7.2 | 96.8 | 97.0
  Round 5 | 7,611 | 99 | 13 | 23 | 7,700 | 7,421 | 6.0 | 96.4 | 98.0
Panel 21
  Round 1 | 9,851 | 462 | 92 | 89 | 10,316 | 7,674 | 12.6 | 74.4 | 80.0
  Round 2 | 7,661 | 207 | 32 | 17 | 7,883 | 7,327 | 8.5 | 93.0 | 95.0
  Round 3 | 7,327 | 166 | 14 | 19 | 7,488 | 7,043 | 7.2 | 94.1 | 96.0
  Round 4 | 7,025 | 119 | 14 | 20 | 7,138 | 6,907 | 7.0 | 96.8 | 97.0
  Round 5 | | | | | | | | |
Panel 22
  Round 1 | 9,835 | 352 | 68 | 86 | 10,169 | 7,381 | 12.8 | 72.6 | 80.0
  Round 2 | 7,371 | 166 | 19 | 11 | 7,545 | 7,039 | 8.5 | 93.3 | 95.0

* Figures in the table are weighted to reflect results of the interim nonresponse subsampling procedure implemented in the first round of Panel 16.
Return To Table Of Contents
Table 4-4. Response rates by data collection year, 2010-2017
Year/Panel | Round 1 | Round 2 | Round 3 | Round 4 | Round 5
2010
Panel 15 | 73.5 | 92.2 | | |
Panel 14 | | | 94.9 | 96.8 |
Panel 13 | | | | | 97.9
2011
Panel 16 | 78.2 | 94.8 | | |
Panel 15 | | | 95.4 | 97.0 |
Panel 14 | | | | | 98.3
2012
Panel 17 | 78.2 | 94.2 | | |
Panel 16 | | | 96.1 | 97.3 |
Panel 15 | | | | | 98.2
2013
Panel 18 | 74.2 | 92.9 | | |
Panel 17 | | | 95.2 | 95.5 |
Panel 16 | | | | | 97.6
2014
Panel 19 | 71.8 | 93.6 | | |
Panel 18 | | | 94.5 | 97.1 |
Panel 17 | | | | | 98.5
2015
Panel 20 | 73.5 | 93.4 | | |
Panel 19 | | | 94.7 | 96.7 |
Panel 18 | | | | | 98.4
2016
Panel 21 | 74.4 | 93.0 | | |
Panel 20 | | | 95.1 | 96.8 |
Panel 19 | | | | | 98.3
2017
Panel 22 | 72.6 | 93.3 | | |
Panel 21 | | | 94.1 | 96.8 |
Panel 20 | | | | | 96.4
Hours per completed interview in 2017 showed relatively little change from 2016. Hours per complete increased by 0.4 hours for Round 3 but remained within +/- 0.2 hours for the other rounds. An examination of Spring and Fall 2017 hours per complete, combining all active rounds, shows a similar pattern. In Spring 2017 the average hours per complete across all rounds was 8.5, compared with 8.2 in Spring 2016; the increase in Round 3 drove this difference. Average hours per complete in the Fall were identical in 2017 and 2016, at 7.8 hours.
Components of Response and Nonresponse
Table 4-5 summarizes components of nonresponse associated with the Round 1 households by panel, beginning in 2012. As the table shows, the components of nonresponse other than refusals ('not located' and 'other nonresponse') have remained relatively stable over the last five years: their combined total has ranged only between 5.4 and 5.8 percent. The larger year-to-year changes appear in the percentage of refusals, where increases and decreases in the percentage of refusals align closely with corresponding decreases and increases in the completion rate. For 2017, the 1.8 percentage point decrease in the response rate aligns with the 1.6 percentage point increase in the refusal rate.
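As a minimal illustration of how the component percentages in Table 4-5 are derived from final RU-level case dispositions, the sketch below reproduces the Panel 22 column; the refusal and not-located counts shown are back-solved from the published percentages and are illustrative rather than official tallies.

```python
# Sketch of the Table 4-5 component computation from final case dispositions.
# Counts other than total sample, out of scope, and completes are illustrative,
# back-solved from the published Panel 22 percentages.

def disposition_rates(total_sample, out_of_scope, completes, refusals, not_located):
    net = total_sample - out_of_scope            # in-scope (net) sample
    other = net - completes - refusals - not_located
    pct = lambda n: round(100.0 * n / net, 1)
    return {"Complete": pct(completes), "Refusal": pct(refusals),
            "Not located": pct(not_located), "Other nonresponse": pct(other)}

# Panel 22, Round 1: reproduces 72.6 / 21.8 / 3.9 / 1.7 from Table 4-5.
print(disposition_rates(10_255, 86, 7_381, 2_215, 400))
```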
Table 4-5. Summary of MEPS round 1 response and nonresponse, 2012-2017 panels
Year/Panel | 2012 P17R1 | 2013 P18R1 | 2014 P19R1 | 2015 P20R1 | 2016 P21R1 | 2017 P22R1
Total sample | 10,513 | 10,468 | 10,532 | 11,435 | 10,405 | 10,255
Out of scope (%) | 1.2 | 1.1 | 1.1 | 1.0 | 0.9 | 0.8
Complete (%) | 78.2 | 74.2 | 71.8 | 73.5 | 74.4 | 72.6
Nonresponse (%) | 21.8 | 25.8 | 28.2 | 26.5 | 25.6 | 27.4
Refusal (%) | 17.1 | 20.1 | 22.4 | 21.0 | 20.2 | 21.8
Not located (%) | 3.7 | 4.3 | 4.2 | 4.3 | 3.7 | 3.9
Other nonresponse (%) | 1.0 | 1.4 | 1.6 | 1.2 | 1.7 | 1.7
Tables 4-6 through 4-13 summarize results for additional aspects of the 2017 data collection. Because Round 1 is the most difficult of the five rounds, the presentation focuses primarily on Panel 22, Round 1.
Table 4-6. Summary of MEPS round 1 response, 2012-2017 panels, by NHIS completion status
Year | 2012 | 2013 | 2014 | 2015 | 2016 | 2017
Original NHIS sample (N) | 9,931 | 9,951 | 9,970 | 10,854 | 9,851 | 9,835
Percent complete in NHIS | 80.9 | 78.1 | 81.9 | 80.6 | 77.6 | 81.0
Percent partial complete in NHIS | 19.1 | 21.9 | 18.1 | 19.4 | 22.4 | 19.0
MEPS Round 1 response rate
Percent complete for NHIS completes | 80.7 | 76.9 | 74.5 | 75.9 | 77.3 | 75.4
Percent complete for NHIS partial completes | 68.2 | 64.5 | 58.9 | 63.1 | 64.8 | 62.0
Note: Figures shown are based on original NHIS sample and exclude reporting units added to the sample as "splits" and "students".
NHIS Completion Status
Each year the MEPS sample includes a number of households classified in the NHIS as 'partial completes', in which the interviewer was able to complete part, but not all, of the full NHIS interview. The MEPS experience has been that for many of these NHIS cases, the difficulty experienced by the NHIS interviewer carries over to the MEPS interview: the MEPS response rate for the NHIS partial completes is substantially lower than for the NHIS completes. As noted in Chapter 1, for the 2017 sample, AHRQ repeated the step taken since 2012 of sampling the NHIS partial completes in the 'White/other' category at a lower rate than the NHIS completes.
The upper portion of Table 4-6 shows the proportion of partial completes in the sample over recent years. Across all domains, the proportion of the 2017 sample classified as partial complete was lower than in 2016 but on par with 2015 (19.0 percent, vs. 22.4 percent in 2016 and 19.4 percent in 2015). The lower portion of the table shows the persistent and substantial difference in response rate between these two components of the sample. Among the cases originally delivered from the NHIS (that is, with new reporting units discovered during the MEPS interviewing excluded from the counts), the response rate for the NHIS partial completes has been at least 12 percentage points below that for the NHIS completes; in 2017, the difference was 13.4 percentage points. The decrease in the 2017 Round 1 response rate compared to 2016 was reflected in the response rates for both the NHIS completes and the partial completes (decreases of 1.9 and 2.8 percentage points, respectively).
Sample Domain
Table 4-7 breaks out response information for the NHIS completes and partial completes by sample domain categories. Table 4-7, unlike Table 4-6, does include reporting units added to the sample during Round 1 data collection; it shows the differential in response rates between the NHIS partial completes and full completes persisting across all of the domains. The difference across the full 2017 sample was 13.4 percentage points, with NHIS partial completes responding at a lower rate in all domains. Within the individual domains, the difference between the response rate for the NHIS completes and the NHIS partials was greatest for the White/Other domain, at 18.7 percentage points. The differences were smaller for the other domains: 8.6 percentage points for the Asian domain, 13.2 for the Black domain, and 10.8 for the Hispanic domain. Across all four domains, refusal rates ranged from a low of 13.7 percent for the Black domain to a high of 29.0 percent for the Asian domain. Within the subdomains, refusals ranged from a low of 11.3 percent for the completes in the Black domain to a high of 38.8 percent for the partial completes in the White/Other domain.
Table 4-7. Summary of MEPS panel 22 round 1 response rates, by sample domain and NHIS completion status
By domain | Net sample (N) | Complete (%) | Refusal (%) | Not located (%) | Other nonresponse (%)
Asian | 704 | 63.8 | 29.0 | 3.7 | 3.6
  NHIS complete | 543 | 65.7 | 27.8 | 3.5 | 2.9
  NHIS partial complete | 161 | 57.1 | 32.9 | 4.3 | 5.6
Black | 1,504 | 79.1 | 13.7 | 5.6 | 1.7
  NHIS complete | 1,110 | 82.5 | 11.3 | 4.9 | 1.2
  NHIS partial complete | 394 | 69.3 | 20.6 | 7.6 | 2.5
Hispanic | 1,972 | 76.7 | 16.2 | 5.4 | 1.6
  NHIS complete | 1,449 | 79.6 | 14.3 | 4.9 | 1.2
  NHIS partial complete | 523 | 68.8 | 18.9 | 6.9 | 2.9
White/other | 5,989 | 70.6 | 24.8 | 3.1 | 1.5
  NHIS complete | 5,098 | 73.4 | 22.3 | 2.8 | 1.4
  NHIS partial complete | 891 | 54.7 | 38.8 | 4.5 | 2.0
All groups | 10,169 | 72.6 | 21.8 | 3.9 | 1.7
  NHIS complete | 8,200 | 75.2 | 19.8 | 3.5 | 1.5
  NHIS partial complete | 1,969 | 61.5 | 30.1 | 5.7 | 2.6
Note: Includes reporting units added to sample as "splits" and "students" from original NHIS households, which were given the same 'complete' or 'partial complete' designation as the original household.
Refusals and Refusal Conversion
Table 4-8 summarizes the results of refusal conversion efforts by panel. The rate of 'ever refused' for RUs in Panel 22 was one percentage point higher than in 2016, equaling that of 2014 and 2015, while the rate of refusal conversion decreased by 1.4 percentage points for 2017. The decrease in the overall response rate between the two years resulted in part from the larger percentage of 'ever refused' households in 2017, combined with the lower rate of conversion.
Table 4-8. Summary of MEPS round 1 results for RUs who ever refused, panels 15-22
Panel | Net sample (N) | Ever refused (%) | Converted (%) | Final refusal rate (%) | Final response rate (%)
Panel 15 | 9,258 | 29.4 | 26.6 | 21.0 | 73.5
Panel 16 | 10,940 | 26.3 | 30.9 | 17.6 | 78.2
Panel 17 | 10,386 | 25.3 | 30.2 | 17.2 | 78.2
Panel 18 | 10,357 | 25.5 | 25.0 | 18.1 | 74.2
Panel 19 | 10,418 | 30.1 | 23.3 | 22.4 | 71.8
Panel 20 | 11,318 | 30.1 | 29.2 | 21.0 | 73.5
Panel 21 | 10,316 | 29.1 | 29.0 | 20.2 | 74.4
Panel 22 | 10,169 | 30.1 | 27.6 | 21.8 | 72.6
Tracing and Locating
Table 4-9 shows results of locating efforts for households that required tracing during the Round 1 field period, by panel. The percent of households that required some tracing in 2017 (13.0 percent) was similar to that of 2016; the final rate of households not located after tracing efforts was also similar to 2016 (3.9 percent compared to 3.7 percent). The 2017 'not located' rate was lower than the 2013-2015 rates and within the range of 3.0-4.3 percent for the 8-year period shown in the table.
Table 4-9. Summary of MEPS round 1 results for RUs who were ever traced, panels 15-22
Panel | Total sample (N) | Ever traced (%) | Not located (%)
Panel 15 | 9,415 | 16.7 | 4.1
Panel 16 | 11,019 | 18.2 | 3.0
Panel 17 | 10,513 | 18.7 | 3.6
Panel 18 | 10,468 | 16.0 | 4.3
Panel 19 | 10,532 | 19.5 | 4.1
Panel 20 | 11,435 | 14.0 | 4.3
Panel 21 | 10,405 | 12.8 | 3.7
Panel 22 | 10,228 | 13.0 | 3.9
Interview Length
Table 4-10 shows the mean length (in minutes) of interviews conducted without interruption in a single session in Panels 15-22. Timings for all of the rounds of data collection conducted in 2017 show an increase from the prior year, with the increases somewhat larger in Rounds 1 and 3 than in Rounds 2, 4, and 5.
Table 4-10. Interview timing comparison, panels 15 through 22 (mean minutes per interview, single-session interviews)
Round | Panel 15 | Panel 16 | Panel 17 | Panel 18 | Panel 19 | Panel 20 | Panel 21 | Panel 22
Round 1 | 74.7 | 74.0 | 67.8 | 78.0 | 85.5 | 76.4 | 75.5 | 79.9
Round 2 | 87.2 | 88.1 | 90.2 | 102.9 | 92.3 | 86.3 | 85.3 | 88.8
Round 3 | 86.4 | 87.2 | 94.3 | 103.1 | 94.5 | 89.7 | 93.4 |
Round 4 | 80.2 | 85.9 | 99.6 | 89.0 | 84.6 | 80.5 | 82.7 |
Round 5 | 77.6 | 85.4 | 92.2 | 87.4 | 84.1 | 85.3 | |
Mean Contact Attempts Per Case
Table 4-11 shows mean contact attempts, by mode and NHIS completion status, for all cases in Round 1 of Panels 20-22. Overall, the number of contacts required per case in Panel 22 showed moderate declines from 2016: an overall decrease of 0.5 attempts per complete, reflected in varying degrees in both in-person and telephone contacts and among both the NHIS completes and partial completes. This decrease is chiefly attributed to the newly instituted on-hold process for cases at risk of overwork. As in prior years, the NHIS partial complete cases in Panel 22 required substantially greater effort than the NHIS completes, roughly 1.5 additional in-person contacts per household.
Table 4-11. Mean contact attempts by NHIS completion status, round 1 of panels 20-22
Contact type | Panel 20 R1: All RUs | Complete | Partial | Panel 21 R1: All RUs | Complete | Partial | Panel 22 R1: All RUs | Complete | Partial
N | 10,854 | 8,751 | 2,103 | 9,851 | 7,645 | 2,206 | 9,835 | 7,963 | 1,872
% of all RUs | 100 | 81 | 19 | 100 | 77.6 | 22.4 | 100 | 81 | 19
In-person | 7.2 | 6.9 | 8.5 | 7.0 | 6.9 | 8.3 | 6.3 | 6.1 | 7.3
Telephone | 2.1 | 2.0 | 2.5 | 2.0 | 1.9 | 2.4 | 1.5 | 1.5 | 1.7
Total | 9.6 | 9.2 | 11.4 | 9.3 | 8.9 | 11.0 | 8.4 | 8.1 | 9.6
During the Closing section of the MEPS CAPI interview,
interviewers are prompted to ask respondents to sign the authorization forms
needed to conduct the Medical Provider Component of MEPS. Authorization forms
are requested for each unique person-provider pairing identified during the
interviews as a source of care to a key member of the household. Medical
provider authorization forms are requested for physicians seen in an
office-based setting, for inpatient, outpatient, or emergency room care received
in a hospital, for care received from a home health agency, and for certain
stays in long-term care institutions. Pharmacy authorization forms are requested
for each pharmacy from which a household member obtained prescription medicines.
Table 4-12 shows round-by-round signing rates for the medical provider authorization forms for Panels 15 through 22. Authorization form signing rates for the rounds conducted in 2017 showed mixed results compared to previous years:
- The Round 1 signing rate, at 69.2 percent, was up slightly from the three previous panels.
- The Round 2 signing rate for Panel 22 continued to increase over the Panel 21 rate, from 75.2 percent to 76.5 percent.
- The Round 3 signing rate bounced back this year, from 69.4 percent in Panel 20 to 71.6 percent in Panel 21.
- The Round 4 signing rate increased again this year, from 76.9 percent to 79.5 percent.
- The signing rate for Panel 20 Round 5 (74.4 percent) declined slightly from the level of the preceding year (74.5 percent) and was lower than the rates from 2014-2015.
Calculation of the round-by-round collection rate for the medical provider authorization forms is based on all forms requested during a round. The rates calculated for Rounds 2-5 include forms fielded but not signed in an earlier round (nonresponse) as well as forms that were fielded in an earlier round and signed, but rendered obsolete because the person had another health event with the provider after the date on which the original form was signed.
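As a minimal sketch of this calculation, using only the published totals (the split of the requested forms among new requests, carried-over nonresponse, and obsolete re-requests is not published in the table):

```python
def signing_rate(forms_requested, forms_signed):
    """Round signing rate (%); for Rounds 2-5 the requested count already
    includes carryover nonresponse and obsolete forms re-fielded."""
    return round(100.0 * forms_signed / forms_requested, 1)

assert signing_rate(22_913, 17_530) == 76.5   # Panel 22, Round 2 (Table 4-12)
```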
Table 4-12. Signing rates for medical provider authorization forms for panels 15 through 22
Panel/round | Authorization forms requested | Authorization forms signed | Signing rate (%)
Panel 15
Round 1 | 1,680 | 1,136 | 67.6
Round 2 | 18,506 | 13,628 | 73.6
Round 3 | 16,686 | 11,652 | 69.8
Round 4 | 16,260 | 11,139 | 68.5
Round 5 | 13,443 | 8,420 | 62.6
Panel 16
Round 1 | 1,811 | 1,223 | 67.5
Round 2 | 23,718 | 17,566 | 74.1
Round 3 | 21,780 | 14,828 | 68.1
Round 4 | 21,537 | 16,329 | 75.8
Round 5 | 16,688 | 12,028 | 72.1
Panel 17
Round 1 | 1,655 | 1,117 | 67.5
Round 2 | 21,749 | 17,694 | 81.4
Round 3 | 19,292 | 15,125 | 78.4
Round 4 | 20,086 | 15,691 | 78.1
Round 5 | 15,064 | 11,873 | 78.8
Panel 18
Round 1 | 1,677 | 1,266 | 75.5
Round 2 | 22,714 | 18,043 | 79.4
Round 3 | 20,728 | 15,827 | 76.4
Round 4 | 17,092 | 13,704 | 80.2
Round 5 | 15,448 | 11,796 | 76.4
Panel 19
Round 1 | 2,189 | 1,480 | 67.6
Round 2 | 22,671 | 17,190 | 75.8
Round 3 | 20,582 | 14,534 | 70.6
Round 4 | 17,102 | 13,254 | 77.5
Round 5 | 15,330 | 11,425 | 74.5
Panel 20
Round 1 | 2,354 | 1,603 | 68.1
Round 2 | 25,334 | 18,479 | 72.9
Round 3 | 22,851 | 15,862 | 69.4
Round 4 | 18,234 | 14,026 | 76.9
Round 5 | 16,274 | 12,100 | 74.4
Panel 21
Round 1 | 2,037 | 1,396 | 68.5
Round 2 | 22,984 | 17,295 | 75.2
Round 3 | 20,802 | 14,898 | 71.6
Round 4 | 16,487 | 13,110 | 79.5
Panel 22
Round 1 | 2,274 | 1,573 | 69.2
Round 2 | 22,913 | 17,530 | 76.5
Table 4-13 shows signing rates for pharmacy authorization forms for Panels 15 through 22. In early MEPS panels, pharmacy authorization forms were collected only in Rounds 3 and 5. Beginning in 2009, the project began requesting pharmacy authorization forms in Rounds 2 through 5, with follow up for nonresponse in subsequent rounds similar to that for medical provider authorization forms. The signing rates for the pharmacy authorization forms have generally shown a pattern of decline since Panel 19; however, all rounds in 2017 show an increase in the signing rate, with Rounds 2-4 exhibiting increases in excess of one percentage point.
Table 4-13. Signing rates for pharmacy authorization forms for panels 15 through 22
Panel/round | Authorization forms requested | Authorization forms signed | Signing rate (%)
Panel 15
Round 2 | 9,698 | 7,092 | 73.1
Round 3 | 8,684 | 6,189 | 71.3
Round 4 | 8,163 | 5,756 | 70.5
Round 5 | 7,302 | 4,485 | 66.9
Panel 16
Round 2 | 12,093 | 8,892 | 73.5
Round 3 | 10,959 | 7,591 | 69.3
Round 4 | 10,432 | 8,194 | 78.6
Round 5 | 8,990 | 6,928 | 77.1
Panel 17
Round 2 | 14,181 | 12,567 | 88.6
Round 3 | 9,715 | 7,580 | 78.0
Round 4 | 9,759 | 7,730 | 79.2
Round 5 | 8,245 | 6,604 | 80.1
Panel 18
Round 2 | 10,977 | 8,755 | 79.8
Round 3 | 9,757 | 7,573 | 77.6
Round 4 | 8,526 | 6,858 | 80.4
Round 5 | 7,918 | 6,173 | 78.0
Panel 19
Round 2 | 10,749 | 8,261 | 76.9
Round 3 | 9,618 | 6,902 | 71.8
Round 4 | 8,557 | 6,579 | 76.9
Round 5 | 7,767 | 5,905 | 76.0
Panel 20
Round 2 | 12,074 | 8,796 | 72.9
Round 3 | 10,577 | 7,432 | 70.3
Round 4 | 9,099 | 6,945 | 76.3
Round 5 | 8,312 | 6,339 | 76.3
Panel 21
Round 2 | 10,783 | 7,985 | 74.1
Round 3 | 9,540 | 6,847 | 71.8
Round 4 | 8,172 | 6,387 | 78.2
Panel 22
Round 2 | 10,510 | 7,919 | 75.4
Self-Administered Questionnaires (SAQ) are requested from adult household members in Rounds 2 and 4. Forms that are not collected in Rounds 2 and 4 are requested again in Rounds 3 and 5. Table 4-14 shows both the round-specific response rates and the combined rates after the follow-up round is completed. Prior to Panel 18, persons completing the SAQ received $5.00; the gift was discontinued with Panel 18. In the years shown prior to Panel 18, the collection rate for the SAQ, after follow up, had remained relatively steady at about 92-95 percent. In the cycles of SAQ collection completed since the $5.00 gift was discontinued, the response rate after follow up has ranged from 86.2 percent (Panel 21, Rounds 2-3) to 89.9 percent (Panel 18, Rounds 2-3). The initial round of SAQ data collection for Panel 22 (Round 2) showed an increase of 3 percentage points in response compared to prior panels; follow up for this SAQ was still active in early 2018.
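The combined rates in Table 4-14 follow directly from the follow-up design; a minimal sketch, using the published Panel 21 counts:

```python
def combined_saq_rate(requested_initial, completed_initial, completed_followup):
    """Combined response rate (%): follow-up completes are credited against
    the initial round's request count."""
    return round(100.0 * (completed_initial + completed_followup) / requested_initial, 1)

assert combined_saq_rate(13_143, 10_212, 1_123) == 86.2   # Panel 21, Rounds 2-3
```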
Table 4-14. Results of Self-Administered Questionnaire (SAQ) collection for panels 15 through 22
Panel/round | SAQs requested | SAQs completed | SAQs refused | Other nonresponse | Response rate (%)
Panel 15
Round 2 | 11,857 | 10,121 | 637 | 1,096 | 85.4
Round 3 | 1,491 | 725 | 425 | 341 | 48.6
Combined, 2010 | 11,857 | 10,846 | 1,062 | 1,437 | 91.5
Round 4 | 11,311 | 9,804 | 572 | 935 | 86.7
Round 5 | 1,418 | 678 | 461 | 279 | 47.8
Combined, 2011 | 11,311 | 10,482 | 1,033 | 1,214 | 92.6
Panel 16
Round 2 | 15,026 | 12,926 | 707 | 1,393 | 86.0
Round 3 | 1,863 | 949 | 465 | 449 | 50.9
Combined, 2011 | 15,026 | 13,875 | 1,172 | 728 | 92.3
Round 4 | 13,620 | 12,415 | 582 | 623 | 91.2
Round 5 | 1,112 | 516 | 442 | 154 | 46.4
Combined, 2012 | 13,620 | 12,931 | 1,024 | 777 | 94.9
Panel 17
Round 2 | 14,181 | 12,567 | 677 | 937 | 88.6
Round 3 | 1,395 | 690 | 417 | 288 | 49.5
Combined, 2012 | 14,181 | 13,257 | 1,094 | 1,225 | 93.5
Round 4 | 13,086 | 11,566 | 602 | 918 | 88.4
Round 5 | 1,429 | 655 | 504 | 270 | 45.8
Combined, 2013 | 13,086 | 12,221 | 1,106 | 1,188 | 93.4
Panel 18
Round 2 | 13,158 | 10,805 | 785 | 1,568 | 82.1
Round 3 | 2,066 | 1,022 | 547 | 497 | 48.5
Combined, 2013 | 13,158 | 11,827 | 1,332 | 2,065 | 89.9
Round 4 | 12,243 | 10,050 | 916 | 1,277 | 82.1
Round 5 | 2,063 | 936 | 721 | 406 | 45.4
Combined, 2014 | 12,243 | 10,986 | 1,637 | 1,683 | 89.7
Panel 19
Round 2 | 12,664 | 10,047 | 1,014 | 1,603 | 79.3
Round 3 | 2,306 | 1,050 | 694 | 615 | 44.5
Combined, 2014 | 12,664 | 11,097 | 1,708 | 2,218 | 87.6
Round 4 | 11,782 | 9,542 | 1,047 | 1,175 | 81.0
Round 5 | 2,131 | 894 | 822 | 414 | 42.0
Combined, 2015 | 11,782 | 10,436 | 1,869 | 1,589 | 88.6
Panel 20
Round 2 | 14,077 | 10,885 | 1,223 | 1,966 | 77.3
Round 3 | 2,899 | 1,329 | 921 | 649 | 45.8
Combined, 2015 | 14,077 | 12,214 | 2,144 | 2,615 | 86.8
Round 4 | 13,068 | 10,572 | 1,127 | 1,371 | 80.9
Round 5 | 2,262 | 1,001 | 891 | 370 | 44.3
Combined, 2016 | 13,068 | 11,573 | 2,018 | 1,741 | 88.6
Panel 21
Round 2 | 13,143 | 10,212 | 1,170 | 1,761 | 77.7
Round 3 | 2,585 | 1,123 | 893 | 569 | 43.4
Combined, 2016 | 13,143 | 11,335 | 2,063 | 2,330 | 86.2
Panel 22
Round 2 | 12,304 | 9,929 | 1,086 | 1,289 | 80.7
In Rounds 3 and 5, adult household members who are
reported as having been diagnosed with diabetes are asked to complete a short
self-administered questionnaire, the Diabetes Care Supplement (DCS). Forms not
completed for pickup at the time of the interviewer’s visit are followed up by
telephone in the latter stages of Rounds 3 and 5, but unlike the SAQ, there is
no follow up in the subsequent round for forms not collected in the round when
first requested. Response rates for the Diabetes Care Supplement (DCS) for
Panels 14 through 21 are shown in Table 4-15. Completion rates for the DCS have
declined over the years shown in the table, ending below 83 percent for the
first time in Panel 21, Round 3.
Table 4-15. Results of Diabetes Care Supplement (DCS) collection for panels 14 through 21
Panel/round | DCSs requested | DCSs completed | Response rate (%)
Panel 14
Round 3 | 1,174 | 1,048 | 89.3
Round 5 | 1,177 | 1,066 | 90.6
Panel 15
Round 3 | 1,117 | 1,000 | 89.5
Round 5 | 1,097 | 990 | 90.3
Panel 16
Round 3 | 1,425 | 1,283 | 90.0
Round 5 | 1,358 | 1,256 | 92.5
Panel 17
Round 3 | 1,315 | 1,177 | 89.5
Round 5 | 1,308 | 1,174 | 89.8
Panel 18
Round 3 | 1,362 | 1,182 | 86.8
Round 5 | 1,342 | 1,187 | 88.5
Panel 19
Round 3 | 1,272 | 1,124 | 88.4
Round 5 | 1,316 | 1,144 | 87.2
Panel 20
Round 3 | 1,412 | 1,190 | 84.5
Round 5 | 1,382 | 1,174 | 84.9
Panel 21
Round 3 | 1,422 | 1,170 | 82.5
Table 4-16 shows collection rates for the Cancer Self-Administered Questionnaire (CSAQ), which was first collected in 2016 and again in 2017 (in Panel 22, Round 1 and Panel 21, Round 3). Key RU members age 18 and older who reported in the MEPS interview that they had cancer were asked to complete a short questionnaire about their experiences with cancer. The response rate for the targeted respondents in Panel 22 Round 1 was 82.4 percent (almost three percentage points higher than in Panel 21 Round 1); for those in Panel 21, Round 3 households, the rate of 77.8 percent was down from 83.4 percent in Panel 20, Round 3. The generally higher cooperation rate in Round 3 most likely reflects the higher cooperation that MEPS receives from households who have participated in earlier rounds of the survey.
Table 4-16. Cancer Self-Administered Questionnaire (CSAQ) collection rates
Panel/round | CSAQs requested | CSAQs completed | Response rate (%)
Panel 20 Round 3 | 935 | 780 | 83.4
Panel 21 Round 1 | 891 | 709 | 79.6
Panel 21 Round 3 | 171 | 133 | 77.8
Panel 22 Round 1 | 1,060 | 873 | 82.4
The pharmacy component of the Medical Provider
Component collects information about MEPS participants’ prescription medicine
usage as captured on the summary ‘patient profile’ form generated by pharmacies.
The patient profile lists the prescriptions filled or refilled for the patient
during the year. Continuing the procedure started several years ago, the project
sends a mailed request to household respondents who have reported using certain
pharmacies, asking them to contact those pharmacies to request a copy of their
patient profile. The procedure serves as a backup effort to obtain information
from pharmacies that do not participate when contacted by representatives from
the Medical Provider Component. In 2017, the collection of profiles by the MEPS
household component operation was limited to respondents in Panel 20, Round 5
households. These respondents had completed their full cycle household
interviews, and no follow up efforts were made for those who did not respond to
the mailed request.
For the 2017 effort, summarized in Table 4-17, patient
profile requests were sent to 2,723 patient-pharmacy pairs in 1,953 reporting
units. Completed profiles were received for 12.0 percent of the pairs, a rate
lower than that of the previous year, but comparable to that of the two years
prior shown in the table. Despite the overall low rate of return, the effort
does allow the study to include some prescription profile data for patients
associated with pharmacies whose data would otherwise be totally unrepresented
in the MEPS pharmacy estimates.
Table 4-17. Results of patient profile collection in 2012 through 2017
Pharmacy | Total number | Total received | Percent received | Total complete | Completes as a percent of total
2017 – P20R5 all mail collection
Total RUs | 1,953 | 342 | 17.5% | 254 | 13.0%
Total Pairs | 2,723 | 372 | 13.7% | 326 | 12.0%
2016 – P19R5 all mail collection
Total RUs | 2,038 | 374 | 18.4% | 285 | 14.0%
Total Pairs | 2,854 | 430 | 15.1% | 394 | 13.8%
2015 – P18R5 all mail collection
Total RUs | 1,404 | 260 | 18.5% | 186 | 13.2%
Total Pairs | 2,042 | 289 | 14.2% | 255 | 12.5%
2014 – P17R5 all mail collection
Total RUs | 2,230 | 372 | 16.7% | 269 | 12.1%
Total Pairs | 3,233 | 443 | 13.7% | 386 | 11.9%
2013 – P16R5 all mail collection
Total RUs | 2,014 | 417 | 20.7% | 316 | 15.6%
Total Pairs | 2,911 | 486 | 16.6% | 425 | 14.5%
2012 – P15R5 all mail collection
Total RUs | 1,390 | 290 | 20.8% | 203 | 14.6%
Total Pairs | 1,990 | 348 | 17.4% | 290 | 14.5%
As in past years, validation interviews were completed
with a portion of the participating MEPS households. The validation interviews
were conducted primarily by telephone by the MEPS team of seven validation
interviewers. The purpose of validation is to verify that the correct individual
was contacted for the interview and that the interview was conducted according
to MEPS-approved procedures.
At least 10 percent of the interviews completed by
each interviewer on each panel/round are validated during the data collection
period. In addition, for interviewers who travel and complete interviews outside
of their home regions, an additional 10 percent of the interviews they complete
for a given region are validated. Finally, all interviews completed in less than
30 minutes are validated.
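A minimal sketch of these selection rules is shown below; the record layout and the grouping logic are illustrative, not the actual BFOS structures.

```python
import math
import random

def select_for_validation(completes, rate=0.10):
    """completes: dicts with case_id, interviewer_id, region, home_region,
    and minutes. Returns the set of case IDs flagged for validation."""
    chosen = {c["case_id"] for c in completes if c["minutes"] < 30}  # always validated
    groups = {}
    for c in completes:
        # one sampling group per interviewer, plus a separate group for each
        # region the interviewer worked outside his or her home region
        region = c["region"] if c["region"] != c["home_region"] else "home"
        groups.setdefault((c["interviewer_id"], region), []).append(c)
    for cases in groups.values():
        k = max(1, math.ceil(rate * len(cases)))                     # at least 10 percent
        chosen.update(c["case_id"] for c in random.sample(cases, k))
    return chosen
```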
Approximately 22.8 percent of the MEPS households who participated during Spring data collection, and 23.7 percent of those who participated during the Fall period, completed a validation interview. These rates are about 5 percentage points lower than the corresponding 2016 rates. In addition, MEPS field supervisors and managers observed 77 interviews during the year as part of a comprehensive mentoring process. Generally, MEPS is developing and using technical solutions in place of in-person observations; however, there are specific needs met by specialized observation. The total of 77 included observation of all 65 of the interviewers who were newly hired at the start of the year and still active at the time of their observation. As much as possible, observations are conducted in the early weeks of data collection so that problems can be detected and corrected as quickly as possible and interviewers given feedback on ways to improve specific interviewing skills. While CARI offers a high-quality portal for evaluating interviewers on question administration, observations, particularly of newly hired staff, allow for assessment of the full range of interviewer skills, including respondent contact, trip planning, gaining cooperation, and interviewer-respondent interactions, that cannot be captured through CARI and other report mechanisms. In addition, the observer serves as an on-site resource in situations where remedial training is necessary. Observation forms are processed and reviewed at the home office to determine the need for individual and field-wide follow-up on specific skills.
To comply with the requirement to report incidents involving loss or theft of laptops or of hard-copy materials containing respondent PII, field staff continued to use an automated loss reporting system. As before, reported incidents were subsequently tracked through a documentation log, which was provided to AHRQ whenever an entry or update to an incident occurred. A security incident report was also filed with the Westat IRB for each confirmed incident.
A total of 11 incidents of lost or stolen laptops or hard-copy material were reported in 2017. Two MEPS laptops were reported lost or stolen; both were recovered. In both reported laptop thefts/losses, the password-protected laptops were shut down at the time of the loss. Since MEPS laptops are full-disk encrypted, respondent identity was not at risk.
Nine reports involved suspected or confirmed loss of hard-copy materials with respondent PII or a breach of confidentiality. In three of these incidents the documents were recovered without compromise of PII. Authorization forms with PII and respondent signatures, reported lost in six incidents, were not recovered. The respondents in these households were notified of the loss and asked to re-sign the forms; all but two respondents agreed to re-sign. The HHS Privacy Incident Response Team (PIRT) was notified of all incidents involving the loss or compromise of PII.
The home office supports the data collection effort in
several important ways. One phase of activity supports the launch of each new
round of data collection; another phase supports the field operation while data
collection is in progress. These two phases of activity are described in this
chapter.
As each wave of cases became available for fielding,
clerical staff created RU folders containing the hard-copy materials associated
with each case. Materials included authorization forms, follow-up SAQs, and mini
labels to be affixed to new documents generated during data collection. At the
same time the cases became available, supervisors received a Supervisor
Assignment Log listing all of the cases to be released in their region for that
wave. For the first wave of each round, supervisors used this log to assign
cases to their interviewers. They entered the ID of the interviewer to be
assigned each case and sent the log back to the home office. Home office staff
then shipped the case folders directly to the interviewers. A file with the
assignments was also sent to programming staff to make the electronic
assignments in the field management system, Basic Field Operating System (BFOS).
For later waves, the prepared RU folders were sent to
the field supervisors who made the electronic assignments in their Supervisor
Management System (SMS) and shipped the hard-copy materials to their
interviewers.
Prior to the start of data collection for each period,
interviewers connected remotely to the home office to download the CAPI software
update for the upcoming rounds and received a home study training package to
prepare them for interviewing. Field interviewers also received a replenishment
of supplies at the start of the rounds.
Advance mailings to all respondent households were
prepared and mailed by the home office staff. Addresses were first standardized
and sent through the National Change of Address (NCOA) database to obtain the
most current addresses for mailing. Any mail returned as undeliverable was
recorded and then forwarded to the appropriate supervisor. Requests to re-mail
the Round 1 advance package to households who reported not receiving it were
prepared and mailed by home office staff.
Respondent Contacts. Respondent contacts are an important
component of home office support for the MEPS data collection effort. Printed materials mailed to respondents contain an
email address and toll-free telephone number that respondents can use to contact
the project with questions, with requests to make or to cancel interview
appointments, or to decline participation in the study. Home office staff
receive and initiate the response to all respondent contacts. They forward
information received from respondent calls to the field supervisors, who
initiate the appropriate follow up and inform the home office of the results of
their follow up within 24 hours of notification. Table 5-1 shows the number and
percent of RUs who made calls to the respondent hotline in the Spring and Fall
rounds of 2014 – 2017.
Table 5-1. Number and percent of respondents who called the respondent information line, 2014-2017
Calls made | Original sample size | Number of calls | Calls as a percent of sample size
Round 1
2014 – Panel 19 Round 1 | 9,970 | 340 | 3.4%
2015 – Panel 20 Round 1 | 10,854 | 436 | 4.0%
2016 – Panel 21 Round 1 | 9,851 | 301 | 3.1%
2017 – Panel 22 Round 1 | 9,835 | 346 | 3.5%
Rounds 3/5
2014 – Panel 17 Round 5/Panel 18 Round 3 | 14,887 | 639 | 4.3%
2015 – Panel 18 Round 5/Panel 19 Round 3 | 14,331 | 691 | 4.8%
2016 – Panel 19 Round 5/Panel 20 Round 3 | 14,844 | 547 | 3.7%
2017 – Panel 20 Round 5/Panel 21 Round 3 | 14,939 | 533 | 3.6%
Rounds 2/4
2014 – Panel 18 Round 4/Panel 19 Round 2 | 14,667 | 737 | 5.0%
2015 – Panel 19 Round 4/Panel 20 Round 2 | 15,249 | 570 | 3.7%
2016 – Panel 20 Round 4/Panel 21 Round 2 | 15,392 | 605 | 3.9%
2017 – Panel 21 Round 4/Panel 22 Round 2 | 14,395 | 518 | 3.6%
Table 5-2 shows the number and types of calls received
on the respondent hotline during 2016 and 2017. As in prior years, a substantial
portion of the Round 1 calls were from refusals, with a much smaller proportion
of refusals and a higher proportion of appointment requests in the later rounds.
Table 5-2. Calls to the respondent information line, 2016 and 2017
Spring 2016 (Panel 21 Round 1, Panel 20 Round 3, Panel 19 Round 5) and Fall 2016 (Panel 21 Round 2, Panel 20 Round 4)
Reason for call | Round 1 (N) | Round 1 (%) | Rounds 3 and 5 (N) | Rounds 3 and 5 (%) | Rounds 2 and 4 (N) | Rounds 2 and 4 (%)
Address/telephone change | 8 | 2.7 | 64 | 11.7 | 48 | 7.9
Appointment | 93 | 30.9 | 362 | 66.2 | 373 | 61.7
Request callback | 47 | 15.6 | 59 | 10.8 | 83 | 13.7
No message | 1 | 0.3 | 7 | 1.3 | 6 | 1.0
Other | 2 | 0.7 | 1 | 0.2 | 3 | 0.5
Proxy needed | 0 | 0.0 | 5 | 0.9 | 6 | 1.0
Request SAQ help | 0 | 0.0 | 3 | 0.5 | 11 | 1.8
SAQ refusal | 0 | 0.0 | 0 | 0.0 | 0 | 0.0
Special needs | 1 | 0.3 | 0 | 0.0 | 0 | 0.0
Refusal | 139 | 46.2 | 46 | 8.4 | 75 | 12.4
Willing to participate | 10 | 3.3 | 0 | 0.0 | 0 | 0.0
Total | 301 | | 547 | | 605 |

Spring 2017 (Panel 22 Round 1, Panel 21 Round 3, Panel 20 Round 5) and Fall 2017 (Panel 22 Round 2, Panel 21 Round 4)
Reason for call | Round 1 (N) | Round 1 (%) | Rounds 3 and 5 (N) | Rounds 3 and 5 (%) | Rounds 2 and 4 (N) | Rounds 2 and 4 (%)
Address/telephone change | 10 | 2.9 | 51 | 9.6 | 35 | 6.8
Appointment | 86 | 24.9 | 355 | 66.6 | 318 | 61.4
Request callback | 59 | 17.1 | 90 | 16.9 | 64 | 12.4
No message | 1 | 0.3 | 2 | 0.4 | 5 | 1.0
Other | 2 | 0.6 | 3 | 0.6 | 4 | 0.8
Proxy needed | 1 | 0.3 | 7 | 1.3 | 5 | 1.0
Request SAQ help | 1 | 0.3 | 0 | 0.0 | 15 | 2.9
SAQ refusal | 0 | 0.0 | 0 | 0.0 | 0 | 0.0
Special needs | 0 | 0.0 | 1 | 0.2 | 1 | 0.2
Refusal | 172 | 49.7 | 23 | 4.3 | 70 | 13.5
Willing to participate | 14 | 4.0 | 1 | 0.2 | 1 | 0.2
Total | 346 | | 533 | | 518 |
Monitoring Production. Home office staff monitored
production, cost, and data quality and provided reports and feedback to field
managers and supervisors for review and follow up. Each week
they generated and distributed reports to AHRQ showing weekly and cumulative
figures on field production, response rate, and costs.
Home Office Support. Validation abstracts, which
contain information from the interview to be used during interview validation
calls, were generated electronically and sent via BFOS Secure Messaging (BSM) to
the validators. Refusal letters were generated and mailed by home office staff
as requested by the field. Home office staff also responded to supply requests
from the field, replenishing interviewer and supervisor stocks of materials as
needed.
Receipt Control. As interviewers completed cases, they transmitted the data electronically and shipped the case folders containing any hard-copy documents to the home office receipt operation. All material containing personally identifiable information (PII) was shipped via Federal Express, which facilitates tracking of late or lost shipments. Details of each FedEx shipment, including the FedEx tracking number and the RUIDs of cases contained in the package, were entered by the sender in a FedEx notification module in the field management system (BFOS), which generated a BSM message to alert the recipient of the expected package. Contents of the cases received at the home office were reviewed and recorded in the receipt system. Authorization forms were edited for completeness and scanned into an image database. When a problem was found in an authorization form, the problem was documented and feedback was sent to the field supervisor to review with the interviewer. All self-administered questionnaires, including SAQs, DCSs, and Cancer SAQs, were receipted and sent for TeleForm scanning. The receipt department also tracked the hard-copy receipts against the dates interviews had been reported as completed and transmitted; they alerted the field whenever the materials for a completed interview did not arrive within 10 days of the interview date.
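A minimal sketch of the 10-day receipt check, with illustrative record fields:

```python
from datetime import date, timedelta

def overdue_shipments(completed, receipted_ruids, today):
    """completed: iterable of (ruid, interview_date) pairs. Returns RUIDs whose
    hard-copy materials are not yet receipted more than 10 days after the
    reported interview date."""
    return [ruid for ruid, interviewed in completed
            if ruid not in receipted_ruids and today - interviewed > timedelta(days=10)]

completes = [("RU001", date(2017, 5, 1)), ("RU002", date(2017, 5, 20))]
print(overdue_shipments(completes, {"RU002"}, today=date(2017, 5, 25)))  # ['RU001']
```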
Helpdesk Support. The MEPS CAPI Helpdesk again provided technical support for field interviewing activities during 2017. Helpdesk staff were available 7 days a week to help field staff resolve CAPI, Field Management System, transmission, and laptop problems. Incoming calls were documented for follow up as needed, both to resolve individual issues and to identify issues reported by multiple interviewers. The CAPI Helpdesk also served as the coordinating point for tracking and shipping all field laptops, monitoring field laptop assignment, and coordinating laptop repair.
This chapter briefly describes the activities that
supported Westat’s data delivery work during the year and identifies the
principal files delivered in 2017.
The primary objective of MEPS is to produce a series
of data files for public release each calendar year. The interround processing,
editing, and variable construction tasks all serve to prepare these public use
files. Each file addresses one or more aspects of the U.S. civilian
non-institutional population’s access to, use of, and payments for health care.
Public Use File Deliveries
The principal files delivered during calendar year
2017 are listed below.
- The 2016 Point in Time File;
- Full Year 2015 Use and Insurance File;
- Full Year 2015 Use and Expenditure File;
- Full Year 2015 Expenditure Event Files for events included
in the Medical Provider Component data collection including
hospital inpatient, outpatient, and emergency room events,
office-based physician visits, and home health agency events;
- Full Year 2015 Expenditure Event files for events not
included in the Medical Provider Component data collection,
including dental events, office-based non-physician events, and
other medical expenses;
- Full Year 2015 Prescribed Medicines Expenditure File;
- Full Year 2015 Medical Conditions File;
- Full Year 2015 Jobs File;
- Full Year 2015 Appendix to MEPS Event Files;
- 2015 Person Round Plan File.
Ancillary File Deliveries
In addition to the principal data files delivered for public release each year, the project also produces a number of ancillary files for delivery to AHRQ. These include an extensive series of person- and family-level weights, "raw" data files reflecting MEPS data at intermediate stages of capture and editing, and files generated at the end of each round or as needed to support analysis of both substantive and methodological topics. A comprehensive list of the files delivered during 2017 appears in Appendix A.
Medical Provider Component (MPC) Files
During each year's processing cycle, Westat also creates files for the MPC contractor and, in turn, receives data files back from the MPC. As in prior years, Westat provided sample files for the MPC in three waves, with the first two waves delivered while HC data collection was still in progress. In preparing the sample files delivered in 2016 for MPC collection of data about 2015 health events, Westat again applied the program developed in 2014 for unduplicating the sample of providers. This process, developed in consultation with AHRQ, was designed to reduce the number of duplicate providers reported from the household data collection.
Early in 2017, following completion of MPC data collection and processing for 2015 events, Westat received the files containing data collected in the MPC, with linkages matching events collected in the MPC to events collected in the HC. In processing at Westat, matched events from the MPC served as the primary source for imputing expenditure variables for the 2015 events. A similar file of prescribed medicines was also delivered to support matching and imputation of expenditures for the prescribed medicines at AHRQ. Timely and well-coordinated data handoffs between Westat and the MPC are critical to the timely delivery of the full-year expenditure files. With each additional year of interaction and cooperation, the handoffs between the MPC and HC have gone more and more smoothly.
Schedules for Data Delivery
Adhering to the schedule for delivery of the key MEPS
public use files is of paramount importance to the project. Throughout 2017,
data processing activities to support the major file deliveries for the year
proceeded simultaneously along several different delivery paths, with activity
focused separately on each of the panels for the annual Point-in-Time and
Full-Year Files. As in past years, the project used a set of comprehensive data
delivery schedules to guide management of the effort. The schedules integrate
key dates for the data collection, data capture, coding, editing and imputation,
weights construction, and documentation production tasks. These schedules
provide a framework for assessing the potential impact of proposed changes at
the start of each processing cycle and for coordinating the succession of
processes that comprise the delivery effort.
At several points in recent years AHRQ has accelerated
the schedule for delivery of the annual files in order to make MEPS data more
quickly available to users. Given the interconnections among the many processes
involved in preparing the files, acceleration requires coordination not only
among the analytic groups who prepare the files, but also with schedules for
Household data collection and for file transfers with the Medical Provider
Component. The MEPS contract called for an acceleration of one month for files
to be delivered in 2017 (for data year 2015). The project began planning to meet
that requirement during 2014, increased these efforts in 2015, and tested the
new schedule for files delivered in 2016. Table 6-1 shows the pattern of
acceleration for the file deliveries for data years 2013-2015. The dates shown
reflect the end point of file preparation – delivery for web release.
Table 6-1. Delivery schedule for major MEPS files, 2015-2017
File delivered | 2015 (data year 2013) | 2016 (data year 2014) | 2017 (data year 2015)
Point in Time File | 5/15/15 | 4/15/16 | 4/15/17
Full Year Use File | 2/13/15 | 2/12/16 | 2/10/17
Non-MPC Event Files | 6/12/15 | 6/10/16 | 5/12/17
MPC Event Files | 7/10/15 | 7/14/16 | 6/9/17
Prescribed Medicine File | 8/14/15 | 8/12/16 | 7/14/17
Full Year Expenditure File | 9/11/15 | 9/9/16 | 8/11/17
Support Activities
A number of discrete activities, described briefly
below, contribute to the delivery effort.
TeleForm/Data Editing of Scanned Forms
TeleForm, a commercial off-the-shelf (COTS) software system for intelligent data capture and image processing, was used in 2016 to capture data collected in the Diabetes Care Supplement (DCS), the Self-Administered Questionnaire (SAQ), and the Cancer Self-Administered Questionnaire (CSAQ). TeleForm software reads the form image files and extracts data according to the project specifications. Supporting software checks the data for conformity with the project specifications and flags data values that violate the validation rules for review and resolution. The edits and flag settings used with TeleForm replicate those embedded in the Computer Assisted Data Entry (CADE) programs used in prior years for the DCS and SAQ, so that the TeleForm data carry the same edits and flags as in previous years.
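A minimal sketch of this kind of post-capture edit check; the item names and ranges are illustrative, not the project's actual specifications.

```python
# Extracted values are tested against per-item rules; violations are flagged
# for review and resolution rather than silently corrected.
RULES = {
    "AGE": lambda v: v is not None and 0 <= v <= 110,
    "Q1":  lambda v: v in {1, 2, 3, 4, 5},   # example 5-point scale item
}

def flag_record(record):
    """Return the items in one scanned record that violate validation rules."""
    return [item for item, ok in RULES.items() if not ok(record.get(item))]

print(flag_record({"AGE": 250, "Q1": 3}))    # ['AGE'] -> routed for review
```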
Coding
Coding refers to the process of converting data items collected in text format to pre-specified numeric codes. The plan for the 2017 coding effort (for items collected during data year 2016) was described in Deliverables 17.506, 17.507, and 17.508. For the MEPS-HC, five types of information require coding:
- Medical conditions;
- Prescribed medicines;
- Source of payment for medical events and prescriptions;
- Industry/occupation; and
- Geographic identifiers.
Condition and Prescribed Medicine Coding
In 2017, coding was performed on the condition and prescribed medicine text strings reported by household respondents for calendar year 2016. An automated system enabled coders to search for and assign the appropriate ICD-10-CM code (for conditions) or GPI code (for medicines). The system supports the verifier's review of all codes and, as needed, correction of the coder's initial decision. For the prescribed medicine coding, a pharmacist provided a further review of text strings questioned by the verifier, uncodeable text strings, foreign medicines, and compound drugs. All coding actions were tracked in the system and error rates calculated weekly. Both the condition and prescribed medicine coding efforts were staffed by three coders.
The 2017 coding cycle was the first in which medical conditions were coded using ICD-10-CM codes. Westat and AHRQ began planning for this transition in 2015, with decisions for implementation finalized in August 2016. Given the limited specificity of condition information reported by household respondents and the greatly increased specificity required for coding to ICD-10, the decision was made to limit coding to the first three bytes of the fully detailed seven-byte ICD-10 codes. Westat used the General Equivalence Mappings (GEM), a crosswalk developed by CMS for associating ICD-9-CM codes with their ICD-10 equivalents, to convert the existing history from ICD-9 to ICD-10. Additionally, the history table was truncated to include only the five most recent years of data. Due to the projected increase in coding workload, the schedule for the 2017 coding cycle was revised so that condition coding for DY2016 began early in March, approximately six weeks earlier than in prior years.
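A minimal sketch of this history conversion, with two illustrative GEM entries:

```python
# Previously coded ICD-9-CM history entries are mapped to ICD-10-CM via a
# GEM-style crosswalk and truncated to the 3-byte level used for MEPS coding.
GEM_ICD9_TO_ICD10 = {
    "4019": "I10",     # essential hypertension
    "25000": "E119",   # type 2 diabetes without complication
}

def convert_history(history_icd9):
    """history_icd9: {text_string: icd9_code} -> {text_string: 3-byte ICD-10}."""
    out = {}
    for text, icd9 in history_icd9.items():
        icd10 = GEM_ICD9_TO_ICD10.get(icd9)
        if icd10:                          # strings with no mapping drop out
            out[text] = icd10[:3]
    return out

print(convert_history({"high blood pressure": "4019", "sugar diabetes": "25000"}))
```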
For data year 2016, a total of 21,517 text strings were manually coded for health conditions, an increase of 8,563 strings over the previous year. This total was the net after two stages of file unduplication. In the first stage, a program was run to identify exact duplicate text strings among the 120,461 condition strings reported by household respondents; the program identified 29,173 unique strings. In the second stage, the unique strings were compared to the history table consisting of strings coded in the past 5 years (2011-2015) that had successfully been converted to ICD-10-CM codes using the GEM crosswalk. Where strings in this comparison matched, the ICD-10 code assigned previously was assigned to the new text string; this step reduced the workload to the 21,517 strings actually coded in 2017. Even so, this represented a 60 percent increase in the workload for manual coding.
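A minimal sketch of the two-stage unduplication, with simplified string normalization:

```python
def condition_workload(reported_strings, coded_history):
    """Stage 1: keep unique strings. Stage 2: strings matching the 5-year
    coded history inherit their prior ICD-10 code; the rest go to manual coding."""
    unique = {s.strip().lower() for s in reported_strings}
    precoded = {s: coded_history[s] for s in unique if s in coded_history}
    manual = sorted(unique - precoded.keys())
    return precoded, manual

precoded, manual = condition_workload(["Asthma", "asthma", "stomach trouble"],
                                      {"asthma": "J45"})
print(manual)   # ['stomach trouble'] -> sent for manual ICD-10-CM coding
```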
Beyond the challenge of coding to ICD-10-CM for the first time, household-reported descriptions of medical conditions are frequently incomplete or ambiguous and have historically been difficult to code with precision. The individual coders' error rates in 2017 (3 percent on average) were comparable to the previous year, although they remained above the contract target of 2 percent. To ensure the quality of coding, all of the coding work was reviewed by a highly experienced and credentialed verifier whose error rate is below the contract target. One-on-one training was conducted with individual coders as needed.
Prescription medicine text strings were coded to the set of Generic Product Identifier (GPI) codes associated with the Master Drug Data Base (MDDB) maintained by Medi-Span, a part of Wolters Kluwer. The codes characterize medicines by therapeutic class, form, and dosage. To augment the assignment of codes to less specific and ambiguous text strings, AHRQ developed procedures for assigning partial GPI codes and higher-level drug categories, which were implemented in 2017 for coding 2016 data. AHRQ also developed a set of exact and inexact matching programs to reduce the number of prescribed medicine strings sent for manual coding. Westat's implementation of these matching programs reduced the number of prescribed medicine text strings sent for manual coding significantly: 10,352 strings were coded from 2016 data, compared to the 24,634 strings coded previously from 2015 data. Like the condition text strings, the prescription medicine text strings undergo two stages of unduplication to identify the unique strings to be coded, after which AHRQ's exact and inexact matching programs are run to further reduce the number of strings requiring manual coding. The initial total of 213,256 strings was reduced to 56,239 in the first stage of unduplication and to 29,173 strings in the second stage, with AHRQ's matching programs further reducing the number of strings to 10,352. The overall coding error rate (across all three coders) was 3 percent, 1 percentage point higher than the contractual goal of 2 percent. As with the conditions, all prescription text strings/codes were reviewed by a verifier, with additional review of selected strings provided by a pharmacist.
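A minimal sketch of the exact/inexact matching step; the fuzzy matcher, cutoff, and partial GPI values stand in for AHRQ's actual programs.

```python
import difflib

HISTORY = {"lisinopril": "3610", "metformin hcl": "2725"}   # string -> partial GPI

def route_for_coding(strings, cutoff=0.8):
    """Auto-code strings that match the history exactly or approximately;
    everything else is routed to manual GPI coding."""
    auto, manual = {}, []
    for s in set(strings):
        key = s.strip().lower()
        if key in HISTORY:                                   # exact match
            auto[s] = HISTORY[key]
            continue
        near = difflib.get_close_matches(key, HISTORY, n=1, cutoff=cutoff)
        if near:                                             # inexact match
            auto[s] = HISTORY[near[0]]
        else:
            manual.append(s)                                 # to manual coding
    return auto, manual

print(route_for_coding(["Lisinopril", "metformin hcl 500", "mystery tonic"]))
```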
Source of Payment Coding
Source of payment information (SOP) is collected in both the household and the medical provider components. In the HC charge payment section of the CAPI instrument, the names of the sources of payment are collected in three places: when the bill was paid by a source identified in response to a direct question about payment (REIMNAM); when the bill was sent to a source other than the respondent and the respondent names that source (WHOBILL#); and in response to a question about a direct payment source for prescription medicines (SRCNAME). The responses are coded to one of the ten source of payment categories in which health care expenditures are reported in the MEPS public use files. These payment sources include:
- Out of pocket;
- Medicare;
- Medicaid;
- Private Health Insurance;
- Veterans Administration, including CHAMPVA;
- Tricare or CHAMPUS;
- Other federal;
- Other state and local;
- Workers’ Compensation; and
- Other.
The SOP Coding Guidelines Manual, with the schema from
the Coding Plan, is updated each year before the start of the annual coding
cycle, submitted for AHRQ approval, and distributed to the coders. Since the
Medical Provider Component of MEPS uses the same set of source of payment codes
as the Household Component, coding rules and decisions are coordinated with the
MPC contractor to ensure consistency in the coding.
Each year, the source of payment text strings
extracted from the reference year data are matched to a historical file of
previously coded SOP text strings to create a file of matched strings with
suggested or “matched” codes. These match-coded strings are reviewed by coders
and verified or modified as needed. This review is required because insurance
companies change their product lines and coverage offerings very frequently, and
as a result, the source of payment code for a given text string (e.g., the name
of an insurance company or plan) can change from year to year. For example, from
one year to the next an insurer or insurance product may participate in or drop
out of state exchanges; may offer Part D or dental or vision insurance or may
drop it; may add Medicare Advantage plans in addition to Medicaid HMOs; or may
gain or lose state contracts as Medicaid service providers. As a result of these
changes, the appropriate code for a company or specific plan may also change
from year to year. Strings that do not match to a string in the history table
are researched and have an appropriate SOP code assigned by coding staff.
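A minimal sketch of this match-coding step, with illustrative structures:

```python
def sop_match_code(reference_year_strings, history):
    """history: {normalized payer name: SOP code from a prior year}. Matches
    receive a suggested code for coder review (payer offerings change yearly);
    non-matches are researched and coded from scratch by coding staff."""
    suggested, unmatched = {}, []
    for name in set(reference_year_strings):
        code = history.get(name.strip().lower())
        if code is not None:
            suggested[name] = code
        else:
            unmatched.append(name)
    return suggested, unmatched

# Illustrative payer names and code value only.
print(sop_match_code(["Acme Health Plan", "New Payer LLC"],
                     {"acme health plan": 4}))
```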
SOP coding during 2017 was for the payment sources reported for 2016 events. For bills paid by a source identified in response to a direct question about payment (REIMNAM), a total of 2,987 previously coded source of payment text strings were reviewed and updated as needed; after unduplication of the strings reported for 2016, coders reviewed and coded 1,277 strings. For bills sent to a source other than the respondent, where the respondent named that source (WHOBILL#), coders reviewed and coded 2,290 strings. For text strings reported as direct payers for prescription medicines (SRCNAME), 460 previously coded strings were reviewed and updated as needed, and 318 new text strings were reviewed and coded.
Industry and Occupation Coding
Industry and occupation coding is performed for MEPS by the Census Bureau using the Census Bureau's Demographic Surveys Division (DSD) computer-assisted industry and occupation (I&O) codes, which can be cross-walked to the 2007 North American Industry Classification System (NAICS) and the 2010 Standard Occupational Classification (SOC). The codes characterize the jobs reported by household respondents and are released annually on the FY JOBS file. During 2017, 18,133 jobs were coded for the 2016 JOBS file.
GEO Coding
The Westat Geographic Information Systems (GIS) division geocodes household addresses, assigning latitude and longitude coordinates as well as other variables such as county and state Federal Information Processing Standards (FIPS) codes, Metropolitan Statistical Area (MSA) status, Designated Market Area, and Census Place. RU-level data are expanded to the person level and delivered to AHRQ as part of the set of 'Master Files' sent yearly. These data are not included in a PUF, but some variables are used for the FY weights processing.
During the FY2016 coding cycle, 18,659 unique address records for full-year reporting units were processed, as well as 7,277 records for point-in-time households.
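A minimal sketch of the RU-to-person expansion, with illustrative field names:

```python
def expand_to_persons(ru_geo, persons):
    """ru_geo: {ruid: geocode variables}; persons: (person_id, ruid) pairs.
    Each person record inherits the geocodes of its reporting unit."""
    return [{"person_id": pid, "ruid": ruid, **ru_geo[ruid]}
            for pid, ruid in persons if ruid in ru_geo]

geo = {"RU001": {"lat": 39.05, "lon": -77.12, "st_fips": "24", "cnty_fips": "031"}}
print(expand_to_persons(geo, [("P01", "RU001"), ("P02", "RU001")]))
```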
Table A-1. Data collection periods and starting RU-level sample sizes, all panels
Spring/Fall Data Collection | Sample Size
January-June 1996 | 10,799
Panel 1 Round 1 | 10,799
July-December 1996 | 9,485
Panel 1 Round 2 | 9,485
January-June 1997 | 15,689
Panel 1 Round 3 | 9,228
Panel 2 Round 1 | 6,461
July-December 1997 | 14,657
Panel 1 Round 4 | 9,019
Panel 2 Round 2 | 5,638
January-June 1998 | 19,269
Panel 1 Round 5 | 8,477
Panel 2 Round 3 | 5,382
Panel 3 Round 1 | 5,410
July-December 1998 | 9,871
Panel 2 Round 4 | 5,290
Panel 3 Round 2 | 4,581
January-June 1999 | 17,612
Panel 2 Round 5 | 5,127
Panel 3 Round 3 | 5,382
Panel 4 Round 1 | 7,103
July-December 1999 | 10,161
Panel 3 Round 4 | 4,243
Panel 4 Round 2 | 5,918
January-June 2000 | 15,447
Panel 3 Round 5 | 4,183
Panel 4 Round 3 | 5,731
Panel 5 Round 1 | 5,533
July-December 2000 | 10,222
Panel 4 Round 4 | 5,567
Panel 5 Round 2 | 4,655
January-June 2001 | 21,069
Panel 4 Round 5 | 5,547
Panel 5 Round 3 | 4,496
Panel 6 Round 1 | 11,026
July-December 2001 | 13,777
Panel 5 Round 4 | 4,426
Panel 6 Round 2 | 9,351
January-June 2002 | 21,915
Panel 5 Round 5 | 4,393
Panel 6 Round 3 | 9,183
Panel 7 Round 1 | 8,339
July-December 2002 | 15,968
Panel 6 Round 4 | 8,977
Panel 7 Round 2 | 6,991
January-June 2003 | 24,315
Panel 6 Round 5 | 8,830
Panel 7 Round 3 | 6,779
Panel 8 Round 1 | 8,706
July-December 2003 | 13,814
Panel 7 Round 4 | 6,655
Panel 8 Round 2 | 7,159
January-June 2004 | 22,552
Panel 7 Round 5 | 6,578
Panel 8 Round 3 | 7,035
Panel 9 Round 1 | 8,939
July-December 2004 | 14,068
Panel 8 Round 4 | 6,878
Panel 9 Round 2 | 7,190
January-June 2005 | 22,548
Panel 8 Round 5 | 6,795
Panel 9 Round 3 | 7,005
Panel 10 Round 1 | 8,748
July-December 2005 | 13,991
Panel 9 Round 4 | 6,843
Panel 10 Round 2 | 7,148
January-June 2006 | 23,278
Panel 9 Round 5 | 6,703
Panel 10 Round 3 | 6,921
Panel 11 Round 1 | 9,654
July-December 2006 | 14,280
Panel 10 Round 4 | 6,708
Panel 11 Round 2 | 7,572
January-June 2007 | 21,326
Panel 10 Round 5 | 6,596
Panel 11 Round 3 | 7,263
Panel 12 Round 1 | 7,467
July-December 2007 | 12,906
Panel 11 Round 4 | 7,005
Panel 12 Round 2 | 5,901
January-June 2008 | 22,414
Panel 11 Round 5 | 6,895
Panel 12 Round 3 | 5,580
Panel 13 Round 1 | 9,939
July-December 2008 | 13,384
Panel 12 Round 4 | 5,376
Panel 13 Round 2 | 8,008
January-June 2009 | 22,960
Panel 12 Round 5 | 5,261
Panel 13 Round 3 | 7,800
Panel 14 Round 1 | 9,899
July-December 2009 | 15,339
Panel 13 Round 4 | 7,670
Panel 14 Round 2 | 7,669
January-June 2010 | 23,770
Panel 13 Round 5 | 7,576
Panel 14 Round 3 | 7,226
Panel 15 Round 1 | 8,968
July-December 2010 | 13,785
Panel 14 Round 4 | 6,974
Panel 15 Round 2 | 6,811
January-June 2011 | 23,693
Panel 14 Round 5 | 6,845
Panel 15 Round 3 | 6,431
Panel 16 Round 1 | 10,417
July-December 2011 | 14,802
Panel 15 Round 4 | 6,254
Panel 16 Round 2 | 8,548
January-June 2012 | 24,247
Panel 15 Round 5 | 6,156
Panel 16 Round 3 | 8,160
Panel 17 Round 1 | 9,931
July-December 2012 | 16,161
Panel 16 Round 4 | 8,048
Panel 17 Round 2 | 8,113
January-June 2013 | 25,788
Panel 16 Round 5 | 7,969
Panel 17 Round 3 | 7,869
Panel 18 Round 1 | 9,950
July-December 2013 | 15,347
Panel 17 Round 4 | 7,656
Panel 18 Round 2 | 7,691
January-June 2014 | 24,857
Panel 17 Round 5 | 7,485
Panel 18 Round 3 | 7,402
Panel 19 Round 1 | 9,970
July-December 2014 | 14,665
Panel 18 Round 4 | 7,203
Panel 19 Round 2 | 7,462
January-June 2015 | 25,185
Panel 18 Round 5 | 7,163
Panel 19 Round 3 | 7,168
Panel 20 Round 1 | 10,854
July-December 2015 | 15,247
Panel 19 Round 4 | 6,946
Panel 20 Round 2 | 8,301
January-June 2016 | 24,694
Panel 19 Round 5 | 6,856
Panel 20 Round 3 | 7,987
Panel 21 Round 1 | 9,851
July-December 2016 | 15,390
Panel 20 Round 4 | 7,729
Panel 21 Round 2 | 7,661
January-June 2017 | 25,485
Panel 20 Round 5 | 7,723
Panel 21 Round 3 | 7,507
Panel 22 Round 1 | 10,255
July-December 2017 | 14,714
Panel 21 Round 4 | 7,158
Panel 22 Round 2 | 7,556
Table A-2. MEPS household survey data collection results, all panels
Panel/round | Original sample | Split cases (movers) | Student cases | Out-of-scope cases | Net sample | Completes | Average interviewer hours/complete | Response rate (%)
Panel 1
Round 1 | 10,799 | 675 | 125 | 165 | 11,434 | 9,496 | 10.4 | 83.1
Round 2 | 9,485 | 310 | 74 | 101 | 9,768 | 9,239 | 8.7 | 94.6
Round 3 | 9,228 | 250 | 28 | 78 | 9,428 | 9,031 | 8.6 | 95.8
Round 4 | 9,019 | 261 | 33 | 89 | 9,224 | 8,487 | 8.5 | 92.0
Round 5 | 8,477 | 80 | 5 | 66 | 8,496 | 8,369 | 6.5 | 98.5
Panel 2
Round 1 | 6,461 | 431 | 71 | 151 | 6,812 | 5,660 | 12.9 | 83.1
Round 2 | 5,638 | 204 | 27 | 54 | 5,815 | 5,395 | 9.1 | 92.8
Round 3 | 5,382 | 166 | 15 | 52 | 5,511 | 5,296 | 8.5 | 96.1
Round 4 | 5,290 | 105 | 27 | 65 | 5,357 | 5,129 | 8.3 | 95.7
Round 5 | 5,127 | 38 | 2 | 56 | 5,111 | 5,049 | 6.7 | 98.8
Panel 3
Round 1 | 5,410 | 349 | 44 | 200 | 5,603 | 4,599 | 12.7 | 82.1
Round 2 | 4,581 | 106 | 25 | 39 | 4,673 | 4,388 | 8.3 | 93.9
Round 3 | 4,382 | 102 | 4 | 42 | 4,446 | 4,249 | 7.3 | 95.5
Round 4 | 4,243 | 86 | 17 | 33 | 4,313 | 4,184 | 6.7 | 97.0
Round 5 | 4,183 | 23 | 1 | 26 | 4,181 | 4,114 | 5.6 | 98.4
Panel 4
Round 1 | 7,103 | 371 | 64 | 134 | 7,404 | 5,948 | 10.9 | 80.3
Round 2 | 5,918 | 197 | 47 | 40 | 6,122 | 5,737 | 7.2 | 93.7
Round 3 | 5,731 | 145 | 10 | 39 | 5,847 | 5,574 | 6.9 | 95.3
Round 4 | 5,567 | 133 | 35 | 39 | 5,696 | 5,540 | 6.8 | 97.3
Round 5 | 5,547 | 52 | 4 | 47 | 5,556 | 5,500 | 6.0 | 99.0
Panel 5
Round 1 | 5,533 | 258 | 62 | 103 | 5,750 | 4,670 | 11.1 | 81.2
Round 2 | 4,655 | 119 | 27 | 27 | 4,774 | 4,510 | 7.7 | 94.5
Round 3 | 4,496 | 108 | 17 | 24 | 4,597 | 4,437 | 7.2 | 96.5
Round 4 | 4,426 | 117 | 20 | 41 | 4,522 | 4,396 | 7.0 | 97.2
Round 5 | 4,393 | 47 | 12 | 32 | 4,420 | 4,357 | 5.5 | 98.6
Panel 6
Round 1 | 11,026 | 595 | 135 | 200 | 11,556 | 9,382 | 10.8 | 81.2
Round 2 | 9,351 | 316 | 49 | 50 | 9,666 | 9,222 | 7.2 | 95.4
Round 3 | 9,183 | 215 | 23 | 41 | 9,380 | 9,001 | 6.5 | 96.0
Round 4 | 8,977 | 174 | 32 | 66 | 9,117 | 8,843 | 6.6 | 97.0
Round 5 | 8,830 | 94 | 14 | 46 | 8,892 | 8,781 | 5.6 | 98.8
Panel 7
Round 1 | 8,339 | 417 | 76 | 122 | 8,710 | 7,008 | 10.0 | 80.5
Round 2 | 6,991 | 190 | 40 | 24 | 7,197 | 6,802 | 7.2 | 94.5
Round 3 | 6,779 | 169 | 21 | 32 | 6,937 | 6,673 | 6.5 | 96.2
Round 4 | 6,655 | 133 | 17 | 34 | 6,771 | 6,593 | 7.0 | 97.4
Round 5 | 6,578 | 79 | 11 | 39 | 6,629 | 6,529 | 5.7 | 98.5
Panel 8
Round 1 | 8,706 | 441 | 73 | 175 | 9,045 | 7,177 | 10.0 | 79.3
Round 2 | 7,159 | 218 | 52 | 36 | 7,393 | 7,049 | 7.2 | 95.4
Round 3 | 7,035 | 150 | 13 | 33 | 7,165 | 6,892 | 6.5 | 96.2
Round 4 | 6,878 | 149 | 27 | 53 | 7,001 | 6,799 | 7.3 | 97.1
Round 5 | 6,795 | 71 | 8 | 41 | 6,833 | 6,726 | 6.0 | 98.4
Panel 9
Round 1 | 8,939 | 417 | 73 | 179 | 9,250 | 7,205 | 10.5 | 77.9
Round 2 | 7,190 | 237 | 40 | 40 | 7,427 | 7,027 | 7.7 | 94.6
Round 3 | 7,005 | 189 | 24 | 31 | 7,187 | 6,861 | 7.1 | 95.5
Round 4 | 6,843 | 142 | 23 | 44 | 6,964 | 6,716 | 7.4 | 96.5
Round 5 | 6,703 | 60 | 8 | 43 | 6,728 | 6,627 | 6.1 | 98.5
Panel 10
Round 1 | 8,748 | 430 | 77 | 169 | 9,086 | 7,175 | 11.0 | 79.0
Round 2 | 7,148 | 219 | 36 | 22 | 7,381 | 6,940 | 7.8 | 94.0
Round 3 | 6,921 | 156 | 10 | 31 | 7,056 | 6,727 | 6.8 | 95.3
Round 4 | 6,708 | 155 | 13 | 34 | 6,842 | 6,590 | 7.3 | 96.3
Round 5 | 6,596 | 55 | 9 | 38 | 6,622 | 6,461 | 6.2 | 97.6
Panel 11
Round 1 | 9,654 | 399 | 81 | 162 | 9,972 | 7,585 | 11.5 | 76.1
Round 2 | 7,572 | 244 | 42 | 24 | 7,834 | 7,276 | 7.8 | 92.9
Round 3 | 7,263 | 170 | 15 | 25 | 7,423 | 7,007 | 6.9 | 94.4
Round 4 | 7,005 | 139 | 14 | 36 | 7,122 | 6,898 | 7.2 | 96.9
Round 5 | 6,895 | 51 | 7 | 44 | 6,905 | 6,781 | 5.5 | 98.2
Panel 12
Round 1 | 7,467 | 331 | 86 | 172 | 7,712 | 5,901 | 14.2 | 76.5
Round 2 | 5,901 | 157 | 27 | 27 | 6,058 | 5,584 | 9.1 | 92.2
Round 3 | 5,580 | 105 | 13 | 12 | 5,686 | 5,383 | 8.1 | 94.7
Round 4 | 5,376 | 102 | 12 | 16 | 5,474 | 5,267 | 8.8 | 96.2
Round 5 | 5,261 | 50 | 8 | 21 | 5,298 | 5,182 | 6.4 | 97.8
Panel 13
Round 1 | 9,939 | 502 | 97 | 213 | 10,325 | 8,017 | 12.2 | 77.6
Round 2 | 8,008 | 220 | 47 | 23 | 8,252 | 7,809 | 9.0 | 94.6
Round 3 | 7,802 | 204 | 14 | 38 | 7,982 | 7,684 | 7.2 | 96.2
Round 4 | 7,670 | 162 | 17 | 40 | 7,809 | 7,576 | 7.5 | 97.0
Round 5 | 7,576 | 70 | 15 | 38 | 7,623 | 7,461 | 6.1 | 97.9
Panel 14
Round 1 | 9,899 | 394 | 74 | 140 | 10,227 | 7,650 | 12.3 | 74.8
Round 2 | 7,669 | 212 | 29 | 27 | 7,883 | 7,239 | 8.3 | 91.8
Round 3 | 7,226 | 144 | 23 | 34 | 7,359 | 6,980 | 7.3 | 94.9
Round 4 | 6,974 | 112 | 23 | 30 | 7,079 | 6,853 | 7.7 | 96.8
Round 5 | 6,845 | 55 | 9 | 30 | 6,879 | 6,761 | 6.2 | 98.3
Panel 15
Round 1 | 8,968 | 374 | 73 | 157 | 9,258 | 6,802 | 13.2 | 73.5
Round 2 | 6,811 | 171 | 19 | 21 | 6,980 | 6,435 | 8.9 | 92.2
Round 3 | 6,431 | 134 | 23 | 22 | 6,566 | 6,261 | 7.2 | 95.4
Round 4 | 6,254 | 116 | 15 | 26 | 6,359 | 6,165 | 7.8 | 97.0
Round 5 | 6,156 | 50 | 5 | 19 | 6,192 | 6,078 | 6.0 | 98.2
Panel 16
Round 1 | 10,417 | 504 | 98 | 555 | 10,940 | 8,553 | 11.4 | 78.2
Round 2 | 8,353 | 248 | 40 | 32 | 8,821 | 8,351 | 7.6 | 94.7
Round 3 | 8,160 | 223 | 19 | 27 | 8,375 | 8,236 | 6.4 | 96.1
Round 4 | 8,048 | 151 | 16 | 13 | 8,390 | 8,162 | 6.6 | 97.3
Round 5 | 7,969 | 66 | 13 | 25 | 8,198 | 7,998 | 5.5 | 97.6
Panel 17
Round 1 | 9,931 | 490 | 92 | 127 | 10,386 | 8,121 | 11.7 | 78.2
Round 2 | 8,113 | 230 | 35 | 19 | 8,359 | 7,874 | 7.9 | 94.2
Round 3 | 7,869 | 180 | 15 | 15 | 8,049 | 7,663 | 6.3 | 95.2
Round 4 | 7,656 | 199 | 19 | 30 | 7,844 | 7,494 | 7.4 | 95.5
Round 5 | 7,485 | 87 | 10 | 23 | 7,559 | 7,445 | 6.1 | 98.5
Panel 18
Round 1 | 9,950 | 435 | 83 | 111 | 10,357 | 7,683 | 12.3 | 74.2
Round 2 | 7,691 | 264 | 32 | 16 | 7,971 | 7,402 | 9.2 | 92.9
Round 3 | 7,402 | 235 | 21 | 22 | 7,635 | 7,213 | 7.6 | 94.5
Round 4 | 7,203 | 189 | 14 | 22 | 7,384 | 7,172 | 7.5 | 97.1
Round 5 | 7,163 | 94 | 12 | 15 | 7,254 | 7,138 | 6.2 | 98.4
Panel 19
Round 1 | 9,970 | 492 | 70 | 115 | 10,417 | 7,475 | 13.5 | 71.8
Round 2 | 7,460 | 222 | 23 | 24 | 7,681 | 7,188 | 8.4 | 93.6
Round 3 | 7,168 | 187 | 12 | 17 | 7,350 | 6,962 | 7.0 | 94.7
Round 4 | 6,946 | 146 | 20 | 23 | 7,089 | 6,858 | 7.4 | 96.7
Round 5 | 6,856 | 75 | 7 | 24 | 6,914 | 6,794 | 5.9 | 98.3
Panel 20
Round 1 | 10,854 | 496 | 85 | 117 | 11,318 | 8,318 | 12.5 | 73.5
Round 2 | 8,301 | 243 | 39 | 22 | 8,561 | 7,998 | 8.3 | 93.4
Round 3 | 7,987 | 173 | 17 | 26 | 8,151 | 7,753 | 6.8 | 95.1
Round 4 | 7,729 | 161 | 19 | 31 | 7,878 | 7,622 | 7.2 | 96.8
Round 5 | 7,611 | 99 | 13 | 23 | 7,700 | 7,421 | 6.0 | 96.4
Panel 21
Round 1 | 9,851 | 462 | 92 | 89 | 10,316 | 7,674 | 12.6 | 74.4
Round 2 | 7,661 | 207 | 32 | 17 | 7,883 | 7,327 | 8.5 | 93.0
Round 3 | 7,327 | 166 | 14 | 19 | 7,488 | 7,043 | 7.2 | 94.1
Round 4 | 7,025 | 119 | 14 | 20 | 7,138 | 6,907 | 7.0 | 96.8
Panel 22
Round 1 | 9,835 | 352 | 68 | 86 | 10,169 | 7,381 | 12.8 | 72.7
Round 2 | 7,371 | 166 | 19 | 11 | 7,545 | 7,039 | 8.5 | 93.3
* Figures in the table are weighted to reflect results of the interim nonresponse subsampling procedure implemented in the first round of Panel 16.
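As a reading aid for the rate columns in this and the following appendix tables (not stated in the tables themselves, but consistent with their unweighted rows): each rate is the completed or signed count divided by the relevant base, expressed as a percentage. For Panel 1 Round 1 above,

\[
\text{Response rate} = \frac{\text{Completes}}{\text{Net sample}} \times 100 = \frac{9{,}496}{11{,}434} \times 100 \approx 83.1\%.
\]

For rounds affected by the Panel 16 Round 1 nonresponse subsampling, the published figures are weighted (see the footnote above), so this simple unweighted ratio may not reproduce them exactly.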
Table A-3. Signing rates for medical provider authorization forms
Panel/round | Authorization forms requested | Authorization forms signed | Signing rate (%)
Panel 1
Round 1 | 3,562 | 2,624 | 73.7
Round 2 | 19,874 | 14,145 | 71.2
Round 3 | 17,722 | 12,062 | 68.1
Round 4 | 17,133 | 10,542 | 61.5
Round 5 | 12,544 | 6,763 | 53.9
Panel 2
Round 1 | 2,735 | 1,788 | 65.4
Round 2 | 13,461 | 9,433 | 70.1
Round 3 | 11,901 | 7,537 | 63.3
Round 4 | 11,164 | 6,485 | 58.1
Round 5 | 8,104 | 4,244 | 52.4
Panel 3
Round 1 | 2,078 | 1,349 | 64.9
Round 2 | 10,335 | 6,463 | 62.5
Round 3 | 8,716 | 4,797 | 55.0
Round 4 | 8,761 | 4,246 | 48.5
Round 5 | 6,913 | 2,911 | 42.1
Panel 4
Round 1 | 2,400 | 1,607 | 67.0
Round 2 | 12,711 | 8,434 | 66.4
Round 3 | 11,078 | 6,642 | 60.0
Round 4 | 11,047 | 6,888 | 62.4
Round 5 | 8,684 | 5,096 | 58.7
Panel 5
Round 1 | 1,243 | 834 | 67.1
Round 2 | 14,008 | 9,618 | 68.7
Round 3 | 12,869 | 8,301 | 64.5
Round 4 | 13,464 | 9,170 | 68.1
Round 5 | 10,888 | 7,025 | 64.5
Panel 6
Round 1 | 2,783 | 2,012 | 72.3
Round 2 | 29,861 | 22,872 | 76.6
Round 3 | 26,068 | 18,219 | 69.9
Round 4 | 27,146 | 20,082 | 74.0
Round 5 | 21,022 | 14,581 | 69.4
Panel 7
Round 1 | 2,298 | 1,723 | 75.0
Round 2 | 22,302 | 17,557 | 78.7
Round 3 | 19,312 | 13,896 | 72.0
Round 4 | 16,934 | 13,725 | 81.1
Round 5 | 14,577 | 11,099 | 76.1
Panel 8
Round 1 | 2,287 | 1,773 | 77.5
Round 2 | 22,533 | 17,802 | 79.0
Round 3 | 19,530 | 14,064 | 72.0
Round 4 | 19,718 | 14,599 | 74.0
Round 5 | 15,856 | 11,106 | 70.0
Panel 9
Round 1 | 2,253 | 1,681 | 74.6
Round 2 | 22,668 | 17,522 | 77.3
Round 3 | 19,601 | 13,672 | 69.8
Round 4 | 20,147 | 14,527 | 72.1
Round 5 | 15,963 | 10,720 | 67.2
Panel 10
Round 1 | 2,068 | 1,443 | 69.8
Round 2 | 22,582 | 17,090 | 75.7
Round 3 | 18,967 | 13,396 | 70.6
Round 4 | 19,087 | 13,296 | 69.7
Round 5 | 15,787 | 10,476 | 66.4
Panel 11
Round 1 | 2,154 | 1,498 | 69.5
Round 2 | 23,957 | 17,742 | 74.1
Round 3 | 20,756 | 13,400 | 64.6
Round 4 | 21,260 | 14,808 | 69.7
Round 5 | 16,793 | 11,482 | 68.4
Panel 12
Round 1 | 1,695 | 1,066 | 62.9
Round 2 | 17,787 | 12,524 | 70.4
Round 3 | 15,291 | 10,006 | 65.4
Round 4 | 15,692 | 10,717 | 68.3
Round 5 | 12,780 | 8,367 | 65.5
Panel 13
Round 1 | 2,217 | 1,603 | 72.3
Round 2 | 24,357 | 18,566 | 76.2
Round 3 | 21,058 | 14,826 | 70.4
Round 4 | 21,673 | 15,632 | 72.1
Round 5 | 17,158 | 11,779 | 68.7
Panel 14
Round 1 | 2,128 | 1,498 | 70.4
Round 2 | 23,138 | 17,739 | 76.7
Round 3 | 19,024 | 13,673 | 71.9
Round 4 | 18,532 | 12,824 | 69.2
Round 5 | 15,444 | 10,201 | 66.1
Panel 15
Round 1 | 1,680 | 1,136 | 67.6
Round 2 | 18,506 | 13,628 | 73.6
Round 3 | 16,686 | 11,652 | 69.8
Round 4 | 16,260 | 11,139 | 68.5
Round 5 | 13,443 | 8,420 | 62.6
Panel 16
Round 1 | 1,811 | 1,223 | 67.5
Round 2 | 23,718 | 17,566 | 74.1
Round 3 | 21,780 | 14,828 | 68.1
Round 4 | 21,537 | 16,329 | 75.8
Round 5 | 16,688 | 12,028 | 72.1
Panel 17
Round 1 | 1,655 | 1,117 | 67.5
Round 2 | 21,749 | 17,694 | 81.4
Round 3 | 19,292 | 15,125 | 78.4
Round 4 | 20,086 | 15,691 | 78.1
Round 5 | 15,064 | 11,873 | 78.8
Panel 18
Round 1 | 1,677 | 1,266 | 75.5
Round 2 | 22,714 | 18,043 | 79.4
Round 3 | 20,728 | 15,827 | 76.4
Round 4 | 17,092 | 13,704 | 80.2
Round 5 | 15,448 | 11,796 | 76.4
Panel 19
Round 1 | 2,189 | 1,480 | 67.6
Round 2 | 22,671 | 17,190 | 75.8
Round 3 | 20,582 | 14,534 | 70.6
Round 4 | 17,102 | 13,254 | 77.5
Round 5 | 15,330 | 11,425 | 74.5
Panel 20
Round 1 | 2,354 | 1,603 | 68.1
Round 2 | 25,334 | 18,479 | 72.9
Round 3 | 22,851 | 15,862 | 69.4
Round 4 | 18,234 | 14,026 | 76.9
Round 5 | 16,274 | 12,100 | 74.4
Panel 21
Round 1 | 2,037 | 1,396 | 68.5
Round 2 | 22,984 | 17,295 | 75.2
Round 3 | 20,802 | 14,898 | 71.6
Round 4 | 16,487 | 13,110 | 79.5
Panel 22
Round 1 | 2,274 | 1,573 | 69.2
Round 2 | 22,913 | 17,530 | 76.5
Table A-4. Signing rates for pharmacy authorization forms
Panel/round | Permission forms requested | Permission forms signed | Signing rate (%)
Panel 1
Round 3 | 19,913 | 14,468 | 72.7
Round 5 | 8,685 | 6,002 | 69.1
Panel 2
Round 3 | 12,241 | 8,694 | 71.0
Round 5 | 8,640 | 6,297 | 72.9
Panel 3
Round 3 | 9,016 | 5,929 | 65.8
Round 5 | 7,569 | 5,200 | 68.7
Panel 4
Round 3 | 11,856 | 8,280 | 69.8
Round 5 | 10,688 | 8,318 | 77.8
Panel 5
Round 3 | 9,248 | 6,852 | 74.1
Round 5 | 8,955 | 7,174 | 80.1
Panel 6
Round 3 | 19,305 | 15,313 | 79.3
Round 5 | 17,981 | 14,864 | 82.7
Panel 7
Round 3 | 14,456 | 11,611 | 80.3
Round 5 | 13,428 | 11,210 | 83.5
Panel 8
Round 3 | 14,391 | 11,533 | 80.1
Round 5 | 13,422 | 11,049 | 82.3
Panel 9
Round 3 | 14,334 | 11,189 | 78.1
Round 5 | 13,416 | 10,893 | 81.2
Panel 10
Round 3 | 13,928 | 10,706 | 76.9
Round 5 | 12,869 | 10,260 | 79.7
Panel 11
Round 3 | 14,937 | 11,328 | 75.8
Round 5 | 13,778 | 11,332 | 82.3
Panel 12
Round 3 | 10,840 | 8,242 | 76.0
Round 5 | 9,930 | 8,015 | 80.7
Panel 13
Round 3 | 15,379 | 12,165 | 79.1
Round 4 | 10,782 | 7,795 | 72.3
Round 5 | 9,451 | 6,635 | 70.2
Panel 14
Round 2 | 11,841 | 9,151 | 77.3
Round 3 | 9,686 | 7,091 | 73.2
Round 4 | 9,298 | 6,623 | 71.2
Round 5 | 8,415 | 6,011 | 71.4
Panel 15
Round 2 | 9,698 | 7,092 | 73.1
Round 3 | 8,684 | 6,189 | 71.3
Round 4 | 8,163 | 5,756 | 70.5
Round 5 | 7,302 | 4,485 | 66.9
Panel 16
Round 2 | 12,093 | 8,892 | 73.5
Round 3 | 10,959 | 7,591 | 69.3
Round 4 | 10,432 | 8,194 | 78.6
Round 5 | 8,990 | 6,928 | 77.1
Panel 17
Round 2 | 14,181 | 12,567 | 88.6
Round 3 | 9,715 | 7,580 | 78.0
Round 4 | 9,759 | 7,730 | 79.2
Round 5 | 8,245 | 6,604 | 80.1
Panel 18
Round 2 | 10,977 | 8,755 | 79.8
Round 3 | 9,757 | 7,573 | 77.6
Round 4 | 8,526 | 6,858 | 80.4
Round 5 | 7,918 | 6,173 | 78.0
Panel 19
Round 2 | 10,749 | 8,261 | 76.9
Round 3 | 9,618 | 6,902 | 71.8
Round 4 | 8,557 | 6,579 | 76.9
Round 5 | 7,767 | 5,905 | 76.0
Panel 20
Round 2 | 12,074 | 8,796 | 72.9
Round 3 | 10,577 | 7,432 | 70.3
Round 4 | 9,099 | 6,945 | 76.3
Round 5 | 8,312 | 6,339 | 76.3
Panel 21
Round 2 | 10,783 | 7,985 | 74.1
Round 3 | 9,540 | 6,847 | 71.8
Round 4 | 8,172 | 6,387 | 78.2
Panel 22
Round 2 | 10,510 | 7,919 | 75.4
Table A-5. Results of Self-Administered Questionnaire (SAQ) collection
Panel/round | SAQs requested | SAQs completed | SAQs refused | Other nonresponse | Response rate (%)
Panel 1
Round 2 | 16,577 | 9,910 | | | 59.8
Round 3 | 6,032 | 1,469 | 840 | 3,723 | 24.3
Combined, 1996 | 16,577 | 11,379 | | | 68.6
Panel 4*
Round 4 | 13,936 | 12,265 | 288 | 1,367 | 87.9
Round 5 | 1,683 | 947 | 314 | 422 | 56.3
Combined, 2000 | 13,936 | 13,212 | | | 94.8
Panel 5*
Round 2 | 11,239 | 9,833 | 191 | 1,213 | 86.9
Round 3 | 1,314 | 717 | 180 | 417 | 54.6
Combined, 2000 | 11,239 | 10,550 | | | 93.9
Round 4 | 7,812 | 6,790 | 198 | 824 | 86.9
Round 5 | 1,022 | 483 | 182 | 357 | 47.3
Combined, 2001 | 7,812 | 7,273 | 380 | 1,181 | 93.1
Panel 6
Round 2 | 16,577 | 14,233 | 412 | 1,932 | 85.9
Round 3 | 2,143 | 1,213 | 230 | 700 | 56.6
Combined, 2001 | 16,577 | 15,446 | 642 | 2,632 | 93.2
Round 4 | 15,687 | 13,898 | 362 | 1,427 | 88.6
Round 5 | 1,852 | 967 | 377 | 508 | 52.2
Combined, 2002 | 15,687 | 14,865 | 739 | 1,935 | 94.8
Panel 7
Round 2 | 12,093 | 10,478 | 196 | 1,419 | 86.6
Round 3 | 1,559 | 894 | 206 | 459 | 57.3
Combined, 2002 | 12,093 | 11,372 | 402 | 1,878 | 94.0
Round 4 | 11,703 | 10,125 | 285 | 1,292 | 86.5
Round 5 | 1,493 | 786 | 273 | 434 | 52.7
Combined, 2003 | 11,703 | 10,911 | 558 | 1,726 | 93.2
Panel 8
Round 2 | 12,533 | 10,765 | 203 | 1,565 | 85.9
Round 3 | 1,568 | 846 | 234 | 488 | 54.0
Combined, 2003 | 12,533 | 11,611 | 437 | 2,053 | 92.6
Round 4 | 11,996 | 10,534 | 357 | 1,105 | 87.8
Round 5 | 1,400 | 675 | 344 | 381 | 48.2
Combined, 2004 | 11,996 | 11,209 | 701 | 1,486 | 93.4
Panel 9
Round 2 | 12,541 | 10,631 | 381 | 1,529 | 84.8
Round 3 | 1,670 | 886 | 287 | 496 | 53.1
Combined, 2004 | 12,541 | 11,517 | 668 | 2,025 | 91.9
Round 4 | 11,913 | 10,357 | 379 | 1,177 | 86.9
Round 5 | 1,478 | 751 | 324 | 403 | 50.8
Combined, 2005 | 11,913 | 11,108 | 703 | 1,580 | 93.2
Panel 10
Round 2 | 12,360 | 10,503 | 391 | 1,466 | 85.0
Round 3 | 1,626 | 787 | 280 | 559 | 48.4
Combined, 2005 | 12,360 | 11,290 | 671 | 2,025 | 91.3
Round 4 | 11,726 | 10,081 | 415 | 1,230 | 86.0
Round 5 | 1,516 | 696 | 417 | 403 | 45.9
Combined, 2006 | 11,726 | 10,777 | 832 | 1,633 | 91.9
Panel 11
Round 2 | 13,146 | 10,924 | 452 | 1,770 | 83.1
Round 3 | 1,908 | 948 | 349 | 611 | 49.7
Combined, 2006 | 13,146 | 11,872 | 801 | 2,381 | 90.3
Round 4 | 12,479 | 10,771 | 622 | 1,086 | 86.3
Round 5 | 1,621 | 790 | 539 | 292 | 48.7
Combined, 2007 | 12,479 | 11,561 | 1,161 | 1,378 | 92.6
Panel 12
Round 2 | 10,061 | 8,419 | 502 | 1,140 | 83.7
Round 3 | 1,460 | 711 | 402 | 347 | 48.7
Combined, 2007 | 10,061 | 9,130 | 904 | 1,487 | 90.7
Round 4 | 9,550 | 8,303 | 577 | 670 | 86.9
Round 5 | 1,145 | 541 | 415 | 189 | 47.3
Combined, 2008 | 9,550 | 8,844 | 992 | 859 | 92.6
Panel 13
Round 2 | 14,410 | 12,541 | 707 | 1,162 | 87.0
Round 3 | 1,630 | 829 | 439 | 362 | 50.9
Combined, 2008 | 14,410 | 13,370 | 1,146 | 1,524 | 92.8
Round 4 | 13,822 | 12,311 | 559 | 952 | 89.1
Round 5 | 1,364 | 635 | 476 | 253 | 46.6
Combined, 2009 | 13,822 | 12,946 | 1,705 | 1,205 | 93.7
Panel 14
Round 2 | 13,335 | 11,528 | 616 | 1,191 | 86.5
Round 3 | 1,542 | 818 | 426 | 298 | 53.1
Combined, 2009 | 13,335 | 12,346 | 1,042 | 1,489 | 92.6
Round 4 | 12,527 | 11,041 | 644 | 839 | 88.1
Round 5 | 1,403 | 645 | 497 | 261 | 46.0
Combined, 2010 | 12,527 | 11,686 | 1,141 | 1,100 | 93.3
Panel 15
Round 2 | 11,857 | 10,121 | 637 | 1,096 | 85.4
Round 3 | 1,491 | 725 | 425 | 341 | 48.6
Combined, 2010 | 11,857 | 10,846 | 1,062 | 1,437 | 91.5
Round 4 | 11,311 | 9,804 | 572 | 935 | 86.7
Round 5 | 1,418 | 678 | 461 | 279 | 47.8
Combined, 2011 | 11,311 | 10,482 | 1,033 | 1,214 | 92.6
Panel 16
Round 2 | 15,026 | 12,926 | 707 | 1,393 | 86.0
Round 3 | 1,863 | 949 | 465 | 449 | 50.9
Combined, 2011 | 15,026 | 13,875 | 1,172 | 728 | 92.3
Round 4 | 13,620 | 12,415 | 582 | 623 | 91.2
Round 5 | 1,112 | 516 | 442 | 154 | 46.4
Combined, 2012 | 13,620 | 12,931 | 1,024 | 777 | 94.9
Panel 17
Round 2 | 14,181 | 12,567 | 677 | 937 | 88.6
Round 3 | 1,395 | 690 | 417 | 288 | 49.5
Combined, 2012 | 14,181 | 13,257 | 1,094 | 1,225 | 93.5
Round 4 | 13,086 | 11,566 | 602 | 918 | 88.4
Round 5 | 1,429 | 655 | 504 | 270 | 45.8
Combined, 2013 | 13,086 | 12,221 | 1,106 | 1,188 | 93.4
Panel 18
Round 2 | 13,158 | 10,805 | 785 | 1,568 | 82.1
Round 3 | 2,066 | 1,022 | 547 | 497 | 48.5
Combined, 2013 | 13,158 | 11,827 | 1,332 | 2,065 | 89.9
Round 4 | 12,243 | 10,050 | 916 | 1,277 | 82.1
Round 5 | 2,063 | 936 | 721 | 406 | 45.4
Combined, 2014 | 12,243 | 10,986 | 1,637 | 1,683 | 89.7
Panel 19
Round 2 | 12,664 | 10,047 | 1,014 | 1,603 | 79.3
Round 3 | 2,306 | 1,050 | 694 | 615 | 44.5
Combined, 2014 | 12,664 | 11,097 | 1,708 | 2,218 | 87.6
Round 4 | 11,782 | 9,542 | 1,047 | 1,175 | 81.0
Round 5 | 2,131 | 894 | 822 | 414 | 42.0
Combined, 2015 | 11,782 | 10,436 | 1,869 | 1,589 | 88.6
Panel 20
Round 2 | 14,077 | 10,885 | 1,223 | 1,966 | 77.3
Round 3 | 2,899 | 1,329 | 921 | 649 | 45.8
Combined, 2015 | 14,077 | 12,214 | 2,144 | 2,615 | 86.8
Round 4 | 13,068 | 10,572 | 1,127 | 1,371 | 80.9
Round 5 | 2,262 | 1,001 | 891 | 370 | 44.3
Combined, 2016 | 13,068 | 11,573 | 2,018 | 1,741 | 88.6
Panel 21
Round 2 | 13,143 | 10,212 | 1,170 | 1,761 | 77.7
Round 3 | 2,585 | 1,123 | 893 | 569 | 43.4
Combined, 2016 | 13,143 | 11,335 | 2,063 | 2,330 | 86.2
Panel 22
Round 2 | 12,304 | 9,929 | 1,086 | 1,289 | 80.7
* Totals represent combined collection of the SAQ and the parent-administered questionnaire (PAQ).
Table A-6. Results of Diabetes Care Supplement (DCS) collection*
Panel/round | DCSs requested | DCSs completed | Response rate (%)
Panel 4
Round 5 | 696 | 631 | 90.7
Panel 5
Round 3 | 550 | 508 | 92.4
Round 5 | 570 | 500 | 87.7
Panel 6
Round 3 | 1,166 | 1,000 | 85.8
Round 5 | 1,202 | 1,166 | 97.0
Panel 7
Round 3 | 870 | 848 | 97.5
Round 5 | 869 | 820 | 94.4
Panel 8
Round 3 | 971 | 885 | 91.1
Round 5 | 977 | 894 | 91.5
Panel 9
Round 3 | 1,003 | 909 | 90.6
Round 5 | 904 | 806 | 89.2
Panel 10
Round 3 | 1,060 | 939 | 88.6
Round 5 | 1,078 | 965 | 89.5
Panel 11
Round 3 | 1,188 | 1,030 | 86.7
Round 5 | 1,182 | 1,053 | 89.1
Panel 12
Round 3 | 917 | 825 | 90.0
Round 5 | 883 | 815 | 92.3
Panel 13
Round 3 | 1,278 | 1,182 | 92.5
Round 5 | 1,278 | 1,154 | 90.3
Panel 14
Round 3 | 1,174 | 1,048 | 89.3
Round 5 | 1,177 | 1,066 | 90.6
Panel 15
Round 3 | 1,117 | 1,000 | 89.5
Round 5 | 1,097 | 990 | 90.3
Panel 16
Round 3 | 1,425 | 1,283 | 90.0
Round 5 | 1,358 | 1,256 | 92.5
Panel 17
Round 3 | 1,315 | 1,177 | 89.5
Round 5 | 1,308 | 1,174 | 89.8
Panel 18
Round 3 | 1,362 | 1,182 | 86.8
Round 5 | 1,342 | 1,187 | 88.5
Panel 19
Round 3 | 1,272 | 1,124 | 88.4
Round 5 | 1,316 | 1,144 | 87.2
Panel 20
Round 3 | 1,412 | 1,190 | 84.5
Round 5 | 1,386 | 1,174 | 84.9
Panel 21
Round 3 | 1,422 | 1,170 | 82.5
* Totals represent combined DCS/proxy DCS collection.
Table A-7. Calls to respondent information line
Reason for call | Round 1 N | % | Rounds 3 and 5 N | % | Rounds 2 and 4 N | %
Spring 2000 (Panel 5 Round 1, Panel 4 Round 3, Panel 3 Round 5); Fall 2000 (Panel 5 Round 2, Panel 4 Round 4)
Address change | 23 | 4.0 | 13 | 8.3 | 8 | 5.7
Appointment | 37 | 6.5 | 26 | 16.7 | 28 | 19.9
Request callback | 146 | 25.7 | 58 | 37.2 | 69 | 48.9
Refusal | 183 | 32.2 | 20 | 12.8 | 12 | 8.5
Willing to participate | 10 | 1.8 | 2 | 1.3 | 0 | 0.0
Other | 157 | 27.6 | 35 | 22.4 | 8 | 5.7
Report a respondent deceased | 5 | 0.9 | 1 | 0.6 | 0 | 0.0
Request a Spanish-speaking interview | 8 | 1.4 | 1 | 0.6 | 0 | 0.0
Request SAQ help | 0 | 0.0 | 0 | 0.0 | 16 | 11.3
Total | 569 | | 156 | | 141 |
Spring 2001 (Panel 6 Round 1, Panel 5 Round 3, Panel 4 Round 5); Fall 2001 (Panel 6 Round 2, Panel 5 Round 4)
Address/telephone change | 27 | 3.7 | 17 | 12.7 | 56 | 15.7
Appointment | 119 | 16.2 | 56 | 41.8 | 134 | 37.5
Request callback | 259 | 35.3 | 36 | 26.9 | 92 | 25.8
No message | 8 | 1.1 | 3 | 2.2 | 0 | 0.0
Other | 29 | 4.0 | 7 | 5.2 | 31 | 8.7
Request SAQ help | 0 | 0.0 | 2 | 1.5 | 10 | 2.8
Special needs | 5 | 0.7 | 3 | 2.2 | 0 | 0.0
Refusal | 278 | 37.9 | 10 | 7.5 | 25 | 7.0
Willing to participate | 8 | 1.1 | 0 | 0.0 | 9 | 2.5
Total | 733 | | 134 | | 357 |
Spring 2002 (Panel 7 Round 1, Panel 6 Round 3, Panel 5 Round 5); Fall 2002 (Panel 7 Round 2, Panel 6 Round 4)
Address/telephone change | 28 | 4.5 | 29 | 13.9 | 66 | 16.7
Appointment | 77 | 12.5 | 71 | 34.1 | 147 | 37.1
Request callback | 210 | 34.0 | 69 | 33.2 | 99 | 25.0
No message | 6 | 1.0 | 3 | 1.4 | 5 | 1.3
Other | 41 | 6.6 | 17 | 8.2 | 10 | 2.5
Request SAQ help | 0 | 0.0 | 0 | 0.0 | 30 | 7.6
Special needs | 1 | 0.2 | 0 | 0.0 | 3 | 0.8
Refusal | 232 | 37.6 | 14 | 6.7 | 29 | 7.3
Willing to participate | 22 | 3.6 | 5 | 2.4 | 7 | 1.8
Total | 617 | | 208 | | 396 |
Spring 2003 (Panel 8 Round 1, Panel 7 Round 3, Panel 6 Round 5); Fall 2003 (Panel 8 Round 2, Panel 7 Round 4)
Address/telephone change | 20 | 4.2 | 33 | 13.7 | 42 | 17.9
Appointment | 83 | 17.5 | 87 | 36.1 | 79 | 33.8
Request callback | 165 | 34.9 | 100 | 41.5 | 97 | 41.5
No message | 16 | 3.4 | 7 | 2.9 | 6 | 2.6
Other | 9 | 1.9 | 8 | 3.3 | 3 | 1.3
Request SAQ help | 0 | 0.0 | 0 | 0.0 | 1 | 0.4
Special needs | 5 | 1.1 | 0 | 0.0 | 0 | 0.0
Refusal | 158 | 33.4 | 6 | 2.5 | 6 | 2.6
Willing to participate | 17 | 3.6 | 0 | 0.0 | 0 | 0.0
Total | 473 | | 241 | | 234 |
Spring 2004 (Panel 9 Round 1, Panel 8 Round 3, Panel 7 Round 5); Fall 2004 (Panel 9 Round 2, Panel 8 Round 4)
Address/telephone change | 8 | 1.6 | 26 | 13.2 | 42 | 10.9
Appointment | 67 | 13.3 | 76 | 38.6 | 153 | 39.7
Request callback | 158 | 31.5 | 77 | 39.1 | 139 | 36.1
No message | 9 | 1.8 | 5 | 2.5 | 16 | 4.2
Other | 8 | 1.6 | 5 | 2.5 | 5 | 1.3
Proxy needed | 5 | 1.0 | 2 | 1.0 | 0 | 0.0
Request SAQ help | 0 | 0.0 | 0 | 0.0 | 2 | 0.5
Special needs | 0 | 0.0 | 0 | 0.0 | 0 | 0.0
Refusal | 228 | 45.4 | 6 | 3.0 | 27 | 7.0
Willing to participate | 19 | 3.8 | 0 | 0.0 | 1 | 0.3
Total | 502 | | 197 | | 385 |
Spring 2005 (Panel 10 Round 1, Panel 9 Round 3, Panel 8 Round 5); Fall 2005 (Panel 10 Round 2, Panel 9 Round 4)
Address/telephone change | 16 | 3.3 | 23 | 8.7 | 27 | 6.8
Appointment | 77 | 15.7 | 117 | 44.3 | 177 | 44.4
Request callback | 154 | 31.4 | 88 | 33.3 | 126 | 31.6
No message | 14 | 2.9 | 11 | 4.2 | 28 | 7.0
Other | 13 | 2.7 | 1 | 0.4 | 8 | 2.0
Proxy needed | 0 | 0.0 | 0 | 0.0 | 0 | 0.0
Request SAQ help | 0 | 0.0 | 0 | 0.0 | 1 | 0.3
Special needs | 1 | 0.2 | 1 | 0.4 | 0 | 0.0
Refusal | 195 | 39.8 | 20 | 7.6 | 30 | 7.5
Willing to participate | 20 | 4.1 | 3 | 1.1 | 2 | 0.5
Total | 490 | | 264 | | 399 |
Spring 2006 (Panel 11 Round 1, Panel 10 Round 3, Panel 9 Round 5); Fall 2006 (Panel 11 Round 2, Panel 10 Round 4)
Address/telephone change | 7 | 1.3 | 24 | 7.5 | 11 | 4.1
Appointment | 61 | 11.3 | 124 | 39.0 | 103 | 38.1
Request callback | 146 | 27.1 | 96 | 30.2 | 101 | 37.4
No message | 72 | 13.4 | 46 | 14.5 | 21 | 7.8
Other | 16 | 3.0 | 12 | 3.8 | 8 | 3.0
Proxy needed | 0 | 0.0 | 0 | 0.0 | 0 | 0.0
Request SAQ help | 0 | 0.0 | 0 | 0.0 | 0 | 0.0
Special needs | 4 | 0.7 | 0 | 0.0 | 0 | 0.0
Refusal | 216 | 40.1 | 15 | 4.7 | 26 | 9.6
Willing to participate | 17 | 3.2 | 1 | 0.3 | 0 | 0.0
Total | 539 | | 318 | | 270 |
Spring 2007 (Panel 12 Round 1, Panel 11 Round 3, Panel 10 Round 5); Fall 2007 (Panel 12 Round 2, Panel 11 Round 4)
Address/telephone change | 8 | 2.1 | 21 | 7.3 | 23 | 7.6
Appointment | 56 | 14.6 | 129 | 44.8 | 129 | 42.6
Request callback | 72 | 18.8 | 75 | 26.0 | 88 | 29.0
No message | 56 | 14.6 | 37 | 12.8 | 33 | 10.9
Other | 20 | 5.2 | 15 | 5.2 | 6 | 2.0
Proxy needed | 0 | 0.0 | 0 | 0.0 | 0 | 0.0
Request SAQ help | 0 | 0.0 | 0 | 0.0 | 0 | 0.0
Special needs | 5 | 1.3 | 0 | 0.0 | 1 | 0.3
Refusal | 160 | 41.8 | 10 | 3.5 | 21 | 6.9
Willing to participate | 6 | 1.6 | 1 | 0.3 | 2 | 0.7
Total | 383 | | 288 | | 303 |
Spring 2008 (Panel 13 Round 1, Panel 12 Round 3, Panel 11 Round 5); Fall 2008 (Panel 13 Round 2, Panel 12 Round 4)
Address/telephone change | 20 | 3.4 | 12 | 4.7 | 21 | 5.7
Appointment | 92 | 15.5 | 117 | 45.9 | 148 | 39.9
Request callback | 164 | 27.6 | 81 | 31.8 | 154 | 41.5
No message | 82 | 13.8 | 20 | 7.8 | 22 | 5.9
Other | 13 | 2.2 | 12 | 4.7 | 8 | 2.2
Proxy needed | 0 | 0.0 | 0 | 0.0 | 0 | 0.0
Request SAQ help | 0 | 0.0 | 0 | 0.0 | 0 | 0.0
Special needs | 4 | 0.7 | 0 | 0.0 | 0 | 0.0
Refusal | 196 | 32.9 | 13 | 5.1 | 18 | 4.9
Willing to participate | 24 | 4.0 | 0 | 0.0 | 0 | 0.0
Total | 595 | | 255 | | 371 |
Spring 2009 (Panel 14 Round 1, Panel 13 Round 3, Panel 12 Round 5); Fall 2009 (Panel 14 Round 2, Panel 13 Round 4)
Address/telephone change | 10 | 2.2 | 13 | 4.3 | 19 | 5.1
Appointment | 49 | 10.8 | 87 | 29.0 | 153 | 41.1
Request callback | 156 | 34.4 | 157 | 52.3 | 153 | 41.1
No message | 48 | 10.6 | 23 | 7.7 | 20 | 5.4
Other | 3 | 0.7 | 8 | 2.7 | 3 | 0.8
Proxy needed | 0 | 0.0 | 0 | 0.0 | 0 | 0.0
Request SAQ help | 0 | 0.0 | 0 | 0.0 | 0 | 0.0
Special needs | 4 | 0.9 | 0 | 0.0 | 0 | 0.0
Refusal | 183 | 40.3 | 11 | 3.7 | 24 | 6.5
Willing to participate | 1 | 0.2 | 1 | 0.3 | 0 | 0.0
Total | 454 | | 300 | | 372 |
Spring 2010 (Panel 15 Round 1, Panel 14 Round 3, Panel 13 Round 5); Fall 2010 (Panel 15 Round 2, Panel 14 Round 4)
Address/telephone change | 2 | 0.8 | 42 | 8.2 | 25 | 5.3
Appointment | 44 | 18.0 | 214 | 41.6 | 309 | 66.0
Request callback | 87 | 35.7 | 196 | 38.1 | 46 | 9.8
No message | 17 | 7.0 | 33 | 6.4 | 17 | 3.6
Other | 7 | 2.9 | 8 | 1.6 | 14 | 3.0
Request SAQ help | 0 | 0.0 | 0 | 0.0 | 12 | 2.6
SAQ refusal | 0 | 0.0 | 0 | 0.0 | 1 | 0.2
Special needs | 1 | 0.4 | 1 | 0.2 | 1 | 0.2
Refusal | 86 | 35.2 | 20 | 3.9 | 43 | 9.2
Willing to participate | 0 | 0.0 | 0 | 0.0 | 0 | 0.0
Total | 244 | | 514 | | 468 |
Spring 2011 (Panel 16 Round 1, Panel 15 Round 3, Panel 14 Round 5); Fall 2011 (Panel 16 Round 2, Panel 15 Round 4)
Address/telephone change | 16 | 3.4 | 46 | 8.0 | 72 | 9.8
Appointment | 175 | 37.6 | 407 | 71.0 | 466 | 63.5
Request callback | 81 | 17.4 | 63 | 11.0 | 69 | 9.4
No message | 24 | 5.2 | 26 | 4.5 | 23 | 3.1
Other | 12 | 2.6 | 8 | 1.4 | 25 | 3.4
Request SAQ help | 1 | 0.2 | 2 | 0.3 | 32 | 4.4
SAQ refusal | 0 | 0.0 | 0 | 0.0 | 46 | 6.3
Special needs | 0 | 0.0 | 0 | 0.0 | 1 | 0.1
Refusal | 157 | 33.7 | 21 | 3.7 | 0 | 0.0
Willing to participate | 0 | 0.0 | 0 | 0.0 | 0 | 0.0
Total | 466 | | 573 | | 734 |
Spring 2012 (Panel 17 Round 1, Panel 16 Round 3, Panel 15 Round 5); Fall 2012 (Panel 17 Round 2, Panel 16 Round 4)
Address/telephone change | 18 | 5.0 | 107 | 13.4 | 108 | 12.2
Appointment | 130 | 36.1 | 517 | 64.9 | 584 | 65.8
Request callback | 60 | 16.7 | 94 | 11.8 | 57 | 6.4
No message | 21 | 5.8 | 17 | 2.1 | 18 | 2.0
Other | 10 | 2.8 | 25 | 3.1 | 16 | 1.8
Proxy needed | 0 | 0.0 | 1 | 0.1 | 2 | 0.2
Request SAQ help | 2 | 0.6 | 6 | 0.8 | 42 | 4.7
SAQ refusal | 0 | 0.0 | 0 | 0.0 | 0 | 0.0
Special needs | 1 | 0.3 | 0 | 0.0 | 0 | 0.0
Refusal | 117 | 32.5 | 30 | 3.8 | 60 | 6.8
Willing to participate | 1 | 0.3 | 0 | 0.0 | 0 | 0.0
Total | 360 | | 797 | | 887 |
Spring 2013 (Panel 18 Round 1, Panel 17 Round 3, Panel 16 Round 5); Fall 2013 (Panel 18 Round 2, Panel 17 Round 4)
Address/telephone change | 18 | 4.4 | 82 | 10.8 | 53 | 9.0
Appointment | 143 | 35.0 | 558 | 73.0 | 370 | 62.6
Request callback | 71 | 17.4 | 88 | 11.5 | 70 | 11.8
No message | 8 | 2.0 | 11 | 1.4 | 16 | 2.8
Other | 2 | 0.5 | 4 | 0.5 | 5 | 0.9
Proxy needed | 1 | 0.2 | 1 | 0.1 | 1 | 0.2
Request SAQ help | 1 | 0.2 | 0 | 0.0 | 31 | 5.3
SAQ refusal | 0 | 0.0 | 0 | 0.0 | 0 | 0.0
Special needs | 2 | 0.5 | 0 | 0.0 | 2 | 0.3
Refusal | 162 | 39.5 | 19 | 2.5 | 43 | 7.3
Willing to participate | 1 | 0.2 | 1 | 0.1 | 0 | 0.0
Total | 409 | | 764 | | 591 |
Spring 2014 (Panel 19 Round 1, Panel 18 Round 3, Panel 17 Round 5); Fall 2014 (Panel 19 Round 2, Panel 18 Round 4)
Address/telephone change | 11 | 3.2 | 71 | 11.1 | 62 | 8.4
Appointment | 75 | 22.1 | 393 | 61.5 | 490 | 66.5
Request callback | 70 | 20.6 | 113 | 17.7 | 70 | 9.5
No message | 11 | 3.2 | 12 | 1.9 | 28 | 3.9
Other | 0 | 0.0 | 5 | 0.8 | 7 | 0.9
Proxy needed | 0 | 0.0 | 0 | 0.0 | 1 | 0.1
Request SAQ help | 0 | 0.0 | 1 | 0.2 | 4 | 0.5
SAQ refusal | 0 | 0.0 | 0 | 0.0 | 0 | 0.0
Special needs | 0 | 0.0 | 0 | 0.0 | 0 | 0.0
Refusal | 165 | 48.5 | 44 | 6.9 | 74 | 10.0
Willing to participate | 8 | 2.4 | 0 | 0.0 | 1 | 0.1
Total | 340 | | 639 | | 737 |
Spring 2015 (Panel 20 Round 1, Panel 19 Round 3, Panel 18 Round 5); Fall 2015 (Panel 20 Round 2, Panel 19 Round 4)
Address/telephone change | 10 | 2.3 | 61 | 8.8 | 55 | 9.6
Appointment | 95 | 21.8 | 438 | 63.4 | 346 | 60.7
Request callback | 85 | 19.5 | 112 | 16.2 | 52 | 9.1
No message | 14 | 3.2 | 17 | 2.5 | 4 | 0.7
Other | 2 | 0.5 | 3 | 0.4 | 3 | 0.5
Proxy needed | 1 | 0.2 | 7 | 1.0 | 8 | 1.4
Request SAQ help | 1 | 0.2 | 3 | 0.4 | 11 | 1.9
SAQ refusal | 0 | 0.0 | 0 | 0.0 | 0 | 0.0
Special needs | 0 | 0.0 | 0 | 0.0 | 0 | 0.0
Refusal | 206 | 47.2 | 47 | 6.8 | 91 | 16.0
Willing to participate | 22 | 5.0 | 3 | 0.4 | 0 | 0.0
Total | 436 | | 691 | | 570 |
Spring 2016 (Panel 21 Round 1, Panel 20 Round 3, Panel 19 Round 5); Fall 2016 (Panel 21 Round 2, Panel 20 Round 4)
Address/telephone change | 8 | 2.7 | 64 | 11.7 | 48 | 7.9
Appointment | 93 | 30.9 | 362 | 66.2 | 373 | 61.7
Request callback | 47 | 15.6 | 59 | 10.8 | 83 | 13.7
No message | 1 | 0.3 | 7 | 1.3 | 6 | 1.0
Other | 2 | 0.7 | 1 | 0.2 | 3 | 0.5
Proxy needed | 0 | 0.0 | 5 | 0.9 | 6 | 1.0
Request SAQ help | 0 | 0.0 | 3 | 0.5 | 11 | 1.8
SAQ refusal | 0 | 0.0 | 0 | 0.0 | 0 | 0.0
Special needs | 1 | 0.3 | 0 | 0.0 | 0 | 0.0
Refusal | 139 | 46.2 | 46 | 8.4 | 75 | 12.4
Willing to participate | 10 | 3.3 | 0 | 0.0 | 0 | 0.0
Total | 301 | | 547 | | 605 |
Spring 2017 (Panel 22 Round 1, Panel 21 Round 3, Panel 20 Round 5); Fall 2017 (Panel 22 Round 2, Panel 21 Round 4)
Address/telephone change | 10 | 2.9 | 51 | 9.6 | 35 | 6.8
Appointment | 86 | 24.9 | 355 | 66.6 | 318 | 61.4
Request callback | 59 | 17.1 | 90 | 16.9 | 64 | 12.4
No message | 1 | 0.3 | 2 | 0.4 | 5 | 1.0
Other | 2 | 0.6 | 3 | 0.6 | 4 | 0.8
Proxy needed | 1 | 0.3 | 7 | 1.3 | 5 | 1.0
Request SAQ help | 1 | 0.3 | 0 | 0.0 | 15 | 2.9
SAQ refusal | 0 | 0.0 | 0 | 0.0 | 0 | 0.0
Special needs | 0 | 0.0 | 1 | 0.2 | 1 | 0.2
Refusal | 172 | 49.7 | 23 | 4.3 | 70 | 13.5
Willing to participate | 14 | 4.0 | 1 | 0.2 | 1 | 0.2
Total | 346 | | 533 | | 518 |
Table A-8. Files delivered during 2017
Date |
Delivery |
Group |
Description |
1/3/2017 |
2015 |
UEGN |
Delivery of the 2015 Pre-Imputation Files |
1/4/2017 |
2015 |
HINS |
Delivery of the 2015 HINS Building Block Variables and COVERM Tables for Panel 19 Rounds 3 – 5 and Panel 20 Rounds 1 – 3 |
1/4/2017 |
2015 |
HINS |
Delivery of the 2015 HINS Month-by-Month, Tricare plan, Private, Medicare, and Medicaid HMO/Gatekeeper, and PMEDIN/DENTIN variables |
1/4/2017 |
2015 |
HINS |
Delivery of the FY 2015 HINS Medicare Part D supplemental variables |
1/4/2017 |
2015 |
PRPL |
FY15 PRPL Specifications Coverage Record and HMO Variables, JOBS Link and Variable Editing, and Variable Editing: Post JOBS Linking |
1/5/2017 |
2017 |
DOCM |
2017 NPI provider file uploaded to RTI |
1/6/2017 |
2016 |
EMPL |
Unweighted Medians for the 2016 Point-in-Time Hourly Wage Variable |
1/6/2017 |
2015 |
HLTH |
Delivery of Adult and Child Height and Weight for the MEPS Master Files for FY 2015 |
1/6/2017 |
2015 |
PCND |
2015 Person-Level Priority Conditions Cross-Tabulations |
1/9/2017 |
2016 |
DOCM |
Delivery of the 2016 MPC files for Sample selection - Wave 1 |
1/9/2017 |
2016 |
DOCM |
Delivery of the 2016 PC Sample file - Wave 1 |
1/9/2017 |
2016 |
DOCM |
Delivery of the 2016 Provider file for NPI coding - Wave 1 |
1/9/2017 |
2016 |
DOCM |
Delivery of the 2016 MOS Sample file - Wave 1 |
1/9/2017 |
2015 |
GNRL |
Delivery of End-Of-Round files (RU level and Person level) -P20R4 |
1/9/2017 |
2015 |
UEGN |
Specifications for the 2015 Non-MPC Expenditure Event Files |
1/10/2017 |
2015 |
UEGN |
The 2015 Utilization Standard Error Benchmarking Tables Using Person Use PUF Weights- PERWT15P |
1/11/2017 |
2015 |
WGTS |
Create the Full Year 2015 Person Use SAQ Weights Delivery File |
1/16/2016 |
2015 |
GNRL |
Delivery of End-Of-Round files (RU level and Person level) – P21R2 |
1/17/2017 |
2015 |
ADMN |
Delivery of 2015 FAMID Variables and CPS Family Identifier |
1/17/2017 |
2015 |
DEMO |
Delivery of the Output Listings for Final Case Review of the MOPID and DAPID Variables’ Construction for FY2015 |
1/17/2017 |
2015 |
COND |
2015 Preliminary Conditions File and Associated Documents for AHRQ Review |
1/18/2017 |
2015 |
GNRL |
Preliminary Version of the 2015 JOBS File Codebook and Delivery |
1/18/2017 |
2015 |
GNRL |
FY2015 Use PUF Preliminary Codebook and Delivery Document for AHRQ and NCHS Review |
1/18/2017 |
2015 |
GNRL |
FY 2015 (Panel 19 and Panel 20) Delivery Database Snapshots JOBS Files with Industry and Occupation Codes, COND Files with Condition Codes, and CCS Codes |
1/25/2017 |
2015 |
UEPD |
2015 INSURC15 variable for use in the Prescribed Medicines Imputation |
1/26/2017 |
2015 |
DEMO |
Delivery of the MOPID and DAPID Variables for FY2015 |
2/1/2017 |
2016 |
EMPL |
PIT2016 Panel 21 Round 1 Editing of High Wage Outliers – Request for Approval |
2/1/2017 |
2015 |
UEGN |
2015 MPC HHA provider reported low charge events |
2/1/2017 |
2016 |
WGTS |
Panel 21 Round 1 DU weights review output |
2/1/2017 |
2016 |
WGTS |
Panel 21 Round 1 Family weights review output |
2/2/2017 |
2016 |
WGTS |
Panel 21 Round 1 Person weights review output |
2/2/2017 |
2016 |
WGTS |
MEPS: Establishing Variance Estimation Strata and PSUs for the 2016 Point-in-Time PUF Panel 21 Round 1, and Panel 20, Round 3 |
2/3/2017 |
2015 |
WGTS |
Panel 20/Round 3 Person weights review output |
2/6/2017 |
2015 |
WGTS |
Panel 20/Round 3 family weights review output |
2/10/2017 |
2015 |
GNRL |
HC-176: 2015 Jobs Public Use File Delivery for Web Release |
2/10/2017 |
2015 |
GNRL |
HC-174: Delivery of the Full Year 2015 Use PUF for Web Release |
2/13/2017 |
2016 |
WGTS |
MEPS Panel 20 Round 3 - Creation of Family-Level Weights |
2/13/2017 |
2016 |
WGTS |
Creation of the Delivery File for the 2016 PIT P20R3/P21R1 Preliminary Individual Panel Person and Family Weights and Preliminary (draft) Variance Strata and PSU |
2/14/2017 |
2016 |
WGTS |
MEPS Panel 21 Round 1 – DU Level Weight |
2/16/2017 |
2015 |
COND |
FY 2015 Preliminary CLNK File |
2/16/2017 |
2016 |
EMPL |
PIT2016 Panel 21 Round 1 Editing of High Wage Outliers – Request for Approval |
2/16/2017 |
2016 |
WGTS |
MEPS Panel 21 Round 1 – Family-Level Weights |
2/16/2017 |
2016 |
WGTS |
MEPS Panel 21 Round 1 – Person-Level Weights |
2/16/2017 |
2016 |
WGTS |
Preliminary run for Estimating Standard Errors Using SUDAAN for the Panel 21, Round 1 and Panel 20, Round 3 PIT 2016 PUF Data—Checking the Variance Strata and PSUs |
2/17/2017 |
2015 |
UEGN |
Deliver the variable List for the 2015 Non-MPC Expenditure Event PUF Files (DN, OM and HH) |
2/21/2017 |
2015 |
UEPD |
2015 Prescribed Medicines price outlier review |
2/21/2017 |
2016 |
WGTS |
2016 P20R3/P21R1 Family weights review output |
2/22/2017 |
2016 |
GNRL |
Preliminary Version of the 2016 Point-in-Time File |
2/22/2017 |
2016 |
WGTS |
2016 P20R3/P21R1 Person weights review output |
2/24/2017 |
2015 |
WGTS |
MEPS Panels 19 and 20 Full Year 2015: Combine and Rake the P19 and P20 Weights to Obtain the P19P20FY15 Person-Level USE Weights |
2/24/2017 |
2015 |
WGTS |
Establishing Variance Estimation Strata and PSUs, and Estimating Standard Errors Using SUDAAN for the Full Year 2015 PUF, Panel 19, Rounds 3-5 and Panel 20, Rounds 1-3 |
3/2/2017 |
2016 |
EMPL |
Point-In-Time 2016 Hourly Wage Top Code Value |
3/2/2017 |
2015 |
MEPS |
Updated SOP Coding Static Table from the 2015 MPC, for transfer to Westat |
3/3/2017 |
2016 |
WGTS |
Delivery of 2016 Point-in-Time Person-Level and Family-Level Weights |
3/3/2017 |
2016 |
WGTS |
Internal Use File Used for the Weights Development for 2016 Point-in-Time |
3/6/2017 |
2016 |
WGTS |
Final: Estimating Standard Errors Using SUDAAN for the Panel 21, Round 1 and Panel 20, Round 3 PIT 2016 PUF Data—Checking the Variance Strata and PSUs |
3/6/2017 |
2016 |
WGTS |
(Point-in-Time) Complete construction and hand off all Point-in-Time weight variables (person weight, family weight, variance PSU, variance strata, MSA13, and REGION13) - P20/P21 |
3/6/2017 |
2015 |
WGTS |
MEPS, Combined Panel 20/Round 3 and Panel 21/Round 1, Computation of the Composite Family Weights |
3/7/2017 |
2016 |
HINS |
2016 HINS Point in Time Delivery Preliminary Data File for Benchmarking |
3/7/2017 |
2015 |
UEGN |
The 2015 DN/HHP/OM/HHA Events Final Imputation Files |
3/7/2017 |
2015 |
UEGN |
2015 Benchmark Tables for DN, OM, HHP, and HHA |
3/8/2017 |
2016 |
INCO |
Delivery of the 2016 NHIS Link File |
3/9/2017 |
2016 |
WGTS |
Delivery of MEPS Panel 21 DU Weighting Master File |
3/10/2017 |
2015 |
CODE |
2015 File of GEO Coded Addresses – for the MEPS Master Files |
3/10/2017 |
2016 |
EMPL |
Delivery of the Pre-Top Coded Version of the Point-in-Time Hourly Wage Variables for 2016 Point-in-Time |
3/14/2017 |
2015 |
GNRL |
Preliminary Version of the 2015 Conditions File Delivery Document and Recode Materials for Review |
3/14/2017 |
2016 |
GNRL |
Preliminary Version of the 2016 Point-in-Time Delivery Document for AHRQ and NCHS Review (Deliverable #: 16.611 (Draft) ) |
3/14/2017 |
2015 |
UEGN |
2015 Benchmark Tables for MVN Events |
3/14/2017 |
2015 |
UEGN |
The 2015 MVN Final Imputation File |
3/16/2017 |
2015 |
GNRL |
Redelivery: Preliminary Version of the 2015 Conditions File Delivery Document and Recode Materials for Review |
3/17/2017 |
2015 |
WGTS |
Delivery File Providing a Linkage between the Person Records Sampled for MEPS Panel 21 and the Person Records in the 2015 NHIS Weights File |
3/17/2017 |
2016 |
WGTS |
MEPS, Combined Panel 20/Round 3 and Panel 21/Round 1, Computation of the Composite Person Weights |
3/20/2017 |
2016 |
WGTS |
MEPS Panel 21 Round 1 – Creation of DU Weighting Master File Delivery |
3/22/2017 |
2015 |
GNRL |
2015 Preliminary Conditions File and Associated Documents for NCHS and AHRQ Review |
3/22/2017 |
2016 |
GNRL |
Preliminary Version of the 2016 Point-in-Time Delivery Document and Codebook for AHRQ and NCHS Review (16.611 (Draft)) |
3/27/2017 |
2015 |
UEGN |
Deliver the variable List for the 2015 MPC Expenditure Event PUF Files (OP, OB, ER and IP) |
3/29/2017 |
2015 |
WGTS |
FY2015 Combined Panels Expenditure Person Weight (PERWT15F) review output – delivery |
3/29/2017 |
2015 |
WGTS |
FY2015 Combined Panels Expenditure Person Weight (PERWT15F) review output - delivery |
3/29/2017 |
2015 |
WGTS |
FY2015 Individual Panels Expenditure Person Weights (WTP19P15F and WTP20P15F) review output, digital delivery |
4/3/2017 |
2015 |
GNRL |
Delivery of the File Containing Variables Recoded or Dropped from the USE PUF after DRB Review - P19/P20 |
4/7/2017 |
2016 |
DOCM |
Delivery of the 2016 MPC files for Sample selection - Wave 2 |
4/7/2017 |
2016 |
DOCM |
Delivery of the 2016 PC Sample file - Wave 2 |
4/7/2017 |
2016 |
DOCM |
Delivery of the 2016 Provider file for NPI coding - Wave 2 |
4/7/2017 |
2016 |
DOCM |
Delivery of the 2016 MOS Sample file - Wave 2 |
4/8/2017 |
2015 |
UEGN |
2015 Benchmark Tables for MPC Events |
4/10/2017 |
2015 |
UEGN |
The 2015 Final Imputation Files: ER, HS, MVE, OP and SBD |
4/10/2017 |
2015 |
WGTS |
Delivery of the Individual Panel Raked Person Weights for P19P20FY15 |
4/10/2017 |
2015 |
WGTS |
Delivery of the FY 2015 Expenditure File Original Person Weight |
4/12/2017 |
2015 |
GNRL |
Delivery of the File Containing Variables Recoded or Dropped from the USE PUF after DRB Review - P19/P20 |
4/13/2017 |
2015 |
EMPL |
Delivery of 2015 Covered Person Records for Employment Variable Imputation |
4/13/2017 |
2015 |
PRPL |
Delivery of the FY 2015 OOPELIG2 Dataset for Approval |
4/14/2017 |
2015 |
GNRL |
NCHS Checklists and Preliminary Versions of Documents for the FY 2015 Non-MPC Event (DV, OM, and HH) PUF |
4/14/2017 |
2016 |
GNRL |
HC177: 2016 Point-in-Time PUF Delivery for Web Release |
4/17/2017 |
2016 |
GNRL |
Delivery of the Person-Level Sample Crosswalk Files for Panel 21 Round 1 - Round 2 |
4/17/2017 |
2016 |
HINS |
Changes to the FY 2016 HINS Basic and Inter-Round |
4/19/2017 |
2015 |
GNRL |
Preliminary Versions of the 2015 Non-MPC Event (DV, OM, and HH) PUF Codebooks and Documents for Use in AHRQ and NCHS Review |
4/19/2017 |
2014 2015 |
PCND |
2014 and 2015 Remission and Cancer Age of Diagnosis Variable Datasets |
4/20/2017 |
2016 |
GNRL |
Delivery of the File Containing Variables Recoded or Dropped from the 2016 Point-In-Time PUF after DRB Review – P20/P21 |
4/20/2017 |
2015 |
WGTS |
P19P20 FY2015 Person-level SAQ Expenditure Weights |
4/20/2017 |
2015 |
WGTS |
Developing Sample Weights for the MEPS Diabetes Questionnaire Component (DCS) for the Panels 19 and 20 Full Year 2015 Expenditure File (PUF) |
4/24/2017 |
2015 |
UEGN |
The 2015 Utilization Standard Error Benchmarking Tables Using Expenditure File Person Original Weight- PERWT15F_ORIG |
4/26/2017 |
2015 |
WGTS |
Delivery of the Poverty-Adjusted Family Level Weight, CPS-Like Family Level Weight, Poverty-Adjusted DCS and SAQ Weights for FY2015 |
5/3/2017 |
2015 |
PRPL |
Delivery of the FY 2015 PRPL Hot Deck Imputation Results for Approval |
5/3/2017 |
2015 |
WGTS |
Delivery of the Individual Panel 19 and Panel 20 SAQ Expenditure Weight for FY2015 |
5/5/2017 |
2015 |
CODE |
Weekly report #5 for condition coding - DY2016 |
5/5/2017 |
2015 |
UEPD |
Delivery of 2015 PMED PUF (RX15V01) |
5/8/2017 |
2015 |
WGTS |
Panel 19 and Panel 20 Combined, Full Year 2015: Raking Person Weights Including the Poverty Status to Obtain the Expenditure Person Weights. |
5/10/2017 |
2015 |
UEPD |
Delivery of 2015 PMED PUF (RX15V02) |
5/11/2017 |
2015 |
UEGN |
Predictive Mean Matching Imputation Method Applied to the Expenditure Imputation of the MPC Event Types for the Full Year 2015 Data |
5/11/2017 |
2015 |
UEGN |
Predictive Mean Matching Imputation Method Applied to the Expenditure Imputation of the non-MPC Event Types for the Full Year 2015 Data |
5/11/2017 |
2015 |
WGTS |
P19 FY2015 Person-level SAQ Expenditure Weights |
5/11/2017 |
2015 |
WGTS |
P20 FY2015 Person-level SAQ Expenditure Weights |
5/11/2017 |
2015 |
WGTS |
Derivation of the 2015 Full Year Expenditure Family Weight, MEPS and CPS-Like, for Panel 19 and Panel 20 Combined |
5/12/2017 |
2015 |
GNRL |
HC-178b, HC-178c, and HC-178h: 2015 Expenditure Event PUFs for Non-MPC Event Types (DV, OM, and HH) and All Related Files for Web Release |
5/12/2017 |
2015 |
GNRL |
NCHS Checklists and Preliminary Versions of Documents for the FY 2015 MPC Event (IP, ER, OP, OB) PUFs |
5/15/2017 |
2015 |
GNRL |
FY15 Office-Based Medical Provider Visits Dataset (HC178G) |
5/16/2017 |
2015 |
PRPL |
Delivery of the FY 2015 OOPELIG2 Dataset for Approval |
5/17/2017 |
2015 |
GNRL |
Preliminary Versions of the 2015 MPC Event (IP, ER, OP, OB) PUF Codebooks for Use in AHRQ and NCHS Review |
5/17/2017 |
2015 |
UEPD |
Delivery of 2015 PMED PUF (RXNAME_CANNABIS_REVIEW.xlsx - A Complete Listing of all Cases for Cannabis Related Confidentiality Risks) |
5/17/2017 |
2015 |
UEPD |
Delivery of 2015 PMED PUF (TC15XTABS.lst, TC15XTABS.xlsx) |
5/22/2017 |
2015 |
UEPD |
Delivery of 2015 PMED PUF (RX15V03) |
5/25/2017 |
2015 |
WGTS |
Delivery of the FY 2015 Expenditure File Final Person Weight – PERWT15F |
5/26/2017 |
2016 |
DOCM |
DY2016 Wave III Estimates for MPC Sample File |
5/31/2017 |
2015 |
DOCM |
Delivery of the Updated 2015 MOS Sample file - Wave 1 USCP |
6/5/2017 |
2015 |
GNRL |
NCHS/DRB Review of FY 2015 Event PUFs (IP, ER, OP, & OB) |
6/5/2017 |
2015 |
UEPD |
Delivery of 2015 PMED PUF (RX15V05.LST, RX15V06.LST, RX15V05X.LST, TOP10RX15_USE.LST, TOP10TC15_USE.LST, TOP10TC15_EXP.LST, TOP25RX15_EXP.LST and the preliminary version of the Prescription Drug Estimates Tables |
6/6/2017 |
2016 |
GNRL |
FY2016 Person-Level Use PUF Variable List Changes for AHRQ Review |
6/9/2017 |
2015 |
GNRL |
HC-178d, HC-178e, HC-178f, and HC-178g: 2015 Expenditure Event PUFs for MPC Event Types (IP, ER, OP, and OB) and All Related Files for Web Release |
6/9/2017 |
2015 |
PRPL |
Delivery of the FY 2015 OOPELIG3 Dataset, Benchmarking results, POSTIMPFIN results for final approval of OOPPREM variables, and Preliminary Delivery Dataset |
6/12/2017 |
2015 |
GNRL |
Addendum to the FY 2015 (Panel 19 & Panel 20) Delivery Database Snapshots: Edited Segments since the Previous Delivery of 1/18/17 |
6/12/2017 |
2015 |
UEPD |
Delivery of the 2015 PMED PUF (final version of the Prescription Drug Estimates Tables) |
6/13/2017 |
2015 |
EMPL |
Delivery of 2015 Covered Person Records for Employment Variable Imputation |
6/13/2017 |
2015 |
GNRL |
NCHS Checklist and Preliminary Version of Delivery Document for the FY 2015 Prescribed Medicines (PMED) PUF |
6/13/2017 |
2015 |
UEPD |
Delivery of 2015 PMED PUF (RX15V05X) SAS dataset and the format files (RX15V05X.sas7bcat and rxexpf2.sas) |
6/16/2017 |
2015 |
DOCM |
Delivery of the Updated 2015 MOS Sample file - Wave 1 USCP |
6/16/2017 |
2016 |
DSDY |
Delivery of the DSDY "Missed Days” top code values for AHRQ approval |
6/16/2017 |
2015 |
PCND |
2015 Priority Conditions Benchmarking Table |
6/16/2017 |
2015 |
UEPD |
2015 PMED PUF data (RX15V06.sas7bdat) and the format files ((RX15V06.sas7bcat, rxexpv06f.sas and rxexpv06f2.sas) |
6/20/2017 |
2016 |
DOCM |
Delivery of the Updated 2016 MOS Sample file - Wave 1 USCP |
6/20/2017 |
2016 |
GNRL |
Delivery of End-Of-Round files (RU level and Person level) -P20R5 |
6/21/2017 |
2015 |
GNRL |
Preliminary Version of the 2015 Prescribed Medicines (PMED) Event PUF Codebook for Use in AHRQ and NCHS Review |
6/22/2017 |
2015 |
UEGN |
Delivery of the Dropped Variables Due to DRB Review – FY15 EXP PUF files for DV, OM, ER, OP, OB, IP and RX |
6/23/2017 |
2015 |
DOCM |
Delivery of the Updated 2015 MOS Sample file - Wave 2 USCP |
6/23/2017 |
2016 |
DOCM |
Delivery of the Updated 2016 MOS Sample file - Wave 2 USCP |
6/23/2017 |
2016 |
GNRL |
Full-Year 2016 Annotated and Consolidated Specifications, F1 Help Text, and Overall Context Flow for Web Release |
6/30/2017 |
2015 |
GNRL |
Addendum to the FY 2015 (Panel 19 & Panel 20) Delivery Database Snapshots: Edited Segments since the Previous Delivery of 6/12/17 |
7/5/2017 |
2016 |
DOCM |
Delivery of 2016 MPC Sample file - Wave 3 |
7/5/2017 |
2016 |
DOCM |
Delivery of the 2016 PC Sample file |
7/5/2017 |
2016 |
DOCM |
Delivery of the 2016 Provider file for NPI coding - Wave 3 |
7/6/2017 |
2015 |
GNRL |
NCHS Checklist and Preliminary Version of the Document for the FY 2015 Consolidated Data PUF |
7/6/2017 |
2015 |
GNRL |
NCHS Checklist and Preliminary Version of Delivery Document for the FY 2015 Person Round Plan (PRPL) PUF |
7/7/2017 |
2015 |
UEGN |
The 2015/2015 QC Finding Tables of PUF Event Expenditures |
7/10/2017 |
2017 |
HINS |
Changes to the HINS Point-In-Time 2017 specifications |
7/11/2017 |
2017 |
GNRL |
Point-in-Time 2017 PUF Variable List Changes for AHRQ Review |
7/14/2017 |
2016 |
CODE |
Output of Matching program for Marc |
7/14/2017 |
2015 |
GNRL |
Delivery of the 2015 Prescribed Medicines (PMED) PUF and all Related Files for Web Release |
7/17/2017 |
2016 |
CODE |
Output of Matching program for Marc |
7/17/2017 |
2015 |
GNRL |
Preliminary Version of the 2015 Consolidated File |
7/18/2017 |
2015 |
GNRL |
Revised NCHS Checklist for the FY 2015 Consolidated Data PUF for Use in AHRQ and NCHS Review |
7/18/2017 |
2014 |
UEGN |
2014 Re-matching Review Summary |
7/19/2017 |
2015 |
GNRL |
FY 2015 Conditions PUF Preliminary Versions of Codebook and Delivery Document for Use in AHRQ Review |
7/19/2017 |
2015 |
GNRL |
FY 2015 Conditions PUF Preliminary Versions of Codebook and Delivery Document for Use in AHRQ Review |
7/19/2017 |
2015 |
GNRL |
Preliminary Versions of the Codebook and Document for the FY 2015 Consolidated PUF for Use in AHRQ and NCHS Review |
7/19/2017 |
2015 |
GNRL |
FY 2015 Person Round Plan PUF Preliminary Versions of Codebook and Delivery Document for Use in AHRQ and NCHS Review |
7/19/2017 |
2015 |
GNRL |
Preliminary Version of the 2015 Appendix to the Event PUFs Delivery Document, Codebooks, and Table 1 for Review |
7/24/2017 |
2016 |
GNRL |
Delivery of End-Of-Round files (RU level and Person level) -P21R3 |
7/25/2017 |
2015 |
GNRL |
Preliminary Versions of the Codebook and Document for the FY 2015 Consolidated Data PUF for Use in AHRQ and NCHS Review – Edited |
7/26/2017 |
2016 |
UEGN |
The DN Text Strings Recoding for FY2016 |
7/27/2017 |
2016 |
GNRL |
Delivery of the Person-Level Sample Crosswalk Files for Panel 20 Round 1 - Round 5 |
7/28/2017 |
2015 |
GNRL |
HC179: Preliminary Version of the 2015 PRPL File |
7/31/2017 |
2017 |
GNRL |
MEPS P20R5 raw data files |
7/31/2017 |
2015 |
PRPL |
Ad Hoc Delivery of the FY 2015 Preliminary Unencrypted PRPL File |
8/7/2017 |
2016 |
DOCM |
Delivery of the File of Provider Names for FY 2016 |
8/11/2017 |
2015 |
GNRL |
HC-181: Full Year 2015 Consolidated Use, Expense, and Insurance PUF Delivery for Web Release |
8/11/2017 |
2015 |
GNRL |
H178I: Delivery of the Final Appendix to the 2015 Event Files and all Related Files for Web Release |
8/11/2017 | 2015 | GNRL | HC-179: Delivery of the 2015 Person Round Plan (PRPL) PUF and Related Files for Web Release |
8/11/2017 | 2015 | GNRL | HC-180: Delivery of the Final 2015 Conditions File and All Related Files for Web Release |
8/15/2017 | 2017 | GNRL | MEPS P21R3 raw data files |
8/18/2017 | 2017 | GNRL | Delivery of End-Of-Round files (RU level and Person level) - P22R1 |
8/21/2017 | 2016 | ACCS | 2016 ACCS Other Specify Text String Recoding |
8/25/2017 | 2016 | CSAQ | Cancer SAQ - Panel 20 Round 3/Panel 21 Round 1 - Constructed Variables |
8/28/2017 | 2016 | ACCS | 2016 ACCS Other Specify Text String Recoding |
9/5/2017 | 2016 | DOCM | MEPS 2016 Static table for conditions after the 2016 condition coding cycle |
9/8/2017 | 2016 | HINS | Delivery of the P2116 EPCP Cross-tabs and Editing Results Documents |
9/15/2017 | 2016 | DEMO | MEPS Race Programming Specifications for FY2016 |
9/15/2017 | 2017 | GNRL | MEPS P22R1 raw data files |
9/20/2017 | 2014 | UEGN | 2014 MPC Rematched Files: Delivery of Final Imputed Files and Benchmark Tables |
9/20/2017 | 2014 | UEGN | 2014 MPC Rematched Files: Findings on RTI SBD_SUM2 File |
9/22/2017 | 2016 | HINS | Delivery of the P2016 EPCP Cross-tabs and Editing Results |
9/28/2017 | 2016 | DOCM | Delivery of the 2016 SOP and SRCS Static Tables to RTI |
9/29/2017 | 2017 | EMPL | PIT2017 Unweighted Establishment Size Medians |
9/29/2017 | 2016 | HINS | Delivery of HINS Panel 21 Rounds 1 - 3 At Any Time/At Interview Date/At 12/31/16 Variables |
10/2/2017 | 2016 | PRPL | Full Year 2016 PRPL File Revisions |
10/16/2017 | 2016 | EMPL | FY2016 Panel 21 Editing of High Wage Outliers or Substantially Different Wages – Request for Approval |
10/16/2017 | 2016 | EMPL | FY2016 Panel 21 Editing of Low Wage Outliers or Wages that Do Not Change – Request for Approval |
10/17/2017 | 2016 | WGTS | Delivery of the ADMN/DEMO Variables Used for Weights Development for P20P21FY16 |
10/18/2017 | 2017 | WGTS | March 2017 CPS and December 2016 control totals output, digital delivery |
10/19/2017 | 2015 | WGTS | Panel 20 Full Year 2015: Derivation of Eligibility and Response Indicators for the CPS-like Families |
10/19/2017 | 2017 | WGTS | MEPS Computation of the Person and Family Poststratification Control Totals for March 2017 from the March 2017 CPS (including the poverty level variable) |
10/23/2017 | 2016 | DOCM | Delivery of the 2016 MPC Pre-Matching Household Component Production File – RTI |
10/23/2017 | 2016 | HINS | Delivery of HINS Panel 20 Rounds 3 - 5 At Any Time/At Interview Date/At 12/31/16 Variables |
10/24/2017 | 2016 | WGTS | MEPS Computation of the Person and Family Poststratification Control Totals for December 2016 from the March 2017 CPS (including the poverty level variable) |
11/2/2017 | 2016 | WGTS | Derivation of the Annualized MEPS Families and Identification of the Responding MEPS Families for MEPS Panel 21 Full Year 2016 |
11/3/2017 | 2016 | DOCM | Delivery of Person-Level Base Weight and Family Pseudo Weight for FY2016 |
11/3/2017 | 2016 | WGTS | Delivery of Person-Level Base Weight, Individual Panel Base Weight, Family Membership Flag, and MSA variables for FY2016 |
11/6/2017 | 2016 | CODE | Revised schedule and delivery of final coded files for DY2016 |
11/6/2017 | 2016 | HINS | Results of the QC Cross Tabs for the HINS 2016 HMO/Gatekeeper FY variables |
11/7/2017 | 2015 | WGTS | Developing Panel 20 Self-Administered Questionnaire (SAQ) Use Weights for Full Year 2015 |
11/8/2017 | 2016 | EMPL | FY2016 Panel 20 Editing of High Wage Outliers or Substantially Different Wages – Request for Approval |
11/8/2017 | 2016 | EMPL | FY2016 Panel 20 Editing of Low Wage Outliers or Wages that Do Not Change – Request for Approval |
11/13/2017 | 2016 | WGTS | Panel 20 Full Year 2016 Person Weight Review Output, digital delivery |
11/14/2017 | 2016 | EMPL | Approval of Weighted NUMEMP Medians for P20 R3-5 and P21 R1-3 of FY 2016 |
11/14/2017 | 2016 | WGTS | Panel 21 Full Year 2016 Person Weight Review Output, digital delivery |
11/15/2017 | 2016 | UEGN | The 2016 HHA Same Person/Provider for the Same Month Duplicate Counts |
11/20/2017 | 2016 | EMPL | FY 2016 Hourly Wage Imputation Output for Approval |
11/22/2017 | 2016 | DSDY | Delivery of the DSDY QC cross tabs for persons with a positive weight |
11/27/2017 | 2016 | FOOD | FY 2016 Food Security PUF Constructed Variables and Labels |
11/27/2017 | 2016 | HLTH | 2016 BMI Cross-tabulations and Frequencies |
11/28/2017 | 2016 | UEPD | 2016 (Panel 20 & 21) Household Prescribed Medicine and Associated Files - Set 1 |
11/29/2017 | 2016 | DSDY | Delivery of the DSDY "Missed Days" top code values for AHRQ approval |
11/30/2017 | 2014 | WGTS | Deriving location variables (Region and MSA) for Panel 21 Round 1, based on Geo FIPS Codes, using the OMB MSA definitions of both year 2014 and the most recent OMB MSA updates |
12/1/2017 | 2016 | EMPL | Full Year 2016 Wage Top Code Value for AHRQ Approval |
12/4/2017 | 2016 | WGTS | Full Year 2016 person weights Nursing Home and Mortality adjustments review output to AHRQ, digital delivery |
12/5/2017 | 2016 | WGTS | Panel 20 Full Year 2016 SAQ Use person weight review output |
12/5/2017 | 2016 | WGTS | Panel 21 Full Year 2016 SAQ Use person weight review output |
12/5/2017 | 2016 | WGTS | Full Year 2016 SAQ person weight for the Use PUF review output |
12/8/2017 | 2016 | EMPL | Delivery of the Full Year 2016 Pre-Top-Coded Hourly Wage Variables and Person-Level, Uncondensed Industry and Occupation Codes |
12/8/2017 | 2016 | WGTS | Full Year 2016 combined panels Use file person weight review output |
12/12/2017 | 2016 | WGTS | Delivery of the Variance Strata and PSU Variables for FY2016 |
12/13/2017 | 2016 | EMPL | Full Year 2016 JOBS File Establishment Size Top Code Value and Extent of JOBS Wage Top Coding for AHRQ Approval |
12/13/2017 | 2016 | WGTS | Delivery of the SAQ Use PUF Weight and Individual Panel SAQ Weight Variables for FY2016 |
12/14/2017 | 2016 | GNRL | Preliminary Version of the 2016 Full-Year Use PUF Dataset |
12/15/2017 | 2017 | DOCM | 2017 MPC sample file specs |
12/15/2017 | 2017 | DOCM | 2017 PC sample file specs |
12/15/2017 | 2017 | DOCM | 2017 provider file for NPI coding specs |
12/15/2017 | 2015 | UEGN | The 2015 MEPS Master Files (File2 Files and Post Imputation Files) |
12/15/2017 | 2015 | UEGN | Delivery of PDF files for the 2015 Post-Imputation Files and the 2015 Post-Edited, Pre-Imputed Files for the MEPS Master Files |
12/15/2017 | 2016 | WGTS | Delivery of Person-Level Use PUF Weight, Single Panel Person Weight, and MSA16_13 Variables for FY16 |
12/18/2017 | 2016 | UEGN | 2016 Pre-Editing Expenditures QCs |
12/21/2017 | 2016 | EMPL | Full Year 2016 Wage Top Coding Results |
12/21/2017 | 2016 | UEPD | 2016 (Panel 20 & 21) PMED Supplemental File - Set 2: Person-Level File and Additional 3 Segment Variable Files |
12/22/2017 | 2016 | WGTS | Derivation of MEPS Panel 20 Full Year 2016 Person Use Weights (Rounds 3-5) |
12/27/2017 | 2016 | INCO | Delivery of the 2016 (Panel 20 & 21) INCOME File |
12/27/2017 | 2016 | UEGN | 2016 Pre-Editing Expenditure QCs |
12/27/2017 | 2016 | UEPD | 2016 (Panel 20 & 21) PMED Supplemental File - Set 3: Person/Round-Level Files |
12/28/2017 | 2016 | DEMO | Delivery of the Output Listings for Case Review of the MOPID and DAPID Variables’ Construction for FY2016 |
12/28/2017 | 2016 | HINS | Delivery of the HINS Ever Insured in FY 2016 variables LASTAGE and INSCV916 to be added to the internal "MEPS Master Files" |
Abbreviations used in table: ADMN, Administrative
Analytical Group; ADW, Administrative/Demographics/Weights Analytical Group;
BMI, body mass index; CAPI, Computer Assisted Personal Interview; CLNK,
Condition-Event Link File; CODE, analytic group containing codes such as ICD,
CCS, and CPT; COND, Conditions Analytical Group; COVERM, Oracle table that holds
health insurance building block variables; CPS, Current Population Survey; CSAQ,
Cancer Self-Administered Questionnaire; DAPID, person ID of the person’s dad;
DCS, Diabetes Care Supplement; DEMO, Demographics Analytical Group; DN, dental;
EMPL, Employment Analytical Group; ER, emergency room; FAMID, family ID; GEO,
geographic coding; GNRL, General Analytical Group; HC, Household Component; HH,
home health; HHA, home health agency; HHP, home health paid independent; HINS,
Household Insurance Analytical Group; HLTH, Health Analytical Group; INCO,
Income Analytical Group; IP, inpatient; JOBS, Jobs File; MOPID, person ID of the
person’s mother; MPC, Medical Provider Component; MVE, medical visit—MPC
eligible; MVN, medical visit—non-MPC eligible; NHIS, National Health Interview
Survey; OB, office-based; OM, other medical events; OP, outpatient; PCND,
Person-Level Conditions Analytical Group; PMED, Prescribed Medicines File; PRPL,
Person Round Plan File; PSU, primary sampling unit; RXLK, Prescribed
Medicines-Event Link File; SAQ, Self-Administered Questionnaire; UEGN, Use and
Expenditure Analytical Group; WGTS, Weight Analytical Group.